{"version":"0.1","company":{"name":"YubHub","url":"https://yubhub.co","jobsUrl":"https://yubhub.co/jobs/skill/databricks"},"x-facet":{"type":"skill","slug":"databricks","display":"Databricks","count":100},"x-feed-size-limit":100,"x-feed-sort":"enriched_at desc","x-feed-notice":"This feed contains at most 100 jobs (the most recently enriched). For the full corpus, use the paginated /stats/by-facet endpoint or /search.","x-generator":"yubhub-xml-generator","x-rights":"Free to redistribute with attribution: \"Data by YubHub (https://yubhub.co)\"","x-schema":"Each entry in `jobs` follows https://schema.org/JobPosting. YubHub-native raw fields carry `x-` prefix.","jobs":[{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_1c323c93-132"},"title":"Data Platform & DevOps Engineer","description":"<p>We are looking for a Databricks Platform Administrator / Data Platform Engineer responsible for supporting the administration, governance, and operational management of the Databricks platform. 
The role focuses on ensuring platform stability, security, cost efficiency, and enabling data teams through standardized processes and automation.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Support the administration of Databricks environments (DEV, TEST, PROD)</li>\n<li>Manage user access, roles, and permissions</li>\n<li>Support identity provisioning via Azure AD / AWS IAM</li>\n<li>Collaborate with cloud teams on connectivity, storage access, and security topics</li>\n</ul>\n<ul>\n<li>Manage access to notebooks, jobs, clusters, and SQL Warehouses</li>\n<li>Support cluster configuration and usage policies</li>\n<li>Troubleshoot platform-related issues and coordinate with support teams</li>\n<li>Maintain operational documentation and internal guidelines</li>\n</ul>\n<ul>\n<li>Support Unity Catalog governance (catalogs, schemas, tables)</li>\n<li>Manage access controls and data ownership</li>\n<li>Ensure platform security standards and audit readiness</li>\n<li>Contribute to compliance and governance processes</li>\n</ul>\n<ul>\n<li>Support CI/CD processes for Databricks deployments</li>\n<li>Participate in automation initiatives using Git-based workflows</li>\n<li>Contribute to standard templates and deployment practices</li>\n<li>Support lifecycle management across environments</li>\n</ul>\n<ul>\n<li>Monitor platform usage and identify optimization opportunities</li>\n<li>Support cost tracking and reporting activities</li>\n<li>Contribute to tagging and governance practices for cost attribution</li>\n</ul>\n<p><strong>Requirements</strong></p>\n<ul>\n<li>Master’s degree in Computer Science, Software Engineering, Data Engineering, or a related technical field</li>\n<li>Minimum 3 years of experience in Databricks platform administration, preferably in an Azure environment</li>\n<li>Good understanding of CI/CD concepts and Git-based workflows</li>\n<li>Experience working in cloud-based data platforms or DevOps environments is a 
plus</li>\n<li>Experience with Databricks administration (workspace management, access management, clusters; Unity Catalog is a plus)</li>\n<li>Knowledge of Azure cloud services (preferred) or other cloud providers</li>\n<li>Familiarity with Infrastructure as Code concepts (Terraform is a plus)</li>\n<li>Scripting knowledge (Python or Bash is a plus)</li>\n<li>Databricks certifications (e.g., Databricks Certified Data Engineer, Databricks Platform Administrator)</li>\n<li>Cloud certifications such as Azure Data Engineer, AWS Solutions Architect, or equivalent</li>\n<li>Strong English communication skills (written and spoken)</li>\n<li>Good analytical and troubleshooting skills</li>\n<li>Ability to work in a collaborative and international environment</li>\n<li>Proactive mindset and structured way of working</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<ul>\n<li>A role with true technical ownership: architecture, scaling, and governance decisions that directly impact production AI solutions.</li>\n<li>Complex projects that go beyond “just pipelines” – covering big data processing and large-scale ML/DL deployment.</li>\n<li>Opportunities to deepen your expertise in Databricks, cloud-native ML, and MLOps.</li>\n<li>A team where your input and technical decisions truly matter.</li>\n<li>A competitive package and benefits.</li>\n</ul>\n<p>Join us and make a direct impact on shaping the future of Data, AI, and Mobility.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_1c323c93-132","directApply":true,"hiringOrganization":{"@type":"Organization","name":"AVL","sameAs":"https://jobs.avl.com","logo":"https://logos.yubhub.co/jobs.avl.com.png"},"x-apply-url":"https://jobs.avl.com/job/Sala-Al-Jadida-DATA-Plateform-&-DevOps-Enginner/1383237733/","x-work-arrangement":"onsite","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":"Competitive 
package and benefits","x-skills-required":["Databricks","Azure","Git","CI/CD","Infrastructure as Code","Terraform","Scripting","Python","Bash","Cloud certifications"],"x-skills-preferred":["Unity Catalog","Azure Data Engineer","AWS Solutions Architect"],"datePosted":"2026-04-22T17:34:47.209Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Sala Al Jadida"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Automotive","skills":"Databricks, Azure, Git, CI/CD, Infrastructure as Code, Terraform, Scripting, Python, Bash, Cloud certifications, Unity Catalog, Azure Data Engineer, AWS Solutions Architect"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_bed89d15-812"},"title":"Power BI Developer","description":"<p>Design, develop, and maintain Power BI dashboards and reports to support business decision-making.</p>\n<p>Translate business requirements into data models, metrics, and visualizations.</p>\n<p>Build and optimize Power BI data models using DAX, Power Query, and best practices.</p>\n<p>Ensure performance optimization and scalability of Power BI datasets and reports.</p>\n<p>Work with data platforms (e.g., Databricks, Microsoft Fabric, or similar) to prepare and transform data for analytics.</p>\n<p>Collaborate with data engineers, analysts, and business stakeholders to deliver end-to-end analytics solutions.</p>\n<p>Implement data quality checks and documentation for datasets and reporting solutions.</p>\n<p>As a Power BI Developer at MHP, you will continuously grow with your projects and objectives in an innovative and supportive environment. 
You will work with a team of experts to deliver high-quality analytics solutions to our customers.</p>\n<p>The ideal candidate will have strong experience with Power BI development, proficiency in SQL, and hands-on experience with modern data platforms such as Databricks or Microsoft Fabric. They will also have practical knowledge of Python or PySpark for data processing and experience building end-to-end reporting solutions from raw data to dashboards.</p>\n<p>We value the authenticity that comes from bringing your individual strengths into the team. Diversity plays a key role in our culture, and it brings different visions &amp; flavors into the mix.</p>\n<p>We all share a strong team spirit. Every win, big or small, belongs to all of us.</p>\n<p>We always welcome curiosity, creativity, and unconventional thinking patterns.</p>\n<p>We recognize the importance of healthy, tight-knit communities and sustainable environmental changes, and we strive to enact positive change in any form within our reach.</p>\n<p>We’re here to co-create your ideal career growth plan tailored to your professional aspirations.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_bed89d15-812","directApply":true,"hiringOrganization":{"@type":"Organization","name":"MHP","sameAs":"http://www.mhp.com/","logo":"https://logos.yubhub.co/mhp.com.png"},"x-apply-url":"https://jobs.porsche.com/index.php?ac=jobad&id=20076","x-work-arrangement":"onsite","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Power BI","DAX","Power Query","SQL","Python","PySpark","Databricks","Microsoft Fabric"],"x-skills-preferred":[],"datePosted":"2026-04-22T17:26:30.097Z","employmentType":"FULL_TIME","occupationalCategory":"IT","industry":"Consulting","skills":"Power BI, DAX, Power Query, SQL, Python, PySpark, Databricks, Microsoft 
Fabric"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_8610ea3d-93b"},"title":"Cloud Platform Engineer","description":"<p>The Business Development/Management Technology team at FIC &amp; Risk Technology is building and operating platforms that support recruiting, hiring, and onboarding of investment professionals. We are currently integrating multiple legacy and new systems into a unified, cloud-native platform to standardize processes, workflows, and data models across the organisation.</p>\n<p>This integration will enable seamless collaboration between teams and provide reliable, scalable data for analytics and reporting. We are looking for a Cloud Platform Engineer to design, build, and operate our AWS-based infrastructure and data platforms, using modern DevOps practices, infrastructure as code, and secure, well-engineered services in Python and C#.</p>\n<p>The successful candidate will collaborate with global technology and business teams to design cloud-native solutions that support business development and onboarding workflows. 
They will partner with global stakeholders to understand requirements and translate them into secure, scalable AWS architectures and platform capabilities.</p>\n<p>Key responsibilities include leading the end-to-end delivery of cloud and platform features, including design, implementation (Python/C#), infrastructure as code, testing, and deployment using DevOps practices.</p>\n<p>We are looking for a highly skilled engineer with 6+ years of experience in software or platform engineering, with significant time spent building and operating solutions in cloud environments (AWS preferred).</p>\n<p>The ideal candidate will have strong hands-on programming experience in Python and C#, with solid understanding of object-oriented design, design patterns, service-oriented / microservices architectures, concurrency, and SOLID principles.</p>\n<p>They will also have proven experience designing and operating AWS-based platforms (e.g., EC2, ECS/EKS, Lambda, S3, RDS, IAM) using infrastructure as code (Terraform, CloudFormation, or CDK).</p>\n<p>In addition, the successful candidate will have practical experience implementing DevOps practices and CI/CD pipelines (e.g., Jenkins, GitHub Actions, Azure DevOps), including automated testing, security scanning, and deployment.</p>\n<p>Experience supporting data science and analytics platforms, including orchestration tools such as Airflow, distributed processing engines such as Spark, and cloud-native data pipelines is also required.</p>\n<p>Good understanding of SQL and core database concepts; familiarity with AWS analytics services (e.g., Glue, EMR, Redshift, Athena) is a plus.</p>\n<p>Awareness of cloud security best practices, including IAM, network security, data encryption, and secure configuration management is also necessary.</p>\n<p>Strong problem-solving and analytical skills; demonstrated ability to take ownership, deliver in a fast-paced environment, and collaborate effectively with global teams is 
essential.</p>\n<p>Excellent communication skills, with ability to work closely with both technical and non-technical stakeholders is also required.</p>\n<p>Experience estimating, monitoring, and optimizing AWS infrastructure costs, including use of tools such as AWS Cost Explorer, AWS Budgets, and cost-allocation tagging strategies is desirable.</p>\n<p>Experience designing and operating workloads across multiple cloud environments and on-premises, using centralized policies, governance, and controls to support business-aligned teams is also beneficial.</p>\n<p>Working knowledge of networking across on-premises and cloud environments, including VPC design, subnets, routing, VPNs/Direct Connect, load balancing, DNS, and network security controls is necessary.</p>\n<p>Nice to have experience with additional big data tools or platforms (e.g., Kafka, Databricks, Snowflake, Flink).</p>\n<p>Familiarity with Capital Markets concepts and operating models is also beneficial.</p>\n<p>The estimated base salary range for this position is $175,000 to $250,000, which is specific to New York and may change in the future.</p>\n<p>Millennium pays a total compensation package which includes a base salary, discretionary performance bonus, and a comprehensive benefits package.</p>\n<p>When finalising an offer, we take into consideration an individual&#39;s experience level and the qualifications they bring to the role to formulate a competitive total compensation package.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_8610ea3d-93b","directApply":true,"hiringOrganization":{"@type":"Organization","name":"FIC & Risk 
Technology","sameAs":"https://mlp.eightfold.ai","logo":"https://logos.yubhub.co/mlp.eightfold.ai.png"},"x-apply-url":"https://mlp.eightfold.ai/careers/job/755955139979","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$175,000 to $250,000","x-skills-required":["AWS","Python","C#","DevOps","Infrastructure as Code","Cloud Security","SQL","Database Concepts","Networking"],"x-skills-preferred":["Airflow","Spark","Kafka","Databricks","Snowflake","Flink","Capital Markets"],"datePosted":"2026-04-18T22:12:50.548Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"New York, New York, United States of America"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Finance","skills":"AWS, Python, C#, DevOps, Infrastructure as Code, Cloud Security, SQL, Database Concepts, Networking, Airflow, Spark, Kafka, Databricks, Snowflake, Flink, Capital Markets","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":175000,"maxValue":250000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_79072a0c-85b"},"title":"Behavioral Data Science Intern - Agentic AI & People Analytics","description":"<p>Where do you want to go? What do you want to achieve? How would you like to get involved? At Bayer, we bring together multi-talents and specialists to feed the world, slow climate change, and create healthier, more sustainable lives for all.</p>\n<p>This is the opportunity to start your career with a global leader committed to HealthForAll and HungerForNone. Bring your ideas, skills, and passion with you. Your career starts here.</p>\n<p>Are you passionate about AI, data science, and behavioural insights? 
Join our Talent Impact team and apply your technical skills to projects that combine machine learning, generative AI, and behavioural science to improve how people work and develop. This internship offers hands-on experience in a supportive environment where you’ll learn, contribute, and make an impact.</p>\n<p>Your tasks and educational objectives:</p>\n<ul>\n<li>Work with HR and behavioural data to create structured, analysis-ready datasets for people analytics.</li>\n<li>Support the development and testing of agentic AI workflows (including LLM-based tools) that support HR decision-making.</li>\n<li>Help build and evaluate machine learning models to explore workforce trends, learning behaviours, and engagement.</li>\n<li>Together with team members, create dashboards and visualisations that turn complex data into actionable insights for HR and business partners.</li>\n<li>Apply modern data workflows using Databricks, GitHub Spaces, and cloud platforms (Azure or AWS).</li>\n<li>Collaborate with experienced mentors and participate in small experiments to measure impact and share findings.</li>\n</ul>\n<p>Who you are:</p>\n<ul>\n<li>Python programming skills for data processing, modelling, and AI workflows.</li>\n<li>Hands-on experience with Generative AI (GenAI) or LLM-based systems (academic projects or internships count).</li>\n<li>Familiarity with cloud platforms (Azure or AWS), with a focus on Databricks and GitHub Spaces for collaborative development.</li>\n<li>Solid foundation in data science and machine learning.</li>\n<li>Strong interest in behavioural science, people analytics, and HR.</li>\n<li>Currently enrolled in a Master’s or advanced Bachelor’s program in data science, computer science, cognitive science, psychology, behavioural economics, neuroscience, or a related field.</li>\n<li>Curiosity, willingness to learn, and ability to work on-site in Leverkusen.</li>\n<li>Fluent English, written and spoken.</li>\n</ul>\n<p>What we offer:</p>\n<p>Our benefits 
package is flexible, appreciative, and tailored to your lifestyle, because what matters to you, matters to us!</p>\n<ul>\n<li>For a full-time position, you can expect an attractive salary of € 2,214 gross per month.</li>\n<li>Depending on the nature of your job, flexible work arrangements can be made in alignment with your manager.</li>\n<li>We support your growth through access to professional development and learning opportunities, such as LinkedIn Learning and our language learning platform Education First.</li>\n<li>As one of our perks, our Corporate Benefits program grants you access to sales discounts from more than 150 brands.</li>\n<li>We embrace diversity by providing an inclusive work environment in which you are welcomed, supported, and encouraged to bring your whole self to work.</li>\n</ul>\n<p>Ever feel burnt out by bureaucracy? Us too. That’s why we’re changing the way we work, for higher productivity, faster innovation, and better results. We call it Dynamic Shared Ownership (DSO). Learn more about what DSO will mean for you in your new role here https://www.bayer.com/en/strategy/strategy</p>\n<p>Our Mission &amp; Strategy:</p>\n<p>Through Dynamic Shared Ownership, we’re putting an end to the hierarchical model and putting more power in the hands of the innovators and creators at Bayer. Ready to join us? 
Apply now and start your 6-month learning journey in Leverkusen!</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_79072a0c-85b","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Bayer","sameAs":"https://talent.bayer.com","logo":"https://logos.yubhub.co/talent.bayer.com.png"},"x-apply-url":"https://talent.bayer.com/careers/job/562949975182354","x-work-arrangement":"onsite","x-experience-level":"entry","x-job-type":"internship","x-salary-range":null,"x-skills-required":["Python","Generative AI","LLM-based systems","Cloud platforms (Azure or AWS)","Databricks","GitHub Spaces","Data science","Machine learning"],"x-skills-preferred":[],"datePosted":"2026-04-18T22:10:44.663Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Leverkusen"}},"employmentType":"INTERN","occupationalCategory":"Engineering","industry":"Manufacturing","skills":"Python, Generative AI, LLM-based systems, Cloud platforms (Azure or AWS), Databricks, GitHub Spaces, Data science, Machine learning"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_04c1ff49-2d1"},"title":"Data Platform Solutions Architect (Professional Services)","description":"<p>We&#39;re hiring for multiple roles within our Professional Services team. As a Data Platform Solutions Architect, you will work with clients on short to medium-term customer engagements on their big data challenges using the Databricks platform. 
You will deliver data engineering, data science, and cloud technology projects that require integrating with client systems, training, and other technical tasks to help customers get the most value out of their data.</p>\n<p>The impact you will have:</p>\n<ul>\n<li>Work on a variety of impactful customer technical projects which may include designing and building reference architectures, creating how-to&#39;s and productionizing customer use cases</li>\n<li>Work with engagement managers to scope a variety of professional services work with input from the customer</li>\n<li>Guide strategic customers as they implement transformational big data projects, 3rd party migrations, including end-to-end design, build and deployment of industry-leading big data and AI applications</li>\n<li>Consult on architecture and design; bootstrap or implement customer projects which lead to customers&#39; successful understanding, evaluation and adoption of Databricks.</li>\n<li>Provide an escalated level of support for customer operational issues.</li>\n</ul>\n<p>What we look for:</p>\n<ul>\n<li>Extensive experience in data engineering, data platforms &amp; analytics</li>\n<li>Comfortable writing code in either Python or Scala</li>\n<li>Working knowledge of two or more common Cloud ecosystems (AWS, Azure, GCP) with expertise in at least one</li>\n<li>Deep experience with distributed computing with Apache Spark™ and knowledge of Spark runtime internals</li>\n<li>Familiarity with CI/CD for production deployments</li>\n<li>Working knowledge of MLOps</li>\n<li>Design and deployment of performant end-to-end data architectures</li>\n<li>Experience with technical project delivery - managing scope and timelines.</li>\n<li>Documentation and white-boarding skills.</li>\n<li>Experience working with clients and managing conflicts.</li>\n<li>Build skills in technical areas which support the deployment and integration of Databricks-based solutions to complete customer projects.</li>\n<li>Travel 
to customers 10% of the time</li>\n</ul>\n<p>[Preferred] Databricks Certification but not essential</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_04c1ff49-2d1","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8396801002","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["data engineering","data platforms & analytics","Python","Scala","Cloud ecosystems (AWS, Azure, GCP)","Apache Spark","CI/CD for production deployments","MLOps","technical project delivery","documentation and white-boarding skills"],"x-skills-preferred":["Databricks Certification"],"datePosted":"2026-04-18T15:58:52.546Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"London, United Kingdom"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"data engineering, data platforms & analytics, Python, Scala, Cloud ecosystems (AWS, Azure, GCP), Apache Spark, CI/CD for production deployments, MLOps, technical project delivery, documentation and white-boarding skills, Databricks Certification"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_43334479-97e"},"title":"Sr Analytics Engineer - GTM Strategy and Operations","description":"<p>As a Senior Analytics Engineer, you will be a critical partner to the Global GTM Strategy &amp; Operations teams, providing the data, AI-driven insights, and infrastructure needed to drive efficiency and effectiveness across the organization.</p>\n<p>You will design, build, and maintain scalable data models, curated reporting tables, 
forecasts, and dashboards that support everyone from senior executives to individual contributors, empowering them to make informed decisions and spend more time driving customer outcomes.</p>\n<p>Working closely with cross-functional stakeholders, including Sales, Finance, Marketing, and other data teams, you will tackle complex data challenges by leveraging structured data, building AI-powered querying assistants, and using tools like Databricks Genie to improve data accessibility, streamline insights, and deliver actionable, reliable solutions across the business.</p>\n<p>You will also play a key role in advancing our newly created AI initiatives and semantic data curation efforts, helping to establish a strong foundation for advanced analytics, automation, and scalable business intelligence.</p>\n<p>The Impact You Will Have:</p>\n<ul>\n<li>Build: You will design and develop analytic tools, including a semantic layer for AI use cases, scalable data models, curated tables, and insightful analyses that empower thousands of field employees and leaders worldwide.</li>\n</ul>\n<ul>\n<li>Architect: You will both manage the requirements gathering and lead execution of strategic analytic projects.</li>\n</ul>\n<ul>\n<li>Scale: You will build and manage relationships with stakeholders across the company but primarily with the GTM strategy and operations team</li>\n</ul>\n<p>What we look for:</p>\n<ul>\n<li>You have 4+ years of experience working as an Analyst / Data Engineer / Analytics Engineer with B2B sales, marketing, or finance data (GTM experience highly preferred).</li>\n</ul>\n<ul>\n<li>You are data-savvy with 3+ years of SQL and 2+ years of Python experience. Familiarity with data ecosystems and BI tools (e.g., Databricks, PowerBI) is required.</li>\n</ul>\n<ul>\n<li>You have built for scale. You have experience building scalable and productionizable data models with best practices in mind.</li>\n</ul>\n<ul>\n<li>You integrate AI into your daily workflow. 
You have hands-on experience using large language model tools (such as Claude or similar) to accelerate analytics work, from drafting and debugging code to synthesizing requirements and generating documentation.</li>\n</ul>\n<ul>\n<li>You&#39;re comfortable evaluating AI-generated outputs critically and iterating quickly.</li>\n</ul>\n<ul>\n<li>You are passionate about applying AI to transform GTM teams. You bring experience in delivering AI-driven solutions and have the ability to design innovative use cases as well as structure data models and tables that are optimized for AI readiness.</li>\n</ul>\n<ul>\n<li>You excel in partnering with the business, understanding the impact of your work on GTM, and creating innovative solutions.</li>\n</ul>\n<ul>\n<li>You have a track record of cross-functional collaboration and strong stakeholder relationships.</li>\n</ul>\n<ul>\n<li>You excel in a collaborative environment. You translate team member needs into clear tasks and deliverables for contributors.</li>\n</ul>\n<ul>\n<li>You work through dependencies, bottlenecks, and tradeoffs with ease.</li>\n</ul>\n<ul>\n<li>You have a service-oriented mindset.</li>\n</ul>\n<ul>\n<li>You are curious, creative, and kind.</li>\n</ul>\n<p>Pay Range Transparency:</p>\n<p>Databricks is committed to fair and equitable compensation practices. 
The pay range(s) for this role is listed below and represents the expected salary range for non-commissionable roles or on-target earnings for commissionable roles.</p>\n<p>Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location.</p>\n<p>Based on the factors above, Databricks anticipates utilizing the full width of the range.</p>\n<p>The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above.</p>\n<p>For more information regarding which range your location is in visit our page here.</p>\n<p>Zone 1 Pay Range $133,000-$182,950 USD</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_43334479-97e","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8479036002","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$133,000-$182,950 USD","x-skills-required":["SQL","Python","Databricks","PowerBI","Data Engineering","Analytics Engineering","AI","Machine Learning"],"x-skills-preferred":["Large Language Model Tools","Claude","Semantic Data Curation","Advanced Analytics","Automation","Scalable Business Intelligence"],"datePosted":"2026-04-18T15:58:18.439Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"New York; San Francisco, California"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"SQL, Python, Databricks, PowerBI, Data Engineering, Analytics Engineering, AI, Machine Learning, Large 
Language Model Tools, Claude, Semantic Data Curation, Advanced Analytics, Automation, Scalable Business Intelligence","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":133000,"maxValue":182950,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_a38ec886-62e"},"title":"AI Engineer - FDE (Forward Deployed Engineer)","description":"<p>Mission</p>\n<p>The AI Forward Deployed Engineering (AI FDE) team is a highly specialized customer-facing AI team at Databricks. We deliver professional services engagements to help our customers build and productionize first-of-its-kind AI applications.</p>\n<p>We work cross-functionally to shape long-term strategic priorities and initiatives alongside engineering, product, and developer relations, as well as support internal subject matter expert (SME) teams. We view our team as an ensemble: we look for individuals with strong, unique specializations to improve the overall strength of the team.</p>\n<p>This team is the right fit for you if you love working with customers, teammates, and fueling your curiosity for the latest trends in GenAI, LLMOps, and ML more broadly. 
This role can be remote.</p>\n<p>The impact you will have:</p>\n<ul>\n<li>Develop cutting-edge GenAI solutions, incorporating the latest techniques from our Mosaic AI research to solve customer problems</li>\n</ul>\n<ul>\n<li>Own production rollouts of consumer and internally facing GenAI applications</li>\n</ul>\n<ul>\n<li>Serve as a trusted technical advisor to customers across a variety of domains</li>\n</ul>\n<ul>\n<li>Present at conferences such as Data + AI Summit, recognized as a thought leader internally and externally</li>\n</ul>\n<ul>\n<li>Collaborate cross-functionally with the product and engineering teams to influence priorities and shape the product roadmap</li>\n</ul>\n<p>What we look for:</p>\n<ul>\n<li>Experience building GenAI applications, including RAG, multi-agent systems, Text2SQL, fine-tuning, etc., with tools such as HuggingFace, LangChain, and DSPy</li>\n</ul>\n<ul>\n<li>5+ years of relevant experience as a Data Scientist, preferably in a consulting role</li>\n</ul>\n<ul>\n<li>Expertise in deploying production-grade GenAI applications, including evaluation and optimizations</li>\n</ul>\n<ul>\n<li>Extensive hands-on industry data science experience, leveraging common machine learning and data science tools, e.g., pandas, scikit-learn, PyTorch</li>\n</ul>\n<ul>\n<li>Experience building production-grade machine learning deployments on AWS, Azure, or GCP</li>\n</ul>\n<ul>\n<li>Graduate degree in a quantitative discipline (Computer Science, Engineering, Statistics, Operations Research, etc.) 
or equivalent practical experience</li>\n</ul>\n<ul>\n<li>Experience communicating and/or teaching technical concepts to non-technical and technical audiences alike</li>\n</ul>\n<ul>\n<li>Passion for collaboration, life-long learning, and driving business value through AI</li>\n</ul>\n<ul>\n<li>Preferred experience using the Databricks Intelligence Platform and Apache Spark to process large-scale distributed datasets</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_a38ec886-62e","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8099751002","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["GenAI","HuggingFace","LangChain","DSPy","pandas","scikit-learn","PyTorch","AWS","Azure","GCP","Apache Spark"],"x-skills-preferred":["Databricks Intelligence Platform","Mosaic AI research"],"datePosted":"2026-04-18T15:58:10.707Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Remote - India"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"GenAI, HuggingFace, LangChain, DSPy, pandas, scikit-learn, PyTorch, AWS, Azure, GCP, Apache Spark, Databricks Intelligence Platform, Mosaic AI research"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_9bb1344c-662"},"title":"Sr. Solutions Engineer, Retail - CPG","description":"<p>We are looking for a Senior Solutions Engineer to join our team. 
As a Senior Solutions Engineer, you will work with large enterprises in the Retail and CPG space to help them become more data-driven. You will define and direct the technical strategy for our largest and most important accounts, leading to more widespread use of our products and wider and deeper adoption of ML &amp; AI.</p>\n<p>You will work closely with the Account Executive to develop and execute a technical strategy that aligns with the customer&#39;s goals and objectives. You will also work with a team of engineers to build proofs of concept and demonstrate our products.</p>\n<p>The ideal candidate will have a strong background in value selling, technical account management, and technical leadership. They will also have a solid understanding of big data, data science, and cloud technologies.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Define and direct the technical strategy for our largest and most important accounts</li>\n<li>Work closely with the Account Executive to develop and execute a technical strategy that aligns with the customer&#39;s goals and objectives</li>\n<li>Collaborate with a team of engineers to build proofs of concept and demonstrate our products</li>\n<li>Provide technical guidance and support to customers</li>\n<li>Work with customers to identify and address technical issues</li>\n</ul>\n<p>Requirements:</p>\n<ul>\n<li>5+ years of experience working with large enterprises in the Retail and CPG space</li>\n<li>3+ years of experience in a pre-sales capacity or supporting sales activity</li>\n<li>Strong background in value selling, technical account management, and technical leadership</li>\n<li>Solid understanding of big data, data science, and cloud technologies</li>\n<li>Experience with design and implementation of big data technologies such as Hadoop, NoSQL, MPP, OLTP, and OLAP</li>\n<li>Production programming experience in Python, R, Scala, or Java</li>\n</ul>\n<p>Nice to have:</p>\n<ul>\n<li>Databricks Certification</li>\n</ul>\n<p 
style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_9bb1344c-662","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/7507778002","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["big data","data science","cloud technologies","Hadoop","NoSQL","MPP","OLTP","OLAP","Python","R","Scala","Java"],"x-skills-preferred":["Databricks Certification"],"datePosted":"2026-04-18T15:57:56.592Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Illinois"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"big data, data science, cloud technologies, Hadoop, NoSQL, MPP, OLTP, OLAP, Python, R, Scala, Java, Databricks Certification"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_477d343e-e37"},"title":"Customer Success Architect","description":"<p>About Mixpanel</p>\n<p>Mixpanel turns data clarity into innovation. Trusted by more than 29,000 companies, including Workday, Pinterest, LG, and Rakuten Viber, Mixpanel’s AI-first digital analytics help teams accelerate adoption, improve retention, and ship with confidence. Powering this is an industry-leading platform that combines product and web analytics, session replay, experimentation, feature flags, and metric trees.</p>\n<p>About the Customer Success Team:</p>\n<p>Mixpanel’s Customer Success &amp; Solutions Engineering teams are analytics consultants who embed themselves within our enterprise customer teams to drive our customers’ business outcomes. 
We work with prospects and customers throughout the customer journey to understand what drives value and serve as the technical counterpart to our Sales organization to deliver on that value.</p>\n<p>You will partner closely with Account Executives, Account Managers, Product, Engineering, and Support to successfully roll out self-serve analytics within our customers’ organizations, help the customer manage change, execute on technical projects and services that delight our customers, and ultimately drive ROI on the customer’s Mixpanel investment.</p>\n<p>About the Role:</p>\n<p>As a CSA, you will partner with customers throughout the customer journey to understand what drives value, beginning with pre-sales proofs of concept to demonstrate quick time to value, through post-sales onboarding and implementation, where you set customers up for long-term success with scalable implementation and data governance best practices. Throughout the entire customer lifecycle, you will work to understand how analytics can drive business value for your customers and will consult them on how to maximize the value of Mixpanel, including managing change during Mixpanel’s rollout, defining and achieving ROI, and identifying areas of improvement in their current usage of analytics.</p>\n<p>For large enterprise customers, post onboarding, you will also continue alongside the Account Managers to drive data trust and product adoption for 100+ end user teams through a change management rollout approach.</p>\n<p>Responsibilities:</p>\n<p>Serve as a trusted technical advisor for prospects/customers to provide strategic consultation on data architecture, governance, instrumentation, and business outcomes</p>\n<p>Effectively communicate at most levels of the customer’s organization to influence business outcomes via Mixpanel, design and execute a comprehensive analytics strategy, and unblock technical and organizational roadblocks</p>\n<p>Own the customer’s success with Mixpanel, 
documenting and delivering ROI to the customer throughout their journey to transform their business with self-serve analytics</p>\n<p>Own onboarding and data health for your assigned customers/projects, including ongoing enhancements to their data quality and overall tech stack integration</p>\n<p>Engage with customers’ engineering, product management, and marketing teams to handle technical onboarding, optimize Mixpanel deployments, and improve data trust</p>\n<p>Deliver a variety of technical services ranging from data architecture consultations to adoption and change management best practices</p>\n<p>Leverage modern data architecture expertise to create scalable data governance practices and data trust for our customers, including data optimization and re-implementation projects</p>\n<p>Successfully execute on success outcomes whilst balancing project timelines, scope creep, and unanticipated issues</p>\n<p>Bridge the technical-business gap with your customers, working with business stakeholders to define a strategic vision for Mixpanel and then working with the right business and technical contacts to execute that vision</p>\n<p>Collaborate with our technical and solutions partners as needed on data optimization and onboarding projects</p>\n<p>Be a technical sponsor for internal engagements with Mixpanel product and engineering teams to prioritize product and systems tasks from clients</p>\n<p>We&#39;re Looking For Someone Who Has</p>\n<p>3 to 5 years of experience consulting on defining and delivering ROI through new tool implementations</p>\n<p>Experience working with Director-level members of the customer organization to define a strategic vision and successfully leveraging those members to deliver on that vision</p>\n<p>The ability to communicate with stakeholders at most levels of an organization, from talking with developers about the ins and outs of an API to talking to a Director of Data Science/Product Management about organizational 
efficiency</p>\n<p>The ability to manage complex projects with assorted client stakeholders, working across teams and departments to execute real change</p>\n<p>A demonstrated record of success in customer success, client-facing professional services, consulting, or technical project management roles</p>\n<p>Excellent written, analytical, and communication skills</p>\n<p>Strong process and/or project delivery discipline</p>\n<p>Eagerness to learn new technologies and adapt to evolving customer needs</p>\n<p>We&#39;d Be Extra Excited For Someone Who Has</p>\n<p>Experience in data querying, modeling, and transformation in at least one core tool, including SQL / dbt / Python / Business Intelligence tools / Product Analytics tools, etc.</p>\n<p>Familiarity with databases and cloud data warehouses like Google Cloud, Amazon Redshift, Microsoft Azure, Snowflake, Databricks, etc.</p>\n<p>Familiarity with product analytics implementation methods like SDKs, Customer Data Platforms (CDPs), Event Streaming, Reverse ETL, etc.</p>\n<p>Familiarity with analytics best practices across business segments and verticals</p>\n<p>Benefits and Perks</p>\n<p>Comprehensive Medical, Vision, and Dental Care</p>\n<p>Mental Wellness Benefit</p>\n<p>Generous Vacation Policy &amp; Additional Company Holidays</p>\n<p>Enhanced Parental Leave</p>\n<p>Volunteer Time Off</p>\n<p>Additional US Benefits: Pre-Tax Benefits including 401(K), Wellness Benefit, Holiday Break</p>\n<p>Culture Values</p>\n<p>Make Bold Bets: We choose courageous action over comfortable progress.</p>\n<p>Innovate with Insight: We tackle decisions with rigor and judgment - combining data, experience and collective wisdom to drive powerful outcomes.</p>\n<p>One Team: We collaborate across boundaries to achieve far greater impact than any of us could accomplish alone.</p>\n<p>Candor with Connection: We build meaningful relationships that enable honest feedback and direct conversations.</p>\n<p>Champion the Customer: We seek to deeply 
understand our customers’ needs, ensuring their success is our north star.</p>\n<p>Powerful Simplicity: We find elegant solutions to complex problems, making sophisticated things accessible.</p>\n<p>Why choose Mixpanel?</p>\n<p>We’re a leader in analytics with over 9,000 customers and $277M raised from prominent investors, including Andreessen Horowitz, Sequoia, YC, and, most recently, Bain Capital.</p>\n<p>Mixpanel’s pioneering event-based data analytics platform offers a powerful yet simple solution for companies to understand user behaviors and easily track overarching company success metrics.</p>\n<p>Our accomplished teams continuously facilitate our expansion by tackling the ever-evolving challenges tied to scaling, reliability, design, and service.</p>\n<p>Choosing to work at Mixpanel means you’ll be helping the world’s most innovative companies learn from their data so they can make better decisions.</p>\n<p>Mixpanel is an equal opportunity employer supporting workforce diversity.</p>\n<p>At Mixpanel, we are focused on the things that really matter: our people, our customers, and our partners, out of a recognition that those relationships are the most valuable assets we have.</p>\n<p>We actively encourage women, people with disabilities, veterans, underrepresented minorities, and LGBTQ+ people to apply.</p>\n<p>We do not discriminate on the basis of race, religion, color, national origin, gender, gender identity or expression, sexual orientation, age, marital status, or any other protected characteristic.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_477d343e-e37","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Mixpanel","sameAs":"https://mixpanel.com","logo":"https://logos.yubhub.co/mixpanel.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/mixpanel/jobs/7506821","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["data architecture","governance","instrumentation","business outcomes","data querying","modeling","transforming","SQL","dbt","Python","Business Intelligence tools","Product Analytics tools"],"x-skills-preferred":["databases","cloud data warehouses","Google Cloud","Amazon Redshift","Microsoft Azure","Snowflake","Databricks","SDKs","Customer Data Platforms","Event Streaming","Reverse ETL"],"datePosted":"2026-04-18T15:57:25.195Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Bengaluru, India (Hybrid)"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"data architecture, governance, instrumentation, business outcomes, data querying, modeling, transforming, SQL, dbt, Python, Business Intelligence tools, Product Analytics tools, databases, cloud data warehouses, Google Cloud, Amazon Redshift, Microsoft Azure, Snowflake, Databricks, SDKs, Customer Data Platforms, Event Streaming, Reverse ETL"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_a28e2377-728"},"title":"Lead Engagement Manager, Healthcare & Life Sciences","description":"<p>We are seeking a Lead Engagement Manager to deliver measurable impact for our Healthcare &amp; Life Sciences customers. 
As a Lead Engagement Manager, you will own and independently scope multi-quarter, highly complex initiatives to solve problems with no clear precedent, operating with high autonomy in high-stakes environments.</p>\n<p>The impact you will have:</p>\n<ul>\n<li>Drive excellence at the BU/Region level, contributing to team-level booking targets and consumption goals.</li>\n<li>Navigate regional, regulatory, and market-specific complexities across multiple countries.</li>\n<li>Coach and support junior Engagement Managers, acting as a go-to resource for peers on complex projects.</li>\n<li>Shape account strategy to align with client goals and lead the sales of complex Professional Services &amp; Training (PS&amp;T) offerings.</li>\n<li>Establish and maintain strategic relationships with senior customer stakeholders across multiple engagements.</li>\n</ul>\n<p>To be successful in this role, you will need:</p>\n<ul>\n<li>10+ years of leadership experience in Professional Services.</li>\n<li>Expert-level understanding of PS&amp;T offerings and the ability to customize solutions for complex environments.</li>\n<li>Demonstrated thought leadership, increasingly viewed as an external expert in the industry.</li>\n<li>Ability to travel up to 25-50% as required.</li>\n</ul>\n<p>Pay Range Transparency</p>\n<p>Databricks is committed to fair and equitable compensation practices. 
The pay range(s) for this role is listed below and represents the expected base salary range for non-commissionable roles or on-target earnings for commissionable roles.</p>\n<ul>\n<li>Zone 1 Pay Range: $178,800-$245,850 USD</li>\n<li>Zone 2 Pay Range: $161,000-$221,300 USD</li>\n<li>Zone 3 Pay Range: $152,000-$209,000 USD</li>\n<li>Zone 4 Pay Range: $143,000-$196,700 USD</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_a28e2377-728","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8438803002","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$178,800-$245,850 USD","x-skills-required":["Leadership experience in Professional Services","Expert-level understanding of PS&T offerings","Thought leadership in the industry","Ability to travel up to 25-50%","Databricks architecture, ecosystem, and strategic account planning"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:57:10.424Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Central - United States; Northeast - United States; Southeast - United States"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Leadership experience in Professional Services, Expert-level understanding of PS&T offerings, Thought leadership in the industry, Ability to travel up to 25-50%, Databricks architecture, ecosystem, and strategic account 
planning","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":178800,"maxValue":245850,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_3a936a2d-e4a"},"title":"Manager, Big Data Architecture – Professional Services (DWH, Data Engineering & Migrations)","description":"<p>As a Manager of Resident Solutions Architects at Databricks, you will provide strategic leadership for delivering professional services engagements to high-value Databricks customers. You will help shape the future big data and machine learning landscape for leading Fortune 500 organisations.</p>\n<p>This role also includes a people-leadership component: you will be responsible for core aspects of building and managing the Resident Solutions Architect team. Through your oversight and mentorship, this team will guide our largest customers, implementing pipelines spanning data engineering through model building and deployment, plus other technical tasks to help customers get value out of their data with Databricks.</p>\n<p>Beyond people leadership, your responsibilities will include owning the delivery of customer projects in your region to ensure they are managed and delivered to target and exacting standards. 
You will be an ambassador for Services and their value in the region, will represent the organisation in steering committees, and will work with cross-functional teams and leaders to ensure Services support the development of the local business.</p>\n<p>Key responsibilities include:</p>\n<ul>\n<li>Achieving regional team targets for billable utilisation, hiring and revenue</li>\n<li>Partnering with account executives, customer success and field engineering leaders while guiding Resident Solutions Architects to achieve success with professional services projects for customers</li>\n<li>Helping resolve customer concerns on strategic accounts and professional services engagements</li>\n<li>Analysing operational processes and escalation procedures and performing training needs assessments to identify opportunities to improve service delivery and the value delivered to customers</li>\n<li>Managing a team of Resident Solution Architects and acting as a supportive manager, including handling escalations, mentoring team members, and building a career path for the assigned team members</li>\n</ul>\n<p>Requirements include:</p>\n<ul>\n<li>Proven leadership experience in managing and guiding consulting, delivery, or solution architecture teams, ensuring successful project execution and team development</li>\n<li>Strong technical background as a hands-on Solutions Architect, enabling you to effectively support and mentor technical architects under your leadership while driving strategic initiatives</li>\n<li>Experience driving software platform adoption in Fortune 500 organisations in markets such as finance, media, retail, telco, energy, and healthcare</li>\n<li>Experience implementing project schedules and managing customer engagements</li>\n<li>Experience with Databricks products, Spark ecosystem, and direct competitors</li>\n<li>Travel is required up to 10%, more at peak times</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_3a936a2d-e4a","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8439078002","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Leadership","Strategic planning","Project management","Data engineering","Machine learning","Spark ecosystem","Databricks products"],"x-skills-preferred":["Cloud computing","DevOps","Agile methodologies"],"datePosted":"2026-04-18T15:57:06.553Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Paris, France"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Leadership, Strategic planning, Project management, Data engineering, Machine learning, Spark ecosystem, Databricks products, Cloud computing, DevOps, Agile methodologies"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_53ee0ef3-c62"},"title":"Staff Data Engineer, Analytics Data Engineering","description":"<p>We are looking for a Staff Data Engineer to join our Analytics Data Engineering (ADE) team within Data Science &amp; AI Platform. As a Staff Data Engineer, you will be responsible for solving cross-cutting data challenges that span multiple lines of business while driving standardization in how we build, deploy, and govern analytics pipelines across Dropbox.</p>\n<p>This is not a maintenance role. We are modernizing our analytics platform, upgrading orchestration infrastructure, building shared and reusable data models with conformed dimensions, establishing a certified metrics framework, and laying the foundation for AI-native data development. 
You will partner closely with Data Science, Data Infrastructure, Product Engineering, and Business Intelligence teams to make this happen.</p>\n<p>You will play a crucial role in establishing analytics engineering standards, designing scalable data models, and driving cross-functional alignment on data governance. You will get substantial exposure to senior leadership, shape the technical direction of analytics infrastructure at Dropbox, and directly influence how data powers product and business decisions.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Lead the design and implementation of shared, reusable data models, defining shared fact tables, conformed dimensions, and a semantic/metrics layer that serves as the single source of truth across analytics functions</li>\n</ul>\n<ul>\n<li>Drive standardization of data engineering practices across ADE and functional analytics teams, including pipeline patterns, CI/CD workflows, naming conventions, and data modeling standards</li>\n</ul>\n<ul>\n<li>Partner with Data Infrastructure to modernize orchestration, improve pipeline decomposition, and establish secure dev/test environments with production data access</li>\n</ul>\n<ul>\n<li>Architect and implement a shift-left data governance strategy, working with upstream data producers to establish data contracts, SLOs, and code-enforced quality gates that catch issues before production</li>\n</ul>\n<ul>\n<li>Collaborate with Data Science leads and Product Management to translate metric definitions into reliable, certified data pipelines that power executive dashboards, WBR reporting, and growth measurement</li>\n</ul>\n<ul>\n<li>Reduce operational burden by improving pipeline granularity, observability, and failure recovery, establishing runbooks and alerting standards that make on-call sustainable</li>\n</ul>\n<ul>\n<li>Evaluate and integrate AI-native tooling into the data development lifecycle, enabling conversational data exploration with guardrails and AI-assisted pipeline 
development</li>\n</ul>\n<p>Requirements:</p>\n<ul>\n<li>BS degree in Computer Science or related technical field, or equivalent technical experience</li>\n</ul>\n<ul>\n<li>12+ years of experience in data engineering or analytics engineering with increasing scope and technical leadership</li>\n</ul>\n<ul>\n<li>12+ years of SQL experience, including complex analytical queries, window functions, and performance optimization at scale (Spark SQL)</li>\n</ul>\n<ul>\n<li>8+ years of Python development experience, including building and maintaining production data pipelines</li>\n</ul>\n<ul>\n<li>Deep expertise in dimensional data modeling, schema design, and scalable data architecture, with hands-on experience building shared data models across multiple business domains</li>\n</ul>\n<ul>\n<li>Strong experience with orchestration tools (Airflow strongly preferred) and dbt, including pipeline design, scheduling strategies, and failure recovery patterns</li>\n</ul>\n<p>Preferred Qualifications:</p>\n<ul>\n<li>Experience with Databricks (Unity Catalog, Delta Lake) and modern lakehouse architectures</li>\n</ul>\n<ul>\n<li>Experience leading orchestration or platform modernization efforts at scale</li>\n</ul>\n<ul>\n<li>Familiarity with data governance and observability tools such as Atlan, Monte Carlo, Great Expectations, or similar</li>\n</ul>\n<ul>\n<li>Experience building or contributing to a metrics/semantic layer (dbt MetricFlow, Databricks Metric Views, or equivalent)</li>\n</ul>\n<ul>\n<li>Track record of establishing data engineering standards and best practices in a federated analytics organization</li>\n</ul>\n<p>Compensation:</p>\n<p>US Zone 2 $198,900-$269,100 USD</p>\n<p>US Zone 3 $176,800-$239,200 USD</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_53ee0ef3-c62","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Dropbox","sameAs":"https://www.dropbox.com/","logo":"https://logos.yubhub.co/dropbox.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/dropbox/jobs/7595183","x-work-arrangement":"remote","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"$198,900-$269,100 USD","x-skills-required":["SQL","Python","Dimensional data modeling","Schema design","Scalable data architecture","Orchestration tools","dbt"],"x-skills-preferred":["Databricks","Modern lakehouse architectures","Data governance and observability tools","Metrics/semantic layer"],"datePosted":"2026-04-18T15:56:35.190Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Remote - US: Select locations"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"SQL, Python, Dimensional data modeling, Schema design, Scalable data architecture, Orchestration tools, dbt, Databricks, Modern lakehouse architectures, Data governance and observability tools, Metrics/semantic layer","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":198900,"maxValue":269100,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_ed4bd662-c67"},"title":"Senior Solutions Architect, Commercial - San Francisco","description":"<p>We are looking for a Senior Solutions Architect to support our Commercial Sales team in a consumption-based business where customer success drives revenue growth. 
You&#39;ll work across the full sales cycle, from initial technical evaluations with new prospects through helping existing customers expand their use of Temporal in production.</p>\n<p>The nature of our business means you&#39;ll spend significant time helping customers who&#39;ve already adopted Temporal unlock more value by expanding into additional use cases, teams, and workloads. This is a high-velocity, technically deep role.</p>\n<p>You&#39;ll partner with developers, architects, and engineering leaders at fast-moving companies to help them understand how Temporal fits into their existing architecture and prove out value through hands-on technical work.</p>\n<p>You&#39;ll be working in a consumption model where usage grows over time, which means building strong technical relationships and staying engaged with accounts as they scale.</p>\n<p>As an early member of a growing team, you should be comfortable with ambiguity, frequent context switching, and creating leverage through reusable assets that help the broader team move faster.</p>\n<p>Must reside in San Francisco, CA</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_ed4bd662-c67","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Temporal","sameAs":"https://temporal.io/","logo":"https://logos.yubhub.co/temporal.io.png"},"x-apply-url":"https://job-boards.greenhouse.io/temporaltechnologies/jobs/5037692007","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$200,000 - $250,000 OTE","x-skills-required":["Strong development background with hands-on coding experience in at least one modern language (Go, Java, TypeScript, or Python)","Deep understanding of distributed systems (reliability, observability, and fault tolerance)","Proven experience in a pre-sales, customer-facing engineering, or solutions architecture role 
working with technical buyers","Exceptional time management and prioritization skills with the ability to thrive in high-volume environments","Enthusiasm for AI/ML technologies and eagerness to learn about emerging use cases in agentic workflows and LLM orchestration"],"x-skills-preferred":["Experience with workflow engines, event-driven architectures, or orchestration technologies (Temporal, Cadence, or similar)","Background articulating the value of commercial SaaS offerings that compete with open source alternatives (Redis, Kafka, Databricks, etc.)","Contributions to developer tooling, open source projects, or technical content","Strong cross-functional collaboration skills with the ability to serve as a technical bridge between customers and internal teams","Certifications with any of the major cloud providers (AWS, GCP, or Azure) or foundational AI model providers (OpenAI, Anthropic, or Google)"],"datePosted":"2026-04-18T15:56:33.427Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"United States - Remote Opportunity"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Strong development background with hands-on coding experience in at least one modern language (Go, Java, TypeScript, or Python), Deep understanding of distributed systems (reliability, observability, and fault tolerance), Proven experience in a pre-sales, customer-facing engineering, or solutions architecture role working with technical buyers, Exceptional time management and prioritization skills with the ability to thrive in high-volume environments, Enthusiasm for AI/ML technologies and eagerness to learn about emerging use cases in agentic workflows and LLM orchestration, Experience with workflow engines, event-driven architectures, or orchestration technologies (Temporal, Cadence, or similar), Background articulating the value of commercial SaaS offerings that compete with 
open source alternatives (Redis, Kafka, Databricks, etc.), Contributions to developer tooling, open source projects, or technical content, Strong cross-functional collaboration skills with the ability to serve as a technical bridge between customers and internal teams, Certifications with any of the major cloud providers (AWS, GCP, or Azure) or foundational AI model providers (OpenAI, Anthropic, or Google)","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":200000,"maxValue":250000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_aeba45bc-3e4"},"title":"Senior Solutions Engineer","description":"<p>About Mixpanel</p>\n<p>Mixpanel turns data clarity into innovation. Trusted by more than 29,000 companies, including Workday, Pinterest, LG, and Rakuten Viber, Mixpanel’s AI-first digital analytics help teams accelerate adoption, improve retention, and ship with confidence.</p>\n<p>Powering this is an industry-leading platform that combines product and web analytics, session replay, experimentation, feature flags, and metric trees. Mixpanel delivers insights that customers trust.</p>\n<p>Visit mixpanel.com to learn more.</p>\n<p>About the Customer Success &amp; Solutions Engineering Team</p>\n<p>Mixpanel’s Customer Success &amp; Solutions Engineering teams are analytics consultants who embed themselves within our enterprise customer teams to drive our customer’s business outcomes. 
We work with prospects and customers throughout the customer journey to understand what drives value and serve as the technical counterpart to our Sales organization to deliver on that value.</p>\n<p>You will partner closely with Account Executives, Account Managers, Product, Engineering, and Support to successfully roll out self-serve analytics within our customer’s organizations, help the customer manage change, execute on technical projects and services that delight our customers and ultimately drive ROI on the customer’s Mixpanel investment.</p>\n<p>About the Role</p>\n<p>Our SEs are inquisitive, nimble, and able to clearly articulate the technical benefits and requirements of Mixpanel to developers and product managers, while also communicating the business value of our product to high-level executives. In your first month, you’ll become a Mixpanel expert, both in features and functionality as well as implementation. You’ll have the opportunity to shadow customer calls and demos with current Sales Engineers and Account Executives while learning to articulate our value proposition. You’ll also be trained on Mixpanel’s internal systems and tools to set you up for success.</p>\n<p>Within your first three months, you’ll be directly involved in deal cycles with Commercial Account Executives. You’ll lead the technical qualification for customer use cases and deliver customized demos for prospects. You’ll work directly with leadership at the prospect’s organization to understand business challenges that can be solved through an analytics platform and consult on how Mixpanel can address those challenges to achieve a strong ROI. 
You’ll also work with the prospect’s business and technical teams to scope and execute proof-of-concept projects to establish Mixpanel’s value, including consulting on data ingestion methods, overall architecture, success criteria, and rollout strategies for analytics tools across an organization.</p>\n<p>Responsibilities</p>\n<p>Serve as a trusted technical advisor for prospects, providing strategic consultation on data architecture, governance, instrumentation, and business outcomes.</p>\n<p>Communicate and consult effectively at all levels of the customer’s organization to earn trust and influence buying decisions.</p>\n<p>Bridge the technical-business gap, working with senior stakeholders to define success for proof-of-concepts and ensuring successful execution and outcomes.</p>\n<p>Leverage your Mixpanel expertise and technical/consultative skills to impart best practices throughout proof-of-concept projects.</p>\n<p>Partner with Account Executives to drive revenue growth, serving as the key technical contact for customers.</p>\n<p>Partner with post-sales teams to ensure that pre-sales value propositions translate into tangible post-sales results.</p>\n<p>Develop relationships and uncover the needs of key technical stakeholders within your assigned book of business.</p>\n<p>Be the “Voice of the Prospect” by collecting feedback from potential Mixpanel customers and sharing it with the Product team.</p>\n<p>We&#39;re Looking For Someone Who Has</p>\n<p>The ability to communicate with stakeholders at all levels, from discussing APIs with developers to organizational efficiency with CIOs.</p>\n<p>A demonstrated track record of qualifying and selling technical solutions to executive stakeholders.</p>\n<p>6+ years of experience in a Software-as-a-Service Sales Engineering or related role.</p>\n<p>Experience in data querying, modeling, and transformation using tools such as SQL, dbt, Python, Business Intelligence platforms, or Product Analytics 
tools.</p>\n<p>Familiarity with databases and cloud data warehouses (e.g., Google Cloud, Amazon Redshift, Microsoft Azure, Snowflake, Databricks).</p>\n<p>A successful record of experience in sales engineering, customer success, client-facing professional services, consulting, or technical project management.</p>\n<p>Excellent written, analytical, communication, and presentation skills.</p>\n<p>Strong process and project delivery discipline.</p>\n<p>The ability to travel.</p>\n<p>Fluency in multiple languages; German preferred.</p>\n<p>Benefits and Perks</p>\n<p>Comprehensive Medical, Vision, and Dental Care</p>\n<p>Mental Wellness Benefit</p>\n<p>Generous Vacation Policy &amp; Additional Company Holidays</p>\n<p>Enhanced Parental Leave</p>\n<p>Volunteer Time Off</p>\n<p>Additional US Benefits: Pre-Tax Benefits including 401(K), Wellness Benefit, Holiday Break</p>\n<p>Culture Values</p>\n<p>Make Bold Bets: We choose courageous action over comfortable progress.</p>\n<p>Innovate with Insight: We tackle decisions with rigor and judgment - combining data, experience and collective wisdom to drive powerful outcomes.</p>\n<p>One Team: We collaborate across boundaries to achieve far greater impact than any of us could accomplish alone.</p>\n<p>Candor with Connection: We build meaningful relationships that enable honest feedback and direct conversations.</p>\n<p>Champion the Customer: We seek to deeply understand our customers’ needs, ensuring their success is our north star.</p>\n<p>Why choose Mixpanel?</p>\n<p>We’re a leader in analytics with over 9,000 customers and $277M raised from prominent investors, including Andreessen Horowitz, Sequoia, YC, and, most recently, Bain Capital.</p>\n<p>Mixpanel’s pioneering event-based data analytics platform offers a powerful yet simple solution for companies to understand user behaviors and easily track overarching company success metrics.</p>\n<p>Our accomplished teams continuously facilitate our expansion by tackling the ever-evolving 
challenges tied to scaling, reliability, design, and service.</p>\n<p>Choosing to work at Mixpanel means you’ll be helping the world’s most innovative companies learn from their data so they can make better decisions.</p>\n<p>Mixpanel is an equal opportunity employer supporting workforce diversity.</p>\n<p>At Mixpanel, we are focused on the things that really matter: our people, our customers, and our partners, out of a recognition that those relationships are the most valuable assets we have.</p>\n<p>We actively encourage women, people with disabilities, veterans, underrepresented minorities, and LGBTQ+ people to apply.</p>\n<p>We do not discriminate on the basis of race, religion, color, national origin, gender, gender identity or expression, sexual orientation, age, marital status, veteran status, or disability status.</p>\n<p>Pursuant to the San Francisco Fair Chance Ordinance or other similar laws that may be applicable, we will consider for employment qualified applicants with arrest and conviction records.</p>\n<p>We’ve immersed ourselves in our Culture and Values as our guiding principles for the impact we want to have and the future we are building.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_aeba45bc-3e4","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Mixpanel","sameAs":"https://mixpanel.com","logo":"https://logos.yubhub.co/mixpanel.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/mixpanel/jobs/7407407","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["SQL","dbt","Python","Business Intelligence platforms","Product Analytics tools","Databases","Cloud data warehouses","Google Cloud","Amazon Redshift","Microsoft 
Azure","Snowflake","Databricks"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:56:33.243Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"London, UK (Hybrid)"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"SQL, dbt, Python, Business Intelligence platforms, Product Analytics tools, Databases, Cloud data warehouses, Google Cloud, Amazon Redshift, Microsoft Azure, Snowflake, Databricks"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_5eeea820-e3b"},"title":"Global Learning Operations & Governance Manager","description":"<p><strong>Job Summary</strong></p>\n<p>The Global Learning Operations &amp; Governance Manager is a senior, enterprise-level leader responsible for architecting the AI-enabled operating system that powers ENGAGE (aka Training Operations Team) globally.</p>\n<p>This role establishes and leads the Global Center of Excellence (CoE), designing a scalable hub-and-spoke model that transforms regional execution into predictable, revenue-aligned, and financially disciplined outcomes.</p>\n<p><strong>Why This Role Matters</strong></p>\n<p>This is a transformation year for Learning &amp; Enablement (L&amp;E). As learner demand, partners, SKUs, and regions scale, traditional operating models no longer hold. 
Success requires predictive visibility, automation-first execution, and globally consistent, governed operations, not incremental process improvement.</p>\n<p><strong>Key Responsibilities</strong></p>\n<p><strong><em> Global Operating Model &amp; CoE Leadership </em></strong></p>\n<ul>\n<li>Establish and lead the Global Center of Excellence as the owner of operating standards, governance, tooling, and analytics.</li>\n<li>Design and operationalize a hub-and-spoke model that balances global consistency with strong regional execution.</li>\n<li>Standardize planning cadences, decision rights, playbooks, and forecasting frameworks.</li>\n<li>Embed accountability and performance rigor into QBRs, MBRs, and global reviews.</li>\n</ul>\n<p><strong><em> AI-Enabled Operations &amp; Telemetry </em></strong></p>\n<ul>\n<li>Enhance and lead ENGAGE’s AI-first operating roadmap.</li>\n<li>Operationalize real-time KPIs, dashboards, and predictive exception signals to surface demand, capacity, and risk.</li>\n<li>Institutionalize intelligent intake, agentic workflows, and automation across scheduling, forecasting, certification, and partner management.</li>\n<li>Establish governance principles for AI and automation, ensuring reliability, transparency, and measurable business impact.</li>\n</ul>\n<p><strong><em> Commercial Strategy &amp; Revenue Discipline </em></strong></p>\n<ul>\n<li>Define and institutionalize best-in-class revenue practices, including SKU governance, forecasting discipline, revenue recognition guardrails, and margin optimization.</li>\n<li>Translate operational gains into measurable commercial impact (capacity unlock, improved utilization, fill-rate gains, partner spend reduction).</li>\n<li>Partner with Revenue Operations, Finance, and Commercialization to ensure executive-grade reporting and P&amp;L transparency.</li>\n<li>Embed financial controls aligned with audit readiness and long-term scalability.</li>\n</ul>\n<p><strong><em> Enterprise Governance &amp; 
Risk Management </em></strong></p>\n<ul>\n<li>Build a globally consistent environment across planning, reporting, exception management, and automation oversight.</li>\n<li>Ensure audit resilience, compliance readiness, and structured risk mitigation across regions.</li>\n<li>Establish clear decision rights and ownership models to eliminate ambiguity and friction.</li>\n</ul>\n<p><strong><em> Change Leadership &amp; Global Oversight </em></strong></p>\n<ul>\n<li>Lead global adoption of standardized processes, AI-enabled workflows, and governance frameworks.</li>\n<li>Define and elevate operational maturity benchmarks across regions.</li>\n<li>Act as a strategic advisor to L&amp;E and Strat-Ops leadership on scale, risk, and operating evolution.</li>\n<li>Influence without authority across distributed, matrixed teams to embed discipline and alignment.</li>\n</ul>\n<p><strong>What Success Looks Like</strong></p>\n<ul>\n<li>Building a predictable, standardized global operating model with clear decision rights and executive-ready reporting.</li>\n<li>Real-time, predictive visibility into global demand, capacity, performance, and exceptions.</li>\n<li>Demonstrable conversion of operational improvements into capacity unlocks, reduced partner spend, and revenue uplift.</li>\n<li>Automation and AI embedded into daily operations, materially reducing manual overhead and cycle times.</li>\n<li>High regional adoption of CoE standards and tooling.</li>\n</ul>\n<p><strong>Required Qualifications</strong></p>\n<ul>\n<li>10+ years in GTM Operations, Revenue Operations, or Program Leadership within global high-growth technology companies.</li>\n<li>Proven experience building and scaling enterprise operating models and governance programs.</li>\n<li>Demonstrated success leading automation, analytics, or AI-driven transformation initiatives with measurable impact.</li>\n<li>Deep commercial fluency, including forecasting, revenue operations, utilization economics, and P&amp;L 
drivers.</li>\n<li>Strong partnership experience with Finance, Legal, Strategy, and Revenue leadership.</li>\n<li>Exceptional analytical rigor and executive-level communication skills.</li>\n</ul>\n<p><strong>Preferred Qualifications</strong></p>\n<ul>\n<li>Exposure to audit processes, SOX readiness, or public-company financial controls.</li>\n<li>Familiarity with SaaS, data, AI, or cloud business models.</li>\n<li>Experience leading global change management initiatives.</li>\n<li>Hands-on experience with analytics platforms or Databricks products.</li>\n<li>Demonstrated success embedding AI into operational workflows at scale.</li>\n</ul>\n<p><strong>Why Join Us</strong></p>\n<p>You will architect the global operating system that enables ENGAGE to scale with speed, discipline, and financial rigor. This is a high-visibility role with enterprise impact, shaping how Learning &amp; Enablement operates in an AI-first world. If you’re energized by building systems that convert complexity into predictable performance, and by embedding governance without sacrificing agility, this role offers the opportunity to leave a lasting operational legacy.</p>\n<p><strong>Pay Range Transparency</strong></p>\n<p>Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role is listed below and represents the expected base salary range for non-commissionable roles or on-target earnings for commissionable roles. Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location. Based on the factors above, Databricks anticipates utilizing the full width of the range. The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above. 
For more information regarding which range your location is in, visit our page here.</p>\n<p>Zone 1 Pay Range: $148,600-$204,250 USD; Zone 2 Pay Range: $133,700-$183,800 USD; Zone 3 Pay Range: $126,200-$173,600 USD; Zone 4 Pay Range: $118,900-$163,450 USD</p>","url":"https://yubhub.co/jobs/job_5eeea820-e3b","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8431973002","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$148,600-$204,250 USD","x-skills-required":["GTM Operations","Revenue Operations","Program Leadership","Automation","Analytics","AI","Forecasting","Utilization Economics","P&L Drivers"],"x-skills-preferred":["Audit Processes","SOX Readiness","Public-Company Financial Controls","SaaS","Data","Cloud Business Models","Global Change Management Initiatives","Analytics Platforms","Databricks Products"],"datePosted":"2026-04-18T15:56:30.476Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"United States"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"GTM Operations, Revenue Operations, Program Leadership, Automation, Analytics, AI, Forecasting, Utilization Economics, P&L Drivers, Audit Processes, SOX Readiness, Public-Company Financial Controls, SaaS, Data, Cloud Business Models, Global Change Management Initiatives, Analytics Platforms, Databricks 
Products","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":148600,"maxValue":204250,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_0a154c39-08a"},"title":"Senior Machine Learning Platform Engineer (Platform)","description":"<p>Ready to be pushed beyond what you think you’re capable of?</p>\n<p>At Coinbase, our mission is to increase economic freedom in the world.</p>\n<p>We&#39;re seeking a Senior Machine Learning Platform Engineer to join our Machine Learning Platform team. The team builds the foundational components for feature engineering and training/serving ML models at Coinbase. Our platform is used to combat fraud, personalize user experiences, and to analyze blockchains.</p>\n<p>As a Senior Machine Learning Platform Engineer, you will:</p>\n<p>Form a deep understanding of our Machine Learning Engineers’ needs and our current capabilities and gaps. Mentor our talented junior engineers on how to build high quality software, and take their skills to the next level. Continually raise our engineering standards to maintain high-availability and low-latency for our ML inference infrastructure that runs both predictive ML models and LLMs. Optimize low latency streaming pipelines to give our ML models the freshest and highest quality data. Evangelize state-of-the-art practices on building high-performance distributed training jobs that process large volumes of data. Build tooling to observe the quality of data going into our models and to detect degradations impacting model performance.</p>\n<p>What we look for in you:</p>\n<p>5+ yrs of industry experience as a Software Engineer. Strong understanding of distributed systems. Lead by example through high quality code and excellent communication skills. Great sense of design, and can bring clarity to complex technical requirements. 
Treat other engineers as customers, and have an obsessive focus on delivering them a seamless experience. Mastery of the fundamentals, such that you can quickly jump between many varied technologies and still operate at a high level. Demonstrated ability to responsibly use generative AI tools and copilots (e.g., LibreChat, Gemini, Glean) in daily workflows, continuously learn as tools evolve, and apply human-in-the-loop practices to deliver business-ready outputs and drive measurable improvements in efficiency, cost, and quality.</p>\n<p>Nice to haves:</p>\n<p>Experience building ML models and working with ML systems. Experience working on a platform team, and building developer tooling. Experience with the technologies we use (Python, Golang, Ray, Tecton, Spark, Airflow, Databricks, Snowflake, and DynamoDB).</p>\n<p>Job ID: P75535</p>\n<p>Pay Transparency Notice: Depending on your work location, the target annual base salary for this position can range as detailed below. Total compensation may also include equity and bonus eligibility and benefits (including medical, dental, vision and 401(k)). 
Annual base salary range (excluding equity and bonus): $186,065-$225,000 USD</p>","url":"https://yubhub.co/jobs/job_0a154c39-08a","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Coinbase","sameAs":"https://www.coinbase.com/","logo":"https://logos.yubhub.co/coinbase.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/coinbase/jobs/7604203","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$186,065-$225,000 USD","x-skills-required":["distributed systems","high-quality code","excellent communication skills","design","fundamentals","generative AI tools","copilots"],"x-skills-preferred":["ML models","ML systems","platform team","developer tooling","Python","Golang","Ray","Tecton","Spark","Airflow","Databricks","Snowflake","DynamoDB"],"datePosted":"2026-04-18T15:56:24.447Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Remote - USA"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"distributed systems, high-quality code, excellent communication skills, design, fundamentals, generative AI tools, copilots, ML models, ML systems, platform team, developer tooling, Python, Golang, Ray, Tecton, Spark, Airflow, Databricks, Snowflake, DynamoDB","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":186065,"maxValue":225000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_df625390-342"},"title":"AI Analytics Engineer (Marketing Analytics)","description":"<p>We&#39;re seeking an AI Analytics Engineer to join our Data Science &amp; Analytics team. 
As a high-impact, early-career role, you will be responsible for building the canonical data infrastructure, owning critical dashboards, and enabling Marketing stakeholders to execute faster, more confident, data-driven decisions.</p>\n<p>You will design and maintain trustworthy data models for core marketing metrics, manage the full lifecycle from prototyping through production, and develop and govern dbt data pipelines. You will also build and optimize dashboards that deliver real-time, self-serve insights across high-priority marketing areas, drive data independence for Marketing stakeholders, and collaborate with the Marketing team and data partners to establish the AI Business Context layer for marketing use cases.</p>\n<p>You will serve as the primary data partner for marketing managers, demand generation teams, and leadership, translating complex data insights into clear business recommendations via dashboards, memos, and presentations. You will achieve a comprehensive mastery of Airtable&#39;s marketing data models, existing pipelines, and BI tools within the first 6 months, becoming the definitive internal expert.</p>\n<p>This is a genuinely AI-native role, requiring active, demonstrated daily use of AI coding tools such as Cursor, Claude, ChatGPT, and Gemini. 
You must provide specific, concrete examples of how these tools are integral to your work, moving beyond simple familiarity.</p>","url":"https://yubhub.co/jobs/job_df625390-342","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Airtable","sameAs":"https://airtable.com/","logo":"https://logos.yubhub.co/airtable.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/airtable/jobs/8434307002","x-work-arrangement":"remote","x-experience-level":"entry","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Expert-level SQL","Proficiency with dbt or equivalent data transformation tools","Experience with BI and visualization platforms (Looker, Omni, Tableau, Hex, or similar)","Active, demonstrated daily use of AI coding tools (Cursor, Claude, ChatGPT, Gemini)","Mandatory use of GitHub for version control in a standard development workflow"],"x-skills-preferred":["Python for data work (pandas, ETL scripting, or analysis)","Prior exposure to marketing data concepts: attribution, funnel metrics, lead scoring, or campaign performance","Familiarity with CRM (Salesforce) or marketing automation platforms (Marketo)","Experience with Databricks or cloud data warehouses","A public portfolio showcasing data or AI-assisted engineering work (GitHub, personal projects, Kaggle)"],"datePosted":"2026-04-18T15:56:13.818Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA; Austin, TX; New York, NY"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Expert-level SQL, Proficiency with dbt or equivalent data transformation tools, Experience with BI and visualization platforms (Looker, Omni, Tableau, Hex, or similar), Active, demonstrated daily use of AI coding tools (Cursor, Claude, 
ChatGPT, Gemini), Mandatory use of GitHub for version control in a standard development workflow, Python for data work (pandas, ETL scripting, or analysis), Prior exposure to marketing data concepts: attribution, funnel metrics, lead scoring, or campaign performance, Familiarity with CRM (Salesforce) or marketing automation platforms (Marketo), Experience with Databricks or cloud data warehouses, A public portfolio showcasing data or AI-assisted engineering work (GitHub, personal projects, Kaggle)"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_5ceb4835-0f1"},"title":"Manager, Professional Services","description":"<p>As a Manager, Professional Services, you will work with clients on short to medium-term customer engagements on their big data challenges using the Databricks platform. You will provide data engineering, data science, and cloud technology projects which require integrating with client systems, training, and other technical tasks to help customers get the most value out of their data.</p>\n<p>The impact you will have:</p>\n<ul>\n<li>You will work on a variety of impactful customer technical big data projects which may include building reference architectures, how-to&#39;s, and production-grade MVPs.</li>\n<li>Guide strategic customers as they implement transformational big data projects, 3rd party migrations, including end-to-end design, build, and deployment of industry-leading big data and AI applications.</li>\n<li>Consult on architecture and design; bootstrap or implement strategic customer projects which lead to a customer&#39;s successful understanding, evaluation, and adoption of Databricks.</li>\n<li>Work with Engineering and Databricks Customer Support to provide product and implementation feedback and to guide rapid resolution for engagement-specific product and support issues.</li>\n</ul>\n<p>What we look for:</p>\n<ul>\n<li>10+ years of experience with Big Data 
Technologies such as Apache Spark, Kafka, Cloud Native, and Data Lakes in a customer-facing post-sales, technical architecture, or consulting role.</li>\n<li>4+ years of people management experience, managing a team of Data Engineers, Data Architects, etc.</li>\n<li>6+ years of experience working on Big Data Architectures independently.</li>\n<li>Experience working across Cloud Platforms (GCP/AWS/Azure).</li>\n<li>Experience working on Databricks platform is a plus.</li>\n<li>Documentation and white-boarding skills.</li>\n<li>Build skills in technical areas which support the deployment and integration of Databricks-based solutions to complete customer projects.</li>\n<li>Willingness to travel for onsite customer engagements within India.</li>\n</ul>","url":"https://yubhub.co/jobs/job_5ceb4835-0f1","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8503068002","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Apache Spark","Kafka","Cloud Native","Data Lakes","Big Data Technologies","Data Engineering","Data Science","Cloud Technology","People Management","Team Leadership"],"x-skills-preferred":["Databricks","GCP","AWS","Azure","Documentation","White-boarding"],"datePosted":"2026-04-18T15:56:03.190Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Remote - India"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Apache Spark, Kafka, Cloud Native, Data Lakes, Big Data Technologies, Data Engineering, Data Science, Cloud Technology, People Management, Team 
Leadership, Databricks, GCP, AWS, Azure, Documentation, White-boarding"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_84875ccd-0a5"},"title":"Senior Partner Marketing Manager","description":"<p>As a Senior Partner Marketing Manager, you&#39;ll play a critical role in shaping and executing co-marketing initiatives with some of our most important technology partners globally.</p>\n<p>Your work will amplify the reach and impact of dbt across the modern data stack, helping to elevate our brand and drive growth through the ecosystem.</p>\n<p>This is a highly cross-functional role where strategic thinking, creativity, and strong collaboration skills will be key to success.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Lead the creation and execution of marketing campaigns, programs, events, and activities with strategic technology partners.</li>\n<li>Collaborate closely with the Revenue Marketing, Partnerships, and Product Marketing teams to ensure GTM partner plans align with dbt Labs&#39; broader business goals.</li>\n<li>Build and nurture relationships with marketing counterparts at key partners like Snowflake, Google, AWS, Microsoft, and Databricks to align on co-marketing efforts and shared objectives.</li>\n<li>Clearly articulate the value of dbt to partners and support them in promoting the platform internally and to their customer base.</li>\n<li>Own the development of joint messaging and co-branded assets, including blogs, webinars, solution briefs, and presentation decks, ensuring alignment and consistency across all public-facing content.</li>\n<li>Create internal enablement materials to educate and empower sales teams to leverage partner campaigns and initiatives.</li>\n<li>Gain a deep understanding of partner business strategies and priorities; design co-marketing programs that provide mutual value.</li>\n<li>Set and manage OKRs, track program performance, and deliver 
quarterly reviews with partners to assess impact, identify opportunities, and ensure strategic alignment.</li>\n<li>Develop annual GTM marketing plans tailored to individual partners, accounting for geographic and vertical-specific nuances.</li>\n<li>Exercise strategic judgment in deciding which partner activities to pursue and how best to allocate time and resources for maximum impact.</li>\n</ul>\n<p><strong>Requirements</strong></p>\n<ul>\n<li>8+ years of experience in B2B marketing or similar, particularly within the data or software industry, with a strong track record of building and executing successful partner marketing programs.</li>\n<li>A &#39;builder&#39; mindset: you enjoy solving problems, creating structure where none exists, and working cross-functionally to drive measurable outcomes.</li>\n<li>Deep familiarity with the modern data ecosystem and players such as Snowflake, Google, AWS, Microsoft, and Databricks.</li>\n<li>The ability to navigate complex partner organizations and manage relationships with multiple stakeholders across competing interests.</li>\n<li>Strong storytelling and positioning skills: you know how to distill joint value propositions into compelling messaging and content.</li>\n<li>Comfort operating in a fast-paced, dynamic environment with high levels of ambiguity.</li>\n<li>A broad understanding of integrated marketing strategies, including digital campaigns, field marketing, and industry events.</li>\n<li>Exceptional communication skills, including concise writing and confident presentation abilities, especially with senior stakeholders.</li>\n</ul>\n<p><strong>Nice to Have</strong></p>\n<ul>\n<li>Experience working asynchronously within a remote, distributed team.</li>\n<li>Prior experience working for or closely with any of dbt Labs&#39; strategic partners.</li>\n<li>Familiarity with the role dbt Labs plays in the cloud data warehouse ecosystem and the modern data 
stack.</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<ul>\n<li>Unlimited vacation time with a culture that actively encourages time off</li>\n<li>401k plan with 3% guaranteed company contribution</li>\n<li>Comprehensive healthcare coverage</li>\n<li>Generous paid parental leave</li>\n<li>Flexible stipends for:</li>\n<li>Health &amp; Wellness</li>\n<li>Home Office Setup</li>\n<li>Cell Phone &amp; Internet</li>\n<li>Learning &amp; Development</li>\n<li>Office Space</li>\n</ul>\n<p><strong>Compensation</strong></p>\n<p>We offer competitive compensation packages commensurate with experience, including salary, RSUs, and where applicable, performance-based pay. Our Talent Acquisition Team can answer questions around dbt Labs&#39; total rewards during your interview process.</p>\n<p>In select locations (including Austin, Boston, Chicago, Denver, Los Angeles, Philadelphia, New York City, San Francisco, Washington, DC, and Seattle), an alternate range may apply, as specified below.</p>\n<ul>\n<li>The typical starting salary range for this role is: $132,000-$188,700</li>\n<li>The typical starting salary range for this role in the select locations listed is: $147,000-$209,000</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_84875ccd-0a5","directApply":true,"hiringOrganization":{"@type":"Organization","name":"dbt Labs","sameAs":"https://www.getdbt.com/","logo":"https://logos.yubhub.co/getdbt.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/dbtlabsinc/jobs/4673163005","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$132,000-$188,700","x-skills-required":["B2B marketing","Partner marketing","Digital campaigns","Field marketing","Industry events","Cloud data warehouse ecosystem","Modern data stack","Data engineering","Analytics 
engineering","Snowflake","Google","AWS","Microsoft","Databricks"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:55:34.081Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"US - Remote"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Marketing","industry":"Technology","skills":"B2B marketing, Partner marketing, Digital campaigns, Field marketing, Industry events, Cloud data warehouse ecosystem, Modern data stack, Data engineering, Analytics engineering, Snowflake, Google, AWS, Microsoft, Databricks","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":132000,"maxValue":188700,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_8317ba42-502"},"title":"Senior Technical Solutions Engineer (Platform)","description":"<p>We are seeking a highly skilled Frontline Senior Technical Solutions Engineer with 7+ years of experience to join our Platform Support team.</p>\n<p>This role is pivotal in delivering exceptional support for our Databricks Data Intelligence platform, addressing complex technical challenges, and ensuring the seamless operation of our data solutions.</p>\n<p>As a frontline engineer, you will be the primary point of contact for critical issues, working closely with both internal teams and customers to resolve high-impact problems and drive platform improvements.</p>\n<p>Key Responsibilities:</p>\n<ul>\n<li>Frontline Support: Serve as the primary technical point of contact for escalated issues related to the Databricks Data Intelligence platform. Provide expert-level troubleshooting, diagnostics, and resolution for complex problems affecting system performance and reliability.</li>\n</ul>\n<ul>\n<li>Customer Interaction: Engage with customers directly to understand their technical issues and requirements. 
Provide timely, clear, and actionable solutions to ensure high levels of customer satisfaction.</li>\n</ul>\n<ul>\n<li>Incident Management: Lead the resolution of high-priority incidents, coordinating with various teams to address and mitigate issues swiftly. Conduct thorough root cause analyses and develop preventive measures to avoid recurrence.</li>\n</ul>\n<ul>\n<li>Collaboration: Work closely with engineering, product management, and DevOps teams to share insights, identify recurring issues, and drive improvements to the Databricks Data Intelligence platform.</li>\n</ul>\n<ul>\n<li>Documentation and Knowledge Sharing: Create and maintain detailed documentation on support procedures, known issues, and solutions. Contribute to internal knowledge bases and create training materials to assist other support engineers.</li>\n</ul>\n<ul>\n<li>Performance Monitoring: Monitor and analyze platform performance metrics to identify potential issues before they impact customers. Implement optimizations and enhancements to improve platform stability and efficiency.</li>\n</ul>\n<ul>\n<li>Platform Upgrades: Manage and oversee the deployment of Databricks Data Intelligence platform upgrades and patches, ensuring minimal disruption to services and maintaining system integrity.</li>\n</ul>\n<ul>\n<li>Innovation and Improvement: Stay abreast of industry trends and advancements in Databricks technology. 
Propose and drive initiatives to enhance platform capabilities and support processes.</li>\n</ul>\n<ul>\n<li>Customer Feedback: Collect and analyze customer feedback to drive continuous improvement in support processes and platform features.</li>\n</ul>\n<p>Qualifications:</p>\n<ul>\n<li>Experience: 7+ years of hands-on experience in a technical support or engineering role related to the Databricks Data Intelligence platform, cloud data platforms, or big data technologies.</li>\n</ul>\n<ul>\n<li>Technical Skills: A deep understanding of Databricks architecture and Apache Spark, along with experience in cloud platforms like AWS, Azure, or GCP, is essential. Strong capabilities in designing and managing data pipelines and distributed computing are required. Proficiency in Unix/Linux administration, familiarity with DevOps practices, and skills in log analysis and monitoring tools are also crucial for effective troubleshooting and system optimization.</li>\n</ul>\n<ul>\n<li>Problem-Solving: Demonstrated ability to diagnose and resolve complex technical issues with a strong analytical and methodical approach.</li>\n</ul>\n<ul>\n<li>Communication: Exceptional verbal and written communication skills, with the ability to effectively convey technical information to both technical and non-technical stakeholders.</li>\n</ul>\n<ul>\n<li>Customer Focus: Proven experience in managing high-impact customer interactions and ensuring a positive customer experience.</li>\n</ul>\n<ul>\n<li>Collaboration: Ability to work effectively in a team environment, collaborating with engineering, product, and customer-facing teams.</li>\n</ul>\n<ul>\n<li>Education: Bachelor’s degree in Computer Science, Engineering, or a related field. 
Advanced degree or relevant certifications are highly desirable.</li>\n</ul>\n<p>Preferred Skills:</p>\n<ul>\n<li>Experience with additional big data tools and technologies such as Hadoop, Kafka, or NoSQL databases.</li>\n</ul>\n<ul>\n<li>Familiarity with automation tools and CI/CD pipelines.</li>\n</ul>\n<ul>\n<li>Understanding of data governance and compliance requirements.</li>\n</ul>\n<p>Why Join Us?</p>\n<ul>\n<li>Innovative Environment: Work with cutting-edge technology in a fast-paced, innovative company.</li>\n</ul>\n<ul>\n<li>Career Growth: Opportunities for professional development and career advancement.</li>\n</ul>\n<ul>\n<li>Team Culture: Collaborate with a talented and motivated team dedicated to excellence and continuous improvement.</li>\n</ul>\n<p>PLEASE NOTE: THE ROLE INVOLVES WORKING IN THE EMEA TIMEZONE</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_8317ba42-502","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8041698002","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Databricks architecture","Apache Spark","AWS","Azure","GCP","Unix/Linux administration","DevOps practices","log analysis and monitoring tools"],"x-skills-preferred":["Hadoop","Kafka","NoSQL databases","automation tools","CI/CD pipelines","data governance and compliance requirements"],"datePosted":"2026-04-18T15:55:32.901Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Bengaluru, India"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Databricks architecture, Apache Spark, AWS, Azure, GCP, Unix/Linux 
administration, DevOps practices, log analysis and monitoring tools, Hadoop, Kafka, NoSQL databases, automation tools, CI/CD pipelines, data governance and compliance requirements"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_3c5be5af-04c"},"title":"MBA Intern - Commercial Transformation","description":"<p>As a Commercial Transformation intern at Databricks, you will join our world-class Commercialization team to work on systems and process improvements that make it easier to launch, sell, and scale Databricks products and partner programs.</p>\n<p>During your 12-week internship, you&#39;ll lead high-impact projects, from untangling &#39;messy&#39; operational bottlenecks to architecting the systems that scale our global product launches, turning complex ambiguity into clear execution plans with opportunities for full-time employment within the team the following year.</p>\n<p>Throughout the internship, you will be mentored by a dedicated sponsor and have the opportunity to connect with senior leaders across the organization.</p>\n<p>Your responsibilities will include:</p>\n<ul>\n<li>Driving strategic projects by owning end-to-end commercialization transformation projects, from new product introductions to partner programs, by documenting current processes and identifying high-impact opportunities to simplify, automate, and scale.</li>\n</ul>\n<ul>\n<li>Informing high-stakes decisions by providing the analytical rigor and data-driven insights necessary to monitor program health, building lightweight dashboards and metrics tracking that translate raw data into leading indicators for executive leadership.</li>\n</ul>\n<ul>\n<li>Communicating with clarity by synthesizing complex business needs into clear requirements, user stories, and formal documentation (such as BRD/PRD sections) that bridge the gap between technical system requirements and executive-level 
strategy.</li>\n</ul>\n<ul>\n<li>Enabling the field by ensuring seamless execution of commercial initiatives by producing critical operational deliverables, including process maps, rollout checklists, and User Acceptance Testing (UAT) plans to prepare the field for new product launches.</li>\n</ul>\n<ul>\n<li>Influencing the roadmap by acting as a central point of alignment across RevOps, Sales, Finance, Legal, and Product teams, gathering diverse stakeholder input to ensure internal systems and workflows support the broader commercialization roadmap.</li>\n</ul>\n<p>To be successful in this role, you will need to have:</p>\n<ul>\n<li>A strong interest in enterprise software/SaaS commercialization, operations, systems, and go-to-market strategy.</li>\n</ul>\n<ul>\n<li>Proficiency in Excel/Sheets and experience using AI productivity tools (such as Claude Code and Lovable) to enhance your workflow.</li>\n</ul>\n<ul>\n<li>Previous experience with SQL or Databricks for querying and reporting preferred.</li>\n</ul>\n<ul>\n<li>Previous experience with go-to-market concepts (pricing, packaging, quoting) or platforms like SFDC, CPQ, NetSuite, or Jira preferred.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_3c5be5af-04c","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com/","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8402617002","x-work-arrangement":"onsite","x-experience-level":"entry","x-job-type":"internship","x-salary-range":null,"x-skills-required":["Excel","AI productivity tools","SQL","Databricks","go-to-market concepts","SFDC","CPQ","NetSuite","Jira"],"x-skills-preferred":["Claude 
Code","Lovable"],"datePosted":"2026-04-18T15:55:26.007Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, California"}},"employmentType":"INTERN","occupationalCategory":"Operations","industry":"Technology","skills":"Excel, AI productivity tools, SQL, Databricks, go-to-market concepts, SFDC, CPQ, NetSuite, Jira, Claude Code, Lovable"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_de5cd32c-6db"},"title":"Senior Workforce Intelligence Manager","description":"<p>As a Senior Workforce Intelligence Manager with our People Analytics team, you&#39;ll drive insights and impact across our People (HR) function and all of Databricks. You&#39;ll shape our infrastructure, build and scale analytics tools, and craft models aligned with company goals. You&#39;ll drive impactful research that shapes decisions about how we hire, train, enable, assess, and coach Bricksters - and more.</p>\n<p>You&#39;ll apply a current-state lens to People processes as they exist today while helping to transform our future.</p>\n<p>Impact You&#39;ll Have:</p>\n<ul>\n<li>Drive Execution: Own end-to-end outcomes for major initiatives related to people data - from strategy shaping to requirements gathering to product development to initial launch to over-time optimization.</li>\n<li>Technical Excellence: Develop and enhance our data foundation by building AI-first dashboards, conversational agents, and automated solutions that are scalable and maintainable.</li>\n<li>SME Partnership: Act as a thought partner to decision-makers, translating workforce intelligence into actionable insights that impact the performance of individual employees and the company overall.</li>\n<li>Advanced Analytics: Design and execute complex analyses to predict trends, account for business context, and prescribe actions related to hiring, retention, or employee engagement.</li>\n<li>Automation: Convert 
recurring analysis requests into self-service tools, enabling leaders to make data-driven decisions independently.</li>\n<li>Enablement &amp; Capability-Building: Support users to effectively interpret, interact with, and act on workforce insights. Produce toolkits, training, and resources that drive people data fluency and impact.</li>\n<li>Product Expertise &amp; Innovation: Gain expertise in, and represent, our company products and services - staying current with features and releases from Databricks while attending to emerging best practices in people analytics and HR.</li>\n</ul>\n<p>Who You Are:</p>\n<ul>\n<li>Education: You&#39;ve earned a Master&#39;s or PhD in a field related to workforce research and analytics (e.g., I/O Psychology, Sociology, Organizational Behavior).</li>\n<li>Experience: You have 10+ years of professional experience leading large-scale research and analytics projects with/for People/HR functions.</li>\n<li>Technical Skills: You have advanced skills in BI tools (e.g., Databricks AI/BI, Tableau, PowerBI), analytics and modeling, and a strong grasp of SQL.</li>\n<li>Business Acumen: You can identify data gaps in operational processes and systems designs, and can propose solutions to close them.</li>\n<li>Clear Communicator: You are an excellent communicator, with an ability to synthesize complex information into clear, insightful takeaways and action-oriented recommendations.</li>\n<li>Culture Champion: You are ready to collaborate closely with colleagues and network across functional areas to foster Databricks&#39; cultural values.</li>\n</ul>\n<p>Pay Range Transparency Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role is listed below and represents the expected salary range for non-commissionable roles or on-target earnings for commissionable roles. 
Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location. Based on the factors above, Databricks anticipates utilizing the full width of the range. The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above.</p>\n<p>For more information regarding which range your location is in visit our page here.</p>\n<p>Zone 1 Pay Range $191,100-$262,800 USD</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_de5cd32c-6db","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8470516002","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$191,100-$262,800 USD","x-skills-required":["BI tools","analytics and modeling","SQL","Tableau","PowerBI","Databricks AI/BI"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:55:24.041Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"New York City, New York; San Francisco, California"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"BI tools, analytics and modeling, SQL, Tableau, PowerBI, Databricks AI/BI","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":191100,"maxValue":262800,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_973b554f-cde"},"title":"Senior Software Engineer - Backend","description":"<p>At 
Databricks, we are building and running the world&#39;s best data and AI infrastructure platform so our customers can use deep data insights to improve their business.</p>\n<p>As a senior software engineer with a backend focus, you will work with your team to build infrastructure and products for the Databricks platform at scale.</p>\n<p>Our backend teams span many domains across our essential service platforms, including distributed systems, at-scale service architecture and monitoring, workflow orchestration, and developer experience.</p>\n<p>You will deliver reliable and high-performance services and client libraries for storing and accessing humongous amounts of data on cloud storage backends, such as AWS S3 and Azure Blob Store.</p>\n<p>You will also build reliable, scalable services using Scala, Kubernetes, and data pipelines using Spark and Databricks to power the pricing infrastructure that serves millions of cluster-hours per day.</p>\n<p>Additionally, you will develop product features that empower customers to easily view and control platform usage.</p>\n<p>We look for candidates with a BS (or higher) in Computer Science or a related field, 3+ years of production-level experience in Java, Scala, C++, or a similar language, experience developing large-scale distributed systems, and good knowledge of SQL.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_973b554f-cde","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com/","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8029671002","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Java","Scala","C++","SQL","Kubernetes","Spark","Databricks"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:54:12.827Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Amsterdam, Netherlands"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Java, Scala, C++, SQL, Kubernetes, Spark, Databricks"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_02ba8342-079"},"title":"Specialist Solutions Architect - Data Warehousing (Healthcare & Life Sciences)","description":"<p>As a Specialist Solutions Architect (SSA) - Data Warehousing, you will guide customers in their cloud data warehousing transformation with Databricks. 
You will be in a customer-facing role, working with and supporting Solution Architects; the role requires hands-on production experience with large-scale data warehousing technologies and lakehouse architecture.</p>\n<p>The SSA helps customers through evaluations and successful production planning for their business intelligence workloads while aligning their technical roadmap for the Databricks Data Intelligence Platform.</p>\n<p>As a deep go-to expert reporting to the Specialist Field Engineering Manager, you will continue to strengthen your technical skills through mentorship, learning, and internal training programs and establish yourself in the data warehousing specialty - including performance tuning, data modeling, winning evaluations, architecture design, and production migration planning.</p>\n<p>The impact you will have:</p>\n<ul>\n<li>Provide technical leadership to guide strategic customers to successful cloud transformations on large-scale data warehousing workloads - ranging from evaluation to architecture design to production deployment</li>\n<li>Prove the value of the Databricks Intelligence Platform for customer workloads by architecting production workloads, including end-to-end pipeline load performance testing and optimization</li>\n<li>Become a technical expert in an area such as data warehousing evaluations or helping set up successful workload migrations</li>\n<li>Assist Solution Architects with more advanced aspects of the technical sale including custom proof of concept content, estimating workload sizing and performance, and tuning workloads for production</li>\n<li>Provide tutorials and training to improve community adoption (including hackathons and conference presentations)</li>\n<li>Contribute to the Databricks Community</li>\n</ul>\n<p>What we look for:</p>\n<ul>\n<li>5+ years of experience in a technical role with expertise in data warehousing - such as query tuning, performance tuning, troubleshooting, data governance, debugging MPP data 
warehouses or other big data solutions, or migrating workloads from EDW or other systems</li>\n<li>Experience with design and implementation of data warehousing technologies including relational databases, SQL, data analytics, NoSQL, MPP, OLTP, and OLAP</li>\n<li>Deep Specialty Expertise in at least one of the following areas:</li>\n</ul>\n<ul>\n<li>Experience scaling large analytical data workloads in the cloud that are performant and cost-effective</li>\n<li>Maintained, extended, or migrated a production data warehouse system to evolve with complex needs, including data modeling, data governance needs, and integration with business intelligence tools</li>\n<li>Experience migrating on-premise EDW workloads to the public cloud</li>\n</ul>\n<ul>\n<li>Bachelor&#39;s degree in Computer Science, Information Systems, Engineering, or equivalent work experience</li>\n<li>Production programming experience in SQL and Python, Scala, or Java</li>\n<li>Experience with the AWS, Azure, or GCP clouds</li>\n<li>2 years of professional experience with data warehousing and big data technologies (Ex: SQL, Redshift, SAP, Synapse, EMR, OLAP &amp; OLTP workloads)</li>\n<li>2 years of customer-facing experience in a pre-sales or post-sales role</li>\n<li>Can meet expectations for technical training and role-specific outcomes within 6 months of hire</li>\n<li>Can travel up to 30% when needed</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_02ba8342-079","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8337429002","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$180,000-$247,500 USD","x-skills-required":["data warehousing","cloud data 
warehousing","Databricks","lakehouse architecture","SQL","Python","Scala","Java","AWS","Azure","GCP","data analytics","NoSQL","MPP","OLTP","OLAP"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:54:06.778Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Northeast - United States"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"data warehousing, cloud data warehousing, Databricks, lakehouse architecture, SQL, Python, Scala, Java, AWS, Azure, GCP, data analytics, NoSQL, MPP, OLTP, OLAP","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":180000,"maxValue":247500,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_b2fea360-66e"},"title":"Solutions Architect","description":"<p>We are seeking an experienced Solutions Architect to join our Field Engineering team. As a Solutions Architect, you will work with large enterprises to define and direct the technical strategy for our products and services. You will lead the technical voice for Databricks and help customers evaluate and adopt our solutions as part of their strategy.</p>\n<p>Key responsibilities include:</p>\n<ul>\n<li>Leading technical teams to implement the technical strategy in the account</li>\n<li>Building a movement of technical champions within the account</li>\n<li>Aligning technical strategies around Databricks solutions</li>\n<li>Providing structured mentorship for other team members</li>\n</ul>\n<p>The ideal candidate will have a solid background in value selling, technical account management, and technical leadership. 
They will also have experience working with large, global accounts and a strong understanding of big data, data science, and cloud technologies.</p>\n<p>Competencies:</p>\n<ul>\n<li>Proficiency in establishing virtual teams and leading them to ultimate success within the account</li>\n<li>Experience working with large, global accounts</li>\n<li>Forming relationships with executives and influencers</li>\n<li>Presenting a convincing point-of-view to important decision-makers</li>\n<li>Technical expertise in big data, data science, and cloud</li>\n</ul>\n<p>Nice to have:</p>\n<ul>\n<li>Databricks Certification</li>\n</ul>\n<p>Pay Range Transparency:</p>\n<p>Databricks is committed to fair and equitable compensation practices. The pay range for this role is $180,000-$247,500 USD per year, depending on location and experience.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_b2fea360-66e","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8243219002","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$180,000-$247,500 USD","x-skills-required":["big data","data science","cloud","technical leadership","value selling","technical account management"],"x-skills-preferred":["Databricks Certification"],"datePosted":"2026-04-18T15:53:30.948Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, California"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"big data, data science, cloud, technical leadership, value selling, technical account management, Databricks 
Certification","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":180000,"maxValue":247500,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_58a44dab-91a"},"title":"Partner Solutions Architect - Japan","description":"<p>We&#39;re looking for a Partner Solutions Architect to join the Field Engineering team and help scale dbt&#39;s partner go-to-market motion across Japan. This role is focused on building technical and commercial momentum with both consulting and technology partners.</p>\n<p>You will work closely with Partner Development Managers to drive partner capability, field alignment, and pipeline across strategic SI and consulting partners as well as key technology partners such as Snowflake, Databricks, and Google Cloud.</p>\n<p>Internally, this role sits at the intersection of Field Engineering, Partnerships, Sales, Product, and Partner Marketing. This is not a purely reactive enablement role. The Partner SA is expected to help shape and execute repeatable partner plays that create revenue.</p>\n<p>That includes enabling partner sellers and architects, supporting account mapping and seller-to-seller engagement, helping define joint value propositions, supporting partner-led pipeline generation, and influencing product and field strategy based on what is learned in-market.</p>\n<p>Internal operating docs show this motion consistently includes enablement sessions, QBR sponsorships, account planning, workshops, field events, and targeted campaigns designed to produce sourced and influenced pipeline.</p>\n<p>You&#39;ll be part of a team helping dbt scale its ecosystem through better partner capability, tighter field alignment, and more repeatable pipeline generation. 
The role is especially important as dbt continues investing in structured partner motions and deeper engagement with major cloud and data platform partners.</p>\n<p>What you&#39;ll do:</p>\n<ul>\n<li>Partner closely with Partner Development Managers to execute joint GTM plans across technology and SI/consulting partners.</li>\n</ul>\n<ul>\n<li>Build trusted technical relationships with partner architects, sellers, and practice leaders</li>\n</ul>\n<ul>\n<li>Run partner enablement sessions, workshops, office hours, and hands-on technical trainings to improve partner capability and field readiness</li>\n</ul>\n<ul>\n<li>Support account mapping and seller-to-seller alignment between dbt and partner field teams to uncover and accelerate pipeline</li>\n</ul>\n<ul>\n<li>Help create and refine repeatable sales plays across themes like core-to-cloud migration, modernization, AI-ready data foundations, marketplace, semantic layer, and partner platform adoption</li>\n</ul>\n<ul>\n<li>Support partner-led and tri-party pipeline generation efforts including QBRs, innovation days, lunch-and-learns, hands-on labs, and local field events</li>\n</ul>\n<ul>\n<li>Equip partner teams with the technical messaging, demo narratives, architectures, and customer use cases needed to position dbt effectively</li>\n</ul>\n<ul>\n<li>Collaborate with dbt Account Executives, Sales Engineers, and regional sales leadership to drive co-sell execution in target accounts</li>\n</ul>\n<ul>\n<li>Act as a technical bridge between partners and dbt Product / Engineering by surfacing integration gaps, field feedback, competitive insights, and roadmap opportunities</li>\n</ul>\n<ul>\n<li>Serve as an internal subject matter expert on dbt’s major technology partner ecosystem, especially Snowflake, Databricks, and Google Cloud</li>\n</ul>\n<ul>\n<li>Contribute to the scale motion by helping build collateral, playbooks, enablement assets, and best practices that raise the bar across the broader Partner SA 
function</li>\n</ul>\n<ul>\n<li>Travel approximately 30-40% to support partner planning, enablement, executive meetings, and field events across Japan</li>\n</ul>\n<p>This scope reflects how the Partner SA team is already operating: enabling partner field teams, building account-level alignment, supporting QBRs and regional events, and translating those activities into sourced and engaged pipeline.</p>\n<p>What you&#39;ll need:</p>\n<ul>\n<li>5+ years of experience in solutions architecture, sales engineering, consulting, partner engineering, or another customer-facing technical role in data and analytics</li>\n</ul>\n<ul>\n<li>Strong hands-on background in SQL, data modeling, analytics engineering, and modern data platforms</li>\n</ul>\n<ul>\n<li>Ability to clearly explain modern data stack architectures and how dbt fits across warehouses, lakehouses, semantic layers, and AI-oriented workflows</li>\n</ul>\n<ul>\n<li>Experience translating technical capabilities into clear business value for both technical and non-technical audiences</li>\n</ul>\n<ul>\n<li>Comfort operating in highly cross-functional environments across Sales, Partnerships, Product, and Marketing</li>\n</ul>\n<ul>\n<li>Strong presentation, workshop, and facilitation skills, including external enablement and customer-facing sessions</li>\n</ul>\n<ul>\n<li>Proven ability to drive outcomes in ambiguous, fast-moving environments with multiple stakeholders</li>\n</ul>\n<ul>\n<li>Experience supporting complex enterprise buying motions, proof-of-value work, or partner-influenced sales cycles</li>\n</ul>\n<ul>\n<li>Strong written communication skills for building collateral, technical narratives, and partner-facing content</li>\n</ul>\n<ul>\n<li>A collaborative mindset and a desire to help scale best practices across a growing team</li>\n</ul>\n<p>What will make you stand out:</p>\n<ul>\n<li>Experience working directly in partner, alliance, or ecosystem roles</li>\n</ul>\n<ul>\n<li>Experience with 
Snowflake, Databricks, BigQuery / Google Cloud, AWS, or Microsoft Fabric in a GTM or solutions context</li>\n</ul>\n<ul>\n<li>Experience enabling systems integrators, consulting firms, or technology partner field teams</li>\n</ul>\n<ul>\n<li>Familiarity with cloud marketplace motions, co-sell programs, and partner-sourced pipeline generation</li>\n</ul>\n<ul>\n<li>Prior experience with dbt, analytics engineering workflows, or adjacent tooling in transformation, orchestration, governance, or metadata</li>\n</ul>\n<ul>\n<li>Strong instincts for identifying repeatable plays that connect enablement activity to measurable pipeline outcomes</li>\n</ul>\n<ul>\n<li>Ability to influence both strategy and execution, from partner messaging and field enablement to product feedback and GTM refinement</li>\n</ul>\n<ul>\n<li>A track record of building credibility quickly with partner sellers, partner architects, and internal field teams</li>\n</ul>\n<p>What to expect in the interview process (all video interviews unless accommodations are needed):</p>\n<ul>\n<li>Interview with Talent Acquisition Partner</li>\n</ul>\n<ul>\n<li>Interview with Hiring Manager</li>\n</ul>\n<ul>\n<li>Team Interviews</li>\n</ul>\n<ul>\n<li>Demo Round</li>\n</ul>\n<p>#LI-LA1</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_58a44dab-91a","directApply":true,"hiringOrganization":{"@type":"Organization","name":"dbt Labs","sameAs":"https://www.getdbt.com/","logo":"https://logos.yubhub.co/getdbt.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/dbtlabsinc/jobs/4673657005","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["SQL","data modeling","analytics engineering","modern data platforms","Snowflake","Databricks","Google Cloud","partner engineering","customer-facing technical 
role"],"x-skills-preferred":["cloud marketplace motions","co-sell programs","partner-sourced pipeline generation","dbt","analytics engineering workflows","transformation","orchestration","governance","metadata"],"datePosted":"2026-04-18T15:53:29.744Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Japan - Remote"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"SQL, data modeling, analytics engineering, modern data platforms, Snowflake, Databricks, Google Cloud, partner engineering, customer-facing technical role, cloud marketplace motions, co-sell programs, partner-sourced pipeline generation, dbt, analytics engineering workflows, transformation, orchestration, governance, metadata"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_af1b7253-e81"},"title":"Head of Growth & Marketing Analytics","description":"<p>We&#39;re seeking a Growth &amp; Marketing Analytics Lead to partner closely with the CMO and product leadership to drive user growth and engagement. 
This leader will shape and drive the analytics strategy, mentor and develop the team, and collaborate cross-functionally with Marketing, Product and Engineering to translate business needs into actionable insights.</p>\n<p>Key Responsibilities:</p>\n<ul>\n<li>Team Leadership &amp; Mentorship: Hire, lead, coach, and develop a high-performing team of product and marketing analysts.</li>\n<li>Analytics Strategy &amp; Execution: Own the company&#39;s marketing measurement, experimentation, and insight generation for paid, owned, and integrated channels, and for performance across acquisition, activation, engagement, retention, and LTV.</li>\n<li>Hands-On Analysis: Perform deep-dive analyses to uncover insights that inform product and business decisions.</li>\n<li>Cross-Functional Collaboration: Act as a strategic thought partner to identify opportunities, measure success, and optimize Marketing and product performance.</li>\n</ul>\n<p>Requirements:</p>\n<ul>\n<li>10+ years of experience in analytics, with at least 2 years in a people management role.</li>\n<li>Strong technical skills in SQL, Python, data visualization tools (Amplitude, Tableau) and experimentation.</li>\n<li>Has direct experience in marketing analytics using Media Mix Models (MMM), difference-in-differences and other causal inference techniques.</li>\n<li>Proven track record of leading high-performing marketing or product analytics teams.</li>\n<li>Excellent communication and storytelling skills.</li>\n<li>Ability to balance strategic thinking with hands-on execution.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_af1b7253-e81","directApply":true,"hiringOrganization":{"@type":"Organization","name":"EarnIn","sameAs":"https://www.earnin.com/","logo":"https://logos.yubhub.co/earnin.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/earnin/jobs/7729952","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$215,000 to $263,000","x-skills-required":["SQL","Python","data visualization tools (Amplitude, Tableau)","experimentation","Media Mix Models (MMM)","difference-in-differences","causal inference techniques"],"x-skills-preferred":["fintech","consumer tech","data-driven product organization","modern data stacks (e.g., Databricks, Tableau, Amplitude)","influencing executive stakeholders","cross-functional initiatives"],"datePosted":"2026-04-18T15:53:15.064Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Mountain View, US"}},"employmentType":"FULL_TIME","occupationalCategory":"Marketing","industry":"Finance","skills":"SQL, Python, data visualization tools (Amplitude, Tableau), experimentation, Media Mix Models (MMM), difference-in-differences, causal inference techniques, fintech, consumer tech, data-driven product organization, modern data stacks (e.g., Databricks, Tableau, Amplitude), influencing executive stakeholders, cross-functional initiatives","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":215000,"maxValue":263000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_e0d905ee-b53"},"title":"Solutions Architect","description":"<p>We are looking for an experienced pre-sales professional to join our Field Engineering department. 
As a Solutions Architect, you will work with our Enterprise Account Executive to define and direct the technical strategy for our largest and most important accounts. Your goal will be to help these accounts become more data-driven and adopt our products, particularly ML &amp; AI.</p>\n<p>Your responsibilities will include:</p>\n<ul>\n<li>Working with 1-5 clients as the main technical voice for Databricks</li>\n<li>Leading customers on a transformational journey to evaluate and adopt Databricks as part of their strategy</li>\n<li>Implementing the technical strategy in the account, in close alignment with the overall account strategy</li>\n<li>Building a movement of technical champions within the account</li>\n<li>Aligning technical strategies around Databricks solutions</li>\n<li>Providing structured mentorship for other team members</li>\n</ul>\n<p>To succeed in this role, you will need to be proficient at establishing virtual teams and leading them to ultimate success within the account. You should also have experience working with very large (&gt; $1m ARR), global accounts and forming relationships with executives and influencers.</p>\n<p>As a technical expert in big data, data science, and cloud, you will present a convincing point-of-view to important decision-makers that leads them down a path of success.
You will also have proven ability in data-driven business transformation and driving change with data.</p>\n<p>Nice to have: Databricks Certification.</p>\n<p>Benefits and perks are available to all employees, and Databricks is committed to fostering a diverse and inclusive culture where everyone can excel.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_e0d905ee-b53","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/6356702002","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Technical account management","Value selling","Technical leadership","Big data","Data science","Cloud"],"x-skills-preferred":["Databricks Certification"],"datePosted":"2026-04-18T15:52:15.935Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Sao Paulo, Brazil"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Technical account management, Value selling, Technical leadership, Big data, Data science, Cloud, Databricks Certification"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_276f3a05-2e9"},"title":"Field CTO - America Industries","description":"<p>We are seeking a Field Chief Technology Officer (Field CTO) for the Americas Industries Business Unit to be a senior, customer-facing technology and business transformation thought leader for our most strategic, often global, accounts in regulated industries.</p>\n<p>This individual contributor role sits at the intersection of data and AI strategy, industry transformation, and executive relationship-building,
working closely with C-level leaders to drive multi-year change on the data platform while representing real-world needs back into Databricks.</p>\n<p>Key responsibilities include:</p>\n<ul>\n<li>Building and maintaining trusted-advisor relationships with C-level executives in large US-based and global accounts, especially in highly regulated industries.</li>\n<li>Cultivating a strong social and professional network across customer executives, boards, key industry bodies, and partners.</li>\n<li>Shaping executive thinking on modern data and AI architectures, with emphasis on Lakehouse and data platform modernization as the primary lever for long-term Gen AI impact.</li>\n<li>Leading C-level briefings, strategy sessions, and multi-day workshops that connect business outcomes, regulatory constraints, and operating model change to concrete Databricks-based roadmaps.</li>\n<li>Serving as a deep technical counterpart in the field, maintaining L200–L300 proficiency across Databricks products and being able to credibly engage architects, data engineers, and data scientists on solution design and trade-offs.</li>\n<li>Generalizing patterns from the field into reusable reference architectures, industry blueprints, and best practices for regulated industries, and sharing them through blogs, webinars, whitepapers, and conference keynotes.</li>\n<li>Orchestrating the broader ecosystem (cloud providers, GSIs, consultancies, ISVs) around customer objectives, ensuring Databricks is at the center of multi-year transformation programs rather than isolated projects.</li>\n<li>Partnering with Account Executives, Solutions Architects, Industry Leads, and Product Specialists to drive complex, multi-year sales cycles, securing platform decisions and expansions while influencing ACV and consumption growth.</li>\n<li>Providing structured, prioritized feedback from strategic customers into Product, Engineering, and Field leadership to influence product roadmap, especially around data, 
governance, security, and regulated-industry requirements.</li>\n<li>Mentoring senior Field Engineering and industry-focused talent, contributing to a pipeline of principal- and CTO-level leaders and codifying ways of working for complex, regulated accounts.</li>\n</ul>\n<p>Requirements include:</p>\n<ul>\n<li>15+ years of experience spanning enterprise technology and consulting, including leading or advising on multi-year data platform and analytics transformations in large, complex organizations.</li>\n<li>Significant time spent inside a large enterprise software or cloud company in roles that required navigating matrixed organizations and driving change at scale, combined with direct industry exposure rather than a career spent solely in horizontal software.</li>\n<li>Experience in or with regulated industries, with familiarity with regulatory and compliance considerations affecting data and AI platforms.</li>\n<li>A background that blends hands-on technology and architecture work on data platforms and analytics, organizational and operating model change, executive consulting or advisory, and proven ability to operate as a highly credible peer to C-level executives.</li>\n<li>Strong, proactive networker who is naturally curious about which associations, councils, and forums matter for a given customer set, and who uses those networks to create new executive entry points and opportunities.</li>\n<li>Demonstrated longevity and impact in prior roles, with evidence of building and sustaining long-term customer relationships and programs rather than frequent short stints.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_276f3a05-2e9","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8306218002","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$249,800-$343,400 USD","x-skills-required":["data and AI strategy","industry transformation","executive relationship-building","Lakehouse and data platform modernization","Gen AI impact","L200–L300 proficiency across Databricks products","solution design and trade-offs","reference architectures","industry blueprints","best practices for regulated industries","cloud providers","GSIs","consultancies","ISVs","complex, multi-year sales cycles","platform decisions and expansions","ACV and consumption growth","product roadmap","data governance","security","regulated-industry requirements"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:51:59.028Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"United States"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"data and AI strategy, industry transformation, executive relationship-building, Lakehouse and data platform modernization, Gen AI impact, L200–L300 proficiency across Databricks products, solution design and trade-offs, reference architectures, industry blueprints, best practices for regulated industries, cloud providers, GSIs, consultancies, ISVs, complex, multi-year sales cycles, platform decisions and expansions, ACV and consumption growth, product roadmap, data governance, security, regulated-industry 
requirements","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":249800,"maxValue":343400,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_7be2f955-b0a"},"title":"Machine Learning Intern","description":"<p>As a Fintech company where Machine Learning (ML) is a key driver of growth, our operations highly rely on machine learning models, from business decisions to customer experiences. We seek talented and motivated students and recent graduates with a strong background in machine learning, deep learning, language models, generative AI, programming, and data analysis to join our 12-week Machine Learning Internship Program.</p>\n<p>You will work on real-world projects, collaborate with experienced professionals, gain valuable experience in the fintech industry, and realise business and social impact. This role requires hybrid work from our Mountain View office, with 2 days a week in person. This internship will pay $40 per hour, with an expected 40 hours per week for the 12-week program.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Train and fine-tune large-scale Foundation Models to support various fintech product use cases</li>\n<li>Work with a large dataset, including structured and unstructured data</li>\n<li>Help ensure improvements in our current ML systems via model, data, or experimentation upgrades</li>\n<li>Gain hands-on experience with a wide array of technologies, including PyTorch, AWS, Kafka, Databricks, etc.</li>\n</ul>\n<p>Requirements:</p>\n<ul>\n<li>Actively pursuing a Master&#39;s or PhD in Computer Science, Information Technology, or a related field</li>\n<li>Located in Mountain View, or have the ability to relocate there, for the duration of the internship</li>\n<li>Strong understanding of statistical models and in-depth knowledge of machine learning and deep learning algorithms.
Familiarity with training or fine-tuning large-scale models such as Sequence Transformer models</li>\n<li>Interest in multimodal or multitask learning across structured, sequential, and behavioural data</li>\n<li>Familiarity with AI tools, harness engineering, agentic workflows, etc.</li>\n<li>Hands-on programming experience in Python and ML frameworks such as PyTorch</li>\n<li>Equipped with good verbal and written communication skills</li>\n<li>A background demonstrating strong problem-solving skills</li>\n<li>Committed to taking ownership of projects, conducting thorough investigations, and driving initiatives to conclusion</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_7be2f955-b0a","directApply":true,"hiringOrganization":{"@type":"Organization","name":"EarnIn","sameAs":"https://www.earnin.com/","logo":"https://logos.yubhub.co/earnin.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/earnin/jobs/7770051","x-work-arrangement":"hybrid","x-experience-level":"entry","x-job-type":"internship","x-salary-range":"$40 per hour","x-skills-required":["machine learning","deep learning","language models","generative AI","programming","data analysis","PyTorch","AWS","Kafka","Databricks"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:51:48.998Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Mountain View, US"}},"employmentType":"INTERN","occupationalCategory":"Engineering","industry":"Finance","skills":"machine learning, deep learning, language models, generative AI, programming, data analysis, PyTorch, AWS, Kafka, Databricks"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_73b18ca5-83e"},"title":"Customer Systems Administrator","description":"<p>As a Customer Systems Administrator on the Customer Systems team within Dropbox&#39;s
Go-To-Market organization, you will play a crucial role in platform administration, access governance, and operational reliability across the systems that power Dropbox&#39;s customer-facing operations.</p>\n<p>Day to day, you&#39;ll be configuring and maintaining platforms across the customer lifecycle, enforcing security and access control policies, and driving AI-powered tooling and automations. You&#39;ll gain direct ownership of production systems that the business relies on daily, deep cross-functional exposure to Engineering and Customer Experience and Success, access to industry-leading AI tooling, and the opportunity to shape how customer tooling scales at Dropbox.</p>\n<p>Responsibilities:</p>\n<p>Manage and evolve the team&#39;s portfolio of support and post-sales platforms to meet the changing needs of the Customer Experience and Customer Success organizations.</p>\n<p>Serve as a primary on-call responder, owning incident resolution and stakeholder communication during platform disruptions.</p>\n<p>Drive the adoption and integration of AI-powered tools and automations across customer platforms to improve user efficiency and customer outcomes.</p>\n<p>Partner with Engineering and Customer Experience and Success to translate business needs into platform solutions.</p>\n<p>Identify and execute platform improvements that reduce operational friction, increase reliability, and scale with the business.</p>\n<p>Build and maintain the team&#39;s automation tooling and scripting infrastructure.</p>\n<p>Establish and maintain the documentation standards that keep the team operationally resilient, including SOPs, runbooks, and system guides.</p>\n<p>Requirements:</p>\n<p>4+ years of proven experience administering enterprise support or customer platforms in a production environment.</p>\n<p>Proficiency with AI tools and scripting languages (e.g., Python, Bash, JavaScript), with a demonstrated comfort incorporating both into daily workflows to increase efficiency 
and output.</p>\n<p>Demonstrated experience managing multi-platform environments.</p>\n<p>Experience governing platform access, provisioning, role-based access control, and security policy enforcement.</p>\n<p>Track record of shipping platform changes through structured processes: scoping, testing, communicating, and deploying without disruption.</p>\n<p>Operational maturity with demonstrated experience owning incident response, triaging escalations, and maintaining composure under pressure.</p>\n<p>Ability to produce and maintain clear, structured documentation, SOPs, runbooks, and system guides that are accessible to both technical and non-technical audiences.</p>\n<p>Preferred Qualifications:</p>\n<p>Experience administering support platforms such as Zendesk, Amazon Connect, or similar contact center and ticketing systems.</p>\n<p>Proven experience deploying AI tooling or automation frameworks to improve team workflows and operational efficiency.</p>\n<p>Working knowledge of AWS services, cloud infrastructure fundamentals, and familiarity with modern data platforms such as Databricks.</p>\n<p>Experience with sales platforms supporting Sales &amp; Customer Success operations, such as Highspot, Planhat, and Outreach.</p>\n<p>Platform admin certifications (e.g., Zendesk Admin, Salesforce Certified Admin).</p>\n<p>Compensation:</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_73b18ca5-83e","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Dropbox","sameAs":"https://www.dropbox.com/","logo":"https://logos.yubhub.co/dropbox.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/dropbox/jobs/7768860","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["AI tools","scripting languages","Python","Bash","JavaScript","multi-platform
environments","platform access","role-based access control","security policy enforcement","incident response","triaging escalations","composure under pressure","clear documentation","SOPs","runbooks","system guides"],"x-skills-preferred":["Zendesk","Amazon Connect","AWS services","cloud infrastructure fundamentals","Databricks","Highspot","Planhat","Outreach","platform admin certifications"],"datePosted":"2026-04-18T15:51:43.963Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Remote - Canada: Select locations"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"AI tools, scripting languages, Python, Bash, JavaScript, multi-platform environments, platform access, role-based access control, security policy enforcement, incident response, triaging escalations, composure under pressure, clear documentation, SOPs, runbooks, system guides, Zendesk, Amazon Connect, AWS services, cloud infrastructure fundamentals, Databricks, Highspot, Planhat, Outreach, platform admin certifications"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_a02999d2-33b"},"title":"Staff Software Engineer - Backend","description":"<p>At Databricks, we are enabling data teams to solve the world&#39;s toughest problems by building and running the world&#39;s best data and AI infrastructure platform. As a software engineer with a backend focus, you will work with your team to build infrastructure and products for the Databricks platform at scale.</p>\n<p>The impact you&#39;ll have is significant, spanning many domains across our essential service platforms. 
You might work on challenges such as:</p>\n<ul>\n<li>Distributed systems, at-scale service architecture and monitoring, workflow orchestration, and developer experience.</li>\n</ul>\n<ul>\n<li>Delivering reliable and high-performance services and client libraries for storing and accessing humongous amounts of data on cloud storage backends, e.g., AWS S3, Azure Blob Store.</li>\n</ul>\n<ul>\n<li>Building reliable, scalable services, e.g., Scala, Kubernetes, and data pipelines, e.g., Spark, Databricks, to power the pricing infrastructure that serves millions of cluster-hours per day and develop product features that empower customers to easily view and control platform usage.</li>\n</ul>\n<p>What we look for in a candidate includes:</p>\n<ul>\n<li>A Bachelor&#39;s degree (or higher) in Computer Science, or a related field.</li>\n</ul>\n<ul>\n<li>7+ years of production-level experience in one of: Java, Scala, C++, or similar languages.</li>\n</ul>\n<ul>\n<li>Experience developing large-scale distributed systems.</li>\n</ul>\n<ul>\n<li>Experience working on a SaaS platform or with Service-Oriented Architectures.</li>\n</ul>\n<ul>\n<li>Good knowledge of SQL.</li>\n</ul>\n<p>Benefits at Databricks include comprehensive benefits and perks that meet the needs of all employees. 
For specific details on the benefits offered in your region, please click here.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_a02999d2-33b","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com/","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/7984907002","x-work-arrangement":"onsite","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Java","Scala","C++","SQL","distributed systems","at-scale service architecture and monitoring","workflow orchestration","developer experience","cloud storage backends","AWS S3","Azure Blob Store","Kubernetes","Spark","Databricks"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:51:34.292Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Berlin, Germany"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Java, Scala, C++, SQL, distributed systems, at-scale service architecture and monitoring, workflow orchestration, developer experience, cloud storage backends, AWS S3, Azure Blob Store, Kubernetes, Spark, Databricks"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_9494a56c-104"},"title":"Enterprise Account Executive - FSI","description":"<p>As an Enterprise Account Executive, your mission will be to acquire and grow some of our focus customers in Databricks. 
You will secure new logos with strategic accounts in the FSI vertical, create a focused and targeted logo acquisition strategy, and expand Databricks use cases in existing and newly won accounts to maximize impact for customers.</p>\n<p>You will establish essential C-Level relationships, including Engineering, Platform, Architecture, and Data Science roles, who will, in turn, be your industry advocates. You will build value in all engagements to guide successful negotiations, plan, document, and drive the growth of Databricks usage for your customers.</p>\n<p>You will develop a deep and detailed understanding of the customer&#39;s business, provide leadership to the customer, important staff, and technical teams, and identify all important data use cases and buying centers in an opportunity, to increase the impact of Databricks in an organization.</p>\n<p>Orchestrate and use Databricks teams and ecosystem partners to maximize your impact on your sales motions. You will close new accounts while growing our business in existing accounts.</p>\n<p>The ideal candidate will have 5+ years of experience selling multi-million dollar complex software deals to the region&#39;s most recognizable organizations within the Enterprise segment, selling experience to the Enterprise FSI customers in Japan, knowledge of segment priorities, buying processes, buying cycles, decision-making process, and the ecosystem.</p>\n<p>Background of selling usage-based SaaS solutions, or other Data/AI/ML technologies, 5+ years of experience in sales methodologies and processes, e.g., account planning, MEDDPICC, value selling, and Command of the Message.</p>\n<p>5+ years of experience in hunting in a greenfield space while also consulting with existing accounts to expand further use, build customer champions and collaborative teams to support the implementation of the expansion plan.</p>\n<p>Understanding of how to develop a clear partner strategy and manage it to success.</p>\n<p 
style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_9494a56c-104","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8440900002","x-work-arrangement":"onsite","x-experience-level":"executive","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Enterprise Sales","FSI Vertical","Databricks","Sales Methodologies","Account Planning","MEDDPICC","Value Selling","Command of the Message","Usage-Based SaaS Solutions","Data/AI/ML Technologies"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:51:22.459Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Tokyo, Japan"}},"employmentType":"FULL_TIME","occupationalCategory":"Sales","industry":"Technology","skills":"Enterprise Sales, FSI Vertical, Databricks, Sales Methodologies, Account Planning, MEDDPICC, Value Selling, Command of the Message, Usage-Based SaaS Solutions, Data/AI/ML Technologies"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_328a534b-bac"},"title":"Customer Sales Director (Austin, TX)","description":"<p>We are looking for a Customer Sales Director to focus on an at-scale strategy to support, retain, and grow a mix of our Commercial and Enterprise customer base. This role is a hybrid-based role in Austin, Texas.</p>\n<p>The ideal candidate will have 4+ years of experience in SaaS sales or account management, with a proven track record of exceeding targets. 
</p>\n<p>Key responsibilities include:</p>\n<ul>\n<li>Building a strategic plan to drive expansion in a portfolio of Commercial and Enterprise accounts</li>\n<li>Managing multiple sales cycles and customer campaigns targeting Analytics Engineering, Data Platform, and Data Governance personas</li>\n<li>Protecting renewals by monitoring account signals, deepening executive alignment, and helping customers realize consistent value</li>\n</ul>\n<p>The successful candidate will have strong consultative selling skills, engaging effectively with both technical and business audiences. They will be proactive and organized, capable of independently managing a diverse book of business.</p>\n<p>Preferred qualifications include prior experience in analytics, ETL, BI, or open-source software, familiarity with dbt (core or Cloud) and the modern data stack, including platforms like Snowflake, BigQuery, Redshift, or Databricks, experience with consumption and/or usage-based pricing structures, and experience with the MEDD(P)ICC sales methodology / Command of the Message.</p>\n<p>Benefits include unlimited vacation time, 401k plan with 3% guaranteed company contribution, comprehensive healthcare coverage, generous paid parental leave, flexible stipends for health &amp; wellness, home office setup, cell phone &amp; internet, learning &amp; development, and office space.</p>\n<p>We offer competitive compensation packages commensurate with experience, including salary, equity, and where applicable, performance-based pay.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_328a534b-bac","directApply":true,"hiringOrganization":{"@type":"Organization","name":"dbt Labs","sameAs":"https://www.getdbt.com/","logo":"https://logos.yubhub.co/getdbt.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/dbtlabsinc/jobs/4616931005","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["SaaS sales","account management","analytics","ETL","BI","open-source software","dbt","Snowflake","BigQuery","Redshift","Databricks","consumption and/or usage-based pricing structures","MEDD(P)ICC sales methodology / Command of the Message"],"x-skills-preferred":["prior experience in analytics","familiarity with dbt (core or Cloud)","experience with consumption and/or usage-based pricing structures"],"datePosted":"2026-04-18T15:51:03.617Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Austin, Texas"}},"employmentType":"FULL_TIME","occupationalCategory":"Sales","industry":"Technology","skills":"SaaS sales, account management, analytics, ETL, BI, open-source software, dbt, Snowflake, BigQuery, Redshift, Databricks, consumption and/or usage-based pricing structures, MEDD(P)ICC sales methodology / Command of the Message, prior experience in analytics, familiarity with dbt (core or Cloud), experience with consumption and/or usage-based pricing structures"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_62900fcd-562"},"title":"Security Engineer - Offensive Security","description":"<p>As an Offensive Security Engineer on the Proactive Threat team at Stripe, you will simulate the tactics, techniques, and procedures (TTPs) of real-world adversaries to uncover security risks across Stripe&#39;s products and infrastructure.</p>\n<p>You&#39;ll conduct hands-on penetration testing, lead red team engagements, and 
collaborate with blue team counterparts to validate and improve detection and response capabilities. Your work will directly influence how Stripe builds, ships, and secures financial infrastructure used by millions of businesses worldwide.</p>\n<p>Responsibilities:</p>\n<p>Conduct comprehensive penetration tests across web applications, APIs, cloud environments (AWS/GCP/Azure), mobile applications, and internal infrastructure.</p>\n<p>Plan and execute red team engagements that emulate the TTPs of cyber and criminal threat actors targeting financial services, including initial access, lateral movement, persistence, and data exfiltration scenarios.</p>\n<p>Perform assumed-breach and objective-based assessments to test detection and response capabilities in coordination with defensive teams.</p>\n<p>Partner with detection engineering, threat intelligence, and incident response teams to validate security controls, identify coverage gaps, and improve detection fidelity.</p>\n<p>Contribute adversary tradecraft insights to inform detection rule development, threat hunting hypotheses, and incident response playbooks.</p>\n<p>Support incident investigations by providing offensive expertise, log analysis, and root cause analysis when required.</p>\n<p>Design, develop, and maintain custom offensive tools, scripts, and automation frameworks to enhance assessment efficiency and coverage.</p>\n<p>Build internal platforms and workflows that enable scalable, repeatable offensive operations.</p>\n<p>Contribute to internal security tooling repositories and champion engineering best practices within the team.</p>\n<p>Automate repetitive testing tasks, payload generation, and reporting workflows using modern development practices.</p>\n<p>Produce clear, actionable reports that communicate technical findings, business risk, and remediation guidance to both technical and non-technical stakeholders.</p>\n<p>Act as a subject-matter expert and primary point of contact for stakeholder teams 
engaged in offensive security programs and Stripe-wide security initiatives.</p>\n<p>Lead offensive security projects end-to-end, mentor junior team members, and foster a culture of continuous learning and knowledge sharing.</p>\n<p>Stay current with emerging threats, vulnerabilities, and attack techniques; share research internally and contribute to the broader security community.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_62900fcd-562","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Stripe","sameAs":"https://stripe.com/","logo":"https://logos.yubhub.co/stripe.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/stripe/jobs/7820898","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Python","Go","Web application security","Cloud platforms (AWS, Azure, or GCP)","Offensive tooling (Burp Suite, Cobalt Strike, Mythic, Sliver, BloodHound)","Adversary tradecraft and frameworks (MITRE ATT&CK)","Excellent written and verbal communication skills"],"x-skills-preferred":["Experience conducting offensive security in fintech, financial services, or other highly regulated environments","Background in vulnerability research, exploit development, or CVE discovery","Experience collaborating with threat intelligence, detection engineering, or incident response teams (purple team operations)","Familiarity with big data and log analysis tools (Splunk, Databricks, PySpark, osquery, etc.) 
for threat hunting or investigative support","Proficiency with AI/LLM-assisted development tools (e.g., Claude Code, Cursor, GitHub Copilot) and experience applying them to offensive security workflows"],"datePosted":"2026-04-18T15:51:01.913Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Ireland"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, Go, Web application security, Cloud platforms (AWS, Azure, or GCP), Offensive tooling (Burp Suite, Cobalt Strike, Mythic, Sliver, BloodHound), Adversary tradecraft and frameworks (MITRE ATT&CK), Excellent written and verbal communication skills, Experience conducting offensive security in fintech, financial services, or other highly regulated environments, Background in vulnerability research, exploit development, or CVE discovery, Experience collaborating with threat intelligence, detection engineering, or incident response teams (purple team operations), Familiarity with big data and log analysis tools (Splunk, Databricks, PySpark, osquery, etc.) for threat hunting or investigative support, Proficiency with AI/LLM-assisted development tools (e.g., Claude Code, Cursor, GitHub Copilot) and experience applying them to offensive security workflows"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_d2b1604a-c20"},"title":"Applied AI Engineer","description":"<p>We are seeking an Applied AI Engineer to join our team at Komodo Health. As an Applied AI Engineer, you will design and deploy end-to-end AI solutions that power real products and internal tools. 
You&#39;ll work at the intersection of applied research, engineering, and product development, bringing modern AI techniques into scalable production systems.</p>\n<p>You will collaborate closely with product, platform, and data teams to build AI capabilities that transform how healthcare data is explored, understood, and operationalized. Your work will involve designing, building, and deploying agent-based AI pipelines integrated into real customer-facing products, as well as building internal AI productivity tools that accelerate engineering workflows across Komodo.</p>\n<p>In this role, you will have the opportunity to work on a wide range of projects, from developing AI-powered applications to integrating AI capabilities across backend services and product interfaces. You will also contribute reusable patterns to Komodo&#39;s AI infrastructure and internal tooling ecosystem.</p>\n<p>To be successful in this role, you will need to have experience building production-grade AI systems or AI-powered applications, strong proficiency in Python, and experience working with LLMs, prompt engineering, or agent-based architectures. 
You will also need to be able to integrate AI capabilities across backend services and product interfaces, and have experience designing evaluation frameworks, testing strategies, or monitoring systems for AI features.</p>\n<p>If you are passionate about using AI to drive innovation and improvement in healthcare, and have the skills and experience to succeed in this role, we encourage you to apply.</p>\n<p><strong>Key Responsibilities</strong></p>\n<ul>\n<li>Design and deploy end-to-end AI solutions that power real products and internal tools</li>\n<li>Collaborate closely with product, platform, and data teams to build AI capabilities that transform how healthcare data is explored, understood, and operationalized</li>\n<li>Develop agent-based AI pipelines integrated into real customer-facing products</li>\n<li>Build internal AI productivity tools that accelerate engineering workflows across Komodo</li>\n<li>Contribute reusable patterns to Komodo&#39;s AI infrastructure and internal tooling ecosystem</li>\n</ul>\n<p><strong>Requirements</strong></p>\n<ul>\n<li>Experience building production-grade AI systems or AI-powered applications</li>\n<li>Strong proficiency in Python</li>\n<li>Experience working with LLMs, prompt engineering, or agent-based architectures</li>\n<li>Ability to integrate AI capabilities across backend services and product interfaces</li>\n<li>Experience designing evaluation frameworks, testing strategies, or monitoring systems for AI features</li>\n</ul>\n<p><strong>Nice to Have</strong></p>\n<ul>\n<li>Healthcare data expertise</li>\n<li>Experience with distributed computing frameworks (e.g., Spark, Snowflake, Databricks) for large-scale data processing</li>\n</ul>\n<p><strong>Location</strong></p>\n<p>This role is located in San Francisco, California, and is available for remote work.</p>\n<p><strong>Benefits</strong></p>\n<ul>\n<li>Comprehensive health, dental, and vision insurance</li>\n<li>Flexible time off and holidays</li>\n<li>401(k) with 
company match</li>\n<li>Disability insurance and life insurance</li>\n<li>Leaves of absence in accordance with applicable state and local laws and regulations and company policy</li>\n</ul>\n<p><strong>Equal Opportunity Employer</strong></p>\n<p>Komodo Health is an equal opportunity employer and welcomes applications from all qualified candidates. We are committed to diversity and inclusion in the workplace and strive to create a work environment that is free from discrimination and harassment.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_d2b1604a-c20","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Komodo Health","sameAs":"https://www.komodohealth.com/","logo":"https://logos.yubhub.co/komodohealth.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/komodohealth/jobs/8512178002","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":"$191,000 - $224,000 per year","x-skills-required":["Python","LLMs","Prompt Engineering","Agent-Based Architectures","Distributed Computing Frameworks","Spark","Snowflake","Databricks"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:50:46.591Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Healthcare","skills":"Python, LLMs, Prompt Engineering, Agent-Based Architectures, Distributed Computing Frameworks, Spark, Snowflake, Databricks","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":191000,"maxValue":224000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_245a7b5f-cac"},"title":"Staff Software Engineer (Infrastructure)","description":"<p>At Databricks, we 
are building and running the world&#39;s best data and AI infrastructure platform so our customers can use deep data insights to improve their business.</p>\n<p>As a Staff Software Engineer at Databricks India, you will work across various domains, including backend infrastructure, distributed systems, at-scale service architecture and monitoring, workflow orchestration, and developer experience.</p>\n<p>Our Infrastructure Backend teams span many domains across our essential service platforms. For instance, you might work on challenges such as:</p>\n<ul>\n<li>Problems that span from product to infrastructure, including distributed systems, at-scale service architecture and monitoring, workflow orchestration, and developer experience.</li>\n</ul>\n<ul>\n<li>Delivering reliable and high-performance services and client libraries for storing and accessing humongous amounts of data on cloud storage backends, e.g., AWS S3, Azure Blob Store.</li>\n</ul>\n<ul>\n<li>Building reliable, scalable services, e.g., Scala, Kubernetes, and data pipelines, e.g., 
Apache Spark, Databricks, to power the pricing infrastructure that serves millions of cluster-hours per day and develop product features that empower customers to easily view and control platform usage.</li>\n</ul>\n<p>What we look for:</p>\n<ul>\n<li>BS (or higher) in Computer Science, or a related field</li>\n</ul>\n<ul>\n<li>12+ years of production-level experience in one of: Python, Java, Scala, C++, or similar languages</li>\n</ul>\n<ul>\n<li>6+ years&#39; experience developing large-scale distributed systems from scratch</li>\n</ul>\n<ul>\n<li>Experience working on a SaaS platform or with Service-Oriented Architectures</li>\n</ul>\n<ul>\n<li>Experience working on infrastructure-related projects is a plus</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_245a7b5f-cac","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/7648674002","x-work-arrangement":"onsite","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Python","Java","Scala","C++","AWS S3","Azure Blob Store","Kubernetes","Apache Spark","Databricks"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:50:04.399Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Bengaluru, India"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, Java, Scala, C++, AWS S3, Azure Blob Store, Kubernetes, Apache Spark, Databricks"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_ae53b8d4-8fd"},"title":"Sr. AI Engineer, Application Engineering","description":"<p>We&#39;re hiring a Sr. 
AI Engineer to join our IT department to manage AI agentic deployments and deliver real impact for our customers. As a Sr. AI Engineer, you will design and develop tailored solutions using the Elastic Agent Builder platform and related technologies, guide technical engagements, and support the growth of junior engineers.</p>\n<p>Your responsibilities will include:</p>\n<ul>\n<li>Technical Delivery and Implementation: Own the end-to-end technical delivery of AI solutions, including writing code, configuring systems, and resolving issues, while reviewing the work of junior team members to ensure quality deployment and measurable business impact.</li>\n</ul>\n<ul>\n<li>AI Solution Development: Take ownership of designing and implementing scalable production systems, including AI and Large Language Model (LLM) based intelligent agents and automated workflows built on the Salesforce platform.</li>\n</ul>\n<ul>\n<li>Custom Agentic AI Engineering: Work directly with stakeholders to design and build custom intelligent agents using the Elastic Agent Builder platform, ensuring solutions meet unique business requirements and integrate smoothly with existing tool ecosystems.</li>\n</ul>\n<ul>\n<li>Data Configuration and Integration: Own the full data lifecycle, from data model design to building efficient processing pipelines and establishing integration strategies. Ensure data is optimized and secure for AI applications, including in complex enterprise environments.</li>\n</ul>\n<ul>\n<li>Technical Problem Solving: Identify, analyze, and resolve technical challenges across all phases of solution delivery, from data integration to model deployment and agent orchestration. Serve as a reliable resource for unblocking progress.</li>\n</ul>\n<ul>\n<li>Agentic Innovation: Develop expertise in the Elastic platform, pushing its capabilities forward. Lead the development of custom intelligent agents, automate business processes, and shape user experiences. 
Insights from the field will directly influence product enhancements and platform direction.</li>\n</ul>\n<ul>\n<li>Client Partnership: Embed with client teams to understand their operational challenges and goals. Translate requirements into clear technical designs, build strong relationships, and serve as a trusted technical advisor.</li>\n</ul>\n<ul>\n<li>Debugging and Root Cause Analysis: Perform thorough analysis, debugging, and root cause identification for complex system interactions, data flows, and AI model behaviors to optimize performance and prevent recurring issues.</li>\n</ul>\n<ul>\n<li>Prototyping and Iteration: Rapidly develop proofs-of-concept and minimum viable products, often coding alongside client teams to demonstrate capabilities and gather feedback for iterative refinement.</li>\n</ul>\n<ul>\n<li>Engineering Best Practices: Apply and promote standards for code quality, scalability, security, and maintainability across all deployed solutions.</li>\n</ul>\n<p>Requirements include:</p>\n<ul>\n<li>At least 5 years&#39; experience in a hands-on, end-to-end delivery role for scalable production solutions in a professional environment</li>\n</ul>\n<ul>\n<li>Expert-level proficiency in one or more programming languages (e.g., JavaScript, Java, Python)</li>\n</ul>\n<ul>\n<li>Extensive experience building and deploying solutions with AI/LLM technologies, including integrating LLMs, applying AI orchestration frameworks (e.g., LangChain, LlamaIndex), prompt engineering techniques, and agentic frameworks</li>\n</ul>\n<ul>\n<li>Deep expertise in data modeling, processing, integration, and analytics, with proficiency in enterprise data platforms (e.g., Salesforce Data Cloud, Snowflake, Databricks, BigQuery)</li>\n</ul>\n<ul>\n<li>Strong collaboration, communication, and presentation skills, both written and verbal, with the ability to explain complex technical concepts to technical and non-technical partners</li>\n</ul>\n<ul>\n<li>Track record of leading 
technical engagements, mentoring junior team members, and taking responsibility for technical aspects of projects</li>\n</ul>\n<p>This role is eligible to participate in Elastic&#39;s stock program and has a competitive salary range of $94,300-$149,200 USD, with an alternate range of $113,300-$179,200 USD in select locations.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_ae53b8d4-8fd","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Elastic","sameAs":"https://www.elastic.co/","logo":"https://logos.yubhub.co/elastic.co.png"},"x-apply-url":"https://job-boards.greenhouse.io/elastic/jobs/7722032","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$94,300-$149,200 USD","x-skills-required":["JavaScript","Java","Python","AI/LLM technologies","LangChain","LlamaIndex","prompt engineering techniques","agentic frameworks","data modeling","processing","integration","analytics","Salesforce Data Cloud","Snowflake","Databricks","BigQuery"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:49:59.873Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"United States"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"JavaScript, Java, Python, AI/LLM technologies, LangChain, LlamaIndex, prompt engineering techniques, agentic frameworks, data modeling, processing, integration, analytics, Salesforce Data Cloud, Snowflake, Databricks, BigQuery","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":94300,"maxValue":149200,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_e40d534f-76a"},"title":"Resident 
Architect","description":"<p>About Us</p>\n<p>dbt Labs is the pioneer of analytics engineering, helping data teams transform raw data into reliable, actionable insights. As of February 2025, we&#39;ve surpassed $100 million in annual recurring revenue (ARR) and serve more than 5,400 dbt Platform customers.</p>\n<p>We&#39;re seeking an experienced Resident Architect (RA) with a passion for solving challenging problems with dbt to join our Professional Services team. RAs are billable to dbt Enterprise customers and help achieve our mission to empower data developers to create and disseminate organisational knowledge.</p>\n<p>Responsibilities</p>\n<ul>\n<li>Work on a variety of impactful customer technical projects - inclusive of implementation, troubleshooting configurations, instilling best practices, and solutioning MVPs and long-term solutions to customer-specific requirements</li>\n</ul>\n<ul>\n<li>Consult on architecture and design</li>\n</ul>\n<ul>\n<li>Ensure our most strategic enterprise customers are adopting the product</li>\n</ul>\n<ul>\n<li>Collaborate with other internal customer-facing teams at dbt Labs - Sales, Solution Architects, Training, Support</li>\n</ul>\n<ul>\n<li>Provide critical feedback to dbt Labs product and engineering teams to improve and prioritise customer requests and ensure rapid resolution for engagement-specific issues</li>\n</ul>\n<ul>\n<li>Become a product expert with dbt in the context of the modern data stack (if you aren&#39;t already)</li>\n</ul>\n<p>What You&#39;ll Need</p>\n<ul>\n<li>4+ years&#39; experience working with technical data tooling, even better if it is in a customer-facing post-sales, technical architect or consulting role</li>\n</ul>\n<ul>\n<li>Deep expertise in at least one data platform (Snowflake, Databricks, BigQuery, Redshift)</li>\n</ul>\n<ul>\n<li>Experience using, deploying, or configuring dbt in an enterprise setting - working with dbt for minimum 1 year</li>\n</ul>\n<ul>\n<li>Proficiency in writing 
SQL and Python in analytics contexts</li>\n</ul>\n<ul>\n<li>You look forward to building skills in technical areas that support deployment and integration of dbt enterprise solutions to complete customer projects</li>\n</ul>\n<ul>\n<li>Customer focus, embracing one of our core values: that users are our best advocates</li>\n</ul>\n<ul>\n<li>Strong organisational skills with the ability to manage multiple technical projects simultaneously - including defining scope, tracking timelines, and ensuring deliverables are met</li>\n</ul>\n<ul>\n<li>Clear and concise communicator with the ability to engage internal and external stakeholders, effectively explain complex technical or organisational challenges, and propose thoughtful, iterative solutions</li>\n</ul>\n<ul>\n<li>The ability to thrive in a remote organisation that highly values transparency and cross-collaboration</li>\n</ul>\n<ul>\n<li>Willingness to travel approximately 2-4x/year for customer onsite sessions, team offsites, and company events</li>\n</ul>\n<p>What Will Make You Stand Out</p>\n<ul>\n<li>You have obtained the dbt Analytics Engineering Certification</li>\n</ul>\n<ul>\n<li>You have the ability to advise on dbt enterprise recommendations and build direction/consensus with the customer to move forward</li>\n</ul>\n<ul>\n<li>Experience with traditional Enterprise ETL tooling (Informatica, Datastage, Talend)</li>\n</ul>\n<p>Remote Hiring Process</p>\n<ul>\n<li>Interview with a Talent Acquisition Partner</li>\n</ul>\n<ul>\n<li>Hiring Manager Interview</li>\n</ul>\n<ul>\n<li>Technical Task + Presentation</li>\n</ul>\n<ul>\n<li>Team Interview</li>\n</ul>\n<p>Benefits</p>\n<ul>\n<li>Unlimited vacation time with a culture that actively encourages time off</li>\n</ul>\n<ul>\n<li>401k plan with 3% guaranteed company contribution</li>\n</ul>\n<ul>\n<li>Comprehensive healthcare coverage</li>\n</ul>\n<ul>\n<li>Generous paid parental leave</li>\n</ul>\n<ul>\n<li>Flexible stipends for:</li>\n</ul>\n<ul>\n<li>Health 
&amp; Wellness</li>\n</ul>\n<ul>\n<li>Home Office Setup</li>\n</ul>\n<ul>\n<li>Cell Phone &amp; Internet</li>\n</ul>\n<ul>\n<li>Learning &amp; Development</li>\n</ul>\n<ul>\n<li>Office Space</li>\n</ul>\n<p>Compensation</p>\n<p>We offer competitive compensation packages commensurate with experience, including salary, equity, and where applicable, performance-based pay. Our Talent Acquisition Team can answer questions around dbt Lab&#39;s total rewards during your interview process.</p>\n<p>In select locations (including Boston, Chicago, Denver, Los Angeles, Philadelphia, New York City, San Francisco, Washington, DC, and Seattle), an alternate range may apply, as specified below.</p>\n<ul>\n<li>The typical starting salary range for this role is:</li>\n</ul>\n<p>$114,000 - $137,700</p>\n<ul>\n<li>The typical starting salary range for this role in the select locations listed is:</li>\n</ul>\n<p>$126,000 - $153,000</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_e40d534f-76a","directApply":true,"hiringOrganization":{"@type":"Organization","name":"dbt Labs","sameAs":"https://www.getdbt.com/","logo":"https://logos.yubhub.co/getdbt.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/dbtlabsinc/jobs/4627942005","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$114,000 - $137,700","x-skills-required":["dbt","data platform","Snowflake","Databricks","BigQuery","Redshift","SQL","Python","analytics engineering"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:49:56.862Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"US - Remote"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"dbt, data platform, Snowflake, Databricks, BigQuery, Redshift, SQL, Python, analytics 
engineering","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":114000,"maxValue":137700,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_85f1f87e-70f"},"title":"Resident Solutions Architect - Financial Services","description":"<p>As a Senior Big Data Solutions Architect (Sr Resident Solutions Architect) in our Professional Services team, you will work with clients on short- to medium-term customer engagements on their big data challenges using the Databricks platform.</p>\n<p>You will deliver data engineering, data science, and cloud technology projects that require integrating with client systems, training, and other technical tasks to help customers get the most value out of their data.</p>\n<p>RSAs are billable and know how to complete projects according to specification with excellent customer service.</p>\n<p>You will report to the regional Manager/Lead.</p>\n<p>The impact you will have:</p>\n<ul>\n<li>You will work on a variety of impactful customer technical projects which may include designing and building reference architectures, creating how-to&#39;s and productionalizing customer use cases</li>\n</ul>\n<ul>\n<li>Work with engagement managers to scope a variety of professional services work with input from the customer</li>\n</ul>\n<ul>\n<li>Guide strategic customers as they implement transformational big data projects, 3rd party migrations, including end-to-end design, build and deployment of industry-leading big data and AI applications</li>\n</ul>\n<ul>\n<li>Consult on architecture and design; bootstrap hands-on projects which lead to customers&#39; successful understanding, evaluation and adoption of Databricks.</li>\n</ul>\n<ul>\n<li>Provide an escalated level of support for customer operational issues.</li>\n</ul>\n<ul>\n<li>You will work with the Databricks technical team, Project Manager, Architect and 
Customer team to ensure the technical components of the engagement are delivered to meet the customer&#39;s needs.</li>\n</ul>\n<ul>\n<li>Work with Engineering and Databricks Customer Support to provide product and implementation feedback and to guide rapid resolution for engagement-specific product and support issues.</li>\n</ul>\n<p>What we look for:</p>\n<ul>\n<li>9+ years of experience in data engineering, data platforms &amp; analytics</li>\n</ul>\n<ul>\n<li>Comfortable writing code in either Python or Scala</li>\n</ul>\n<ul>\n<li>Working knowledge of two or more common Cloud ecosystems (AWS, Azure, GCP) with expertise in at least one</li>\n</ul>\n<ul>\n<li>Deep experience with distributed computing with Apache Spark™ and knowledge of Apache Spark™ runtime internals</li>\n</ul>\n<ul>\n<li>Familiarity with CI/CD for production deployments</li>\n</ul>\n<ul>\n<li>Working knowledge of MLOps</li>\n</ul>\n<ul>\n<li>Capable of designing and deploying highly performant end-to-end data architectures</li>\n</ul>\n<ul>\n<li>Experience with technical project delivery - managing scope and timelines.</li>\n</ul>\n<ul>\n<li>Documentation and white-boarding skills.</li>\n</ul>\n<ul>\n<li>Experience working with clients and managing conflicts.</li>\n</ul>\n<ul>\n<li>Experience in building scalable streaming and batch solutions using cloud-native components</li>\n</ul>\n<ul>\n<li>Travel to customers up to 20% of the time</li>\n</ul>\n<p>Nice to have: Databricks Certification</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_85f1f87e-70f","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8461327002","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$180,656-$248,360 USD","x-skills-required":["data engineering","data platforms & analytics","Python","Scala","Cloud ecosystems (AWS, Azure, GCP)","Apache Spark","CI/CD for production deployments","MLOps","end-to-end data architectures","technical project delivery","documentation and white-boarding skills","client management"],"x-skills-preferred":["Databricks Certification"],"datePosted":"2026-04-18T15:49:55.028Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Austin, Texas"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"data engineering, data platforms & analytics, Python, Scala, Cloud ecosystems (AWS, Azure, GCP), Apache Spark, CI/CD for production deployments, MLOps, end-to-end data architectures, technical project delivery, documentation and white-boarding skills, client management, Databricks Certification","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":180656,"maxValue":248360,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_821d6af4-827"},"title":"Senior Solutions Architect - AI/BI","description":"<p>The Solutions Architect (AI/BI) team executes on Databricks&#39; strategic Product Operating Model to establish product market fit and set the course for rapid revenue growth.</p>\n<p>As a Senior Solutions Architect - AI/BI, you will collaborate with GTM leadership and account teams to 
design and execute high-impact engagement strategies across your territory.</p>\n<p>Key responsibilities include:</p>\n<ul>\n<li>Collaborating with GTM leadership and account teams to design and execute high-impact engagement strategies across your territory.</li>\n<li>Serving as a trusted advisor and expert Solutions Architect, building technical credibility with stakeholders to drive product adoption and vision.</li>\n<li>Enabling clients at scale through workshops and developing customer-facing collateral that helps increase technical knowledge and thought leadership.</li>\n<li>Influencing the product roadmap by translating field-derived, data-driven insights into strategic recommendations for Product and Engineering teams.</li>\n</ul>\n<p>To be successful in this role, you will need:</p>\n<ul>\n<li>6+ years in a customer-facing, pre-sales or consulting role influencing technical executives, driving high-level data strategy and product adoption.</li>\n<li>Proven ability to co-plan large territories with Account Executives and operate in a highly coordinated, cross-functional effort across GTM and R&amp;D teams.</li>\n<li>Experience collaborating with Global System Integrators (GSIs) and third-party consulting organizations to drive customer outcomes.</li>\n<li>Proficient in programming, debugging, and problem-solving using SQL and Python.</li>\n<li>Hands-on experience building solutions within major public cloud environments (AWS, Azure, or GCP).</li>\n<li>Broad experience and understanding in two or more of the fields of data engineering, data warehousing, AI, ML, governance, transactional systems, app development, and streaming.</li>\n</ul>\n<p>Required skills include:</p>\n<ul>\n<li>Experience in designing and delivering cloud-based Data Visualisation and Analytics Solutions in a client or customer environment.</li>\n<li>Ability to advise customers on lakehouse analytics architecture.</li>\n<li>Certification and/or demonstrated competence in data 
visualisation and analytics systems along with one of Azure, AWS or GCP cloud providers.</li>\n<li>Demonstrated competence in the Lakehouse architecture including hands-on experience with Apache Spark, Python and SQL.</li>\n</ul>\n<p>Preferred skills include:</p>\n<ul>\n<li>Experience with Databricks products and services.</li>\n<li>Knowledge of data science and machine learning concepts.</li>\n</ul>\n<p>This is a senior-level role that requires a strong background in data and AI, as well as excellent communication and collaboration skills.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_821d6af4-827","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8437301002","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Cloud-based Data Visualisation and Analytics Solutions","Lakehouse analytics architecture","Data visualisation and analytics systems","Apache Spark","Python","SQL"],"x-skills-preferred":["Databricks products and services","Data science and machine learning concepts"],"datePosted":"2026-04-18T15:49:47.877Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Munich, Germany"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Cloud-based Data Visualisation and Analytics Solutions, Lakehouse analytics architecture, Data visualisation and analytics systems, Apache Spark, Python, SQL, Databricks products and services, Data science and machine learning 
concepts"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_d99dda6c-8c3"},"title":"Senior Software Engineer (Infrastructure)","description":"<p>At Databricks, we are building and running the world&#39;s best data and AI infrastructure platform to enable data teams to solve the world&#39;s toughest problems. Our Infrastructure Backend teams span many domains across our essential service platforms.</p>\n<p>As a Senior Software Engineer at Databricks India, you will work across a variety of challenges, such as:</p>\n<ul>\n<li>Distributed systems, at-scale service architecture and monitoring, workflow orchestration, and developer experience.</li>\n<li>Delivering reliable and high-performance services and client libraries for storing and accessing humongous amounts of data on cloud storage backends, e.g., AWS S3, Azure Blob Store.</li>\n<li>Building reliable, scalable services (e.g., Scala, Kubernetes) and data pipelines (e.g., Apache Spark, Databricks) to power the pricing infrastructure that serves millions of cluster-hours per day, and developing product features that empower customers to easily view and control platform usage.</li>\n</ul>\n<p>We are looking for a Senior Software Engineer with 7+ years of production-level experience in one of the following languages: Python, Java, Scala, C++, or a similar language. 
You should also have 4+ years of experience developing large-scale distributed systems from scratch, experience working on a SaaS platform or with Service-Oriented Architectures, and experience working on Infrastructure-related projects.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_d99dda6c-8c3","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/7647289002","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Python","Java","Scala","C++","AWS S3","Azure Blob Store","Apache Spark","Databricks","Kubernetes","Distributed systems","Service-Oriented Architectures"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:49:23.954Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Bengaluru, India"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, Java, Scala, C++, AWS S3, Azure Blob Store, Apache Spark, Databricks, Kubernetes, Distributed systems, Service-Oriented Architectures"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_760c3e88-e35"},"title":"Senior Product Manager, Data","description":"<p>Job Title: Senior Product Manager, Data</p>\n<p>We are seeking a Senior Product Manager to support the development of CoreWeave&#39;s Enterprise Data Platform within the CIO organization. 
This role will contribute to building a scalable, high-performance data lake and data architecture, integrating data from key sources across Operations, Engineering, Sales, Finance, and other IT partners.</p>\n<p>As a Senior Product Manager for Data Infrastructure and Analytics, you will help drive data ingestion, transformation, governance, and analytics enablement. You will collaborate with engineering, analytics, finance, and business teams to help deliver data lake and pipeline orchestration solutions, ensuring accessible data for business insights.</p>\n<p>Key Responsibilities:</p>\n<ul>\n<li>Own and evangelize Data Platform and Business Analytics roadmap and strategy across CoreWeave</li>\n<li>Assist with the execution of CoreWeave&#39;s enterprise data architecture, helping enable the data lake and domain-driven data layer</li>\n<li>Support the development and enhancement of data ingestion, transformation, and orchestration pipelines for scalability, efficiency, and reliability</li>\n<li>Work with the Engineering and Data teams to maintain and enhance data pipelines for both structured and unstructured data, enabling efficient data movement across the organization</li>\n<li>Collaborate with Finance, GTM, Infrastructure, Data Center, and Supply Chain teams to help unify and model data from core systems (ERP, CRM, Asset Mgmt, Supply Chain systems, etc.)</li>\n<li>Contribute to data governance and quality initiatives, focusing on data consistency, lineage tracking, and compliance with security standards</li>\n<li>Support the BI and analytics layer by partnering with stakeholders to enable data products, dashboards, and reporting capabilities</li>\n<li>Help prioritize data-driven initiatives, ensuring alignment with business goals and operational needs in coordination with leadership</li>\n</ul>\n<p>Requirements:</p>\n<ul>\n<li>5+ years of experience in data product management, data architecture, or enterprise data engineering roles</li>\n<li>Familiarity with 
data lakes, data warehouses, ETL/ELT and streaming pipelines, and data governance frameworks</li>\n<li>Hands-on experience with modern data stack technologies (such as Snowflake, BigQuery, Databricks, Apache Spark, Airflow, DBT, Kafka)</li>\n<li>Understanding of data modeling, domain-driven design, and creating scalable data platforms</li>\n<li>Experience supporting the end-to-end data product lifecycle, including requirements gathering and implementation</li>\n<li>Strong collaboration skills with engineering, analytics, and business teams to help deliver data initiatives</li>\n<li>Awareness of data security, compliance, and governance best practices</li>\n<li>Understanding of BI and analytics platforms (such as Tableau, Looker, Power BI) and supporting self-service analytics</li>\n</ul>\n<p>Why CoreWeave?</p>\n<p>At CoreWeave, we work hard, have fun, and move fast! We&#39;re in an exciting stage of hyper-growth that you will not want to miss out on. We&#39;re not afraid of a little chaos, and we&#39;re constantly learning. Our team cares deeply about how we build our product and how we work together, which is represented through our core values:</p>\n<ul>\n<li>Be Curious at Your Core</li>\n<li>Act Like an Owner</li>\n<li>Empower Employees</li>\n<li>Deliver Best-in-Class Client Experiences</li>\n<li>Achieve More Together</li>\n</ul>\n<p>We support and encourage an entrepreneurial outlook and independent thinking. We foster an environment that encourages collaboration and provides the opportunity to develop innovative solutions to complex problems. As we get set for take off, the growth opportunities within the organization are constantly expanding. You will be surrounded by some of the best talent in the industry, who will want to learn from you, too. 
Come join us!</p>\n<p>Salary Range: $143,000 to $210,000</p>\n<p>Benefits:</p>\n<ul>\n<li>Medical, dental, and vision insurance - 100% paid for by CoreWeave</li>\n<li>Company-paid Life Insurance</li>\n<li>Voluntary supplemental life insurance</li>\n<li>Short and long-term disability insurance</li>\n<li>Flexible Spending Account</li>\n<li>Health Savings Account</li>\n<li>Tuition Reimbursement</li>\n<li>Ability to Participate in Employee Stock Purchase Program (ESPP)</li>\n<li>Mental Wellness Benefits through Spring Health</li>\n<li>Family-Forming support provided by Carrot</li>\n<li>Paid Parental Leave</li>\n<li>Flexible, full-service childcare support with Kinside</li>\n<li>401(k) with a generous employer match</li>\n<li>Flexible PTO</li>\n<li>Catered lunch each day in our office and data center locations</li>\n<li>A casual work environment</li>\n<li>A work culture focused on innovative disruption</li>\n</ul>\n<p>Workplace:</p>\n<p>While we prioritize a hybrid work environment, remote work may be considered for candidates located more than 30 miles from an office, based on role requirements for specialized skill sets. New hires will be invited to attend onboarding at one of our hubs within their first month. 
Teams also gather quarterly to support collaboration.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_760c3e88-e35","directApply":true,"hiringOrganization":{"@type":"Organization","name":"CoreWeave","sameAs":"https://www.coreweave.com","logo":"https://logos.yubhub.co/coreweave.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/coreweave/jobs/4649824006","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$143,000 to $210,000","x-skills-required":["data product management","data architecture","enterprise data engineering","data lakes","data warehouses","ETL/ELT and streaming pipelines","data governance frameworks","modern data stack technologies","Snowflake","BigQuery","Databricks","Apache Spark","Airflow","DBT","Kafka","data modeling","domain-driven design","scalable data platforms","BI and analytics platforms","Tableau","Looker","Power BI"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:48:58.405Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Livingston, NJ / New York, NY / Sunnyvale, CA / Bellevue, WA/San Francisco, CA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"data product management, data architecture, enterprise data engineering, data lakes, data warehouses, ETL/ELT and streaming pipelines, data governance frameworks, modern data stack technologies, Snowflake, BigQuery, Databricks, Apache Spark, Airflow, DBT, Kafka, data modeling, domain-driven design, scalable data platforms, BI and analytics platforms, Tableau, Looker, Power 
BI","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":143000,"maxValue":210000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_3168d7d3-70b"},"title":"Partner Solutions Architect - North America","description":"<p>About Us</p>\n<p>We&#39;re looking for a Partner Solutions Architect to join the Field Engineering team and help scale dbt&#39;s partner go-to-market motion across North America. This role is focused on building technical and commercial momentum with both consulting and technology partners.</p>\n<p>As a Partner Solutions Architect, you will work closely with Partner Development Managers to drive partner capability, field alignment, and pipeline across strategic SI and consulting partners as well as key technology partners such as Snowflake, Databricks, and Google Cloud. Internally, this role sits at the intersection of Field Engineering, Partnerships, Sales, Product, and Partner Marketing.</p>\n<p>Responsibilities</p>\n<ul>\n<li>Partner closely with North America Partner Development Managers to execute joint GTM plans across technology and SI/consulting partners.</li>\n<li>Build trusted technical relationships with partner architects, sellers, and practice leaders</li>\n<li>Run partner enablement sessions, workshops, office hours, and hands-on technical trainings to improve partner capability and field readiness</li>\n<li>Support account mapping and seller-to-seller alignment between dbt and partner field teams to uncover and accelerate pipeline</li>\n<li>Help create and refine repeatable sales plays across themes like core-to-cloud migration, modernization, AI-ready data foundations, marketplace, semantic layer, and partner platform adoption</li>\n<li>Support partner-led and tri-party pipeline generation efforts including QBRs, innovation days, lunch-and-learns, hands-on labs, and local field 
events</li>\n<li>Equip partner teams with the technical messaging, demo narratives, architectures, and customer use cases needed to position dbt effectively</li>\n<li>Collaborate with dbt Account Executives, Sales Engineers, and regional sales leadership to drive co-sell execution in target accounts</li>\n<li>Act as a technical bridge between partners and dbt Product / Engineering by surfacing integration gaps, field feedback, competitive insights, and roadmap opportunities</li>\n<li>Serve as an internal subject matter expert on dbt’s major technology partner ecosystem, especially Snowflake, Databricks, and Google Cloud</li>\n<li>Contribute to the scale motion by helping build collateral, playbooks, enablement assets, and best practices that raise the bar across the broader Partner SA function</li>\n</ul>\n<p>Requirements</p>\n<ul>\n<li>5+ years of experience in solutions architecture, sales engineering, consulting, partner engineering, or another customer-facing technical role in data and analytics</li>\n<li>Strong hands-on background in SQL, data modeling, analytics engineering, and modern data platforms</li>\n<li>Ability to clearly explain modern data stack architectures and how dbt fits across warehouses, lakehouses, semantic layers, and AI-oriented workflows</li>\n<li>Experience translating technical capabilities into clear business value for both technical and non-technical audiences</li>\n<li>Comfort operating in highly cross-functional environments across Sales, Partnerships, Product, and Marketing</li>\n<li>Strong presentation, workshop, and facilitation skills, including external enablement and customer-facing sessions</li>\n<li>Proven ability to drive outcomes in ambiguous, fast-moving environments with multiple stakeholders</li>\n<li>Experience supporting complex enterprise buying motions, proof-of-value work, or partner-influenced sales cycles</li>\n<li>Strong written communication skills for building collateral, technical narratives, and 
partner-facing content</li>\n<li>A collaborative mindset and a desire to help scale best practices across a growing team</li>\n</ul>\n<p>What will make you stand out</p>\n<ul>\n<li>Experience working directly in partner, alliance, or ecosystem roles</li>\n<li>Experience with Snowflake, Databricks, BigQuery / Google Cloud, AWS, or Microsoft Fabric in a GTM or solutions context</li>\n<li>Experience enabling systems integrators, consulting firms, or technology partner field teams</li>\n<li>Familiarity with cloud marketplace motions, co-sell programs, and partner-sourced pipeline generation</li>\n<li>Prior experience with dbt, analytics engineering workflows, or adjacent tooling in transformation, orchestration, governance, or metadata</li>\n<li>Strong instincts for identifying repeatable plays that connect enablement activity to measurable pipeline outcomes</li>\n<li>Ability to influence both strategy and execution, from partner messaging and field enablement to product feedback and GTM refinement</li>\n<li>A track record of building credibility quickly with partner sellers, partner architects, and internal field teams</li>\n</ul>\n<p>Benefits</p>\n<ul>\n<li>Unlimited vacation (and yes we use it!)</li>\n<li>Pension coverage</li>\n<li>Excellent healthcare</li>\n<li>Paid Parental Leave</li>\n<li>Wellness stipend</li>\n<li>Home office stipend, and more!</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_3168d7d3-70b","directApply":true,"hiringOrganization":{"@type":"Organization","name":"dbt Labs","sameAs":"https://www.getdbt.com/","logo":"https://logos.yubhub.co/getdbt.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/dbtlabsinc/jobs/4673630005","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["SQL","data modeling","analytics engineering","modern data 
platforms","Snowflake","Databricks","Google Cloud","partner development","field engineering","sales engineering","consulting","partner engineering"],"x-skills-preferred":["cloud marketplace motions","co-sell programs","partner-sourced pipeline generation","dbt","analytics engineering workflows","transformation","orchestration","governance","metadata"],"datePosted":"2026-04-18T15:48:30.813Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Canada - Remote; US - Remote"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"SQL, data modeling, analytics engineering, modern data platforms, Snowflake, Databricks, Google Cloud, partner development, field engineering, sales engineering, consulting, partner engineering, cloud marketplace motions, co-sell programs, partner-sourced pipeline generation, dbt, analytics engineering workflows, transformation, orchestration, governance, metadata"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_cc8eb5bc-349"},"title":"Staff Data Analyst","description":"<p>As a Staff Data Analyst at Okta, you will play a key role in creating the foundation for data-based decision-making across the company&#39;s functional business teams. You will work with internal customers to identify ways to effectively leverage data using cutting-edge cloud and big data technologies to drive business insights.</p>\n<p>Your responsibilities will include serving as the definitive SME for Customer First analytics, defining the data models, metrics, and value drivers that steer company strategy. 
You will also lead the exploration and integration of AI-driven tools to automate workflows and pioneer new methodologies for data discovery.</p>\n<p>Additionally, you will architect the analytics strategy for customer insights, leveraging product telemetry and public data to identify and predict risk and growth signals. You will own the vision for our semantic layer, ensuring it supports advanced modeling, self-service, and high-integrity dashboarding.</p>\n<p>You will participate in high-impact initiatives in predictive modeling (cross-sell, churn, LTV) that directly influence GTM execution. You will partner with leadership and account teams to translate raw insights into a high-impact, action-oriented Command Center, empowering account teams to instantly prioritize and execute on the most urgent opportunities and risks.</p>\n<p>You will also partner with Engineering to co-design data pipelines and transformations that ensure long-term scalability and data quality. You will set the bar for excellence in data storytelling and modeling, mentoring the broader team on best practices and process improvement.</p>\n<p>To succeed in this role, you will need to have a passion for driving decisions and insights through data. You will be detail-oriented, analytical, and able to solve big problems. You will also need to be able to effectively communicate with team members and business partners.</p>\n<p>In terms of qualifications, you will need to have a BS in CS, MIS, or a related technical degree. You will also need to have 7+ years of experience as a Data Analyst/Data Engineer/BI Developer. Advanced SQL experience is also required.</p>\n<p>Preferred qualifications include experience with building reports and visualizations to represent data intuitively in Tableau or similar data visualization tools. 
Advanced analytics, data science, AI/ML experience and techniques are also a plus.</p>\n<p>Finally, you will need to be able to work cross-functionally and communicate with technical and non-technical teams. Experience with ETL processes, software development, and lifecycle awareness, using Python, Java, Databricks, and Snowflake is also a plus.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_cc8eb5bc-349","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Okta","sameAs":"https://www.okta.com","logo":"https://logos.yubhub.co/okta.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/okta/jobs/7792010","x-work-arrangement":"onsite","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["SQL","Data Analysis","Data Visualization","Tableau","Python","Java","Databricks","Snowflake"],"x-skills-preferred":["Advanced Analytics","Data Science","AI/ML","ETL Processes","Software Development","Lifecycle Awareness"],"datePosted":"2026-04-18T15:48:00.277Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, California"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"SQL, Data Analysis, Data Visualization, Tableau, Python, Java, Databricks, Snowflake, Advanced Analytics, Data Science, AI/ML, ETL Processes, Software Development, Lifecycle Awareness"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_7fdd967e-0e8"},"title":"Principal Event Marketing Operations Manager","description":"<p>As a Principal Event Marketing Operations Manager at Databricks, you&#39;ll collaborate with the corporate event marketing team and partners to drive and support the strategy and execution of our flagship Data + AI Summit 
Operations.</p>\n<p>The impact you will have:</p>\n<ul>\n<li>Create optimized operations models by analyzing past plans and feedback.</li>\n<li>Develop a comprehensive operations strategy roadmap with internal and external partners.</li>\n<li>Manage all program operations communications (project plans, DACI, logistics, briefs, accountability).</li>\n<li>Implement comprehensive internal and external customer service plans.</li>\n<li>Utilize best-in-class technology, materials, and systems to elevate program operations.</li>\n<li>Support the team in exceeding key metrics like registration, attendance, engagement, efficiency, and business impact.</li>\n</ul>\n<p>What we look for:</p>\n<ul>\n<li>10+ years of event marketing experience, especially in large-scale program operations.</li>\n<li>Strong project and time management, managing multiple priorities within deadlines.</li>\n<li>Excellent written and verbal communication, skilled in leading meetings and discussions.</li>\n<li>Proficient with Google Suite, MS Excel, Mac, and technology literate.</li>\n<li>Exceptional customer service for internal and external stakeholders.</li>\n<li>Highly organized, detail-oriented, proactive, and capable of managing multiple projects simultaneously.</li>\n<li>Metrics-driven, enjoying data analysis from the conference platform to inform future success.</li>\n<li>Experience with Salesforce, Databricks AIBI, Asana, and Rainfocus.</li>\n</ul>\n<p>About Databricks:</p>\n<p>Databricks&#39; mission is to accelerate innovation for its customers by unifying Data Science, Engineering, and Business. 
Founded by the original creators of Apache Spark™, Databricks provides a Unified Analytics Platform for data science teams to collaborate with data engineering and lines of business to build data products.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_7fdd967e-0e8","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://www.databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8337355002","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$192,000-$264,000 USD","x-skills-required":["event marketing","program operations","project management","customer service","data analysis","technology literacy","Google Suite","MS Excel","Mac","Salesforce","Databricks AIBI","Asana","Rainfocus"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:47:42.284Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"United States"}},"employmentType":"FULL_TIME","occupationalCategory":"Marketing","industry":"Technology","skills":"event marketing, program operations, project management, customer service, data analysis, technology literacy, Google Suite, MS Excel, Mac, Salesforce, Databricks AIBI, Asana, Rainfocus","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":192000,"maxValue":264000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_3d57b93e-423"},"title":"Resident Solutions Architect - Financial Services","description":"<p>As a Resident Solutions Architect in our Professional Services team, you will work with clients on short to medium term customer engagements on their big data challenges using the Databricks 
platform.</p>\n<p>You will provide data engineering, data science, and cloud technology projects which require integrating with client systems, training, and other technical tasks to help customers to get most value out of their data.</p>\n<p>RSAs are billable and know how to complete projects according to specification with excellent customer service.</p>\n<p>You will report to the regional Manager/Lead.</p>\n<p>The impact you will have:</p>\n<ul>\n<li>Work on a variety of impactful customer technical projects which may include designing and building reference architectures, creating how-to&#39;s and productionalizing customer use cases</li>\n<li>Work with engagement managers to scope variety of professional services work with input from the customer</li>\n<li>Guide strategic customers as they implement transformational big data projects, 3rd party migrations, including end-to-end design, build and deployment of industry-leading big data and AI applications</li>\n<li>Consult on architecture and design; bootstrap hands-on projects which leads to a customers&#39; successful understanding, evaluation and adoption of Databricks.</li>\n<li>Provide an escalated level of support for customer operational issues.</li>\n<li>Work with the Databricks technical team, Project Manager, Architect and Customer team to ensure the technical components of the engagement are delivered to meet customer&#39;s needs.</li>\n<li>Work with Engineering and Databricks Customer Support to provide product and implementation feedback and to guide rapid resolution for engagement specific product and support issues.</li>\n</ul>\n<p>What we look for:</p>\n<ul>\n<li>6+ years experience in data engineering, data platforms &amp; analytics</li>\n<li>Comfortable writing code in either Python or Scala</li>\n<li>Working knowledge of two or more common Cloud ecosystems (AWS, Azure, GCP) with expertise in at least one</li>\n<li>Deep experience with distributed computing with Apache Spark™ and knowledge of 
Apache Spark™ runtime internals</li>\n<li>Familiarity with CI/CD for production deployments</li>\n<li>Working knowledge of MLOps</li>\n<li>Capable of design and deployment of highly performant end-to-end data architectures</li>\n<li>Experience with technical project delivery - managing scope and timelines.</li>\n<li>Documentation and white-boarding skills.</li>\n<li>Experience working with clients and managing conflicts.</li>\n<li>Experience in building scalable streaming and batch solutions using cloud-native components</li>\n<li>Travel to customers up to 20% of the time</li>\n</ul>\n<p>Nice to have:</p>\n<ul>\n<li>Databricks Certification</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_3d57b93e-423","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8456948002","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$180,656-$248,360 USD","x-skills-required":["data engineering","data platforms & analytics","Python","Scala","Cloud ecosystems (AWS, Azure, GCP)","Apache Spark","CI/CD for production deployments","MLOps","data architecture","technical project delivery","documentation and white-boarding skills","client management"],"x-skills-preferred":["Databricks Certification"],"datePosted":"2026-04-18T15:47:22.867Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Atlanta, Georgia"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"data engineering, data platforms & analytics, Python, Scala, Cloud ecosystems (AWS, Azure, GCP), Apache Spark, CI/CD for production deployments, MLOps, data architecture, technical project 
delivery, documentation and white-boarding skills, client management, Databricks Certification","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":180656,"maxValue":248360,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_e7613e05-073"},"title":"Customer Enablement Specialist","description":"<p>Job Title: Customer Enablement Specialist</p>\n<p>Location: Bellevue, Washington</p>\n<p>Department: Education &amp; Training</p>\n<p>CSQ227R234</p>\n<p><strong>About the Role</strong></p>\n<p>This role is required to work in a hybrid office setting in our Bellevue, WA office.</p>\n<p><strong>The Opportunity</strong></p>\n<p>Databricks runs some of the largest customer enablement programs in the industry: workshops, digital courses, labs, and webinars that reach thousands of users. The Customer Enablement Specialist turns that reach into results. You connect engaged learners to structured training plans that drive product adoption, customer success, and measurable business impact.</p>\n<p>This isn’t a sales or business development role; every conversation begins with an existing Databricks user or program participant. 
Your focus is on helping those customers move from initial interest to tangible capability: skilled teams, completed training milestones, and activated use cases.</p>\n<p>You’ll manage a broad portfolio of accounts, supporting new and emerging personas (business users, analysts, and app developers) and helping them succeed with Databricks’ latest innovations in AI/BI, Databricks Apps, and agent-based development.</p>\n<p><strong>What You&#39;ll Do</strong></p>\n<ul>\n<li>Convert participation in Databricks’ scale programs (webinars, workshops, digital learning) into structured training engagements.</li>\n</ul>\n<ul>\n<li>Own a high-volume enablement pipeline: identifying learner needs, recommending tailored paths, and tracking adoption progress.</li>\n</ul>\n<ul>\n<li>Deliver engaging L100–L200 sessions and demos to help new personas understand what’s possible with Databricks.</li>\n</ul>\n<ul>\n<li>Build enablement plans for each account, tracking trained users, completion rates, and milestone achievement.</li>\n</ul>\n<ul>\n<li>Partner with Customer Success Managers (CSMs), Account Executives (AEs), and senior CEAs to align training with customer goals and renewal cycles.</li>\n</ul>\n<ul>\n<li>Report key metrics (trained accounts, learner growth, conversion rates, and training revenue), using data to guide your priorities.</li>\n</ul>\n<ul>\n<li>Provide structured feedback to program and curriculum teams to sharpen future customer learning experiences.</li>\n</ul>\n<p><strong>What You Bring</strong></p>\n<ul>\n<li>2–4 years in a technical, customer-facing role: technical training, pre-sales, enablement, or customer success preferred.</li>\n</ul>\n<ul>\n<li>Hands-on familiarity with modern data and analytics platforms (Databricks, cloud SQL, BI tools, or data lakes).</li>\n</ul>\n<ul>\n<li>Confidence delivering introductory technical content to non-expert audiences.</li>\n</ul>\n<ul>\n<li>Working knowledge of AI/ML concepts; able to explain how Databricks 
enables practical use cases.</li>\n</ul>\n<ul>\n<li>Strong communication skills and a consultative approach: discover needs, recommend paths, and gain commitment.</li>\n</ul>\n<ul>\n<li>A data-driven mindset with strong organisational habits and comfort managing many concurrent accounts.</li>\n</ul>\n<ul>\n<li>Team-first attitude: a proactive collaborator who knows when to escalate for deeper technical support.</li>\n</ul>\n<p><strong>Bonus Points</strong></p>\n<ul>\n<li>Databricks certifications (such as Data Engineer Associate), or willingness to obtain within 6 months.</li>\n</ul>\n<ul>\n<li>Background in SaaS, cloud, or data platforms; familiarity with BI or AI/BI tools (Databricks Genie, Tableau, Power BI).</li>\n</ul>\n<ul>\n<li>Exposure to Databricks Apps, REST APIs, or AI agent concepts.</li>\n</ul>\n<ul>\n<li>Experience in a role with enablement or training-related revenue metrics.</li>\n</ul>\n<p><strong>Why This Role, Why Now</strong></p>\n<p>New products create new skill gaps. As Databricks expands into AI/BI, Databricks Apps, and agent-based development, a new wave of users (business analysts, app builders, domain experts) needs to get skilled up quickly. The depth CEA team focuses on the complex, strategic, and deeply technical. This role focuses on the broad middle: high volume, new personas, and the scale-to-commitment motion that turns digital participation into real adoption. It is a high-visibility, high-impact position with a clear growth path into senior CEA work as you build depth and track record.</p>\n<p>Pay Range Transparency</p>\n<p>Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role is listed below and represents the expected salary range for non-commissionable roles or on-target earnings for commissionable roles. 
Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location. Based on the factors above, Databricks anticipates utilizing the full width of the range. The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above. For more information regarding which range your location is in visit our page here.</p>\n<p>Zone 2 Pay Range $86,600-$119,150 USD</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_e7613e05-073","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8431935002","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":"$86,600-$119,150 USD","x-skills-required":["data and analytics platforms","cloud SQL","BI tools","data lakes","AI/ML concepts","Databricks Apps","REST APIs","AI agent concepts"],"x-skills-preferred":["Databricks certifications","SaaS","cloud","data platforms","BI or AI/BI tools"],"datePosted":"2026-04-18T15:46:34.416Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Bellevue, Washington"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"data and analytics platforms, cloud SQL, BI tools, data lakes, AI/ML concepts, Databricks Apps, REST APIs, AI agent concepts, Databricks certifications, SaaS, cloud, data platforms, BI or AI/BI 
tools","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":86600,"maxValue":119150,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_d129b425-6b1"},"title":"Project Manager, Manufacturing Engineering","description":"<p>The Manufacturing Team at Anduril is responsible for rapidly iterating and building cutting-edge defence hardware. As a Project Manager, Manufacturing Engineering, you will own large parts of the end-to-end manufacturing KPI tracking, enhancing transparency and generating actionable reports to support strategic and tactical decision making.</p>\n<p>In this role, you will support increasing efficiencies throughout our end-to-end processes, all with a focus on introducing scale and resiliency into our growing people and tools footprint. If you thrive in fast-paced, technical environments and excel at juggling multiple responsibilities, we want to hear from you.</p>\n<p>Key Responsibilities:</p>\n<ul>\n<li>Program Oversight and Support: Support program managers and manufacturing leadership in preparation and management of critical technical and business forums. Provide structure to ambiguity and help drive manufacturing engineering stakeholders towards decisions on tactical and strategic issues. Develop and execute strategies to drive business impact and outcomes.</li>\n</ul>\n<ul>\n<li>Data Analysis and Reporting: Execute scrappy analyses on short timelines to get answers quickly, while also considering more scalable solutions that will make Anduril successful in the long term. Analyze manufacturing data to identify trends, patterns, and areas for improvement in manufacturing processes. 
Facilitate data-driven decision-making by providing timely and accurate manufacturing information.</li>\n</ul>\n<ul>\n<li>Systems and Processes Improvement: Develop an understanding of how our manufacturing engineering systems (ERP, MRP, PLM etc.) function and how to leverage data from them to equip manufacturing engineering teams with insights and tools to make tactical and strategic decisions. Analyze best practices across teams and develop ways to standardize and implement across all teams. Assist in the documentation and standardization of business procedures and reporting protocols.</li>\n</ul>\n<p>Requirements:</p>\n<ul>\n<li>Bachelor&#39;s degree in engineering, data science, business, or related field.</li>\n<li>5+ years of experience in a manufacturing environment, with a focus on data analysis, program support, or process improvement.</li>\n<li>Strong analytical aptitude and ability to intuitively think about problems in terms of numbers and data models.</li>\n<li>Strong written and verbal communication abilities, with the capacity to present complex data clearly to diverse audiences.</li>\n<li>Reliable self-starter, highly organized, resourceful, with attention to detail.</li>\n<li>Must be a U.S. Person due to required access to U.S. 
export-controlled information or facilities.</li>\n</ul>\n<p>Preferred Qualifications:</p>\n<ul>\n<li>Experience working in a manufacturing environment.</li>\n<li>Familiarity with MES, ERP, and PLM systems and how they interface with each other and with operations.</li>\n<li>Familiarity with Confluence/JIRA, SharePoint, and Airtable.</li>\n<li>Familiarity with SQL, Typescript, PowerBI, Tableau, Foundry, Databricks.</li>\n</ul>\n<p>Salary Range: $113,000-$149,000 USD</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_d129b425-6b1","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anduril","sameAs":"https://www.anduril.com/","logo":"https://logos.yubhub.co/anduril.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/andurilindustries/jobs/5082298007","x-work-arrangement":"onsite","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":"$113,000-$149,000 USD","x-skills-required":["data analysis","program support","process improvement","manufacturing engineering","ERP","MRP","PLM","MES","SQL","Typescript","PowerBI","Tableau","Foundry","Databricks"],"x-skills-preferred":["Confluence/JIRA","SharePoint","Airtable"],"datePosted":"2026-04-18T15:46:19.936Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Costa Mesa, California, United States"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Manufacturing","skills":"data analysis, program support, process improvement, manufacturing engineering, ERP, MRP, PLM, MES, SQL, Typescript, PowerBI, Tableau, Foundry, Databricks, Confluence/JIRA, SharePoint, 
Airtable","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":113000,"maxValue":149000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_cbd81d47-d7e"},"title":"Data Platform Solutions Architect (Professional Services)","description":"<p>We&#39;re hiring for multiple roles within our Professional Services team. This position may be offered as Senior Solutions Consultant, Resident Solutions Architect, or Senior Resident Solutions Architect. The final title will align to your experience, technical depth, and customer-facing ownership.</p>\n<p>As a Big Data Solutions Architect (Internal Title - Resident Solutions Architect) in our Professional Services team, you will work with clients on short to medium-term customer engagements on their big data challenges using the Databricks platform. You will provide data engineering, data science, and cloud technology projects which require integrating with client systems, training, and other technical tasks to help customers to get most value out of their data.</p>\n<p>RSAs are billable and know how to complete projects according to specification with excellent customer service. 
You will report to the regional Manager/Lead.</p>\n<p>The impact you will have:</p>\n<ul>\n<li>Work on a variety of impactful customer technical projects which may include designing and building reference architectures, creating how-to&#39;s and productionalizing customer use cases</li>\n<li>Work with engagement managers to scope variety of professional services work with input from the customer</li>\n<li>Guide strategic customers as they implement transformational big data projects, 3rd party migrations, including end-to-end design, build and deployment of industry-leading big data and AI applications</li>\n<li>Consult on architecture and design; bootstrap or implement customer projects which leads to a customers&#39; successful understanding, evaluation and adoption of Databricks.</li>\n<li>Provide an escalated level of support for customer operational issues.</li>\n<li>Work with the Databricks technical team, Project Manager, Architect and Customer team to ensure the technical components of the engagement are delivered to meet customer&#39;s needs.</li>\n<li>Work with Engineering and Databricks Customer Support to provide product and implementation feedback and to guide rapid resolution for engagement specific product and support issues.</li>\n</ul>\n<p>What we look for:</p>\n<ul>\n<li>Extensive experience in data engineering, data platforms &amp; analytics</li>\n<li>Comfortable writing code in either Python or Scala</li>\n<li>Working knowledge of two or more common Cloud ecosystems (AWS, Azure, GCP) with expertise in at least one</li>\n<li>Deep experience with distributed computing with Apache Spark™ and knowledge of Spark runtime internals</li>\n<li>Familiarity with CI/CD for production deployments</li>\n<li>Working knowledge of MLOps</li>\n<li>Design and deployment of performant end-to-end data architectures</li>\n<li>Experience with technical project delivery - managing scope and timelines.</li>\n<li>Documentation and white-boarding 
skills.</li>\n<li>Experience working with clients and managing conflicts.</li>\n<li>Build skills in technical areas which support the deployment and integration of Databricks-based solutions to complete customer projects.</li>\n<li>Travel to customers 10% of the time</li>\n</ul>\n<p>[Preferred] Databricks Certification but not essential</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_cbd81d47-d7e","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8486738002","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["data engineering","data platforms & analytics","Python","Scala","Cloud ecosystems (AWS, Azure, GCP)","Apache Spark","CI/CD for production deployments","MLOps","end-to-end data architectures","technical project delivery","documentation and white-boarding skills","client management"],"x-skills-preferred":["Databricks Certification"],"datePosted":"2026-04-18T15:46:17.349Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"London, United Kingdom"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"data engineering, data platforms & analytics, Python, Scala, Cloud ecosystems (AWS, Azure, GCP), Apache Spark, CI/CD for production deployments, MLOps, end-to-end data architectures, technical project delivery, documentation and white-boarding skills, client management, Databricks Certification"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_38f38a04-ff3"},"title":"Strategic Enterprise Account Executive, 
Healthcare","description":"<p>As a Strategic Enterprise Account Executive at Databricks, you will be responsible for maintaining and growing a single existing account in Chicago, IL. This role requires a strategic sales professional with experience in selling into a single Large Healthcare account. You will need to understand how to sell innovation and change through customer vision expansion and guide deals forward to compress decision cycles.</p>\n<p>Key responsibilities include:</p>\n<ul>\n<li>Meeting with CIOs, IT executives, LOB executives, Program Managers, and other important partners</li>\n<li>Closing both new accounts and existing accounts</li>\n<li>Identifying and closing quick, small wins while managing longer, complex sales cycles</li>\n<li>Exceeding activity, pipeline, and revenue targets</li>\n<li>Tracking all customer details including use case, purchase time frames, next steps, and forecasting in Salesforce</li>\n<li>Using a solution-based approach to selling and creating value for customers</li>\n<li>Promoting Databricks&#39; enterprise cloud data platform powered by Apache Spark™</li>\n<li>Ensuring 100% satisfaction among all customers</li>\n<li>Prioritising opportunities and applying appropriate resources</li>\n<li>Building a plan for success internally at Databricks and externally with your accounts</li>\n</ul>\n<p>We are looking for someone with:</p>\n<ul>\n<li>Previous experience in an early stage company and knowledge of how to navigate and be successful</li>\n<li>Field sales experience within big data, Cloud, or SaaS sales</li>\n<li>Experience managing large, complex Healthcare accounts is preferred</li>\n<li>Prior customer relationships with CIOs, program managers, and essential decision makers</li>\n<li>The ability to simply articulate intricate cloud technologies</li>\n<li>7+ years experience exceeding sales quotas</li>\n<li>Success closing new accounts while working existing accounts</li>\n<li>Understanding of Spark and big data preferable</li>\n<li>Passion for cloud technologies</li>\n<li>Bachelor&#39;s Degree</li>\n</ul>\n<p>The pay range for this role is $272,000-$374,000 USD, and the total compensation package may also include eligibility for annual 
performance bonus, equity, and benefits.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_38f38a04-ff3","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com/","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8417773002","x-work-arrangement":"remote","x-experience-level":"executive","x-job-type":"full-time","x-salary-range":"$272,000-$374,000 USD","x-skills-required":["field sales experience","big data","Cloud","SaaS sales","Spark","customer relationships","sales quotas","cloud technologies"],"x-skills-preferred":["Apache Spark","Databricks Data Intelligence Platform","data analytics","AI"],"datePosted":"2026-04-18T15:45:54.562Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Chicago, Illinois; Remote - Illinois"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Sales","industry":"Technology","skills":"field sales experience, big data, Cloud, SaaS sales, Spark, customer relationships, sales quotas, cloud technologies, Apache Spark, Databricks Data Intelligence Platform, data analytics, AI","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":272000,"maxValue":374000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_80b94e35-0f3"},"title":"Staff Technical Solutions Engineer (Platform)","description":"<p>We are seeking a highly skilled Frontline Staff Technical Solutions Engineer with over 12+ years of experience to join our Platform Support team. 
This role is pivotal in delivering exceptional support for our Databricks Data Intelligence platform, addressing complex technical challenges, and ensuring the seamless operation of our data solutions.</p>\n<p>As a frontline engineer, you will be the primary point of contact for critical issues, working closely with both internal teams and customers to resolve high-impact problems and drive platform improvements.</p>\n<p>Key Responsibilities:</p>\n<ul>\n<li>Frontline Support: Serve as the primary technical point of contact for escalated issues related to the Databricks Data Intelligence platform. Provide expert-level troubleshooting, diagnostics, and resolution for complex problems affecting system performance and reliability.</li>\n<li>Customer Interaction: Engage with customers directly to understand their technical issues and requirements. Provide timely, clear, and actionable solutions to ensure high levels of customer satisfaction.</li>\n<li>Incident Management: Lead the resolution of high-priority incidents, coordinating with various teams to address and mitigate issues swiftly. Conduct thorough root cause analyses and develop preventive measures to avoid recurrence.</li>\n<li>Collaboration: Work closely with engineering, product management, and DevOps teams to share insights, identify recurring issues, and drive improvements to the Databricks Data Intelligence platform.</li>\n<li>Documentation and Knowledge Sharing: Create and maintain detailed documentation on support procedures, known issues, and solutions. Contribute to internal knowledge bases and create training materials to assist other support engineers.</li>\n<li>Performance Monitoring: Monitor and analyze platform performance metrics to identify potential issues before they impact customers. 
Implement optimizations and enhancements to improve platform stability and efficiency.</li>\n<li>Platform Upgrades: Manage and oversee the deployment of Databricks Data Intelligence platform upgrades and patches, ensuring minimal disruption to services and maintaining system integrity.</li>\n<li>Innovation and Improvement: Stay abreast of industry trends and advancements in Databricks technology. Propose and drive initiatives to enhance platform capabilities and support processes.</li>\n<li>Customer Feedback: Collect and analyze customer feedback to drive continuous improvement in support processes and platform features.</li>\n</ul>\n<p>Qualifications:</p>\n<ul>\n<li>Experience: Minimum of 12 years of hands-on experience in a technical support or engineering role related to Databricks Data Intelligence platform, cloud data platforms, or big data technologies.</li>\n<li>Technical Skills: A deep understanding of Databricks architecture and Apache Spark, along with experience in cloud platforms like AWS, Azure, or GCP, is essential. Strong capabilities in designing and managing data pipelines and distributed computing are required. 
Proficiency in Unix/Linux administration, familiarity with DevOps practices, and skills in log analysis and monitoring tools are also crucial for effective troubleshooting and system optimisation.</li>\n<li>Problem-Solving: Demonstrated ability to diagnose and resolve complex technical issues with a strong analytical and methodical approach.</li>\n<li>Communication: Exceptional verbal and written communication skills, with the ability to effectively convey technical information to both technical and non-technical stakeholders.</li>\n<li>Customer Focus: Proven experience in managing high-impact customer interactions and ensuring a positive customer experience.</li>\n<li>Collaboration: Ability to work effectively in a team environment, collaborating with engineering, product, and customer-facing teams.</li>\n<li>Education: Bachelor’s degree in Computer Science, Engineering, or a related field. Advanced degree or relevant certifications are highly desirable.</li>\n</ul>\n<p>Preferred Skills:</p>\n<ul>\n<li>Experience with additional big data tools and technologies such as Hadoop, Kafka, or NoSQL databases.</li>\n<li>Familiarity with automation tools and CI/CD pipelines.</li>\n<li>Understanding of data governance and compliance requirements.</li>\n</ul>\n<p>Why Join Us?</p>\n<ul>\n<li>Innovative Environment: Work with cutting-edge technology in a fast-paced, innovative company.</li>\n<li>Career Growth: Opportunities for professional development and career advancement.</li>\n<li>Team Culture: Collaborate with a talented and motivated team dedicated to excellence and continuous improvement.</li>\n</ul>\n<p>About Databricks</p>\n<p>Databricks is the data and AI company. More than 10,000 organizations worldwide, including Comcast, Condé Nast, Grammarly, and over 50% of the Fortune 500, rely on the Databricks Data Intelligence Platform to unify and democratize data, analytics and AI. 
Databricks is headquartered in San Francisco, with offices around the globe and was founded by the original creators of Lakehouse, Apache Spark, Delta Lake and MLflow. To learn more, follow Databricks on Twitter, LinkedIn and Facebook.</p>\n<p>Benefits</p>\n<p>At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of our employees. For specific details on the benefits offered in your region click here.</p>\n<p>Our Commitment to Diversity and Inclusion</p>\n<p>At Databricks, we are committed to fostering a diverse and inclusive culture where everyone can excel. We take great care to ensure that our hiring practices are inclusive and meet equal employment opportunity standards. Individuals looking for employment at Databricks are considered without regard to age, color, disability, ethnicity, family or marital status, gender identity or expression, language, national origin, physical and mental ability, political affiliation, race, religion, sexual orientation, socio-economic status, veteran status, and other protected characteristics.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_80b94e35-0f3","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/7845334002","x-work-arrangement":"onsite","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Databricks architecture","Apache Spark","AWS","Azure","GCP","Unix/Linux administration","DevOps practices","log analysis and monitoring tools"],"x-skills-preferred":["Hadoop","Kafka","NoSQL databases","automation tools","CI/CD pipelines","data governance and compliance 
requirements"],"datePosted":"2026-04-18T15:45:36.598Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Bengaluru, India"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Databricks architecture, Apache Spark, AWS, Azure, GCP, Unix/Linux administration, DevOps practices, log analysis and monitoring tools, Hadoop, Kafka, NoSQL databases, automation tools, CI/CD pipelines, data governance and compliance requirements"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_6d94d7ea-9ca"},"title":"Resident Solutions Architect - Financial Services","description":"<p>As a Senior Big Data Solutions Architect (Sr Resident Solutions Architect) in our Professional Services team, you will work with clients on short- to medium-term customer engagements on their big data challenges using the Databricks platform.</p>\n<p>You will deliver data engineering, data science, and cloud technology projects which require integrating with client systems, training, and other technical tasks to help customers get the most value out of their data.</p>\n<p>RSAs are billable and know how to complete projects according to specification with excellent customer service.</p>\n<p>The impact you will have:</p>\n<ul>\n<li>You will work on a variety of impactful customer technical projects which may include designing and building reference architectures, creating how-to&#39;s and productionalizing customer use cases</li>\n</ul>\n<ul>\n<li>Work with engagement managers to scope a variety of professional services work with input from the customer</li>\n</ul>\n<ul>\n<li>Guide strategic customers as they implement transformational big data projects, 3rd party migrations, including end-to-end design, build and deployment of industry-leading big data and AI applications</li>\n</ul>\n<ul>\n<li>Consult on architecture and design; bootstrap hands-on projects 
which lead to customers&#39; successful understanding, evaluation and adoption of Databricks.</li>\n</ul>\n<ul>\n<li>Provide an escalated level of support for customer operational issues.</li>\n</ul>\n<ul>\n<li>You will work with the Databricks technical team, Project Manager, Architect and Customer team to ensure the technical components of the engagement are delivered to meet customer&#39;s needs.</li>\n</ul>\n<ul>\n<li>Work with Engineering and Databricks Customer Support to provide product and implementation feedback and to guide rapid resolution for engagement-specific product and support issues.</li>\n</ul>\n<p>What we look for:</p>\n<ul>\n<li>9+ years of experience in data engineering, data platforms &amp; analytics</li>\n</ul>\n<ul>\n<li>Comfortable writing code in either Python or Scala</li>\n</ul>\n<ul>\n<li>Working knowledge of two or more common Cloud ecosystems (AWS, Azure, GCP) with expertise in at least one</li>\n</ul>\n<ul>\n<li>Deep experience with distributed computing with Apache Spark™ and knowledge of Apache Spark™ runtime internals</li>\n</ul>\n<ul>\n<li>Familiarity with CI/CD for production deployments</li>\n</ul>\n<ul>\n<li>Working knowledge of MLOps</li>\n</ul>\n<ul>\n<li>Capable of design and deployment of highly performant end-to-end data architectures</li>\n</ul>\n<ul>\n<li>Experience with technical project delivery - managing scope and timelines.</li>\n</ul>\n<ul>\n<li>Documentation and white-boarding skills.</li>\n</ul>\n<ul>\n<li>Experience working with clients and managing conflicts.</li>\n</ul>\n<ul>\n<li>Experience in building scalable streaming and batch solutions using cloud-native components</li>\n</ul>\n<ul>\n<li>Travel to customers up to 20% of the time</li>\n</ul>\n<p>Nice to have: Databricks Certification</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_6d94d7ea-9ca","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8461330002","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$180,656-$248,360 USD","x-skills-required":["data engineering","data platforms & analytics","Python","Scala","Cloud ecosystems (AWS, Azure, GCP)","Apache Spark","CI/CD for production deployments","MLOps","design and deployment of highly performant end-to-end data architectures","technical project delivery","documentation and white-boarding skills","client management"],"x-skills-preferred":["Databricks Certification"],"datePosted":"2026-04-18T15:45:27.183Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Washington, D.C."}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"data engineering, data platforms & analytics, Python, Scala, Cloud ecosystems (AWS, Azure, GCP), Apache Spark, CI/CD for production deployments, MLOps, design and deployment of highly performant end-to-end data architectures, technical project delivery, documentation and white-boarding skills, client management, Databricks Certification","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":180656,"maxValue":248360,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_bdf949b3-c66"},"title":"Databricks Enterprise Lead Security Architect -   Principal IT Software Engineer","description":"<p>We are seeking a highly skilled Lead Security Architect to join our team within Databricks IT. 
As a Lead Security Architect, you will be responsible for designing and implementing a secure and scalable architecture to protect our corporate assets. You will focus on key areas of IT security, including Identity and Access Management, Zero Trust architecture, and endpoint security, while also working to secure critical business applications and sensitive data.</p>\n<p>Your expertise will be crucial in building proactive security strategies that align with our business goals and protect the company from an ever-evolving threat landscape. This position demands deep expertise in security principles and a comprehensive understanding of the entire infrastructure stack and IAM systems to design robust, future-ready security solutions.</p>\n<p>You will be instrumental in safeguarding our systems&#39; resilience and integrity against ever-evolving cyber threats. You will play a critical role in shaping our security strategy for modern platforms across AWS, Azure, GCP, network infrastructure, storage, and SaaS solutions, help establish a strong least privilege (PoLP) model, providing specialized IAM expertise, and securely supporting SaaS with sensitive information (NHI).</p>\n<p>You will also be a key contributor in building our internal strategy for secure AI development. 
Additionally, you will support the secure integration of SaaS platforms such as Google Workspace, collaboration tools, and GTM systems, maintaining alignment with enterprise security standards.</p>\n<p>Close collaboration with cross-functional teams is essential to embed security throughout the technology stack.</p>\n<p>The impact you will have:</p>\n<ul>\n<li>Design and implement secure, scalable reference architectures for the Databricks IT across Cloud Infra (Compute, DBs, Network, Storage), SaaS, Custom Built Applications, Data &amp; AI systems.</li>\n<li>Establish and enforce security controls for: Core Security Areas: - Databricks Workspace Management: Workspace isolation, Unity Catalog for data governance.</li>\n<li>Secure Networking: VPC configs, PrivateLink, IP Allow Lists.</li>\n<li>Identity and Access Management (IAM): SSO, SCIM user provisioning, RBAC via Un, Strong MFA best practices for enterprise identities and customers.</li>\n<li>Data Encryption: At rest and in transit, customer-managed keys for critical assets.</li>\n<li>Data Exfiltration Prevention: Admin console settings, VPC endpoint controls.</li>\n<li>Cluster Security: User isolation, compliance with enhanced security monitoring/Compliance Security Profiles (HIPAA, PCI-DSS, FedRAMP).</li>\n<li>Offensive Security: Test and challenge the effectiveness of the organization’s security defenses by mimicking the tactics, techniques, and procedures used by actual attackers.</li>\n<li>Specialized Security Functions: - Non-human Identity Management: Design and implement secure authentication and authorization for automated systems (service accounts, API keys, machine identities), focusing on automation and integration with existing identity management systems.</li>\n<li>IAM Best Practices: Develop and document comprehensive Identity and Access Management policies, including user provisioning, de-provisioning, access reviews, privileged access management, and multi-factor authentication, ensuring 
security and compliance.</li>\n<li>Data Loss Prevention (DLP): Implement DLP solutions to identify, monitor, and protect sensitive data across endpoints, networks, and cloud environments, preventing unauthorized access, use, or transmission.</li>\n<li>SaaS Proxy Design and Implementation: Design and implement cloud-based proxies for SaaS applications (SASE solutions) to provide secure access, enforce security policies, monitor user activity, and protect against threats.</li>\n<li>Cloud Infrastructure Best Practices: Establish and document best practices for VPC configurations, cloud networking, and infrastructure as code using Terraform, ensuring secure network segmentation, routing, firewalls, and VPNs for consistent, automated, and secure deployments.</li>\n<li>Least Privilege Access for Data Security: Design and implement data security controls based on the principle of least privilege, ensuring users and systems have only the minimum necessary access through fine-grained controls, data classification, and regular access reviews.</li>\n<li>Guide internal IT on Databricks’ security and compliance certifications (SOC 2, ISO 27001/27017/27018, HIPAA, PCI-DSS, FedRAMP), and support security reviews/audits.</li>\n<li>Support incident response, vulnerability management, threat modeling, and red teaming using audit logs, cluster policies, and enhanced monitoring.</li>\n<li>Stay current on industry trends and emerging threats in GenAI, AI Agentic flow, MCPs to enhance security posture.</li>\n<li>Advise executive leadership on security architecture, risks, and mitigation.</li>\n<li>Mentor security engineers and developers on secure design and best practices.</li>\n</ul>\n<p>What we look for:</p>\n<ul>\n<li>Bachelor’s degree in Computer Science, Information Security, Engineering, or a related field</li>\n<li>Master’s degree in Computer Science specifically in Information Security or a related discipline is strongly preferred</li>\n<li>Minimum 12 years in cybersecurity, 
with 5+ in security architecture or senior technical roles.</li>\n<li>Experience in FedRAMP High systems/ GovCloud preferred.</li>\n<li>Must have direct experience designing and securing enterprise platforms in complex multi-cloud environments, deep knowledge of enterprise architecture and security features (control plane/data plane separation, network infra, workspace hardening, network segmentation/ isolation), and hands-on experience automating security controls with Terraform and scripting.</li>\n<li>Proven expertise securing data analytics pipelines, SaaS integrations, and workload isolation in enterprise ecosystems.</li>\n<li>Experience with Enterprise Security Analysis Tools and monitoring/security policy optimization.</li>\n<li>Deep experience in threat modeling, design, PoC, and implementing large-scale enterprise solutions.</li>\n<li>Extensive hands-on experience in AWS cloud security, network security, with knowledge of Zero Trust, Data Protection, and Appsec.</li>\n<li>Strong understanding of enterprise IAM systems (Okta, SailPoint, VDI, Entra ID) and Data Protection.</li>\n<li>Expert experience with SIEM platforms, XDR, and cloud-native threat detection tools.</li>\n<li>Expert in web application security, OWASP, API security, and secure design and testing.</li>\n<li>Hands-on experience with security automation is required, with proficiency in AI-assisted development, Python, Cursor, Lambda, Terraform, or comparable scripting/IaC tools for operational efficiency.</li>\n<li>Industry certifications like CISSP, CCSP, CEH, AWS Certified Security – Specialty, AWS Certified Solutions Architect – Professional, or AWS Certified Advanced Networking – Specialty (or equivalent) are preferred.</li>\n<li>Ability to influence stakeholders and drive alignment.</li>\n<li>Strategic thinker with a passion for security innovation, continuous improvement, and building scalable defenses.</li>\n</ul>\n<p>Pay Range Transparency</p>\n<p>Databricks is committed to fair and 
equitable compensation practices. The pay range(s) for this role is listed below and represents the expected salary range for non-commissionable roles or on-target earnings for commissionable roles. Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location. Based on the factors above, Databricks anticipates utilizing the full width of the range. The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_bdf949b3-c66","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8207910002","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Security Architecture","Identity and Access Management","Zero Trust","Endpoint Security","Data Encryption","Data Exfiltration Prevention","Cluster Security","Offensive Security","Non-human Identity Management","IAM Best Practices","Data Loss Prevention","SaaS Proxy Design and Implementation","Cloud Infrastructure Best Practices","Least Privilege Access for Data Security","Guide internal IT on Databricks’ security and compliance certifications","Support incident response, vulnerability management, threat modeling, and red teaming","Stay current on industry trends and emerging threats in GenAI, AI Agentic flow, MCPs","Advise executive leadership on security architecture, risks, and mitigation","Mentor security engineers and developers on secure design and best 
practices"],"x-skills-preferred":["Terraform","Python","Cursor","Lambda","AWS cloud security","Network security","Data Protection","Appsec","SIEM platforms","XDR","cloud-native threat detection tools","Web application security","OWASP","API security","Secure design and testing","AI-assisted development","Security automation","Scripting/IaC tools","CISSP","CCSP","CEH","AWS Certified Security – Specialty","AWS Certified Solutions Architect – Professional","AWS Certified Advanced Networking – Specialty"],"datePosted":"2026-04-18T15:45:19.828Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Mountain View, California; San Francisco, California"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Security Architecture, Identity and Access Management, Zero Trust, Endpoint Security, Data Encryption, Data Exfiltration Prevention, Cluster Security, Offensive Security, Non-human Identity Management, IAM Best Practices, Data Loss Prevention, SaaS Proxy Design and Implementation, Cloud Infrastructure Best Practices, Least Privilege Access for Data Security, Guide internal IT on Databricks’ security and compliance certifications, Support incident response, vulnerability management, threat modeling, and red teaming, Stay current on industry trends and emerging threats in GenAI, AI Agentic flow, MCPs, Advise executive leadership on security architecture, risks, and mitigation, Mentor security engineers and developers on secure design and best practices, Terraform, Python, Cursor, Lambda, AWS cloud security, Network security, Data Protection, Appsec, SIEM platforms, XDR, cloud-native threat detection tools, Web application security, OWASP, API security, Secure design and testing, AI-assisted development, Security automation, Scripting/IaC tools, CISSP, CCSP, CEH, AWS Certified Security – Specialty, AWS Certified Solutions Architect – Professional, AWS Certified Advanced Networking – 
Specialty"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_f18e7306-00c"},"title":"Resident Solutions Architect - Financial Services","description":"<p>As a Senior Big Data Solutions Architect (Sr Resident Solutions Architect) in our Professional Services team, you will work with clients on short- to medium-term customer engagements on their big data challenges using the Databricks platform.</p>\n<p>You will deliver data engineering, data science, and cloud technology projects which require integrating with client systems, training, and other technical tasks to help customers get the most value out of their data.</p>\n<p>RSAs are billable and know how to complete projects according to specification with excellent customer service.</p>\n<p>You will report to the regional Manager/Lead.</p>\n<p>The impact you will have:</p>\n<ul>\n<li>Work on a variety of impactful customer technical projects which may include designing and building reference architectures, creating how-to&#39;s and productionalizing customer use cases</li>\n<li>Work with engagement managers to scope a variety of professional services work with input from the customer</li>\n<li>Guide strategic customers as they implement transformational big data projects, 3rd party migrations, including end-to-end design, build and deployment of industry-leading big data and AI applications</li>\n<li>Consult on architecture and design; bootstrap hands-on projects which lead to customers&#39; successful understanding, evaluation and adoption of Databricks.</li>\n<li>Provide an escalated level of support for customer operational issues.</li>\n<li>Work with the Databricks technical team, Project Manager, Architect and Customer team to ensure the technical components of the engagement are delivered to meet customer&#39;s needs.</li>\n<li>Work with Engineering and Databricks Customer Support to provide product and implementation feedback and to guide 
rapid resolution for engagement specific product and support issues.</li>\n</ul>\n<p>What we look for:</p>\n<ul>\n<li>9+ years experience in data engineering, data platforms &amp; analytics</li>\n<li>Comfortable writing code in either Python or Scala</li>\n<li>Working knowledge of two or more common Cloud ecosystems (AWS, Azure, GCP) with expertise in at least one</li>\n<li>Deep experience with distributed computing with Apache Spark and knowledge of Apache Spark runtime internals</li>\n<li>Familiarity with CI/CD for production deployments</li>\n<li>Working knowledge of MLOps</li>\n<li>Capable of design and deployment of highly performant end-to-end data architectures</li>\n<li>Experience with technical project delivery - managing scope and timelines.</li>\n<li>Documentation and white-boarding skills.</li>\n<li>Experience working with clients and managing conflicts.</li>\n<li>Experience in building scalable streaming and batch solutions using cloud-native components</li>\n<li>Travel to customers up to 20% of the time</li>\n</ul>\n<p>Nice to have:</p>\n<ul>\n<li>Databricks Certification</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_f18e7306-00c","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8461325002","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$180,656-$248,360 USD","x-skills-required":["data engineering","data science","cloud technology","Apache Spark","Databricks","CI/CD","MLOps","technical project delivery","documentation","white-boarding","client management","conflict management","scalable streaming","batch solutions","cloud-native 
components"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:45:17.488Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Philadelphia, Pennsylvania"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"data engineering, data science, cloud technology, Apache Spark, Databricks, CI/CD, MLOps, technical project delivery, documentation, white-boarding, client management, conflict management, scalable streaming, batch solutions, cloud-native components","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":180656,"maxValue":248360,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_f05ab370-fbb"},"title":"Staff Product Operations Manager","description":"<p>We are seeking a Staff Product Operations Manager to act as the connective tissue between Support, Product, Engineering, and Analytics to drive operational alignment, product readiness, and continuous improvement across support workflows.</p>\n<p>Key responsibilities include leading and executing strategic initiatives to scale global Support operations, translating operational pain points into product and tooling requirements, building and maintaining dashboards that measure support effectiveness, and influencing senior stakeholders by turning support and operational insights into clear, data-driven narratives.</p>\n<p>The ideal candidate will have 7+ years of experience in Operations, Consulting, or Strategy roles, ideally at a SaaS company or large tech company, and a proven track record driving cross-functional initiatives and collaborating across different organizations.</p>\n<p>Nice to have skills include familiarity with Databricks tools, experience working on Support Operations for SaaS-based companies, and familiarity with tools like Salesforce, Zendesk, Jira, Looker/Tableau, and 
operational workflows.</p>\n<p>The pay range for this role is $140,000-$192,500 USD, and the total compensation package may also include eligibility for annual performance bonus, equity, and benefits.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_f05ab370-fbb","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com/","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8236022002","x-work-arrangement":"onsite","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"$140,000-$192,500 USD","x-skills-required":["SQL","data analysis","dashboard building","stakeholder management","cross-functional collaboration"],"x-skills-preferred":["Databricks tools","Salesforce","Zendesk","Jira","Looker/Tableau"],"datePosted":"2026-04-18T15:44:37.522Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Mountain View, California"}},"employmentType":"FULL_TIME","occupationalCategory":"Operations","industry":"Technology","skills":"SQL, data analysis, dashboard building, stakeholder management, cross-functional collaboration, Databricks tools, Salesforce, Zendesk, Jira, Looker/Tableau","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":140000,"maxValue":192500,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_43ca4523-d1d"},"title":"Customer Enablement Specialist","description":"<p>As a Customer Enablement Specialist at Databricks, you will play a critical role in helping customers succeed with our data and AI platform. 
You will be responsible for managing a broad portfolio of accounts, supporting new and emerging personas, and helping them succeed with Databricks&#39; latest innovations in AI/BI, Databricks Apps, and agent-based development.</p>\n<p>Your main objective will be to convert participation in Databricks&#39; scale programs (webinars, workshops, digital learning) into structured training engagements. You will own a high-volume enablement pipeline, identifying learner needs, recommending tailored paths, and tracking adoption progress.</p>\n<p>To achieve this, you will deliver engaging L100–L200 sessions and demos to help new personas understand what&#39;s possible with Databricks. You will build enablement plans for each account, tracking trained users, completion rates, and milestone achievement.</p>\n<p>You will partner with Customer Success Managers (CSMs), Account Executives (AEs), and senior CEAs to align training with customer goals and renewal cycles. You will report key metrics – trained accounts, learner growth, conversion rates, and training revenue – using data to guide your priorities.</p>\n<p>In addition, you will provide structured feedback to program and curriculum teams to sharpen future customer learning experiences.</p>\n<p>We are looking for a highly motivated and organized individual with excellent communication skills and a consultative approach. 
You should have hands-on familiarity with modern data and analytics platforms, confidence delivering introductory technical content to non-expert audiences, and a working knowledge of AI/ML concepts.</p>\n<p>If you are passionate about helping customers succeed and have a strong desire to learn and grow with our company, we encourage you to apply for this exciting opportunity.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_43ca4523-d1d","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8431927002","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":"$86,600-$119,150 USD","x-skills-required":["modern data and analytics platforms","customer-facing role","technical training","pre-sales","enablement","customer success","AI/BI","Databricks Apps","agent-based development"],"x-skills-preferred":["Databricks certifications","SaaS","cloud","data platforms","BI or AI/BI tools","Databricks Genie","Tableau","Power BI"],"datePosted":"2026-04-18T15:44:34.379Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Bellevue, Washington"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"modern data and analytics platforms, customer-facing role, technical training, pre-sales, enablement, customer success, AI/BI, Databricks Apps, agent-based development, Databricks certifications, SaaS, cloud, data platforms, BI or AI/BI tools, Databricks Genie, Tableau, Power 
BI","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":86600,"maxValue":119150,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_ed16888c-52f"},"title":"Senior Product Operation Analyst","description":"<p>At Databricks, we are seeking a Senior Product Operation Analyst to lead key projects and collaborate cross-functionally with stakeholders in Engineering, Data Science, HR, and Finance to ensure our systems deliver operational excellence while scaling with the company.</p>\n<p>Reporting to the Director, Product Operations, you will lead key projects, orchestrate jobs using Databricks Jobs and Lakeflow Declarative Pipelines, establish coding, data management, and documentation standards and best practices, and act as the connective tissue between Support, Product, Engineering, and Analytics to drive operational alignment, product readiness, and continuous improvement across support workflows.</p>\n<p>Key Responsibilities:</p>\n<ul>\n<li>Build reports and dashboards for monthly, quarterly, and executive-level reporting</li>\n<li>Orchestrate jobs using Databricks Jobs and Lakeflow Declarative Pipelines with built-in data validation and reconciliations</li>\n<li>Establish coding, data management, and documentation standards and best practices</li>\n<li>Act as the connective tissue between Support, Product, Engineering, and Analytics to drive operational alignment, product readiness, and continuous improvement across support workflows</li>\n<li>Lead and execute strategic initiatives to scale global Support operations, including coverage models, KPI frameworks, and support process enhancements.</li>\n</ul>\n<ul>\n<li>Build and maintain dashboards that measure support effectiveness, surface product-driven case trends, and track customer experience across support channels.</li>\n</ul>\n<ul>\n<li>Support quarterly and annual 
planning cycles, including headcount, capacity modeling, and budget alignment in partnership with Finance and Workforce Management.</li>\n</ul>\n<ul>\n<li>Influence senior stakeholders by turning support and operational insights into clear, data-driven narratives that inform product and business decisions.</li>\n</ul>\n<p>What We Look For:</p>\n<ul>\n<li>5+ years of experience in data and analytics, ideally at a SaaS company or large tech company</li>\n<li>Comfort navigating large datasets using SQL-based tools and delivering dashboards and BI visualizations</li>\n<li>Ability to translate complex data into actionable insights, then communicate and act on those insights</li>\n<li>Proven track record driving cross-functional initiatives and collaborating across different organizations such as Product, Engineering, Data Science, GTM, and Support teams</li>\n<li>Excellent communication and stakeholder management skills, with an ability to influence without authority</li>\n<li>Strong process-orientation and systems thinking with a bias for action</li>\n<li>Data-driven decision-making</li>\n<li>Experience working in a high-growth, fast-paced environment and managing multiple priorities</li>\n</ul>\n<p>Nice to Have:</p>\n<ul>\n<li>Familiarity with Databricks tools, especially for extract/transform/load functions</li>\n<li>Experience working on Support Operations for SaaS-based companies.</li>\n<li>Familiarity with tools like Salesforce, Zendesk, Jira, Looker/Tableau, and operational workflows</li>\n</ul>\n<p>Pay Range Transparency</p>\n<p>Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role is listed below and represents the expected salary range for non-commissionable roles or on-target earnings for commissionable roles. 
Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location. Based on the factors above, Databricks anticipates utilizing the full width of the range. The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above.</p>\n<p>For more information regarding which range your location is in visit our page here.</p>\n<p>Local Pay Range $130,400-$182,600 USD</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_ed16888c-52f","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8298108002","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$130,400-$182,600 USD","x-skills-required":["data and analytics","SQL-based tools","dashboards and BI visualizations","cross-functional initiatives","communication and stakeholder management","process-orientation and systems thinking","data-driven decision-making"],"x-skills-preferred":["Databricks tools","extract/transform/load functions","Salesforce","Zendesk","Jira","Looker/Tableau","operational workflows"],"datePosted":"2026-04-18T15:44:26.038Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Mountain View, California"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"data and analytics, SQL-based tools, dashboards and BI visualizations, cross-functional initiatives, communication and stakeholder management, process-orientation and systems thinking, 
data-driven decision-making, Databricks tools, extract/transform/load functions, Salesforce, Zendesk, Jira, Looker/Tableau, operational workflows","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":130400,"maxValue":182600,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_573fecd4-5b1"},"title":"Staff Software Engineer - Money Team","description":"<p>At Databricks, we are obsessed with Data + AI to solve the world&#39;s toughest problems. We build and run the world&#39;s best data and AI infrastructure platform, so our customers can focus on high-value challenges. The Money team&#39;s mission is to maximize the value that our customers derive from their investments in data projects. We accomplish this through innovative commercialization strategies and cutting-edge engineering.</p>\n<p>Our team ensures timely, accurate, and customizable billing and usage data, alongside budgeting, forecasting, and cost optimization tools. We provide a seamless and consistent billing experience for all our customers, whether they are large enterprises or individual developers, across different pricing plans and cloud providers (AWS, GCP &amp; Azure).</p>\n<p>As a software engineer on the Money team, you will be closely involved in the entire billing process, including usage ingestion, metering, pricing, credits, promotions, payments, usage reporting, and cost center and budgeting. Your role is crucial in democratizing data by bringing Databricks products to market.</p>\n<p>By collaborating with marketing, product teams, commercialization experts, data scientists, IT, and customer support, you will standardize billing experiences across major cloud providers, offering our customers a unified &#39;sky computation&#39; experience. 
This role involves utilizing the latest Databricks products and tools within the ecosystem.</p>\n<p>The impact you will have:</p>\n<ul>\n<li>Design and manage the Money systems and services, commercializing all Databricks products and offerings.</li>\n<li>Develop innovative primitives that enable and support various pricing strategies such as Pay-As-You-Go, commissions, credits, trials, and promotions.</li>\n<li>Enhance engineering and infrastructure efficiency, reliability, accuracy, and response times, including CI/CD processes, test frameworks, data quality assurance, end-to-end reconciliation, and anomaly detection.</li>\n<li>Collaborate with commercialization experts to develop and implement innovative pricing strategies and plans.</li>\n<li>Use AI and LLMs to innovate in cost insight, prediction, and optimization across various cloud providers.</li>\n<li>You will become an expert at using the Databricks Data + AI tools</li>\n<li>Provide leadership in long-term vision and requirements development for Databricks products, in partnership with our engineering teams.</li>\n<li>Represent Databricks at academic and industrial conferences &amp; events.</li>\n</ul>\n<p>What we look for:</p>\n<ul>\n<li>BS or higher degree in Computer Science or a related field.</li>\n<li>Technical leadership experience in large projects similar to those described, including near real-time large data processing and distributed service infrastructure management.</li>\n<li>Proven track record of building, shipping, and managing reliable, distributed services and data pipelines at scale.</li>\n<li>Demonstrated leadership skills and the ability to lead across functional and organizational boundaries.</li>\n<li>A proactive approach and a passion for delivering high-quality solutions</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_573fecd4-5b1","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com/","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/7111068002","x-work-arrangement":"hybrid","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"$182,400-$247,000 USD","x-skills-required":["Apache Spark","Databricks Data + AI tools","Cloud providers (AWS, GCP & Azure)","CI/CD processes","Test frameworks","Data quality assurance","End-to-end reconciliation","Anomaly detection"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:44:00.728Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Bellevue, Washington"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Apache Spark, Databricks Data + AI tools, Cloud providers (AWS, GCP & Azure), CI/CD processes, Test frameworks, Data quality assurance, End-to-end reconciliation, Anomaly detection","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":182400,"maxValue":247000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_c534869a-b22"},"title":"Senior Manager GTM Strategy & Operations (Partners)","description":"<p>The Senior Manager, Partner Strategy &amp; Operations defines the Partner Ecosystem and GTM strategies for SI, Cloud, and ISV channels. 
This role provides the strategic analysis and operational rigor necessary to scale a high-growth GTM business.</p>\n<p>You will partner across Partnerships, Sales, Finance, and Data teams to translate partner performance into actionable insights, resource investments, and long-range planning models.</p>\n<p>This role reports to the Director of Partner Strategy &amp; Operations.</p>\n<p><strong>What You’ll Do</strong></p>\n<p>Combine your strategy and operations toolkit with deep data proficiency to lead major partner initiatives including:</p>\n<ul>\n<li>Define the Partner GTM Strategy - set strategic choices for the SI, Cloud, and ISV ecosystems; establish investment frameworks to ensure resources are deployed to the highest-impact areas.</li>\n</ul>\n<ul>\n<li>Trusted Advisor to Partner Leadership - serve as a strategic partner to Partnerships Leadership by defining, tracking, and implementing goals, programs, and strategies; act as the bridge between executive vision and tactical execution.</li>\n</ul>\n<ul>\n<li>Orchestrate Annual &amp; Long-Range Planning - drive the end-to-end planning process for the partner organization, including revenue modeling, investment ROI analysis, headcount forecasting, and capacity setting.</li>\n</ul>\n<ul>\n<li>Lead Executive Synthesis &amp; AI Insights - develop and deliver high-stakes artifacts including Quarterly Business Reviews (QBRs), investment business cases and strategic narratives for Partnerships leadership using a mix of traditional BI and emerging AI tools to provide deeper signal on ecosystem health</li>\n</ul>\n<ul>\n<li>Instrument the Data Foundation - build and oversee the reporting infrastructure; develop dashboards and key performance indicators (KPIs) to track the health of the business and partner productivity.</li>\n</ul>\n<ul>\n<li>Drive Performance Visibility - manage operational cadences and business reviews, providing clear signals on ecosystem 
health.</li>\n</ul>\n<p><strong>Qualifications</strong></p>\n<ul>\n<li>7+ years in Strategy &amp; Operations, Management Consulting, FP&amp;A, or Business Operations; experience in Enterprise/Mid-Market SaaS preferred.</li>\n</ul>\n<ul>\n<li>Executive Presence &amp; Influence: Proven ability to act as a trusted advisor to senior leadership; comfortable framing complex trade-offs and navigating ambiguity to drive consensus among VPs and cross-functional heads.</li>\n</ul>\n<ul>\n<li>Strategic Storyteller: Exceptional ability to synthesize complex data sets into a &#39;so-what&#39; narrative; highly skilled in creating compelling executive-level presentations and board materials that lead to clear decisions.</li>\n</ul>\n<ul>\n<li>Data &amp; AI Expert: Advanced proficiency in querying and scoping (SQL, Databricks, BigQuery) and an active interest/proficiency in using AI tools to automate GTM operations and insights.</li>\n</ul>\n<ul>\n<li>Process Architect: Ability to envision E2E process changes, document requirements, and guide execution in partnership with technical teams to solve complex organizational needs.</li>\n</ul>\n<ul>\n<li>GTM Tooling Proficiency: Direct experience with BI and sales tools including Salesforce, Tableau, and automated reporting repositories.</li>\n</ul>\n<ul>\n<li>Stakeholder Management: A track record of building deep, collaborative relationships across global GTM teams (Finance, Sales, Marketing) to ensure the partner strategy is seamlessly integrated.</li>\n</ul>\n<p><strong>Success Metrics</strong></p>\n<ul>\n<li>Strategic Alignment: Clearly defined partner goals and programs that align with broader GTM objectives and executive priorities.</li>\n</ul>\n<ul>\n<li>Planning Accuracy: Signal quality of HC forecasting, capacity assumptions, and long-range modeling variance.</li>\n</ul>\n<ul>\n<li>Leadership Trust: Degree of influence on partner investment decisions and the adoption of recommended GTM pivots.</li>\n</ul>\n<ul>\n<li>Ecosystem 
Productivity: Visibility into and optimization of partner-driven ROI, revenue contribution, and partner efficiency.</li>\n</ul>\n<p><strong>How You Operate</strong></p>\n<ul>\n<li>Data-Driven Rigor: Using high-fidelity data and SQL-driven insights to inform every strategic choice.</li>\n</ul>\n<ul>\n<li>High-Stakes Communication: Moving beyond the &#39;what&#39; of the data to the &#39;choices&#39; for leadership; providing crisp, decision-oriented synthesis.</li>\n</ul>\n<ul>\n<li>Strategic Ownership: Taking a first-principles approach to the partner ecosystem, identifying where to play and how to win.</li>\n</ul>\n<ul>\n<li>Collaborative Partnership: A &#39;service-first&#39; mindset, bringing all possible expertise to bear on projects through an expansive internal network.</li>\n</ul>\n<p>Pay Range Transparency</p>\n<p>Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role is listed below and represents the expected salary range for non-commissionable roles or on-target earnings for commissionable roles. Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location. Based on the factors above, Databricks anticipates utilizing the full width of the range. The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above. 
For more information regarding which range your location is in visit our page here.</p>\n<p>Zone 1 Pay Range $191,000-$262,550 USD</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_c534869a-b22","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8450207002","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$191,000-$262,550 USD","x-skills-required":["Strategy & Operations","Management Consulting","FP&A","Business Operations","Enterprise/Mid-Market SaaS","SQL","Databricks","BigQuery","AI tools","GTM operations","insights","process architecture","E2E process changes","document requirements","execution","technical teams","complex organizational needs","BI and sales tools","Salesforce","Tableau","automated reporting repositories","stakeholder management","global GTM teams","Finance","Sales","Marketing"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:44:00.616Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"USCA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Strategy & Operations, Management Consulting, FP&A, Business Operations, Enterprise/Mid-Market SaaS, SQL, Databricks, BigQuery, AI tools, GTM operations, insights, process architecture, E2E process changes, document requirements, execution, technical teams, complex organizational needs, BI and sales tools, Salesforce, Tableau, automated reporting repositories, stakeholder management, global GTM teams, Finance, Sales, 
Marketing","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":191000,"maxValue":262550,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_436a8aa2-6cf"},"title":"Resident Solutions Architect - Digital Native Business","description":"<p>As a Resident Solutions Architect in our Professional Services team, you will work with clients on short to medium term customer engagements solving their big data problems using Apache Spark and the Databricks platform. You will be in a customer-facing role that requires deep hands-on expertise in Apache Spark and data engineering, along with a variety of knowledge of the big data ecosystem.</p>\n<p>You will guide our largest customers, implementing pipelines from data engineering through model building and deployment, plus other technical tasks to help customers to get value out of their data with Databricks.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Guide customers as they implement transformational big data projects, including end-to-end development and deployment of industry-leading big data and AI applications</li>\n<li>Assure that Databricks best practices are being used within all projects and that our quality of service and implementation is strictly followed</li>\n<li>Facilitate technical workshops, discovery and design sessions, customer requirements gathering and scoping for new and existing strategic customers</li>\n<li>Assist the Professional Services leader and project managers with level of effort estimation and mitigation of risk within customer proposals and statements of work</li>\n<li>Architect, design, develop, deploy, operationalize and document complex customer engagements individually or as part of an extended team as the technical lead and overall authority</li>\n<li>Knowledge transfer, enablement and mentoring of other team members, customers and partners, including 
developing reusable project artifacts</li>\n<li>Share expertise with the consulting team and provide client-engagement best practices to other teams</li>\n</ul>\n<p>Requirements:</p>\n<ul>\n<li>4+ years of experience in data engineering, data platforms &amp; analytics</li>\n<li>Working knowledge of two or more common Cloud ecosystems (AWS, Azure, GCP)</li>\n<li>Comfort with object-oriented and functional programming in Scala and Python</li>\n<li>Experience in building scalable streaming and batch solutions using cloud-native components</li>\n<li>Strong knowledge of distributed computing with Apache Spark</li>\n<li>Travel to customers 20% of the time</li>\n<li>Nice to have: Databricks Certification</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_436a8aa2-6cf","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com/","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8488399002","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Apache Spark","Databricks","Data Engineering","Data Science","Cloud Computing","Big Data","AI","Machine Learning"],"x-skills-preferred":["Scala","Python","Java","C++","Cloud-Native Components"],"datePosted":"2026-04-18T15:43:52.936Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Northeast - United States; Southeast - United States; West Coast - United States"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Apache Spark, Databricks, Data Engineering, Data Science, Cloud Computing, Big Data, AI, Machine Learning, Scala, Python, Java, C++, Cloud-Native 
Components"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_8664b981-66c"},"title":"Data Platform Solutions Architect (Professional Services) - Emerging Enterprise & DNB","description":"<p>We&#39;re hiring for multiple roles within our Professional Services team. Depending on experience and scope, this position may be offered as a Senior Solutions Consultant or a Resident Solutions Architect. You may know this role as a Big Data Solutions Architect, Analytics Architect, Data Platform Architect, or Technical Consultant. The final title will align with your experience, technical depth, and customer-facing ownership.</p>\n<p>As a Data Platform Solutions Architect on our Professional Services team for the Emerging Enterprise &amp; Digital Natives business in EMEA, you will work with clients on short to medium-term customer engagements on their big data challenges using the Databricks platform. You will deliver data engineering, data science, and cloud technology projects that require integrating with client systems, training, and other technical tasks to help customers get the most value out of their data.</p>\n<p>RSAs are billable and know how to complete projects according to specification with excellent customer service. 
You will report to the regional Manager/Lead.</p>\n<p>The impact you will have:</p>\n<ul>\n<li>Drive high-impact customer projects: Design and build reference architectures, implement production use cases, and create “how-to” guides tailored to the unique needs of fast-moving Emerging Enterprise &amp; Digital Native customers in EMEA.</li>\n</ul>\n<ul>\n<li>Collaborate on project scoping: Work closely with Engagement Managers and customers to define project scope, schedules, and deliverables for professional services engagements.</li>\n</ul>\n<ul>\n<li>Enable transformational initiatives: Guide strategic customers through their end-to-end big data journeys, migrating from legacy platforms and deploying industry-leading data and AI applications on the Databricks platform.</li>\n</ul>\n<ul>\n<li>Consult on architecture &amp; design: Provide thought leadership on solution design and implementation strategies, ensuring customers can successfully evaluate and adopt Databricks.</li>\n</ul>\n<ul>\n<li>Offer advanced support: Serve as an escalation point for operational issues, collaborating with Databricks Support and Engineering to resolve challenges quickly.</li>\n</ul>\n<ul>\n<li>Align technical delivery: Partner with cross-functional Databricks teams (Technical, PM, Architecture, and Customer Success) to align on milestones, ensuring customer needs and deadlines are met.</li>\n</ul>\n<ul>\n<li>Amplify product feedback: Provide implementation insights to Databricks Product and Support teams, guiding rapid improvements in features and troubleshooting for customers.</li>\n</ul>\n<p>What we look for:</p>\n<ul>\n<li>Extensive experience in data engineering, data platforms &amp; analytics</li>\n</ul>\n<ul>\n<li>Comfortable writing code in either Python or Scala</li>\n</ul>\n<ul>\n<li>Working knowledge of two or more common Cloud ecosystems (AWS, Azure, GCP) with expertise in at least one</li>\n</ul>\n<ul>\n<li>Deep experience with distributed computing with Apache Spark™ and 
knowledge of Spark runtime internals</li>\n</ul>\n<ul>\n<li>Familiarity with CI/CD for production deployments</li>\n</ul>\n<ul>\n<li>Working knowledge of MLOps</li>\n</ul>\n<ul>\n<li>Design and deployment of performant end-to-end data architectures</li>\n</ul>\n<ul>\n<li>Experience with technical project delivery - managing scope and timelines.</li>\n</ul>\n<ul>\n<li>Documentation and white-boarding skills.</li>\n</ul>\n<ul>\n<li>Experience working with clients and managing conflicts.</li>\n</ul>\n<ul>\n<li>Build skills in technical areas which support the deployment and integration of Databricks-based solutions to complete customer projects.</li>\n</ul>\n<ul>\n<li>Travel to customers 10% of the time</li>\n</ul>\n<ul>\n<li>[Preferred] Databricks Certification but not essential</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_8664b981-66c","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8439047002","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["data engineering","data platforms & analytics","Python","Scala","Cloud ecosystems (AWS, Azure, GCP)","Apache Spark","CI/CD for production deployments","MLOps","performant end-to-end data architectures","technical project delivery","documentation and white-boarding skills","client management"],"x-skills-preferred":["Databricks Certification"],"datePosted":"2026-04-18T15:43:52.925Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"London, United Kingdom"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"data engineering, data platforms & analytics, 
Python, Scala, Cloud ecosystems (AWS, Azure, GCP), Apache Spark, CI/CD for production deployments, MLOps, performant end-to-end data architectures, technical project delivery, documentation and white-boarding skills, client management, Databricks Certification"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_542096f5-82b"},"title":"Business Intelligence Manager","description":"<p>As a Business Intelligence Manager, you will play a critical role in building secure, interactive data and AI applications hosted natively on the Databricks platform. You will design, build, and maintain scalable data web applications, AI chatbots, and custom operational interfaces using frameworks like Streamlit, React, and FastAPI. By leveraging Databricks Apps&#39; serverless infrastructure, you will eliminate the need for external hosting and empower business users to make informed decisions by bridging the gap between raw data and solutions using your engineering prowess, Databricks apps, Databricks SQL, Lakebase and AgentBricks.</p>\n<p>The Impact You Will Have:</p>\n<ul>\n<li>Build: You will design and develop robust frontend interfaces and API backends (e.g., FastAPI routing user queries to model-serving endpoints). You will build solutions ranging from data-rich dashboards to enterprise chat solutions powered by the Mosaic AI Agent Framework.</li>\n</ul>\n<ul>\n<li>Architect: You will design secure and scalable application architectures that can satisfy GTM requirements for building custom SaaS applications.</li>\n</ul>\n<ul>\n<li>Scale: You will create scalable applications that seamlessly connect to Databricks SQL via the Statement Execution API or Databricks SDK. 
You will establish CI/CD pipelines using Databricks Asset Bundles (DABs) to automate deployment across development, staging, and production workspaces.</li>\n</ul>\n<p>What we look for:</p>\n<ul>\n<li>You have 5+ years of experience working as a Software Engineer, Data App Developer, or Full-Stack Engineer building interactive web applications.</li>\n</ul>\n<ul>\n<li>You are proficient in Python, DBSQL and/or Node.js. Experience with frameworks like Streamlit, Dash, Flask, FastAPI, React, or Express is required.</li>\n</ul>\n<ul>\n<li>You know the Databricks ecosystem. Familiarity with Unity Catalog, Databricks SQL, Databricks SDK for Python, and Model Serving is highly preferred.</li>\n</ul>\n<ul>\n<li>You have built for scale and security. Experience with CI/CD tools, Infrastructure as Code (specifically Databricks Asset Bundles), and implementing secure OAuth flows.</li>\n</ul>\n<ul>\n<li>You are passionate about applying AI. Experience integrating LLMs or Mosaic AI Agent Frameworks into application backends to deliver intelligent chat and RAG solutions.</li>\n</ul>\n<ul>\n<li>You excel in a collaborative environment. You can translate stakeholder requirements into intuitive user interfaces, working through dependencies and troubleshooting deployment errors or telemetry logs.</li>\n</ul>\n<p>Pay Range Transparency</p>\n<p>Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role is listed below and represents the expected salary range for non-commissionable roles or on-target earnings for commissionable roles. Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location. Based on the factors above, Databricks anticipates utilizing the full width of the range. 
The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_542096f5-82b","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8501030002","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$158,200-$217,450 USD","x-skills-required":["Python","DBSQL","Node.js","Streamlit","React","FastAPI","Unity Catalog","Databricks SQL","Databricks SDK for Python","Model Serving"],"x-skills-preferred":["CI/CD tools","Infrastructure as Code","OAuth flows","LLMs","Mosaic AI Agent Frameworks"],"datePosted":"2026-04-18T15:43:21.206Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"New York; San Francisco, California"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, DBSQL, Node.js, Streamlit, React, FastAPI, Unity Catalog, Databricks SQL, Databricks SDK for Python, Model Serving, CI/CD tools, Infrastructure as Code, OAuth flows, LLMs, Mosaic AI Agent Frameworks","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":158200,"maxValue":217450,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_9ce3bb01-4a1"},"title":"Scale Solutions Engineer","description":"<p>At Databricks, we aim to empower our customers to solve the world&#39;s most challenging data problems using the Data Intelligence platform. 
As a Scale Solutions Engineer, you will play a critical role in advising customers during their onboarding. You will work directly with customers to help them onboard and deploy Databricks in their production environment and accelerate Databricks features adoption.</p>\n<p>The impact you will have:</p>\n<ul>\n<li>Ensure new customers have an excellent experience by providing technical assistance early in their journey</li>\n<li>Become an expert on the Databricks Platform and guide customers in making the best technical decisions</li>\n<li>Work directly with multiple customers concurrently to provide technical solutions</li>\n</ul>\n<p>What we look for:</p>\n<ul>\n<li>Undergraduate degree or higher in Computer Science, Information Systems, or relevant experience</li>\n<li>3+ years experience in a customer-facing technical role in pre-sales, professional services, consulting or customer success</li>\n<li>Experience in one or more of the following:</li>\n</ul>\n<ul>\n<li>Solid understanding of the end-to-end data analytics workflow</li>\n<li>Excellent time management and prioritization skills</li>\n<li>Knowledge of public cloud platforms AWS, Azure or GCP would be a plus</li>\n<li>Knowledge of a programming language - Python, Scala, or SQL</li>\n<li>Hands-on professional or academic experience in one or more of the following:</li>\n</ul>\n<ul>\n<li>Data Engineering technologies (e.g., ETL, DBT, Spark, Airflow)</li>\n<li>Data Warehousing technologies (e.g., SQL, Stored Procedures, Redshift, Snowflake)</li>\n<li>Excellent written and verbal communication, in English and Portuguese</li>\n<li>Bonus - Knowledge of Data Science and Machine Learning (e.g., build and deploy ML Models).</li>\n<li>Databricks certification(s)</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_9ce3bb01-4a1","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8391865002","x-work-arrangement":"onsite","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Databricks","Data Engineering","Data Warehousing","Python","Scala","SQL","AWS","Azure","GCP","ETL","DBT","Spark","Airflow","Redshift","Snowflake","English","Portuguese"],"x-skills-preferred":["Data Science","Machine Learning"],"datePosted":"2026-04-18T15:43:14.531Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Sao Paulo, Brazil"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Databricks, Data Engineering, Data Warehousing, Python, Scala, SQL, AWS, Azure, GCP, ETL, DBT, Spark, Airflow, Redshift, Snowflake, English, Portuguese, Data Science, Machine Learning"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_c7431c7c-8d7"},"title":"AI Engineer - FDE (Forward Deployed Engineer)","description":"<p>We are seeking an AI Engineer - FDE (Forward Deployed Engineer) to join our team. As a member of our highly specialized customer-facing AI team, you will deliver professional services engagements to help our customers build and productionize first-of-its-kind AI applications. 
You will work cross-functionally to shape long-term strategic priorities and initiatives alongside engineering, product, and developer relations, as well as support internal subject matter expert (SME) teams.</p>\n<p>The impact you will have:</p>\n<ul>\n<li>Develop cutting-edge GenAI solutions, incorporating the latest techniques from our Mosaic AI Research to solve customer problems.</li>\n<li>Own production rollouts of consumer and internally facing GenAI applications.</li>\n<li>Serve as a trusted technical advisor to customers across a variety of domains.</li>\n<li>Present at conferences such as Data + AI Summit, recognized as a thought leader internally and externally.</li>\n<li>Collaborate cross-functionally with the product and engineering teams to influence priorities and shape the product roadmap.</li>\n</ul>\n<p>What we look for:</p>\n<ul>\n<li>Experience building GenAI applications, including RAG, multi-agent systems, Text2SQL, fine-tuning, etc., with tools such as HuggingFace, LangChain, and DSPy.</li>\n<li>Expertise in deploying production-grade GenAI applications, including evaluation and optimizations.</li>\n<li>Extensive hands-on industry data science experience, leveraging common machine learning and data science tools (e.g., pandas, scikit-learn, PyTorch).</li>\n<li>Experience building production-grade machine learning deployments on AWS, Azure, or GCP.</li>\n<li>Graduate degree in a quantitative discipline (Computer Science, Engineering, Statistics, Operations Research, etc.) 
or equivalent practical experience.</li>\n<li>Experience communicating and/or teaching technical concepts to non-technical and technical audiences alike.</li>\n<li>Passion for collaboration, life-long learning, and driving business value through AI.</li>\n<li>Preferred experience using the Databricks Intelligence Platform and Apache Spark™ to process large-scale distributed datasets.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_c7431c7c-8d7","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8335860002","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$180,656-$248,360 USD","x-skills-required":["GenAI","HuggingFace","LangChain","DSPy","pandas","scikit-learn","PyTorch","AWS","Azure","GCP","Apache Spark"],"x-skills-preferred":["Databricks Intelligence Platform"],"datePosted":"2026-04-18T15:42:34.365Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"United States"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"GenAI, HuggingFace, LangChain, DSPy, pandas, scikit-learn, PyTorch, AWS, Azure, GCP, Apache Spark, Databricks Intelligence Platform","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":180656,"maxValue":248360,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_e5736c99-e3e"},"title":"Staff Software Engineer - Backend","description":"<p>At Databricks, we are building and running the world&#39;s best data and AI infrastructure platform 
so our customers can use deep data insights to improve their business.</p>\n<p>As a software engineer with a backend focus, you will work with your team to build infrastructure and products for the Databricks platform at scale.</p>\n<p>Our backend teams span many domains across our essential service platforms. You might work on challenges such as:</p>\n<ul>\n<li>Distributed systems, at-scale service architecture and monitoring, workflow orchestration, and developer experience.</li>\n<li>Delivering reliable and high-performance services and client libraries for storing and accessing humongous amounts of data on cloud storage backends, e.g., AWS S3, Azure Blob Store.</li>\n<li>Building reliable, scalable services, e.g., Scala, Kubernetes, and data pipelines, e.g., Spark, Databricks, to power the pricing infrastructure that serves millions of cluster-hours per day and develop product features that empower customers to easily view and control platform usage.</li>\n</ul>\n<p>We look for:</p>\n<ul>\n<li>A Bachelor&#39;s degree (or higher) in Computer Science, or a related field.</li>\n<li>7+ years of production-level experience in one of: Java, Scala, C++, or similar languages.</li>\n<li>Experience developing large-scale distributed systems.</li>\n<li>Experience working on a SaaS platform or with Service-Oriented Architectures.</li>\n<li>Good knowledge of SQL.</li>\n</ul>\n<p>Benefits</p>\n<p>At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of our employees. For specific details on the benefits offered in your region, click here.</p>\n<p>Our Commitment to Diversity and Inclusion</p>\n<p>At Databricks, we are committed to fostering a diverse and inclusive culture where everyone can excel. 
We take great care to ensure that our hiring practices are inclusive and meet equal employment opportunity standards.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_e5736c99-e3e","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com/","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8029674002","x-work-arrangement":"onsite","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Java","Scala","C++","SQL","AWS S3","Azure Blob Store","Kubernetes","Spark","Databricks"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:42:02.811Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Amsterdam, Netherlands"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Java, Scala, C++, SQL, AWS S3, Azure Blob Store, Kubernetes, Spark, Databricks"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_7596a97b-13f"},"title":"Staff Software Engineer - Backend","description":"<p>We are seeking a Staff Software Engineer to join our team in Bengaluru, India. As a Staff Software Engineer, you will be responsible for designing, developing, and maintaining large-scale distributed systems. 
You will work closely with our team and product management to bring great user experiences to our customers.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Design and develop reliable and high-performance services and client libraries for storing and accessing humongous amounts of data on cloud storage backends, e.g., AWS S3, Azure Blob Store.</li>\n<li>Build reliable, scalable services, e.g., Scala, Kubernetes, and data pipelines, e.g., Apache Spark, Databricks, to power the pricing infrastructure that serves millions of cluster-hours per day.</li>\n<li>Develop product features that empower customers to easily view and control platform usage.</li>\n</ul>\n<p>Requirements:</p>\n<ul>\n<li>12+ years of production-level experience in one of: Python, Java, Scala, C++, or similar languages.</li>\n<li>BS (or higher) in Computer Science, or a related field.</li>\n<li>Experience developing large-scale distributed systems from scratch.</li>\n<li>Experience working on a SaaS platform or with Service-Oriented Architectures.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_7596a97b-13f","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com/","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8320187002","x-work-arrangement":"onsite","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Python","Java","Scala","C++","AWS S3","Azure Blob Store","Apache Spark","Databricks","Kubernetes","Service-Oriented Architectures"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:41:45.514Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Bengaluru, 
India"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, Java, Scala, C++, AWS S3, Azure Blob Store, Apache Spark, Databricks, Kubernetes, Service-Oriented Architectures"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_c7179545-496"},"title":"Resident Solutions Architect (Professional Services)","description":"<p>We&#39;re hiring for multiple roles within our Professional Services team. As a Resident Solutions Architect, you will work with clients on short to medium-term customer engagements on their big data challenges using the Databricks platform.</p>\n<p>You will provide data engineering, data science, and cloud technology projects which require integrating with client systems, training, and other technical tasks to help customers get the most value out of their data.</p>\n<p>RSAs are billable and know how to complete projects according to specification with excellent customer service.</p>\n<p>The impact you will have:</p>\n<ul>\n<li>Work on a variety of impactful customer technical projects which may include designing and building reference architectures, creating how-to&#39;s and productionalizing customer use cases</li>\n<li>Guide strategic customers as they implement transformational big data projects, 3rd party migrations, including end-to-end design, build and deployment of industry-leading big data and AI applications</li>\n<li>Consult on architecture and design; bootstrap or implement customer projects which lead to a customer&#39;s successful understanding, evaluation and adoption of Databricks</li>\n<li>Provide an escalated level of support for customer operational issues</li>\n<li>Work with the Databricks technical team, Project Manager, Architect and Customer team to ensure the technical components of the engagement are delivered to meet customer&#39;s 
needs</li>\n</ul>\n<p>What we look for:</p>\n<ul>\n<li>Extensive experience in data engineering, data platforms &amp; analytics</li>\n<li>Comfortable writing code in either Python or Scala</li>\n<li>Working knowledge of two or more common Cloud ecosystems (AWS, Azure, GCP) with expertise in at least one</li>\n<li>Deep experience with distributed computing with Apache Spark™ and knowledge of Spark runtime internals</li>\n<li>Familiarity with CI/CD for production deployments</li>\n<li>Working knowledge of MLOps</li>\n<li>Design and deployment of performant end-to-end data architectures</li>\n<li>Experience with technical project delivery - managing scope and timelines</li>\n<li>Documentation and white-boarding skills</li>\n<li>Experience working with clients and managing conflicts</li>\n<li>Build skills in technical areas which support the deployment and integration of Databricks-based solutions to complete customer projects</li>\n<li>Travel to customers 10% of the time</li>\n<li>[Preferred] Databricks Certification</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_c7179545-496","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8367942002","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["data engineering","data science","cloud technology","Apache Spark","CI/CD","MLOps","technical project delivery","documentation","white-boarding"],"x-skills-preferred":["Databricks 
Certification"],"datePosted":"2026-04-18T15:41:31.345Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"London, United Kingdom"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"data engineering, data science, cloud technology, Apache Spark, CI/CD, MLOps, technical project delivery, documentation, white-boarding, Databricks Certification"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_d0793a44-d91"},"title":"Resident Solutions Architect - Financial Services","description":"<p>As a Senior Big Data Solutions Architect (Sr Resident Solutions Architect) in our Professional Services team, you will work with clients on short to medium term customer engagements on their big data challenges using the Databricks platform.</p>\n<p>You will provide data engineering, data science, and cloud technology projects which require integrating with client systems, training, and other technical tasks to help customers get the most value out of their data.</p>\n<p>RSAs are billable and know how to complete projects according to specification with excellent customer service.</p>\n<p>You will report to the regional Manager/Lead.</p>\n<p>The impact you will have:</p>\n<ul>\n<li>Work on a variety of impactful customer technical projects which may include designing and building reference architectures, creating how-to&#39;s and productionalizing customer use cases</li>\n<li>Work with engagement managers to scope a variety of professional services work with input from the customer</li>\n<li>Guide strategic customers as they implement transformational big data projects, 3rd party migrations, including end-to-end design, build and deployment of industry-leading big data and AI applications</li>\n<li>Consult on architecture and design; bootstrap hands-on projects which lead to a customer&#39;s successful 
understanding, evaluation and adoption of Databricks.</li>\n<li>Provide an escalated level of support for customer operational issues.</li>\n<li>Work with the Databricks technical team, Project Manager, Architect and Customer team to ensure the technical components of the engagement are delivered to meet customer&#39;s needs.</li>\n<li>Work with Engineering and Databricks Customer Support to provide product and implementation feedback and to guide rapid resolution for engagement-specific product and support issues.</li>\n</ul>\n<p>What we look for:</p>\n<ul>\n<li>9+ years of experience in data engineering, data platforms &amp; analytics</li>\n<li>Comfortable writing code in either Python or Scala</li>\n<li>Working knowledge of two or more common Cloud ecosystems (AWS, Azure, GCP) with expertise in at least one</li>\n<li>Deep experience with distributed computing with Apache Spark™ and knowledge of Apache Spark™ runtime internals</li>\n<li>Familiarity with CI/CD for production deployments</li>\n<li>Working knowledge of MLOps</li>\n<li>Capable of design and deployment of highly performant end-to-end data architectures</li>\n<li>Experience with technical project delivery - managing scope and timelines.</li>\n<li>Documentation and white-boarding skills.</li>\n<li>Experience working with clients and managing conflicts.</li>\n<li>Experience in building scalable streaming and batch solutions using cloud-native components</li>\n<li>Travel to customers up to 20% of the time</li>\n</ul>\n<p>Nice to have:</p>\n<ul>\n<li>Databricks Certification</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_d0793a44-d91","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8461328002","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$180,656-$248,360 USD","x-skills-required":["data engineering","data platforms & analytics","Python","Scala","Cloud ecosystems (AWS, Azure, GCP)","Apache Spark","CI/CD for production deployments","MLOps","end-to-end data architectures","technical project delivery","documentation and white-boarding skills","client management"],"x-skills-preferred":["Databricks Certification"],"datePosted":"2026-04-18T15:41:30.682Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Charlotte, North Carolina"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"data engineering, data platforms & analytics, Python, Scala, Cloud ecosystems (AWS, Azure, GCP), Apache Spark, CI/CD for production deployments, MLOps, end-to-end data architectures, technical project delivery, documentation and white-boarding skills, client management, Databricks Certification","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":180656,"maxValue":248360,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_5fd85b1e-563"},"title":"Resident Solutions Architect - Financial Services","description":"<p>As a Senior Big Data Solutions Architect (Sr Resident Solutions Architect) in our Professional Services team, you will work with clients on short to medium term customer engagements on their big data challenges using the Databricks platform.</p>\n<p>You will 
provide data engineering, data science, and cloud technology projects which require integrating with client systems, training, and other technical tasks to help customers get the most value out of their data.</p>\n<p>RSAs are billable and know how to complete projects according to specification with excellent customer service.</p>\n<p>You will report to the regional Manager/Lead.</p>\n<p>The impact you will have:</p>\n<ul>\n<li>Work on a variety of impactful customer technical projects which may include designing and building reference architectures, creating how-to&#39;s and productionalizing customer use cases</li>\n<li>Work with engagement managers to scope a variety of professional services work with input from the customer</li>\n<li>Guide strategic customers as they implement transformational big data projects, 3rd party migrations, including end-to-end design, build and deployment of industry-leading big data and AI applications</li>\n<li>Consult on architecture and design; bootstrap hands-on projects which lead to a customer&#39;s successful understanding, evaluation and adoption of Databricks.</li>\n<li>Provide an escalated level of support for customer operational issues.</li>\n<li>Work with the Databricks technical team, Project Manager, Architect and Customer team to ensure the technical components of the engagement are delivered to meet customer&#39;s needs.</li>\n<li>Work with Engineering and Databricks Customer Support to provide product and implementation feedback and to guide rapid resolution for engagement-specific product and support issues.</li>\n</ul>\n<p>What we look for:</p>\n<ul>\n<li>9+ years of experience in data engineering, data platforms &amp; analytics</li>\n<li>Comfortable writing code in either Python or Scala</li>\n<li>Working knowledge of two or more common Cloud ecosystems (AWS, Azure, GCP) with expertise in at least one</li>\n<li>Deep experience with distributed computing with Apache Spark™ and knowledge of Apache Spark™ runtime 
internals</li>\n<li>Familiarity with CI/CD for production deployments</li>\n<li>Working knowledge of MLOps</li>\n<li>Capable of design and deployment of highly performant end-to-end data architectures</li>\n<li>Experience with technical project delivery - managing scope and timelines.</li>\n<li>Documentation and white-boarding skills.</li>\n<li>Experience working with clients and managing conflicts.</li>\n<li>Experience in building scalable streaming and batch solutions using cloud-native components</li>\n<li>Travel to customers up to 20% of the time</li>\n<li>Nice to have: Databricks Certification</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_5fd85b1e-563","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8456965002","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$180,656-$248,360 USD","x-skills-required":["data engineering","data platforms & analytics","Python","Scala","Cloud ecosystems (AWS, Azure, GCP)","Apache Spark","CI/CD for production deployments","MLOps","design and deployment of highly performant end-to-end data architectures","technical project delivery","documentation and white-boarding skills","client management"],"x-skills-preferred":["Databricks Certification"],"datePosted":"2026-04-18T15:41:28.459Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Dallas, Texas"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"data engineering, data platforms & analytics, Python, Scala, Cloud ecosystems (AWS, Azure, GCP), Apache Spark, CI/CD for production deployments, MLOps, design and deployment of highly 
performant end-to-end data architectures, technical project delivery, documentation and white-boarding skills, client management, Databricks Certification","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":180656,"maxValue":248360,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_cece3778-5b8"},"title":"Finance Systems Integration Engineer","description":"<p>We are seeking an experienced Finance Systems Integration Engineer to support our finance systems transformation at one of the fastest-growing AI companies. You&#39;ll design and build integrations connecting our ERP platform with critical financial applications and support our ERP implementation initiatives.</p>\n<p>As you master our integration landscape, you&#39;ll have opportunities to expand into Claude-powered AI automation and data pipeline development.</p>\n<p>You&#39;ll build the integration backbone for one of the fastest-growing AI companies, with a front-row seat to how Claude transforms financial operations. 
This is a foundational role where you&#39;ll shape our integration architecture from the ground up, then expand into cutting-edge AI automation as our needs evolve.</p>\n<p><strong>Responsibilities</strong></p>\n<p><strong>Core Focus: Integration Development &amp; ERP Support</strong></p>\n<ul>\n<li>Design, build, and maintain integrations connecting ERP systems with downstream applications including ZipHQ, Brex, Navan, Clearwater, Payroll systems, Salesforce, and other critical financial platforms using Workato, MuleSoft, or similar iPaaS solutions</li>\n<li>Support integration development and testing during the ERP implementation projects</li>\n<li>Develop and maintain REST APIs, webhooks, and OAuth 2.0 authentication flows for secure system-to-system communication</li>\n<li>Implement real-time and batch integration patterns supporting high-volume financial transactions</li>\n<li>Establish monitoring, alerting, and error-handling frameworks to ensure integration reliability and data integrity</li>\n<li>Document integration architectures, data flows, API specifications, and troubleshooting procedures</li>\n<li>Collaborate with implementation consulting partners and vendors on technical integration requirements</li>\n</ul>\n<p><strong>Additional Scope: AI Automation &amp; Data Infrastructure</strong></p>\n<ul>\n<li>Build and deploy Claude-powered AI agents that automate financial operations including intelligent document processing, workflow automation, financial audit and reconciliations, and self-service reporting</li>\n<li>Design agentic workflows that leverage Claude API capabilities integrated with ERP platform data and processes</li>\n<li>Create automated validation and quality assurance processes for AI-generated outputs</li>\n<li>Partner with Finance teams to identify automation opportunities and translate requirements into AI agent 
solutions</li>\n<li>Support data pipeline development using Airflow for workflow orchestration and dbt for data transformation</li>\n<li>Build and maintain data flows from ERP and other financial systems into BigQuery for analytics and reporting</li>\n<li>Implement data quality checks and testing frameworks for financial data pipelines</li>\n<li>Collaborate with Data Infrastructure team on pipeline architecture, performance optimization, and security monitoring</li>\n<li>Support executive dashboards and financial analytics by ensuring timely, accurate data delivery</li>\n</ul>\n<p><strong>Governance &amp; Collaboration</strong></p>\n<ul>\n<li>Maintain comprehensive documentation for integrations, AI agents, and data pipelines</li>\n<li>Support internal and external audits with technical evidence and system access reviews</li>\n<li>Collaborate with Finance Systems Engineers on operational support, troubleshooting, and enhancement requests</li>\n<li>Partner with Finance Operations, Accounting, FP&amp;A, Engineering, and Data Infrastructure teams to deliver holistic solutions</li>\n</ul>\n<p><strong>Requirements</strong></p>\n<ul>\n<li>8+ years of experience in integration development, data engineering, or systems engineering roles</li>\n<li>Hands-on experience with iPaaS platforms such as Workato, MuleSoft, Dell Boomi, or similar integration tools</li>\n<li>Strong programming skills in Python and/or JavaScript/TypeScript for building custom integrations, APIs, and automation scripts</li>\n<li>Experience with data pipeline tools including Airflow for orchestration and dbt for transformation</li>\n<li>Working knowledge of cloud data platforms such as BigQuery, Snowflake, or Databricks</li>\n<li>Understanding of REST API design patterns, webhooks, OAuth 2.0, and modern integration 
architectures</li>\n<li>Familiarity with ERP systems (Oracle Fusion, Workday Financials, or similar) and financial business processes</li>\n<li>Strong problem-solving skills with ability to debug complex integration issues across multiple systems</li>\n<li>Excellent communication skills to collaborate with technical and business stakeholders</li>\n</ul>\n<p><strong>Preferred Qualifications</strong></p>\n<ul>\n<li>Experience with high-growth technology companies scaling through rapid revenue expansion (5x-10x growth)</li>\n<li>Background in AI/ML companies with familiarity in modern SaaS business models including consumption-based pricing, usage metering platforms, and marketplace billing</li>\n<li>Hands-on experience with specific platforms: Workday Financials (Workday Studio, EIB, custom reports, Prism Analytics)</li>\n<li>Technical expertise with modern finance tech stack including Stripe, Salesforce, Zuora RevPro, Zip Procurement, Clearwater treasury systems, Pigment planning tools, Numeric close management</li>\n<li>Programming skills in Python / JavaScript, or similar languages for building custom integrations, APIs, and automation scripts</li>\n<li>Experience with AI/LLM integration for financial operations, including document processing, data extraction, intelligent automation, and agentic workflows (familiarity with Claude models and API is a plus)</li>\n<li>Hands-on experience with modern data stack tools: BigQuery/Snowflake/Databricks, dbt for data transformation, Airflow for workflow orchestration</li>\n<li>Professional certifications such as Workato, Workday integrations, or relevant technical credentials</li>\n<li>Bachelor&#39;s or Master&#39;s degree in Computer Science, Information Systems, Accounting, Finance, Engineering, or related technical/business field</li>\n<li>Experience with business 
intelligence and financial reporting tools (Hex, Looker, Tableau, Power BI) for executive dashboards and financial analytics</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_cece3778-5b8","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://anthropic.com","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5155195008","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$205,000-$265,000 USD","x-skills-required":["integration development","data engineering","systems engineering","iPaaS platforms","Python","JavaScript/TypeScript","Airflow","dbt","BigQuery","Snowflake","Databricks","REST API design patterns","webhooks","OAuth 2.0","modern integration architectures","ERP systems","financial business processes"],"x-skills-preferred":["high-growth technology companies","AI/ML companies","SaaS business models","consumption-based pricing","usage metering platforms","marketplace billing","Workday Financials","Stripe","Salesforce","Zuora RevPro","Zip Procurement","Clearwater treasury systems","Pigment planning tools","Numeric close management","Python/JavaScript","AI/LLM integration","document processing","data extraction","intelligent automation","agentic workflows","Claude models","API","BigQuery/Snowflake/Databricks","professional certifications","Workato","Workday integrations","technical credentials","Computer Science","Information Systems","Accounting","Finance","Engineering","business intelligence","financial reporting tools"],"datePosted":"2026-04-18T15:39:50.764Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA | Seattle, 
WA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"integration development, data engineering, systems engineering, iPaaS platforms, Python, JavaScript/TypeScript, Airflow, dbt, BigQuery, Snowflake, Databricks, REST API design patterns, webhooks, OAuth 2.0, modern integration architectures, ERP systems, financial business processes, high-growth technology companies, AI/ML companies, SaaS business models, consumption-based pricing, usage metering platforms, marketplace billing, Workday Financials, Stripe, Salesforce, Zuora RevPro, Zip Procurement, Clearwater treasury systems, Pigment planning tools, Numeric close management, Python/JavaScript, AI/LLM integration, document processing, data extraction, intelligent automation, agentic workflows, Claude models, API, BigQuery/Snowflake/Databricks, professional certifications, Workato, Workday integrations, technical credentials, Computer Science, Information Systems, Accounting, Finance, Engineering, business intelligence, financial reporting tools","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":205000,"maxValue":265000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_4475ebe1-e8a"},"title":"Data Engineering Intern","description":"<p>We&#39;re seeking a motivated and curious Data Engineering Intern to join our Data Platform team. This internship offers a unique opportunity to gain hands-on experience building and maintaining real data infrastructure within a fast-growing fintech environment.</p>\n<p>As a Data Engineering Intern, you&#39;ll collaborate on thoughtful projects and bring your fresh perspectives to impact our product and families. 
</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Assist in building and maintaining data pipelines using Airflow to orchestrate workflows that ingest, transform, and deliver data into Snowflake and Databricks</li>\n<li>Support the design and implementation of data models in Snowflake that serve analytics, reporting, and ML use cases</li>\n<li>Help develop and maintain transformation logic using dbt, including writing models, tests, and documentation</li>\n<li>Contribute to data quality checks and validation processes to ensure accuracy, completeness, and timeliness of data</li>\n<li>Assist with infrastructure automation using Terraform to manage cloud resources in AWS</li>\n<li>Participate in troubleshooting data pipeline issues and investigating root causes alongside senior engineers</li>\n<li>Collaborate with data analysts, analytics engineers, and business stakeholders to understand requirements and contribute to technical solutions</li>\n<li>Help create and maintain documentation for data pipelines, data models, and infrastructure processes</li>\n<li>Participate in code reviews to develop best practices and learn from the team</li>\n</ul>\n<p>Requirements:</p>\n<ul>\n<li>A 3.0 GPA or higher</li>\n<li>Currently pursuing a Bachelor&#39;s or Master&#39;s degree in Computer Science, Data Science, Information Technology, or a related field</li>\n<li>Basic understanding of SQL and comfort manipulating data</li>\n<li>Interest in data engineering, data infrastructure, and/or analytics engineering</li>\n<li>Familiarity with Python for scripting, data processing, or automation (preferred)</li>\n<li>Basic understanding of cloud platforms, particularly AWS, is a plus</li>\n<li>Strong analytical thinking and problem-solving skills, especially comfort working through ambiguity</li>\n<li>Good communication and 
collaboration skills; able to work cross-functionally with technical and non-technical teammates</li>\n<li>Eagerness to take ownership of your work and ask thoughtful questions</li>\n</ul>\n<p>Learning Opportunities:</p>\n<ul>\n<li><p>You&#39;ll have the opportunity to gain experience with technologies including:</p>\n<ul>\n<li>Snowflake</li>\n<li>dbt (data build tool)</li>\n<li>Apache Airflow</li>\n<li>AWS (S3, Lambda, EC2, IAM)</li>\n<li>Databricks</li>\n<li>Terraform</li>\n<li>Fivetran</li>\n<li>Segment</li>\n</ul>\n</li>\n</ul>\n<p>This internship provides an excellent foundation for a career in data engineering, analytics engineering, or data architecture within the fintech industry.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_4475ebe1-e8a","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Greenlight","sameAs":"https://www.greenlight.com/","logo":"https://logos.yubhub.co/greenlight.com.png"},"x-apply-url":"https://jobs.lever.co/greenlight/b5d9d9b2-9d06-4db7-932c-30fd4a43825d","x-work-arrangement":"hybrid","x-experience-level":"intern","x-job-type":"internship","x-salary-range":null,"x-skills-required":["SQL","Python","Airflow","Snowflake","dbt","AWS","Databricks","Terraform","Fivetran","Segment"],"x-skills-preferred":[],"datePosted":"2026-04-17T12:36:54.798Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Atlanta"}},"employmentType":"INTERN","occupationalCategory":"Engineering","industry":"Finance","skills":"SQL, Python, Airflow, Snowflake, dbt, AWS, Databricks, Terraform, Fivetran, Segment"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_58df2f04-af4"},"title":"Data Engineer","description":"<p>We are looking for a Data Engineer to join our Data Platform team to partner with our product and business 
stakeholders across risk, operations, and other domains. As a Data Engineer, you will be responsible for building robust data pipelines and engineering foundations by ingesting data from disparate sources, ensuring data quality and consistency, and enabling better business decisions through reliable data infrastructure across core product areas.</p>\n<p>Your primary focus will be on building scalable data pipelines using Airflow to orchestrate data workflows that ingest, transform, and deliver data from various sources into Snowflake and Databricks. You will also design and implement data models in Snowflake that support analytics, reporting, and ML use cases with a focus on performance, reliability, and scalability.</p>\n<p>In addition, you will develop infrastructure as code using Terraform to automate and manage cloud resources in AWS, ensuring consistent and reproducible deployments. You will monitor data pipeline health and implement data quality checks to ensure accuracy, completeness, and timeliness of data as business needs evolve.</p>\n<p>You will also optimize data processing workflows to improve performance, reduce costs, and handle growing data volumes efficiently. Troubleshooting and resolving data pipeline issues, working through ambiguity to get to the root cause and implementing long-term fixes will be a key part of your role.</p>\n<p>As a Data Engineer, you will bridge gaps between data and the business by working with cross-functional teams across the US and India office to understand requirements and translate them into robust technical solutions. 
You will create comprehensive documentation on data pipelines, data models, and infrastructure, keeping documentation up to date and facilitating knowledge transfer across the team.</p>\n<p><strong>Requirements:</strong></p>\n<ul>\n<li>2+ years of data engineering experience with strong technical skills and the ability to architect scalable data solutions.</li>\n</ul>\n<ul>\n<li>Hands-on experience with Python for data processing, automation, and building data pipelines.</li>\n</ul>\n<ul>\n<li>Proficiency with workflow orchestration tools, preferably Airflow, including DAG development, task dependencies, and monitoring.</li>\n</ul>\n<ul>\n<li>Strong SQL skills and experience with cloud data warehouses like Snowflake, including performance optimization and data modeling.</li>\n</ul>\n<ul>\n<li>Experience with cloud platforms, preferably AWS (S3, Lambda, EC2, IAM, etc.), and understanding of cloud-based data architectures.</li>\n</ul>\n<ul>\n<li>Experience working cross-functionally with data analysts, analytics engineers, data scientists, and business stakeholders to understand requirements and deliver solutions.</li>\n</ul>\n<ul>\n<li>An ownership mentality – this engineer will be responsible for the reliability and performance of their data pipelines and expected to fully understand data flows, dependencies, and their implications on downstream users.</li>\n</ul>\n<p><strong>Nice to have:</strong></p>\n<ul>\n<li>Experience with dbt for transformation logic and analytics engineering workflows integrated with data pipelines.</li>\n</ul>\n<ul>\n<li>Familiarity with Databricks for large-scale data processing, including Spark optimization and Delta Lake.</li>\n</ul>\n<ul>\n<li>Experience with Infrastructure as Code (IaC) tools like Terraform for managing cloud resources and data infrastructure.</li>\n</ul>\n<ul>\n<li>Knowledge of data modeling concepts (e.g., dimensional modeling, star/snowflake schemas, slowly changing dimensions).</li>\n</ul>\n<ul>\n<li>Experience 
with CI/CD practices for data pipelines and automated testing frameworks.</li>\n</ul>\n<ul>\n<li>Experience with streaming data and real-time processing frameworks</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_58df2f04-af4","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Greenlight","sameAs":"https://www.greenlight.com/","logo":"https://logos.yubhub.co/greenlight.com.png"},"x-apply-url":"https://jobs.lever.co/greenlight/e98d9733-8b8c-4ce4-997d-6cf14e35b2f3","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Airflow","Python","SQL","Snowflake","Databricks","AWS","Terraform","data engineering","data pipelines","data modeling"],"x-skills-preferred":["dbt","Infrastructure as Code","CI/CD","streaming data","real-time processing"],"datePosted":"2026-04-17T12:36:30.660Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Bengaluru"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Finance","skills":"Airflow, Python, SQL, Snowflake, Databricks, AWS, Terraform, data engineering, data pipelines, data modeling, dbt, Infrastructure as Code, CI/CD, streaming data, real-time processing"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_3d849fbc-058"},"title":"Member of Product, Data Platform","description":"<p>At Anchorage Digital, we are building the world’s most advanced digital asset platform for institutions to participate in crypto.</p>\n<p>The Data Platform team is the backbone of Anchorage Digital&#39;s information infrastructure. 
As data becomes the lifeblood of every product, compliance workflow, and client-facing report we produce, this team is responsible for building and operating a unified, scalable, and reliable data platform that serves the entire organization.</p>\n<p>As a Data Platform Product Manager, you will own the strategy and execution for centralizing and formalizing the company&#39;s data infrastructure, spanning internal operational data, transaction and blockchain data, customer data, and external data sources.</p>\n<p>Your mission is to transform a fragmented data landscape into a single source of truth that powers mission-critical reporting, business insights, and downstream product experiences across every team at Anchorage.</p>\n<p>This is a force-multiplier role. Your work will elevate the quality, speed, and reliability of every product and team at the company.</p>\n<p>You will define the standards, build the platform, and create the foundation that enables Anchorage to scale with confidence.</p>\n<p>If you thrive at the intersection of complex data systems, cross-functional influence, and platform thinking, this is your opportunity to have outsized impact at a category-defining company in digital assets.</p>\n<p>Below, we define our Factors of Growth &amp; Impact to help Anchorage Villagers measure their impact and articulate feedback, coaching, and the rich learning that happens while exploring, developing, and mastering capabilities within and beyond the Member of Product, Data Platform role:</p>\n<p><strong>Technical Skills:</strong></p>\n<ul>\n<li>Own the detailed prioritization of the data platform roadmap, balancing foundational infrastructure work, new capabilities, and technical debt.</li>\n<li>Demonstrate deep strategic thinking in shaping the platform roadmap, considering the unique data challenges of digital assets, blockchain protocols, and regulated financial services.</li>\n<li>Deliver complex, cross-functional projects with multiple dependencies 
across engineering, analytics, compliance, and operations teams.</li>\n<li>Work closely with engineering and data science counterparts to drive product development processes, sprint planning, and architectural decisions.</li>\n<li>Ability to understand and reason about system architecture, including data warehousing, ETL/ELT pipelines, streaming vs. batch processing, and modern data stack components, and communicate clear requirements to engineering.</li>\n<li>Drive comprehensive go-to-market strategy for internal platform adoption, including defining success metrics, tracking KPIs around data quality and platform usage, and iterating based on data-driven insights.</li>\n</ul>\n<p><strong>Complexity and Impact of Work:</strong></p>\n<ul>\n<li>Lead and influence cross-functional teams while maintaining strong stakeholder relationships across the entire organization, from engineering to finance to compliance.</li>\n<li>Exercise independent decision-making and take full ownership of data platform strategy and execution.</li>\n<li>Contribute strategic insights that significantly impact company direction, operational efficiency, and product quality.</li>\n<li>Demonstrate platform leadership that elevates the performance and effectiveness of every team that depends on data.</li>\n</ul>\n<p><strong>Organizational Knowledge:</strong></p>\n<ul>\n<li>Develop deep understanding of Anchorage&#39;s business model, product suite, regulatory environment, and organizational structure.</li>\n<li>Build and maintain strong relationships with stakeholders across all departments to ensure the data platform serves the company&#39;s most critical needs.</li>\n<li>Navigate and improve organizational data practices to enhance efficiency, compliance, and decision-making.</li>\n<li>Drive company objectives through strategic data platform decisions and initiatives.</li>\n</ul>\n<p><strong>Communication and Influence:</strong></p>\n<ul>\n<li>Effectively influence and motivate teams across 
the organization to adopt platform standards and invest in data quality, even when those teams do not report to you.</li>\n<li>Enable cross-functional collaboration through clear, consistent communication about platform capabilities, timelines, and data governance expectations.</li>\n<li>Act as a thoughtful knowledge partner to senior leadership, translating complex data infrastructure topics into clear business impact.</li>\n<li>Proactively communicate platform goals, status updates, and data health metrics throughout the organization.</li>\n</ul>\n<p><strong>You may be a fit for this role if you:</strong></p>\n<ul>\n<li>5+ years of product management experience, with significant time spent on data platforms, data infrastructure, or data-intensive enterprise products.</li>\n<li>Proven experience building or scaling enterprise data platforms, including data warehousing, data lakes, ETL/ELT pipelines, or modern data stack tooling (e.g., Snowflake, Databricks, dbt, Airflow, Spark).</li>\n<li>Strong understanding of data modeling, data governance, and data quality frameworks.</li>\n<li>Experience working with diverse data types, including transactional data, customer data, financial data, and ideally blockchain or on-chain data.</li>\n<li>Track record of driving cross-functional alignment and adoption for internal platform products where you must influence without direct authority.</li>\n<li>Exceptional written and verbal communication skills, with the ability to convey complex data architecture concepts to both technical and non-technical audiences.</li>\n<li>Your empathy and adaptability not only complement others&#39; working styles but also embody our culture of curiosity, creativity, and shared understanding.</li>\n<li>You self-describe as some combination of the following: creative, humble, ambitious, detail-oriented, hardworking, trustworthy, eager to learn, methodical, action-oriented, and tenacious.</li>\n</ul>\n<p><strong>Although not a requirement, bonus 
points if you have:</strong></p>\n<ul>\n<li>You have hands-on experience with blockchain data indexing, onchain analytics, or crypto-native data infrastructure.</li>\n<li>You have built data platforms that serve both internal analytics consumers and external client-facing products (reports, statements, dashboards).</li>\n<li>You have experience supporting clients with data-related issues or concerns.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_3d849fbc-058","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anchorage Digital","sameAs":"https://anchorage.com","logo":"https://logos.yubhub.co/anchorage.com.png"},"x-apply-url":"https://jobs.lever.co/anchorage/0e730f61-a2e4-4152-8277-3f6383cc69a6","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["data platforms","data infrastructure","data-intensive enterprise products","data warehousing","data lakes","ETL/ELT pipelines","modern data stack tooling","Snowflake","Databricks","dbt","Airflow","Spark","data modeling","data governance","data quality frameworks","blockchain or on-chain data"],"x-skills-preferred":[],"datePosted":"2026-04-17T12:18:21.529Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"United States"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"data platforms, data infrastructure, data-intensive enterprise products, data warehousing, data lakes, ETL/ELT pipelines, modern data stack tooling, Snowflake, Databricks, dbt, Airflow, Spark, data modeling, data governance, data quality frameworks, blockchain or on-chain 
data"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_67c97570-799"},"title":"Solutions Engineer (pre-sales) - Banking","description":"<p>Do you ever have the urge to do things better than the last time? We do. And it’s this urge that drives us every day. Our environment of discovery and innovation means we’re able to create deep and valuable relationships with our clients to create real change for them and their industries. It’s what got us here – and it’s what will make our future.</p>\n<p>At Quantexa, you’ll experience autonomy and support in equal measures, allowing you to form a career that matches your ambitions. 41% of our colleagues come from an ethnic or religious minority background. We speak 20+ languages across our 50+ nationalities, creating a sense of belonging for all.</p>\n<p>Join us in an exhilarating opportunity working with a determined and dynamic team of Solution Engineers, focused on North America. As a pivotal member of our regional force, you&#39;ll help drive Solution Engineering priorities across the financial services industry vertical. Your influence will resonate in every facet of our commercial strategy, collaborating closely with regional peers in alliances, sales, marketing, and product management.</p>\n<p>We seek an individual with profound technical and business acumen, adept at demystifying complex concepts with clarity and precision. You&#39;ll dazzle with presentations, deliver captivating product demos, and dive deep into client opportunities, leaving a lasting impression at industry events and webinars. Become the authoritative voice for our solutions and technology in the region, forging trust with executives through relatable dialogue.</p>\n<p>In the Quantexa Solution Engineering (SE) team, you&#39;ll spearhead the initial connection between prospect organizations and the Go to Market team (GTM), setting the cornerstone for their journey. 
In this role, you will:</p>\n<ul>\n<li>Uncover customer pain points: Use targeted, precise solution engineering techniques to identify the real challenges our prospects face.</li>\n<li>Design and deliver tailored solutions: Create compelling presentations and deliver impactful product demos—adapting depth and content to resonate with each audience.</li>\n<li>Own the POC lifecycle: Manage the end-to-end proof of concept process. Define and align on success criteria, showcase sprint outcomes that clearly demonstrate value, and generate excitement around our solution.</li>\n<li>Ensure a smooth handover: Before transitioning to the delivery team, confirm a shared understanding of the customer’s pain and how the Quantexa solution addresses it.</li>\n<li>Be the face of Quantexa: Represent our brand with pride—whether engaging with prospects, supporting customers, or speaking at industry events and webinars.</li>\n</ul>\n<p>Are you ready to shape the future of Solution Engineering with us? Join our ranks and be part of this extraordinary journey!</p>\n<p>Probe and grasp prospect pain points, showcasing Quantexa&#39;s transformative solutions. Partner closely with Sales Directors to shape the opportunity strategy, qualify deals and influence account planning. Cultivate trust as the go-to advisor to clients, big and small, crafting compelling Value Engineering strategies. Craft tailored Proofs of Concept alongside the rest of GTM for maximum impact that capture the customer’s business problems. Lead the process of ‘Discovery’ and then ‘Solutionize’ with our prospects, to iterate and home in on the right joint solution for all parties. Collaborate seamlessly with Quantexa&#39;s commercial teams, product experts and SMEs, embodying our core value of being a team player. 
Flawlessly lead product demos and presentations, qualify opportunities and manage compliance statements. Elevate Quantexa&#39;s profile through industry events such as SIBOS, ACAMS and thought leadership; and provide feedback to Product and Engineering teams based on market and customer input to help shape roadmap opportunities.</p>\n<p><strong>Requirements</strong></p>\n<ul>\n<li>Extensive experience in technology as a bridge between business and technical users/stakeholders in a commercially focused technical sales role, e.g. Presales, Solution Engineering, Technical Sales, Solution Consulting, Product Management, etc.</li>\n<li>Experience working with large banks or financial institutions, including familiarity with buying cycles in regulated environments.</li>\n<li>Demonstrate technological mastery in data-powered, analytical solutions like Entity Resolution, Network Generation or Advanced Analytics.</li>\n<li>Proven track record of working in the data management ecosystem, with exposure to the tools, technology and organizational data estates of the analytics and AI world.</li>\n<li>Ability to present to and excite senior leaders in Financial Services, leveraging technical acumen and deep knowledge of the banking ecosystem, and to establish trust and rapport with prospects.</li>\n<li>Command robust knowledge in technology and architecture, particularly within the realm of enterprise software, such as big data technology, Elastic/OpenSearch, and on-prem/cloud deployment of open architectures.</li>\n<li>You’re all about the customer, and you’ll engage and captivate audiences of all sizes with an innate ability to simplify intricate concepts.</li>\n<li>Ability to travel across North America (25 – 40%) depending on customer needs and location; and</li>\n<li>Boast domain expertise across various realms of financial services (Risk, Anti-Financial Crime, Anti-fraud, AML, KYC, CRM) or in more general data management concepts (data quality, data warehousing, data fabric, 
etc.)</li>\n</ul>\n<p>Nice to have:</p>\n<ul>\n<li>Experience working with scale-up companies; and</li>\n<li>Experience with Quantexa’s partner ecosystem (AWS, Azure, GCP, Databricks, Elasticsearch).</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<p>We offer:</p>\n<ul>\n<li>Competitive base salary of $160-200k</li>\n<li>Company bonus</li>\n<li>100% 401K match up to 5%</li>\n<li>Comprehensive benefits coverage, including mental health support, fitness reimbursements, and financial well-being</li>\n<li>Tax-advantageous benefits, such as commuter benefits, healthcare, and dependent care</li>\n<li>Competitive annual leave, parental leave, PTO, and observed holidays</li>\n<li>Well-being benefits, such as the Calm App and Wellbeing 1/2 days off</li>\n<li>Continuous Training and Development, including access to Udemy Business</li>\n<li>Work from Anywhere Scheme: Spend up to 2 months working outside of your country of employment over a rolling 12-month period</li>\n<li>Employee Referral Program</li>\n<li>Team Social Budget &amp; Company-wide Socials</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_67c97570-799","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Quantexa","sameAs":"https://jobs.workable.com","logo":"https://logos.yubhub.co/view.com.png"},"x-apply-url":"https://jobs.workable.com/view/3sqpBBQqEAA7PhcSiLWVbu/hybrid-solutions-engineer-(pre-sales)---banking-in-jersey-city-at-quantexa","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$160-200k","x-skills-required":["data management","analytical solutions","Entity Resolution","Network Generation","Advanced Analytics","big data technology","Elastic /OpenSearch","on-prem/on cloud deployment of open architectures"],"x-skills-preferred":["scale-up 
companies","AWS","Azure","GCP","Databricks","Elasticsearch"],"datePosted":"2026-03-09T17:03:47.459Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Jersey City, New Jersey"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Finance","skills":"data management, analytical solutions, Entity Resolution, Network Generation, Advanced Analytics, big data technology, Elastic /OpenSearch, on-prem/on cloud deployment of open architectures, scale-up companies, AWS, Azure, GCP, Databricks, Elasticsearch","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":200000,"maxValue":200000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_4f6649cb-d3f"},"title":"Solutions Engineer (pre-sales) - North America - Public Sector","description":"<p>Do you ever have the urge to do things better than the last time? We do. And it’s this urge that drives us every day. Our environment of discovery and innovation means we’re able to create deep and valuable relationships with our clients to create real change for them and their industries. It’s what got us here – and it’s what will make our future.</p>\n<p>We are looking for an exceptional candidate to join our high-performing public sector focused team of Solution Engineers in North America. You will collaborate closely with regional peers across Alliances, Sales, Marketing, and Product to help win strategic sales opportunities.</p>\n<p>We are seeking someone with strong technical and business solution expertise who can articulate complex technical concepts in a clear, compelling way. You will deliver impactful presentations, product demonstrations, and deep dives across customer engagements and industry events. 
As a spokesperson for Quantexa&#39;s technology, you’ll build lasting trust with stakeholders by engaging them in relevant, value-driven conversations.</p>\n<p>The Quantexa Solution Engineering team is the first touchpoint for prospective organisations, shaping the early stages of the customer journey and setting the direction for a successful engagement. In this role, key responsibilities include:</p>\n<ul>\n<li>Communicate the value and impact of the Quantexa Decision Intelligence Platform, helping prospects understand how it enables smarter, faster, and more confident decision-making through connected, trusted data</li>\n</ul>\n<ul>\n<li>Identify and position the right Quantexa solution tailored to each prospect’s specific needs</li>\n</ul>\n<ul>\n<li>Guide prospects through a clear, outcome-driven solution strategy that sets the stage for long-term value and success</li>\n</ul>\n<p>Are you ready to shape the future of Solution Engineering with us? Join our ranks and be part of this extraordinary journey!</p>\n<p>Location: This role will be a remote-based contract, but we require candidates based in either Washington DC, Maryland or Virginia. 
The role may require travel to customer sites where appropriate.</p>\n<p><strong>What you’ll be doing.</strong></p>\n<ul>\n<li>Manage the technical aspects of the sales cycle, working alongside Sales, Architects, Product, and SMEs</li>\n</ul>\n<ul>\n<li>Contribute to winning proposals, including RFPs and RFIs</li>\n</ul>\n<ul>\n<li>Take ownership of the design and execution of PoCs and trials, ensuring alignment with customer objectives</li>\n</ul>\n<ul>\n<li>Collaborate on the design and iteration of tailored solutions, working with partners, customers, and internal teams to define the right approach</li>\n</ul>\n<ul>\n<li>Adapt and deliver tailored, customised product demonstrations, both in-person and remotely, that engage a range of stakeholders from users to executives</li>\n</ul>\n<ul>\n<li>Facilitate discovery sessions with prospects and partners, uncovering core business challenges and technical needs</li>\n</ul>\n<p><strong>Requirements</strong></p>\n<p><strong>What you’ll bring:</strong></p>\n<ul>\n<li>Experience in a commercially focused, technical role such as Solution Engineering, Presales, Technical Sales, or Value Consulting</li>\n<li>Great interpersonal, written, and verbal communication skills, with a track record of effective customer engagement</li>\n<li>You are a great communicator, able to captivate audiences of all sizes with an innate ability to simplify complex concepts for diverse audiences.</li>\n<li>You will boast domain expertise across multiple areas of government (Tax or VAT evasion; customs and border risk assessments; public security; and benefits fraud, waste, and abuse).</li>\n<li>Proven experience in at least one solution area (e.g. Financial Crime), with a good understanding of both technical and business aspects of the solution.</li>\n<li>Led at least two extensive proving engagements, and contributed to five or more overall. 
Proving engagements include PoCs or early implementation phases, or alternatively, all of the following activities: technical workshops, prototypes, bespoke demonstrations, and business cases.</li>\n<li>Confident in delivering engaging presentations and pitches that connect functionality to business value</li>\n<li>Experience building demos, templates, or sharing best practices that contribute to team enablement</li>\n<li>Experience mentoring or supporting peers through knowledge sharing or guidance</li>\n</ul>\n<p>Nice to have:</p>\n<ul>\n<li>Strong expertise in data-driven, analytical solutions such as Entity Resolution, Graph Generation, Advanced Analytics, and MDM</li>\n</ul>\n<ul>\n<li>Domain knowledge in other areas such as Risk, Anti-Financial Crime, AML, KYC, CRM</li>\n</ul>\n<ul>\n<li>Technical expertise in data storage, processing and containerisation technology such as Cloud or On-prem Distributed File Systems, Spark, Databricks, Kubernetes, OpenSearch, Elasticsearch</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<p><strong>Our perks and quirks.</strong></p>\n<p>What makes you Q will help you to realize your full potential, flourish and enjoy what you do, while being recognized and rewarded with our broad range of benefits.</p>\n<p>We offer:</p>\n<ul>\n<li>Competitive base salary of $155-175k</li>\n<li>Company bonus</li>\n<li>100% 401K match up to 5%</li>\n<li>Comprehensive benefits coverage, including mental health support, fitness reimbursements, and financial well-being</li>\n<li>Tax-advantageous benefits, such as commuter benefits, healthcare, and dependent care</li>\n<li>Competitive annual leave, parental leave, PTO, and observed holidays</li>\n<li>Well-being benefits, such as the Calm App and Wellbeing 1/2 days off</li>\n<li>Continuous Training and Development, including access to Udemy Business</li>\n<li>Work from Anywhere Scheme: Spend up to 2 months working outside of your country of employment over a rolling 12-month 
period</li>\n<li>Employee Referral Program</li>\n<li>Team Social Budget &amp; Company-wide Socials</li>\n</ul>\n<p><strong>Our mission.</strong></p>\n<p>We have one mission. To help businesses grow. To make data easier. And to make the world a better place. We’re not a start-up. Not anymore. But we’ve not been around that long either. What we are is a collection of bright, passionate minds harnessing complexities and helping our clients and their communities. One culture, made of many. Heading in one direction – the future.</p>\n<p><strong>It’s all about you.</strong></p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_4f6649cb-d3f","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Quantexa","sameAs":"https://jobs.workable.com","logo":"https://logos.yubhub.co/view.com.png"},"x-apply-url":"https://jobs.workable.com/view/aBoTm2iTNKeUe8DdSwwCRL/remote-solutions-engineer-(pre-sales)---north-america---public-sector-in-washington-at-quantexa","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$155-175k","x-skills-required":["Solution Engineering","Presales","Technical Sales","Value Consulting","Entity Resolution","Graph Generation","Advanced Analytics","MDM","Risk","Anti-Financial Crime","AML","KYC","CRM","Cloud","On-prem Distributed File Systems","Spark","Databricks","Kubernetes","Open Search","Elastic Search"],"x-skills-preferred":[],"datePosted":"2026-03-09T17:02:36.839Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Washington, District of Columbia"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Solution Engineering, Presales, Technical Sales, Value Consulting, Entity Resolution, Graph Generation, Advanced Analytics, MDM, Risk, Anti-Financial Crime, 
AML, KYC, CRM, Cloud, On-prem Distributed File Systems, Spark, Databricks, Kubernetes, Open Search, Elastic Search","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":155000,"maxValue":175000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_653f0b0f-e50"},"title":"Solutions Engineer (pre-sales) - North America - Public Sector","description":"<p>We are looking for an exceptional Solutions Engineer to join our high-performing public sector focused team in North America. As a Solutions Engineer, you will collaborate closely with regional peers across Alliances, Sales, Marketing, and Product to help win strategic sales opportunities.</p>\n<p>Our ideal candidate will have strong technical and business solution expertise, with the ability to articulate complex technical concepts in a clear and compelling way. You will deliver impactful presentations, product demonstrations, and deep dives across customer engagements and industry events.</p>\n<p>The Quantexa Solution Engineering team is the first touchpoint for prospective organisations, shaping the early stages of the customer journey and setting the direction for a successful engagement. 
In this role, key responsibilities include:</p>\n<ul>\n<li>Communicating the value and impact of the Quantexa Decision Intelligence Platform, helping prospects understand how it enables smarter, faster, and more confident decision-making through connected, trusted data</li>\n<li>Identifying and positioning the right Quantexa solution tailored to each prospect&#39;s specific needs</li>\n<li>Guiding prospects through a clear, outcome-driven solution strategy that sets the stage for long-term value and success</li>\n</ul>\n<p>If you are a great communicator, able to captivate audiences of all sizes with an innate ability to simplify complex concepts for diverse audiences, and have domain expertise across multiple areas of government, we want to hear from you.</p>\n<p><strong>Requirements</strong></p>\n<ul>\n<li>Experience in a commercially focused, technical role such as Solution Engineering, Presales, Technical Sales, or Value Consulting</li>\n<li>Great interpersonal, written, and verbal communication skills, with a track record of effective customer engagement</li>\n<li>Domain expertise across multiple areas of government (Tax or VAT evasion; customs and border risk assessments; public security; and benefits fraud, waste, and abuse)</li>\n<li>Proven experience in at least one solution area, with a good understanding of both technical and business aspects of the solution</li>\n<li>Confident in delivering engaging presentations and pitches that connect functionality to business value</li>\n<li>Experience building demos, templates, or sharing best practices that contribute to team enablement</li>\n</ul>\n<p><strong>Nice to have</strong></p>\n<ul>\n<li>Strong expertise in data-driven, analytical solutions such as Entity Resolution, Graph Generation, Advanced Analytics, and MDM</li>\n<li>Domain knowledge in other areas such as Risk, Anti-Financial Crime, AML, KYC, CRM</li>\n<li>Technical expertise in data storage, processing and containerisation technology such as Cloud 
or On-prem Distributed File Systems, Spark, Databricks, Kubernetes, Open Search, Elastic Search</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<p>We offer a competitive base salary of $155-175k, company bonus, 100% 401K match up to 5%, comprehensive benefits coverage, including mental health support, fitness reimbursements, and financial well-being, tax-advantageous benefits, such as commuter benefits, healthcare, and dependent care, competitive annual leave, parental leave, PTO, and observed holidays, well-being benefits, such as the Calm App and Wellbeing 1/2 days off, continuous Training and Development, including access to Udemy Business, Work from Anywhere Scheme, Employee Referral Program, Team Social Budget &amp; Company-wide Socials.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_653f0b0f-e50","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Quantexa","sameAs":"https://jobs.workable.com","logo":"https://logos.yubhub.co/view.com.png"},"x-apply-url":"https://jobs.workable.com/view/8WC3MHsY9JC6vEnRvydSW5/remote-solutions-engineer-(pre-sales)---north-america---public-sector-in-virginia-at-quantexa","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$155-175k","x-skills-required":["Solution Engineering","Presales","Technical Sales","Value Consulting","Entity Resolution","Graph Generation","Advanced Analytics","MDM","Risk","Anti-Financial Crime","AML","KYC","CRM","Cloud","On-prem Distributed File Systems","Spark","Databricks","Kubernetes","Open Search","Elastic Search"],"x-skills-preferred":["Data-driven, analytical solutions","Domain knowledge in other areas","Technical expertise in data storage, processing and containerisation 
technology"],"datePosted":"2026-03-09T17:02:24.138Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Virginia, United States"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Solution Engineering, Presales, Technical Sales, Value Consulting, Entity Resolution, Graph Generation, Advanced Analytics, MDM, Risk, Anti-Financial Crime, AML, KYC, CRM, Cloud, On-prem Distributed File Systems, Spark, Databricks, Kubernetes, Open Search, Elastic Search, Data-driven, analytical solutions, Domain knowledge in other areas, Technical expertise in data storage, processing and containerisation technology","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":155000,"maxValue":175000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_7af16166-8fd"},"title":"FBS Senior Data Domain Architect","description":"<p>FBS – Farmer Business Services is part of Farmers operations with the purpose of building a global approach to identifying, recruiting, hiring, and retaining top talent. We believe that the foundation of every successful business lies in having the right people with the right skills. 
That is where we come in—helping Farmers build a winning team that delivers consistent and sustainable results.</p>\n<p><strong>What to expect on your journey with us:</strong></p>\n<ul>\n<li>A solid and innovative company with a strong market presence</li>\n<li>A dynamic, diverse, and multicultural work environment</li>\n<li>Leaders with deep market knowledge and strategic vision</li>\n<li>Continuous learning and development</li>\n</ul>\n<p><strong>Objective:</strong> Designs and develops Data/Domain IT architecture (integrated process, applications, data and technology) solutions to business problems in alignment with the Enterprise Architecture direction and standards.</p>\n<p><strong>Key Responsibilities:</strong></p>\n<ul>\n<li>Utilizes in-depth conceptual and practical knowledge in Domain Architecture and basic knowledge of related job disciplines to perform complex technical planning, architecture development and modification of specifications for Domain solution delivery.</li>\n<li>Solves complex problems and partners effectively to execute broad, continuous Domain level architecture improvement roadmaps that impact the organization.</li>\n<li>Works independently, receives minimal guidance and direction to solve for and influence Enterprise and System architecture through Domain level knowledge.</li>\n<li>Reviews high level design to ensure alignment to Solution Architecture.</li>\n<li>May lead projects or project steps within a broader project or may have accountability for on-going activities or objectives.</li>\n<li>Mentors developers and creates reference implementations/frameworks.</li>\n<li>Partners with System Architects to elaborate capabilities and features.</li>\n<li>Delivers single domain architecture solutions and executes continuous domain level architecture improvement roadmap. 
Actively supports design and steering of a continuous delivery pipeline.</li>\n</ul>\n<p><strong>Requirements:</strong></p>\n<ul>\n<li>Over 6 years of experience as a senior domain architect for Data domains</li>\n<li>Advanced English Level</li>\n<li>Master&#39;s degree (PLUS)</li>\n<li>Insurance Experience (PLUS), Financial Services (PLUS)</li>\n</ul>\n<p><strong>Technical &amp; Business Skills:</strong></p>\n<ul>\n<li>ETL/ELT Tools (Informatica, DBT) - Advanced (7+ Years)</li>\n<li>Data Architecture / Data Modeling – Advanced (MUST)</li>\n<li>Data Warehouse – Advanced (MUST)</li>\n<li>Cloud Data Platforms - Advanced</li>\n<li>Data Integration Tools – Advanced</li>\n<li>Snowflake or Databricks - Intermediate (4-6 Years) MUST</li>\n<li>Any Cloud - Intermediate (4-6 Years)</li>\n<li>Power BI or Tableau - Intermediate (4-6 Years)</li>\n<li>Data Science tools (Sagemaker, Databricks) - Intermediate (4-6 Years)</li>\n<li>Data Lakehouse – Intermediate (MUST)</li>\n<li>Data Governance - Intermediate</li>\n<li>AI/ML - Entry Level (PLUS)</li>\n<li>Master Data Management - Intermediate</li>\n<li>Operational Data Management - Intermediate</li>\n</ul>\n<p><strong>Benefits:</strong></p>\n<p>This position comes with a competitive compensation and benefits package.</p>\n<ul>\n<li>A competitive salary and performance-based bonuses.</li>\n<li>Comprehensive benefits package.</li>\n<li>Flexible work arrangements (remote and/or office-based).</li>\n<li>You will also enjoy a dynamic and inclusive work culture within a globally renowned group.</li>\n<li>Private Health Insurance.</li>\n<li>Paid Time Off.</li>\n<li>Training &amp; Development opportunities in partnership with renowned companies.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_7af16166-8fd","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Capgemini","sameAs":"https://jobs.workable.com","logo":"https://logos.yubhub.co/view.com.png"},"x-apply-url":"https://jobs.workable.com/view/jdUFHSPZZjHsgd3TR4R3BS/remote-fbs-senior-data-domain-architect-in-colombia-at-capgemini","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["ETL/ELT Tools (Informatica, DBT)","Data Architecture / Data Modeling","Data Warehouse","Cloud Data Platforms","Data Integration Tools","Snowflake or Databricks","Any Cloud","Power BI or Tableau","Data Science tools (Sagemaker, Databricks)","Data Lakehouse"],"x-skills-preferred":["Data Governance","AI/ML","Master Data Management","Operational Data Management"],"datePosted":"2026-03-09T17:00:36.230Z","jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"ETL/ELT Tools (Informatica, DBT), Data Architecture / Data Modeling, Data Warehouse, Cloud Data Platforms, Data Integration Tools, Snowflake or Databricks, Any Cloud, Power BI or Tableau, Data Science tools (Sagemaker, Databricks), Data Lakehouse, Data Governance, AI/ML, Master Data Management, Operational Data Management"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_f85c4a65-8fb"},"title":"Solutions Engineer (pre-sales) - North America - Public Sector","description":"<p>Do you ever have the urge to do things better than the last time? We do. And it’s this urge that drives us every day. Our environment of discovery and innovation means we’re able to create deep and valuable relationships with our clients to create real change for them and their industries. 
It’s what got us here – and it’s what will make our future.</p>\n<p>We are looking for an exceptional candidate to join our high-performing public sector focused team of Solution Engineers in North America. You will collaborate closely with regional peers across Alliances, Sales, Marketing, and Product to help win strategic sales opportunities.</p>\n<p>We are seeking someone with strong technical and business solution expertise who can articulate complex technical concepts in a clear, compelling way. You will deliver impactful presentations, product demonstrations, and deep dives across customer engagements and industry events. As a spokesperson for Quantexa&#39;s technology, you’ll build lasting trust with stakeholders by engaging them in relevant, value-driven conversations.</p>\n<p>The Quantexa Solution Engineering team is the first touchpoint for prospective organisations, shaping the early stages of the customer journey and setting the direction for a successful engagement. In this role, key responsibilities include:</p>\n<ul>\n<li>Communicate the value and impact of the Quantexa Decision Intelligence Platform, helping prospects understand how it enables smarter, faster, and more confident decision-making through connected, trusted data</li>\n<li>Identify and position the right Quantexa solution tailored to each prospect’s specific needs</li>\n<li>Guide prospects through a clear, outcome-driven solution strategy that sets the stage for long-term value and success</li>\n</ul>\n<p>Are you ready to shape the future of Solution Engineering with us? Join our ranks and be part of this extraordinary journey!</p>\n<p>Location: This role will be a remote-based contract but we require candidates based in either Washington DC, Maryland or Virginia. 
The role may require travel to customer sites where appropriate.</p>\n<p><strong>What you’ll be doing.</strong></p>\n<ul>\n<li>Manage the technical aspects of the sales cycle, working alongside Sales, Architects, Product, and SMEs</li>\n<li>Contribute to winning proposals, including RFPs and RFIs</li>\n<li>Take ownership of the design and execution of PoCs and trials, ensuring alignment with customer objectives</li>\n<li>Collaborate on the design and iteration of tailored solutions, working with partners, customers, and internal teams to define the right approach</li>\n<li>Adapt and deliver tailored, customised product demonstrations, both in-person and remotely, that engage a range of stakeholders from users to executives</li>\n<li>Facilitate discovery sessions with prospects and partners, uncovering core business challenges and technical needs</li>\n</ul>\n<p><strong>Requirements</strong></p>\n<p><strong>What you’ll bring:</strong></p>\n<ul>\n<li>Experience in a commercially focused, technical role such as Solution Engineering, Presales, Technical Sales, or Value Consulting</li>\n<li>Great interpersonal, written, and verbal communication skills, with a track record of effective customer engagement</li>\n<li>You are a great communicator, able to captivate audiences of all sizes with an innate ability to simplify complex concepts for diverse audiences.</li>\n<li>You will boast domain expertise across multiple areas of government (Tax or VAT evasion; customs and border risk assessments; public security; and benefits fraud, waste, and abuse).</li>\n<li>Proven experience in at least one solution area, e.g. Financial Crime, with a good understanding of both technical and business aspects of the solution.</li>\n<li>Led at least two extensive proving engagements, and contributed to five or more overall. 
Proving engagements include PoCs or early implementation phases, or alternatively, all of the following activities: technical workshops, prototypes, bespoke demonstrations, and business cases.</li>\n<li>Confident in delivering engaging presentations and pitches that connect functionality to business value</li>\n<li>Experience building demos, templates, or sharing best practices that contribute to team enablement</li>\n<li>Experience mentoring or supporting peers through knowledge sharing or guidance</li>\n</ul>\n<p>Nice to have:</p>\n<ul>\n<li>Strong expertise in data-driven, analytical solutions such as Entity Resolution, Graph Generation, Advanced Analytics, and MDM</li>\n<li>Domain knowledge in other areas such as Risk, Anti-Financial Crime, AML, KYC, CRM</li>\n<li>Technical expertise in data storage, processing and containerisation technology such as Cloud or On-prem Distributed File Systems, Spark, Databricks, Kubernetes, Open Search, Elastic Search</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<p><strong>Our perks and quirks.</strong></p>\n<p>What makes you Q will help you to realize your full potential, flourish and enjoy what you do, while being recognized and rewarded with our broad range of benefits.</p>\n<p>We offer:</p>\n<ul>\n<li>Competitive base salary of $155-175k</li>\n<li>Company bonus</li>\n<li>100% 401K match up to 5%</li>\n<li>Comprehensive benefits coverage, including mental health support, fitness reimbursements, and financial well-being</li>\n<li>Tax-advantageous benefits, such as commuter benefits, healthcare, and dependent care</li>\n<li>Competitive annual leave, parental leave, PTO, and observed holidays</li>\n<li>Well-being benefits, such as the Calm App and Wellbeing 1/2 days off</li>\n<li>Continuous Training and Development, including access to Udemy Business</li>\n<li>Work from Anywhere Scheme: Spend up to 2 months working outside of your country of employment over a rolling 12-month 
period</li>\n<li>Employee Referral Program</li>\n<li>Team Social Budget &amp; Company-wide Socials</li>\n</ul>\n<p><strong>Our mission.</strong></p>\n<p>We have one mission. To help businesses grow. To make data easier. And to make the world a better place. We’re not a start-up. Not anymore. But we’ve not been around that long either. What we are is a collection of bright, passionate minds harnessing complexities and helping our clients and their communities. One culture, made of many. Heading in one direction – the future.</p>\n<p><strong>It’s all about you.</strong></p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_f85c4a65-8fb","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Quantexa","sameAs":"https://jobs.workable.com","logo":"https://logos.yubhub.co/view.com.png"},"x-apply-url":"https://jobs.workable.com/view/vprXbb3DTsP5zadieaoQRq/remote-solutions-engineer-(pre-sales)---north-america---public-sector-in-maryland-at-quantexa","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$155-175k","x-skills-required":["Solution Engineering","Presales","Technical Sales","Value Consulting","Entity Resolution","Graph Generation","Advanced Analytics","MDM","Risk","Anti-Financial Crime","AML","KYC","CRM","Cloud","On-prem Distributed File Systems","Spark","Databricks","Kubernetes","Open Search","Elastic Search"],"x-skills-preferred":[],"datePosted":"2026-03-09T16:59:14.584Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Maryland, United States"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Solution Engineering, Presales, Technical Sales, Value Consulting, Entity Resolution, Graph Generation, Advanced Analytics, MDM, Risk, Anti-Financial Crime, AML, KYC, CRM, 
Cloud, On-prem Distributed File Systems, Spark, Databricks, Kubernetes, Open Search, Elastic Search","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":155000,"maxValue":175000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_7b03b30a-b20"},"title":"FBS Senior Data Domain Architect","description":"<p>FBS – Farmer Business Services is part of Farmers operations with the purpose of building a global approach to identifying, recruiting, hiring, and retaining top talent. By combining international reach with US expertise, we build diverse and high-performing teams that are equipped to thrive in today’s competitive marketplace.</p>\n<p>We believe that the foundation of every successful business lies in having the right people with the right skills. That is where we come in—helping Farmers build a winning team that delivers consistent and sustainable results.</p>\n<p>Since we don’t have a local legal entity, we’ve partnered with Capgemini, which acts as the Employer of Record. 
Capgemini is responsible for managing local payroll and benefits.</p>\n<p><strong>Objective:</strong> Designs and develops Data/Domain IT architecture (integrated process, applications, data and technology) solutions to business problems in alignment with the Enterprise Architecture direction and standards.</p>\n<p><strong>Key Responsibilities:</strong></p>\n<ul>\n<li>Utilizes in-depth conceptual and practical knowledge in Domain Architecture and basic knowledge of related job disciplines to perform complex technical planning, architecture development and modification of specifications for Domain solution delivery.</li>\n<li>Solves complex problems and partners effectively to execute broad, continuous Domain level architecture improvement roadmaps that impact the organization.</li>\n<li>Works independently, receives minimal guidance and direction to solve for and influence Enterprise and System architecture through Domain level knowledge.</li>\n<li>Reviews high level design to ensure alignment to Solution Architecture.</li>\n<li>May lead projects or project steps within a broader project or may have accountability for on-going activities or objectives.</li>\n<li>Mentors developers and creates reference implementations/frameworks.</li>\n<li>Partners with System Architects to elaborate capabilities and features.</li>\n<li>Delivers single domain architecture solutions and executes continuous domain level architecture improvement roadmap. 
Actively supports design and steering of a continuous delivery pipeline.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_7b03b30a-b20","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Capgemini","sameAs":"https://jobs.workable.com","logo":"https://logos.yubhub.co/view.com.png"},"x-apply-url":"https://jobs.workable.com/view/1U952YA2QBa8zK7Tm5d3Lm/remote-fbs-senior-data-domain-architect-in-mexico-at-capgemini","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["ETL/ELT Tools (Informatica, DBT)","Data Architecture / Data Modeling","Data Warehouse","Cloud Data Platforms","Data Integration Tools","Snowflake or Databricks","Any Cloud","Power BI or Tableau","Data Science tools (Sagemaker, Databricks)","Data Lakehouse","Data Governance","Master Data Management","Operational Data Management"],"x-skills-preferred":["AI/ML"],"datePosted":"2026-03-09T16:59:14.361Z","jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"ETL/ELT Tools (Informatica, DBT), Data Architecture / Data Modeling, Data Warehouse, Cloud Data Platforms, Data Integration Tools, Snowflake or Databricks, Any Cloud, Power BI or Tableau, Data Science tools (Sagemaker, Databricks), Data Lakehouse, Data Governance, Master Data Management, Operational Data Management, AI/ML"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_f41cb29a-40e"},"title":"Solutions Engineer - Banking - France - EMEA","description":"<p><strong>Solutions Engineer - Banking - France - EMEA</strong></p>\n<p><strong>The Opportunity</strong></p>\n<p>We are looking for an exceptional candidate to join our high-performing team of Solution Engineers in EMEA and be based in 
France. You will collaborate closely with regional peers across Alliances, Sales, Marketing, and Product to help win strategic sales opportunities.</p>\n<p><strong>What We&#39;re All About</strong></p>\n<p>At Quantexa, you&#39;ll experience autonomy and support in equal measure, allowing you to form a career that matches your ambitions. 41% of our colleagues come from an ethnic or religious minority background. We speak over 20 languages across our 50+ nationalities, creating a sense of belonging for all.</p>\n<p><strong>Key Responsibilities</strong></p>\n<ul>\n<li>Communicate the value and impact of the Quantexa Decision Intelligence Platform, helping prospects understand how it enables smarter, faster, and more confident decision-making through connected, trusted data</li>\n<li>Identify and position the right Quantexa solution tailored to each prospect&#39;s specific needs</li>\n<li>Guide prospects through a clear, outcome-driven solution strategy that sets the stage for long-term value and success</li>\n</ul>\n<p><strong>What You&#39;ll Be Doing</strong></p>\n<ul>\n<li>Supporting Solution Engineering activities for sales opportunities under the guidance of senior team members</li>\n<li>Collaborating with Quantexa&#39;s commercial teams, product experts, and SMEs to deliver impactful pre-sales engagements</li>\n<li>Learning to identify and articulate customer challenges, showcasing Quantexa&#39;s solutions with clarity and confidence</li>\n<li>Assisting with product demos and presentations to prospects, and gaining exposure to opportunity qualification processes</li>\n<li>Supporting the design and execution of tailored Proof of Concepts (POCs) that address key customer business problems</li>\n<li>Communicating the Quantexa platform&#39;s value effectively to diverse audiences</li>\n<li>Representing Quantexa at industry events and contributing to thought leadership as your expertise grows</li>\n</ul>\n<p><strong>Requirements</strong></p>\n<ul>\n<li>Experience in a 
commercially focused, technical role such as Solution Engineering, Presales, Technical Sales, or Value Consulting selling software solutions to the French banking sector (preferably tier 1 banks)</li>\n<li>Excellent interpersonal, written, and verbal communication skills, with a track record of effective customer engagement</li>\n<li>Exceptional communicator, able to captivate audiences of all sizes with an innate ability to simplify complex concepts for diverse audiences</li>\n<li>Proven experience across at least two solution areas [e.g. Financial Crime, Data Management, Customer 360], with deep expertise in at least one of them within the banking sector, focused on either technical or business capabilities</li>\n<li>Strong technology knowledge, particularly in enterprise software environments</li>\n<li>Successfully led a minimum of five extensive proving engagements. Proving engagements include PoCs or early implementation phases, or alternatively, all of the following activities: technical workshops, prototypes, bespoke demonstrations, and business cases</li>\n<li>Confident in presenting and pitching solutions to senior stakeholders at Tier 1 banking organisations</li>\n<li>Proven ability to drive innovation and best practices that elevate Solution Engineering performance, with multiple examples of successful implementation</li>\n<li>Fluent in French and English. 
Other languages would be a bonus but not essential</li>\n</ul>\n<p><strong>Desirable Experience</strong></p>\n<ul>\n<li>Strong expertise in data-driven, analytical solutions such as Entity Resolution, Graph Generation, Advanced Analytics, and MDM</li>\n<li>Domain knowledge in areas such as Risk, Anti-Financial Crime, AML, KYC, CRM, Customer Intelligence</li>\n<li>Technical expertise in data storage, processing and containerisation technology such as Cloud or On-prem Distributed File Systems, Spark, Databricks, Kubernetes, Open Search, Elastic Search</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<p>We offer:</p>\n<ul>\n<li>Competitive salary</li>\n<li>Commission (uncapped)</li>\n<li>Private healthcare, Life Insurance &amp; Income Protection</li>\n<li>Cycle Scheme and TechScheme</li>\n<li>Free Calm App Subscription #1 app for meditation, relaxation and sleep</li>\n<li>Pension Scheme with a company contribution of 6% (if you contribute 3%)</li>\n<li>25 days annual leave (with the option to buy up to 5 days) + birthday off!</li>\n<li>Ongoing personal development</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_f41cb29a-40e","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Quantexa","sameAs":"https://jobs.workable.com","logo":"https://logos.yubhub.co/view.com.png"},"x-apply-url":"https://jobs.workable.com/view/3oAfsxM4e3tKcmk3oF873y/hybrid-solutions-engineer---banking---france---emea-in-paris-at-quantexa","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Solution Engineering","Presales","Technical Sales","Value Consulting","French banking sector","Tier 1 banks","Enterprise software environments","Data storage","Data processing","Containerisation technology","Cloud","On-prem Distributed File Systems","Spark","Databricks","Kubernetes","Open 
Search","Elastic Search"],"x-skills-preferred":["Entity Resolution","Graph Generation","Advanced Analytics","MDM","Risk","Anti-Financial Crime","AML","KYC","CRM","Customer Intelligence"],"datePosted":"2026-03-09T16:58:56.314Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Paris"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Finance","skills":"Solution Engineering, Presales, Technical Sales, Value Consulting, French banking sector, Tier 1 banks, Enterprise software environments, Data storage, Data processing, Containerisation technology, Cloud, On-prem Distributed File Systems, Spark, Databricks, Kubernetes, Open Search, Elastic Search, Entity Resolution, Graph Generation, Advanced Analytics, MDM, Risk, Anti-Financial Crime, AML, KYC, CRM, Customer Intelligence"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_aafa7b92-fa6"},"title":"Senior Consultant - Data Engineering & Data Science (m/w/d)","description":"<p>Are you looking to advance your career and work with experienced, talented colleagues to successfully solve the most important challenges of our clients? We are growing further and looking for enthusiastic individuals to strengthen our team. You will be part of a dynamic, strongly growing company with over 300,000 employees.</p>\n<p>Our dynamic organisation allows you to work across topics and bring in your ideas, experiences, creativity, and goal orientation. Are you ready?</p>\n<p>As a Consultant/Senior Consultant in the Data Engineering &amp; Data Science field, you will work hands-on on the conception, development, and implementation of modern data and analytics solutions. 
You will support the entire project lifecycle - from data intake and transformation to analytics and machine learning to productive operation.</p>\n<p>You will work closely with data engineers, architects, data scientists, and subject matter experts to implement scalable, reliable, and value-adding solutions in complex customer environments.</p>\n<p><strong>Your Tasks</strong></p>\n<ul>\n<li>Apply data science methods (machine learning, deep learning, GenAI) to solve concrete business questions</li>\n<li>Work with structured and semi-structured data in data lakes, lakehouses, and data warehouses</li>\n<li>Set up data pipelines for analytical workloads</li>\n<li>Support the productive implementation of data and ML solutions, including monitoring and optimisation</li>\n</ul>\n<p><strong>What You Bring - Required</strong></p>\n<ul>\n<li>At least 3 years of relevant professional experience in the field of data engineering, data science, or analytics</li>\n<li>Hands-on experience in implementing data and analytics solutions in (customer) projects</li>\n<li>Strong problem-solving skills and a pragmatic, implementation-oriented way of working</li>\n</ul>\n<p><strong>Data Engineering Fundamentals</strong></p>\n<ul>\n<li>Experience in setting up data pipelines (ingestion, transformation, storage)</li>\n<li>Solid understanding of data modeling, data transformations, and feature engineering</li>\n<li>Experience with cloud-based data platforms, such as:</li>\n</ul>\n<ol>\n<li>Azure, AWS, or GCP</li>\n<li>Databricks, Snowflake, BigQuery, Azure Synapse/Microsoft Fabric</li>\n</ol>\n<ul>\n<li>Knowledge of CI/CD concepts and production-ready deployments</li>\n</ul>\n<p><strong>Applied Data Science &amp; Analytics</strong></p>\n<ul>\n<li>Experience in applying GenAI, deep learning, and machine learning procedures as well as statistical analyses</li>\n<li>Very good programming skills in Python</li>\n<li>Very good SQL skills and experience with relational 
databases</li>\n<li>Experience in deploying and operating ML models in production</li>\n<li>Ability to translate analytical results into business-relevant insights</li>\n<li>Bachelor&#39;s or master&#39;s degree in computer science, engineering, mathematics, or a related field, or equivalent practical experience</li>\n</ul>\n<p><strong>Nice to Have</strong></p>\n<ul>\n<li>Experience with:</li>\n</ul>\n<ol>\n<li>Streaming technologies (e.g. Kafka, Azure Event Hubs)</li>\n<li>Time series analysis, NLP applications, or system modeling</li>\n<li>NoSQL databases (e.g. MongoDB, Cosmos DB)</li>\n<li>Docker and Kubernetes</li>\n<li>Data visualization tools like Power BI, Tableau</li>\n<li>Cloud or architecture certifications</li>\n</ol>\n<p><strong>Language &amp; Mobility (Germany)</strong></p>\n<ul>\n<li>Fluent German skills (at least C1) for customer communication in the German-speaking market</li>\n<li>Very good English skills</li>\n<li>Willingness to travel for project-related work</li>\n</ul>\n<p><strong>Your Team</strong></p>\n<p>You will become part of our growing Data &amp; Analytics teams. In this area, you will work with modern technologies in modern data ecosystems. You have the opportunity to turn your own ideas into results - in the areas of Data &amp; Analytics Strategy, Data Management &amp; Governance, Data Platforms &amp; Engineering, and Analytics &amp; Data Science.</p>\n<p><strong>About Infosys Consulting</strong></p>\n<p>You will become an employee of a globally renowned management consulting firm at the forefront of technological innovation and industrial transformation. We work across industries with leading companies. Our culture is inclusive and entrepreneurial. 
As a mid-sized consulting firm backed by the scale of Infosys, we can partner with our customers worldwide throughout their entire transformation journey.</p>\n<p>Our values IC-LIFE - Inclusion, Equity &amp; Diversity, Client, Leadership, Integrity, Fairness, and Excellence - form our compass of values. Further information can be found on our career website.</p>\n<p>In Europe, we are recognized by the Financial Times and Forbes as one of the leading consulting firms. Infosys is ranked among the top employers in Germany 2023 and has been certified by the Top Employers Institute for outstanding working conditions in Europe for five consecutive years.</p>\n<p>We offer a market-leading salary, attractive additional benefits, and excellent opportunities for further education and development. Curious to learn more? Then we look forward to your application - apply now!</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_aafa7b92-fa6","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Infosys Consulting - Europe","sameAs":"https://jobs.workable.com","logo":"https://logos.yubhub.co/view.com.png"},"x-apply-url":"https://jobs.workable.com/view/ecAfMkjFkA97qaoimVMGNF/hybrid-(senior)-consultant---data-engineering-%26-data-science-(m%2Fw%2Fd)--deutschlandweit-in-munich-at-infosys-consulting---europe","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Data Science","Machine Learning","Deep Learning","GenAI","Data Engineering","Data Warehousing","Data Lakes","Lakehouses","Data Pipelines","Cloud-based Data Platforms","Azure","AWS","GCP","Databricks","Snowflake","BigQuery","Azure Synapse","Microsoft Fabric","CI/CD","Python","SQL","Relational Databases"],"x-skills-preferred":["Streaming Technologies","Time Series Analysis","NLP 
Applications","System Modeling","NoSQL Databases","Docker","Kubernetes","Data Visualization Tools","Cloud Certifications","Architecture Certifications"],"datePosted":"2026-03-09T16:55:58.580Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Munich, Bavaria, Germany"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Data Science, Machine Learning, Deep Learning, GenAI, Data Engineering, Data Warehousing, Data Lakes, Lakehouses, Data Pipelines, Cloud-based Data Platforms, Azure, AWS, GCP, Databricks, Snowflake, BigQuery, Azure Synapse, Microsoft Fabric, CI/CD, Python, SQL, Relational Databases, Streaming Technologies, Time Series Analysis, NLP Applications, System Modeling, NoSQL Databases, Docker, Kubernetes, Data Visualization Tools, Cloud Certifications, Architecture Certifications"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_dcfed817-412"},"title":"FBS Senior Data Domain Architect","description":"<p>We&#39;re looking for a Senior Data Domain Architect to join our team. 
As a Senior Data Domain Architect, you will design and develop Data/Domain IT architecture solutions to business problems in alignment with the Enterprise Architecture direction and standards.</p>\n<p><strong>What to expect on your journey with us:</strong></p>\n<ul>\n<li>A solid and innovative company with a strong market presence</li>\n<li>A dynamic, diverse, and multicultural work environment</li>\n<li>Leaders with deep market knowledge and strategic vision</li>\n<li>Continuous learning and development</li>\n</ul>\n<p><strong>Key Responsibilities:</strong></p>\n<ul>\n<li>Utilize in-depth conceptual and practical knowledge in Domain Architecture and basic knowledge of related job disciplines to perform complex technical planning, architecture development and modification of specifications for Domain solution delivery</li>\n<li>Solve complex problems and partner effectively to execute broad, continuous Domain level architecture improvement roadmaps that impact the organization</li>\n<li>Work independently with minimal guidance and direction, influencing Enterprise and System architecture through Domain-level knowledge</li>\n<li>Review high-level designs to ensure alignment with the Solution Architecture</li>\n<li>May lead projects or project steps within a broader project or may have accountability for ongoing activities or objectives</li>\n<li>Mentor developers and create reference implementations/frameworks</li>\n<li>Partner with System Architects to elaborate capabilities and features</li>\n<li>Deliver single domain architecture solutions and execute a continuous domain-level architecture improvement roadmap. 
Actively support the design and steering of a continuous delivery pipeline</li>\n</ul>\n<p><strong>Requirements:</strong></p>\n<ul>\n<li>Over 6 years of experience as a senior domain architect for Data domains</li>\n<li>Advanced English level</li>\n<li>Master&#39;s degree (PLUS)</li>\n<li>Insurance experience (PLUS); Financial Services experience (PLUS)</li>\n</ul>\n<p><strong>Technical &amp; Business Skills:</strong></p>\n<ul>\n<li>ETL/ELT Tools (Informatica, DBT) - Advanced (7+ Years)</li>\n<li>Data Architecture / Data Modeling – Advanced (MUST)</li>\n<li>Data Warehouse – Advanced (MUST)</li>\n<li>Cloud Data Platforms - Advanced</li>\n<li>Data Integration Tools – Advanced</li>\n<li>Snowflake or Databricks - Intermediate (4-6 Years) MUST</li>\n<li>Any Cloud - Intermediate (4-6 Years)</li>\n<li>Power BI or Tableau - Intermediate (4-6 Years)</li>\n<li>Data Science tools (Sagemaker, Databricks) - Intermediate (4-6 Years)</li>\n<li>Data Lakehouse – Intermediate (MUST)</li>\n</ul>\n<p><strong>Benefits:</strong></p>\n<ul>\n<li>A competitive salary and performance-based bonuses</li>\n<li>Comprehensive benefits package</li>\n<li>Flexible work arrangements (remote and/or office-based)</li>\n<li>Private Health Insurance</li>\n<li>Paid Time Off</li>\n<li>Training &amp; Development opportunities in partnership with renowned companies</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_dcfed817-412","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Capgemini","sameAs":"https://jobs.workable.com","logo":"https://logos.yubhub.co/view.com.png"},"x-apply-url":"https://jobs.workable.com/view/x7tKXYFBB815ca6oBV5T2E/remote-fbs-senior-data-domain-architect-in-brazil-at-capgemini","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["ETL/ELT Tools (Informatica, DBT)","Data 
Architecture / Data Modeling","Data Warehouse","Cloud Data Platforms","Data Integration Tools","Snowflake or Databricks","Any Cloud","Power BI or Tableau","Data Science tools (Sagemaker, Databricks)","Data Lakehouse"],"x-skills-preferred":[],"datePosted":"2026-03-09T16:53:31.425Z","jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"ETL/ELT Tools (Informatica, DBT), Data Architecture / Data Modeling, Data Warehouse, Cloud Data Platforms, Data Integration Tools, Snowflake or Databricks, Any Cloud, Power BI or Tableau, Data Science tools (Sagemaker, Databricks), Data Lakehouse"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_ee2fcbdc-fc4"},"title":"Principal Consultant - Data Architecture","description":"<p><strong>Principal Consultant - Data Architecture</strong></p>\n<p>You will act as a senior technical leader in complex data and analytics engagements, shaping and governing end-to-end enterprise data architectures, leading technical teams, and serving as a trusted technical advisor for clients and internal stakeholders.</p>\n<p><strong>About Your Role</strong></p>\n<p>As a Principal Data Architecture Consultant, you will be responsible for ensuring that enterprise data and analytics solutions are scalable, secure, and production-ready, while translating business requirements into robust technical designs and delivery roadmaps.</p>\n<p><strong>Your Role Will Include:</strong></p>\n<ul>\n<li>Define and govern target enterprise data, integration and analytics architectures across cloud and hybrid environments</li>\n<li>Translate business objectives into scalable, secure, and compliant data solutions</li>\n<li>Lead the design of end-to-end data solutions (ingestion, integration, storage, security, processing, analytics, AI enablement)</li>\n<li>Guide delivery teams through implementation, rollout, and production 
readiness</li>\n<li>Function as senior technical counterpart for client architects, IT leads, and engineering teams</li>\n<li>Mentor data architects, system architects and engineers and contribute to best practices and reference architectures</li>\n<li>Support pre-sales and solution design activities from a technical perspective</li>\n</ul>\n<p><strong>Requirements</strong></p>\n<ul>\n<li>5–8+ years of experience in enterprise data architecture, system data integration, data engineering, or analytics</li>\n<li>Proven experience leading enterprise data architecture workstreams or technical teams</li>\n<li>Strong client-facing experience in complex enterprise environments</li>\n</ul>\n<p><strong>Core Data &amp; Analytics Technology Skills</strong></p>\n<ul>\n<li>Strong expertise in modern data architectures, including:</li>\n<li>Data Mesh/ Data Fabric/ Data lake / data warehouse architectures</li>\n<li>Modern Data Architecture design principles</li>\n<li>Batch and streaming data integration patterns</li>\n<li>Data Platform, DevOps, deployment and security architectures</li>\n<li>Analytics and AI enablement architectures</li>\n<li>Hands-on experience with cloud data platforms, e.g.:</li>\n<li>Azure, AWS or GCP</li>\n<li>Databricks, Snowflake, BigQuery, Azure Synapse / Microsoft Fabric</li>\n<li>Strong SQL skills and experience with relational databases (e.g. Postgres, SQL Server, Oracle)</li>\n<li>Experience with NoSQL databases (e.g. 
Cosmos DB, MongoDB, InfluxDB)</li>\n<li>Solid understanding of API-based and event-driven architectures</li>\n<li>Experience designing and governing enterprise data migration programmes, including mapping, transformation rules, data quality remediation etc.</li>\n</ul>\n<p><strong>Engineering &amp; Platform Foundations</strong></p>\n<ul>\n<li>Experience with data pipelines, orchestration, and automation</li>\n<li>Familiarity with CI/CD concepts and production-grade deployments</li>\n<li>Understanding of distributed systems; Docker / Kubernetes is a plus</li>\n</ul>\n<p><strong>Data Management &amp; Governance</strong></p>\n<ul>\n<li>Strong understanding of data management and governance principles, including:</li>\n<li>Data quality, metadata, lineage, master data management</li>\n<li>Data Management software and tools</li>\n<li>Security, access control, and compliance considerations</li>\n<li>Bachelor’s or Master’s degree in Computer Science, Engineering, Mathematics, or a related field or equivalent practical experience</li>\n</ul>\n<p><strong>Nice to Have</strong></p>\n<ul>\n<li>Exposure to advanced analytics, AI / ML or GenAI from an architectural perspective</li>\n<li>Experience with streaming platforms (e.g. Kafka, Azure Event Hubs)</li>\n<li>Hands-on Experience with data governance or metadata tools</li>\n<li>Cloud, data, or architecture certifications</li>\n</ul>\n<p><strong>Language &amp; Mobility</strong></p>\n<ul>\n<li>Very good English skills</li>\n<li>Willingness to travel for project-related work</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<p>Join our growing Data &amp; Analytics practice and make a difference. In this practice you will be utilizing the most innovative technological solutions in modern data ecosystem. 
In this role you’ll be able to see your own ideas transform into breakthrough results in the areas of Data &amp; Analytics Strategy, Data Management &amp; Governance, Data Platforms &amp; Engineering, Analytics &amp; Data Science.</p>\n<p><strong>About Infosys Consulting</strong></p>\n<p>Be part of a globally renowned management consulting firm on the front-line of industry disruption and at the cutting edge of technology. We work with market leading brands across sectors. Our culture is inclusive and entrepreneurial. Being a mid-size consultancy within the scale of Infosys gives us the global reach to partner with our clients throughout their transformation journey.</p>\n<p>Our core values, IC-LIFE, form a common code that helps us move forward. IC-LIFE stands for Inclusion, Equity and Diversity, Client, Leadership, Integrity, Fairness, and Excellence. To learn more about Infosys Consulting and our values, please visit our careers page.</p>\n<p>Within Europe, we are recognized as one of the UK’s top firms by the Financial Times and Forbes due to our client innovations, our cultural diversity and dedicated training and career paths. Infosys is on Germany’s top employers list for 2023. Management Consulting Magazine named us on their list of Best Firms to Work for. Furthermore, Infosys has been recognized by the Top Employers Institute, a global certification company, for its exceptional standards in employee conditions across Europe for five years in a row.</p>\n<p>We offer industry-leading compensation and benefits, along with top training and development opportunities so that you can grow your career and achieve your personal ambitions. Curious to learn more? We’d love to hear from you. 
Apply today!</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_ee2fcbdc-fc4","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Infosys Consulting - Europe","sameAs":"https://jobs.workable.com","logo":"https://logos.yubhub.co/view.com.png"},"x-apply-url":"https://jobs.workable.com/view/uuSzzCt8qNbo6UpEFkSyjY/hybrid-principal-consultant---data-architecture-in-london-at-infosys-consulting---europe","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Data Mesh/ Data Fabric/ Data lake / data warehouse architectures","Modern Data Architecture design principles","Batch and streaming data integration patterns","Data Platform, DevOps, deployment and security architectures","Analytics and AI enablement architectures","Azure, AWS or GCP","Databricks, Snowflake, BigQuery, Azure Synapse / Microsoft Fabric","Postgres, SQL Server, Oracle","Cosmos DB, MongoDB, InfluxDB","API-based and event-driven architectures","Docker / Kubernetes"],"x-skills-preferred":["Advanced analytics, AI / ML or GenAI","Streaming platforms (e.g. 
Kafka, Azure Event Hubs)","Data governance or metadata tools","Cloud, data, or architecture certifications"],"datePosted":"2026-03-09T16:52:06.783Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"London"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Data Mesh/ Data Fabric/ Data lake / data warehouse architectures, Modern Data Architecture design principles, Batch and streaming data integration patterns, Data Platform, DevOps, deployment and security architectures, Analytics and AI enablement architectures, Azure, AWS or GCP, Databricks, Snowflake, BigQuery, Azure Synapse / Microsoft Fabric, Postgres, SQL Server, Oracle, Cosmos DB, MongoDB, InfluxDB, API-based and event-driven architectures, Docker / Kubernetes, Advanced analytics, AI / ML or GenAI, Streaming platforms (e.g. Kafka, Azure Event Hubs), Data governance or metadata tools, Cloud, data, or architecture certifications"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_56dc9a51-e66"},"title":"Principal Consultant - Data Architecture","description":"<p><strong>Principal Consultant - Data Architecture</strong></p>\n<p>You will be part of an entrepreneurial, high-growth environment of 300,000 employees. Our dynamic organization allows you to work across functional business pillars, contributing your ideas, experiences, diverse thinking, and a strong mindset.</p>\n<p><strong>About Your Role</strong></p>\n<p>As a Principal Data Architecture Consultant, you will act as a senior technical leader in complex data and analytics engagements. 
You will shape and govern end-to-end enterprise data architectures, lead technical teams, and serve as a trusted technical advisor for clients and internal stakeholders.</p>\n<p><strong>Your Role Will Include:</strong></p>\n<ul>\n<li>Define and govern target enterprise data, integration and analytics architectures across cloud and hybrid environments</li>\n<li>Translate business objectives into scalable, secure, and compliant data solutions</li>\n<li>Lead the design of end-to-end data solutions (ingestion, integration, storage, security, processing, analytics, AI enablement)</li>\n<li>Guide delivery teams through implementation, rollout, and production readiness</li>\n<li>Function as senior technical counterpart for client architects, IT leads, and engineering teams</li>\n<li>Mentor data architects, system architects and engineers and contribute to best practices and reference architectures</li>\n<li>Support pre-sales and solution design activities from a technical perspective</li>\n</ul>\n<p><strong>Requirements</strong></p>\n<ul>\n<li>5–8+ years of experience in enterprise data architecture, system data integration, data engineering, or analytics</li>\n<li>Proven experience leading enterprise data architecture workstreams or technical teams</li>\n<li>Strong client-facing experience in complex enterprise environments</li>\n</ul>\n<p><strong>Core Data &amp; Analytics Technology Skills</strong></p>\n<ul>\n<li>Strong expertise in modern data architectures, including:</li>\n<li>Data Mesh/ Data Fabric/ Data lake / data warehouse architectures</li>\n<li>Modern Data Architecture design principles</li>\n<li>Batch and streaming data integration patterns</li>\n<li>Data Platform, DevOps, deployment and security architectures</li>\n<li>Analytics and AI enablement architectures</li>\n<li>Hands-on experience with cloud data platforms, e.g.:</li>\n<li>Azure, AWS or GCP</li>\n<li>Databricks, Snowflake, BigQuery, Azure Synapse / Microsoft Fabric</li>\n<li>Strong SQL skills and 
experience with relational databases (e.g. Postgres, SQL Server, Oracle)</li>\n<li>Experience with NoSQL databases (e.g. Cosmos DB, MongoDB, InfluxDB)</li>\n<li>Solid understanding of API-based and event-driven architectures</li>\n<li>Experience designing and governing enterprise data migration programmes, including mapping, transformation rules, data quality remediation etc.</li>\n</ul>\n<p><strong>Engineering &amp; Platform Foundations</strong></p>\n<ul>\n<li>Experience with data pipelines, orchestration, and automation</li>\n<li>Familiarity with CI/CD concepts and production-grade deployments</li>\n<li>Understanding of distributed systems; Docker / Kubernetes is a plus</li>\n</ul>\n<p><strong>Data Management &amp; Governance</strong></p>\n<ul>\n<li>Strong understanding of data management and governance principles, including:</li>\n<li>Data quality, metadata, lineage, master data management</li>\n<li>Data Management software and tools</li>\n<li>Security, access control, and compliance considerations</li>\n<li>Bachelor’s or Master’s degree in Computer Science, Engineering, Mathematics, or a related field or equivalent practical experience</li>\n</ul>\n<p><strong>Nice to Have</strong></p>\n<ul>\n<li>Exposure to advanced analytics, AI / ML or GenAI from an architectural perspective</li>\n<li>Experience with streaming platforms (e.g. Kafka, Azure Event Hubs)</li>\n<li>Hands-on Experience with data governance or metadata tools</li>\n<li>Cloud, data, or architecture certifications</li>\n</ul>\n<p><strong>Language &amp; Mobility</strong></p>\n<ul>\n<li>Very good English skills</li>\n<li>Willingness to travel for project-related work</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<p>You will be utilizing the most innovative technological solutions in modern data ecosystem. 
In this role you’ll be able to see your own ideas transform into breakthrough results in the areas of Data &amp; Analytics Strategy, Data Management &amp; Governance, Data Platforms &amp; Engineering, Analytics &amp; Data Science.</p>\n<p><strong>About Infosys Consulting</strong></p>\n<p>Be part of a globally renowned management consulting firm on the front-line of industry disruption and at the cutting edge of technology. We work with market leading brands across sectors. Our culture is inclusive and entrepreneurial. Being a mid-size consultancy within the scale of Infosys gives us the global reach to partner with our clients throughout their transformation journey.</p>\n<p>Our core values, IC-LIFE, form a common code that helps us move forward. IC-LIFE stands for Inclusion, Equity and Diversity, Client, Leadership, Integrity, Fairness, and Excellence. To learn more about Infosys Consulting and our values, please visit our careers page.</p>\n<p>Within Europe, we are recognized as one of the UK’s top firms by the Financial Times and Forbes due to our client innovations, our cultural diversity and dedicated training and career paths. Infosys is on Germany’s top employers list for 2023. Management Consulting Magazine named us on their list of Best Firms to Work for. Furthermore, Infosys has been recognized by the Top Employers Institute, a global certification company, for its exceptional standards in employee conditions across Europe for five years in a row.</p>\n<p>We offer industry-leading compensation and benefits, along with top training and development opportunities so that you can grow your career and achieve your personal ambitions. Curious to learn more? We’d love to hear from you. 
Apply today!</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_56dc9a51-e66","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Infosys Consulting - Europe","sameAs":"https://jobs.workable.com","logo":"https://logos.yubhub.co/view.com.png"},"x-apply-url":"https://jobs.workable.com/view/hpBWjvvy8D6B1f818cHxZR/remote-principal-consultant---data-architecture-in-poland-at-infosys-consulting---europe","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["enterprise data architecture","system data integration","data engineering","analytics","modern data architectures","Data Mesh/ Data Fabric/ Data lake / data warehouse architectures","Modern Data Architecture design principles","Batch and streaming data integration patterns","Data Platform, DevOps, deployment and security architectures","Analytics and AI enablement architectures","cloud data platforms","Azure","AWS","GCP","Databricks","Snowflake","BigQuery","Azure Synapse / Microsoft Fabric","SQL","relational databases","Postgres","SQL Server","Oracle","NoSQL databases","Cosmos DB","MongoDB","InfluxDB","API-based and event-driven architectures","data migration programmes","data pipelines","orchestration","automation","CI/CD concepts","production-grade deployments","distributed systems","Docker","Kubernetes","data management and governance principles","data quality","metadata","lineage","master data management","data management software and tools","security","access control","compliance considerations","Bachelor’s or Master’s degree in Computer Science, Engineering, Mathematics, or a related field or equivalent practical experience"],"x-skills-preferred":["advanced analytics","AI / ML or GenAI","streaming platforms","Kafka","Azure Event Hubs","data governance or metadata tools","cloud","data","architecture 
certifications"],"datePosted":"2026-03-09T16:51:22.857Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Poland"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"enterprise data architecture, system data integration, data engineering, analytics, modern data architectures, Data Mesh/ Data Fabric/ Data lake / data warehouse architectures, Modern Data Architecture design principles, Batch and streaming data integration patterns, Data Platform, DevOps, deployment and security architectures, Analytics and AI enablement architectures, cloud data platforms, Azure, AWS, GCP, Databricks, Snowflake, BigQuery, Azure Synapse / Microsoft Fabric, SQL, relational databases, Postgres, SQL Server, Oracle, NoSQL databases, Cosmos DB, MongoDB, InfluxDB, API-based and event-driven architectures, data migration programmes, data pipelines, orchestration, automation, CI/CD concepts, production-grade deployments, distributed systems, Docker, Kubernetes, data management and governance principles, data quality, metadata, lineage, master data management, data management software and tools, security, access control, compliance considerations, Bachelor’s or Master’s degree in Computer Science, Engineering, Mathematics, or a related field or equivalent practical experience, advanced analytics, AI / ML or GenAI, streaming platforms, Kafka, Azure Event Hubs, data governance or metadata tools, cloud, data, architecture certifications"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_fbb19758-f83"},"title":"Principal Consultant Data Architecture (m/w/d)","description":"<p>Are you looking to advance your career and work with experienced, talented colleagues to successfully solve the most significant challenges of our clients? We are growing further and seeking engaged individuals to strengthen our team. 
You will be part of a dynamic, strongly growing company with over 300,000 employees.</p>\n<p>Our dynamic organisation allows you to work across topics and bring in your ideas, experiences, creativity, and goal orientation. Are you ready?</p>\n<p>As a Principal Consultant Data Architecture, you will be the technical leader in complex data and analytics projects. You will shape and govern end-to-end enterprise data architectures, lead technical teams, and be a trusted technical advisor for customers and internal stakeholders.</p>\n<p>You will ensure that enterprise data and analytics solutions are scalable, secure, and production-ready, translate business requirements into robust technical designs, and plan the rollout.</p>\n<p><strong>Your Tasks:</strong></p>\n<ul>\n<li>Defining and governing target architectures for enterprise data, integration, and analytics in cloud and hybrid environments</li>\n<li>Translating business goals into scalable, secure, and compliant architectures</li>\n<li>Leading the design of comprehensive end-to-end data solutions (data ingestion, data integration, storage, security, processing, analytics, AI enablement)</li>\n<li>Guiding and supporting delivery teams during implementation, rollout, and production readiness</li>\n<li>Acting as senior technical counterpart for client architects, IT leads, and engineering teams</li>\n<li>Mentoring system and data architects as well as engineers</li>\n<li>Contributing to the further development of best practices and reference architectures</li>\n<li>Supporting pre-sales and solution design activities from a technical perspective</li>\n</ul>\n<p><strong>What You Bring - Minimum Requirements</strong></p>\n<p><strong>Experience &amp; Seniority</strong></p>\n<ul>\n<li>At least 5 years of relevant professional experience in enterprise data architecture, data integration, data engineering, or analytics</li>\n<li>Experience in leading enterprise data 
architecture workstreams or technical teams</li>\n<li>Strong customer and advisory experience in complex enterprise environments</li>\n</ul>\n<p><strong>Core Data &amp; Analytics Technology Skills</strong></p>\n<ul>\n<li>In-depth expertise in modern data architectures, particularly:</li>\n</ul>\n<ol>\n<li>Data Mesh / Data Fabric / Data Lake / Data Warehouse Architectures</li>\n<li>Principles of modern data architecture designs</li>\n<li>Integration patterns for batch and streaming data</li>\n<li>Data platform, DevOps, deployment, and security architectures</li>\n<li>Analytics and AI enablement architectures</li>\n</ol>\n<ul>\n<li>Practical experience with cloud data platforms, such as:</li>\n</ul>\n<ol>\n<li>Azure, AWS, or GCP</li>\n<li>Databricks, Snowflake, BigQuery, Azure Synapse / Microsoft Fabric</li>\n</ol>\n<ul>\n<li>Very good SQL knowledge as well as experience with relational databases (e.g. PostgreSQL, SQL-Server, Oracle)</li>\n<li>Experience with NoSQL databases (e.g. Cosmos DB, MongoDB, InfluxDB)</li>\n<li>Good understanding of API-based and event-driven architectures</li>\n<li>Experience in conceiving and steering enterprise data migration programs (including mapping, transformation rules, data quality measures, etc.)</li>\n</ul>\n<p><strong>Engineering &amp; Platform Fundamentals</strong></p>\n<ul>\n<li>Experience with data pipelines, orchestration, and automation</li>\n<li>Knowledge of CI/CD concepts and production-ready deployments</li>\n<li>Understanding of distributed systems; Docker / Kubernetes knowledge is an advantage</li>\n</ul>\n<p><strong>Data Management &amp; Governance</strong></p>\n<ul>\n<li>Very good understanding of data management and governance principles, particularly:</li>\n</ul>\n<ol>\n<li>Data quality, metadata, lineage, master data management</li>\n<li>Data management software and tools</li>\n<li>Security, access, and compliance requirements</li>\n</ol>\n<ul>\n<li>Bachelor&#39;s or master&#39;s degree in computer science, 
engineering, mathematics, or a related field, or equivalent practical experience</li>\n</ul>\n<p><strong>Nice to Have</strong></p>\n<ul>\n<li>Experience with advanced analytics, AI/ML, or GenAI from an architect&#39;s perspective</li>\n<li>Experience with streaming platforms (e.g. Kafka, Azure Event Hubs)</li>\n<li>Practical experience with data governance or metadata tools</li>\n<li>Cloud or architecture certifications</li>\n</ul>\n<p><strong>Language &amp; Mobility (Germany)</strong></p>\n<ul>\n<li>Fluent German skills (at least C1) for customer communication in the German-speaking market</li>\n<li>Very good English skills</li>\n<li>Willingness to travel for project-related work</li>\n</ul>\n<p><strong>About Your Team</strong></p>\n<p>You will become part of our growing data and analytics teams. In this area, you will work with modern technologies in modern data ecosystems. You have the opportunity to turn your own ideas into results - in the areas of data and analytics strategy, data management and governance, data platforms and engineering, as well as analytics and data science.</p>\n<p><strong>About Infosys Consulting</strong></p>\n<p>You will become an employee of a globally renowned management consulting firm that is at the forefront of industry disruption. We work across industries with leading companies. Our culture is inclusive and entrepreneurial. As a mid-sized consulting firm backed by the scale of Infosys, we can partner with our customers worldwide throughout their entire transformation journey.</p>\n<p>Our values IC-LIFE - Inclusion, Equity &amp; Diversity, Client, Leadership, Integrity, Fairness, and Excellence - form our compass of values. Further information can be found on our career website.</p>\n<p>In Europe, we are recognized by the Financial Times and Forbes as one of the leading consulting firms. 
Infosys is one of the top employers in Germany in 2023 and has been certified by the Top Employers Institute for outstanding working conditions in Europe for five years in a row.</p>\n<p>We offer market-leading remuneration, attractive additional benefits, and excellent further education and development opportunities. Have we piqued your interest? Then we look forward to your application.</p>\n<p>More about Infosys Consulting - Europe</p>\n<p>Where Innovation meets Excellence.</p>\n<p>Infosys Consulting is a globally renowned management consulting firm that is on the front line of industry disruption. We are a mid-size player with a supportive, entrepreneurial spirit that works with a market-leading brand in every sector, while our parent organization Infosys is a top-5 powerhouse IT brand that is outperforming the market and experiencing rapid growth.</p>\n<p>Our consulting business is annually recognized as one of the UK’s top firms by the Financial Times and Forbes for our client innovations, our cultural diversity, and the dedicated training and career paths we offer our consultants. 
We are committed to fostering an inclusive work culture that inspires everyone to deliver their best.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_fbb19758-f83","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Infosys Consulting - Europe","sameAs":"https://jobs.workable.com","logo":"https://logos.yubhub.co/view.com.png"},"x-apply-url":"https://jobs.workable.com/view/sve4gTuNFLf3RtEjhQMzHp/remote-principal-consultant-data-architecture-(m%2Fw%2Fd)--deutschlandweit-in-munich-at-infosys-consulting---europe","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Data Mesh","Data Fabric","Data Lake","Data Warehouse Architectures","Principles of modern data architecture designs","Integration patterns for batch and streaming data","Data platform, DevOps, deployment, and security architectures","Analytics and AI enablement architectures","Azure","AWS","GCP","Databricks","Snowflake","BigQuery","Azure Synapse / Microsoft Fabric","PostgreSQL","SQL-Server","Oracle","Cosmos DB","MongoDB","InfluxDB","API-based and event-driven architectures","Enterprise data migration programs"],"x-skills-preferred":[],"datePosted":"2026-03-09T16:50:38.864Z","jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Data Mesh, Data Fabric, Data Lake, Data Warehouse Architectures, Principles of modern data architecture designs, Integration patterns for batch and streaming data, Data platform, DevOps, deployment, and security architectures, Analytics and AI enablement architectures, Azure, AWS, GCP, Databricks, Snowflake, BigQuery, Azure Synapse / Microsoft Fabric, PostgreSQL, SQL-Server, Oracle, Cosmos DB, MongoDB, InfluxDB, API-based and event-driven architectures, Enterprise data migration 
programs"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_5ed5984b-196"},"title":"Solutions Engineer (pre-sales) - Banking","description":"<p>Are you looking for a challenging role that combines technical and business skills? Do you want to work with a dynamic team that is passionate about making data easier? We are seeking a Solutions Engineer (pre-sales) to join our team in New York.</p>\n<p>As a Solutions Engineer, you will be responsible for uncovering customer pain points, designing and delivering tailored solutions, and owning the proof of concept lifecycle. You will work closely with sales directors to shape opportunity strategy, qualify deals, and influence account planning. You will also cultivate trust as the go-to advisor to clients, big and small, crafting compelling value engineering strategies.</p>\n<p>To be successful in this role, you will need to have extensive experience in technology as a bridge between business and technical users/stakeholders in a commercially focused technical sales role. You will also need to have experience working with large banks or financial institutions, including familiarity with buying cycles in regulated environments.</p>\n<p>In addition to your technical skills, you will need to have excellent communication and presentation skills, with the ability to present to and excite senior leaders in financial services. You will also need to be able to travel across North America (25-40%) depending on customer needs and location.</p>\n<p>We offer a competitive base salary of $160-200k, company bonus, 100% 401K match up to 5%, comprehensive benefits coverage, including mental health support, fitness reimbursements, and financial well-being. 
We also offer competitive annual leave, parental leave, PTO, and observed holidays, as well as continuous training and development opportunities.</p>\n<p>If you are a motivated and results-driven individual who is passionate about making data easier, we encourage you to apply for this exciting opportunity.</p>\n<p><strong>Responsibilities:</strong></p>\n<ul>\n<li>Uncover customer pain points and design tailored solutions</li>\n<li>Own the proof of concept lifecycle</li>\n<li>Work closely with sales directors to shape opportunity strategy</li>\n<li>Cultivate trust as the go-to advisor to clients</li>\n<li>Craft compelling value engineering strategies</li>\n</ul>\n<p><strong>Requirements:</strong></p>\n<ul>\n<li>Extensive experience in technology as a bridge between business and technical users/stakeholders in a commercially focused technical sales role</li>\n<li>Experience working with large banks or financial institutions, including familiarity with buying cycles in regulated environments</li>\n<li>Excellent communication and presentation skills</li>\n<li>Ability to travel across North America (25-40%) depending on customer needs and location</li>\n</ul>\n<p><strong>Nice to have:</strong></p>\n<ul>\n<li>Experience working with scale-up companies</li>\n<li>Experience with Quantexa&#39;s partner ecosystem (AWS, Azure, GCP, Databricks, Elasticsearch)</li>\n</ul>\n<p><strong>Benefits:</strong></p>\n<ul>\n<li>Competitive base salary of $160-200k</li>\n<li>Company bonus</li>\n<li>100% 401K match up to 5%</li>\n<li>Comprehensive benefits coverage, including mental health support, fitness reimbursements, and financial well-being</li>\n<li>Competitive annual leave, parental leave, PTO, and observed holidays</li>\n<li>Continuous training and development opportunities</li>\n<li>Work from Anywhere Scheme: Spend up to 2 months working outside of your country of employment over a rolling 12-month period</li>\n<li>Employee Referral Program</li>\n<li>Team Social Budget &amp; 
Company-wide Socials</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_5ed5984b-196","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Quantexa","sameAs":"https://jobs.workable.com","logo":"https://logos.yubhub.co/view.com.png"},"x-apply-url":"https://jobs.workable.com/view/kGwgU4AAZnoS6jvFGxQzM4/hybrid-solutions-engineer-(pre-sales)---banking-in-new-york-at-quantexa","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$160-200k","x-skills-required":["Extensive experience in technology as a bridge between business and technical users/stakeholders in a commercially focused technical sales role","Experience working with large banks or financial institutions, including familiarity with buying cycles in regulated environments","Excellent communication and presentation skills","Ability to travel across North America (25-40%) depending on customer needs and location"],"x-skills-preferred":["Experience working with scale-up companies","Experience with Quantexa's partner ecosystem (AWS, Azure, GCP, Databricks, Elasticsearch)"],"datePosted":"2026-03-09T16:49:39.647Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"New York"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Finance","skills":"Extensive experience in technology as a bridge between business and technical users/stakeholders in a commercially focused technical sales role, Experience working with large banks or financial institutions, including familiarity with buying cycles in regulated environments, Excellent communication and presentation skills, Ability to travel across North America (25-40%) depending on customer needs and location, Experience working with scale-up companies, Experience with Quantexa's partner ecosystem (AWS, Azure, GCP, 
Databricks, Elasticsearch)","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":160000,"maxValue":200000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_5093878d-5f9"},"title":"C# Application Engineer, Associate","description":"<p>At BlackRock, technology is central to our mission, and our team continues to drive innovation across the industry. We value curiosity, collaboration, and a willingness to experiment to tackle complex problems. As an Associate Software Engineer on the Enterprise Data Solutions data team, you&#39;ll join an engineering pod responsible for third-party integrations, data feeds, APIs, and incorporating Preqin data into the BlackRock Enterprise Data Platform. Your work will focus on building robust, scalable systems that align with business objectives. You&#39;ll deliver high-quality solutions by applying strong data expertise, product insight, and effective communication skills. 
Collaboration with other engineers, product managers, and data owners will be key as you help design, develop, and launch new features and influence our technical strategy.</p>\n<p>Key responsibilities will include:</p>\n<ul>\n<li>Designing, implementing, and maintaining robust systems for data distribution to customers and third-party integrations.</li>\n<li>Collaborating closely with engineering teams across the organisation to ensure adoption of optimal technical solutions and raising development standards through knowledge sharing and best practice implementation.</li>\n<li>Actively contributing to technical discussions regarding new product directions, data modelling, and architectural decisions to ensure the technology platform remains scalable and adaptable.</li>\n</ul>\n<p>What we are looking for:</p>\n<ul>\n<li>4+ years’ experience in software engineering.</li>\n<li>Strong technical ability across the full stack: C#, Python, FastAPI, React and Typescript are a plus.</li>\n<li>Experience with PostgreSQL, MongoDB and other SQL and NoSQL databases (AWS Aurora, Azure Cosmos DB, MS SQL Server, Cassandra are a plus).</li>\n<li>Experience with Data Warehouse systems, particularly Snowflake (DataBricks is a plus).</li>\n<li>Experience of working within cloud provider services – Azure or AWS and utilization of infrastructure as code (Terraform).</li>\n<li>Familiarity with containerisation – Docker and Kubernetes.</li>\n<li>Excel plugin development experience is a plus.</li>\n<li>Excellent verbal and written communication and interpersonal skills.</li>\n<li>A “let’s do it” and “challenge accepted” attitude when faced with challenging tasks, and willingness to learn new technologies and ways of working.</li>\n</ul>\n<p>Our benefits include retirement investment, education reimbursement, comprehensive resources to support physical health and emotional well-being, family support programs, and Flexible Time Off (FTO) so you can relax, recharge and be there for the 
people you care about.</p>\n<p>Our hybrid work model is designed to enable a culture of collaboration and apprenticeship that enriches the experience of our employees, while supporting flexibility for all. Employees are currently required to work at least 4 days in the office per week, with the flexibility to work from home 1 day a week. Some business groups may require more time in the office due to their roles and responsibilities. We remain focused on increasing the impactful moments that arise when we work together in person – aligned with our commitment to performance and innovation.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_5093878d-5f9","directApply":true,"hiringOrganization":{"@type":"Organization","name":"BlackRock","sameAs":"https://jobs.workable.com","logo":"https://logos.yubhub.co/view.com.png"},"x-apply-url":"https://jobs.workable.com/view/eqw6vaVybvYbGaFNw8hUeY/c%23%2C-application-engineer%2C-associate-in-london-at-blackrock","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["C#","Python","FastAPI","React","Typescript","PostgreSQL","MongoDB","AWS Aurora","Azure Cosmos DB","MS SQL Server","Cassandra","Snowflake","DataBricks","Terraform","Docker","Kubernetes","Excel plugin development"],"x-skills-preferred":[],"datePosted":"2026-03-09T16:45:21.927Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"London"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Finance","skills":"C#, Python, FastAPI, React, Typescript, PostgreSQL, MongoDB, AWS Aurora, Azure Cosmos DB, MS SQL Server, Cassandra, Snowflake, DataBricks, Terraform, Docker, Kubernetes, Excel plugin 
development"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_647aab1e-986"},"title":"BackEnd Engineer: Experiments","description":"<p><strong>About the Role</strong></p>\n<p>This is a backend role (Python services + data-heavy systems) where you&#39;ll build and maintain services for experiment assignment, logging, and report generation. You&#39;ll improve scalability, performance, and reliability of experiment reporting pipelines, add product-facing features to the UI, and take technical ownership of projects.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Build and maintain backend services for experiment assignment, logging, and report generation</li>\n<li>Improve scalability, performance, and reliability of experiment reporting pipelines</li>\n<li>Add product-facing features to the UI to help users launch experiments and interpret results</li>\n<li>Take technical ownership of projects: shape solutions, break down work, and drive execution with the team</li>\n<li>Participate in a weekly on-call rotation (investigating occasional issues and answering internal questions)</li>\n</ul>\n<p><strong>Requirements</strong></p>\n<ul>\n<li>Strong experience building and owning production backend services in Python</li>\n<li>Experience with monitoring, alerting, and debugging user-facing systems</li>\n<li>Strong SQL skills and confidence working with data-heavy systems (metrics, logging, analytics)</li>\n<li>Ability to ship maintainable software (tests, code review, incremental delivery)</li>\n</ul>\n<p><strong>Nice-to-have</strong></p>\n<ul>\n<li>Experience with experimentation systems, metrics platforms, or A/B testing workflows</li>\n<li>Experience with PySpark and/or Databricks</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<ul>\n<li>Unlimited vacation time</li>\n<li>Fully remote team</li>\n<li>Work from home stipend</li>\n<li>Apple laptops provided for new employees</li>\n<li>Training 
and development budget</li>\n<li>Maternity &amp; Paternity leave for qualified employees</li>\n<li>Base salary: $80k–$120k USD, depending on knowledge, skills, experience, and interview results</li>\n<li>Stock options</li>\n<li>Regular team offsites to connect and collaborate</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_647aab1e-986","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Constructor","sameAs":"https://apply.workable.com","logo":"https://logos.yubhub.co/j.com.png"},"x-apply-url":"https://apply.workable.com/j/9C931E93D0","x-work-arrangement":"remote","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":"$80k–$120k USD","x-skills-required":["Python","FastAPI","PostgreSQL","Plotly Dash","PySpark/Databricks","AWS","CloudWatch","Sentry"],"x-skills-preferred":["experimentation systems","metrics platforms","A/B testing workflows","PySpark","Databricks"],"datePosted":"2026-03-09T10:58:22.242Z","jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, FastAPI, PostgreSQL, Plotly Dash, PySpark/Databricks, AWS, CloudWatch, Sentry, experimentation systems, metrics platforms, A/B testing workflows, PySpark, Databricks","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":80000,"maxValue":120000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_bb7bb8e9-e31"},"title":"Data Engineer - 12 Month TFT","description":"<p>We&#39;re looking for an experienced Data Engineer to join our team at Electronic Arts. 
As a Data Engineer, you will collaborate with the Marketing team to implement data strategies and develop complex ETL pipelines that support dashboards for promoting deeper understanding of our business.</p>\n<p>You will have experience developing and establishing scalable, efficient, automated processes for large-scale data analyses. You will also stay informed of the latest trends and research on all aspects of data engineering and analytics.</p>\n<p>Key Responsibilities:</p>\n<ul>\n<li>Design, implement and maintain efficient, scalable and robust data pipelines using cloud-native and open-source technologies</li>\n<li>Develop and optimize ETL/ELT processes to ingest, transform, and deliver data from diverse sources</li>\n<li>Automate deployment and monitoring of data workflows using CI/CD best practices</li>\n<li>Guide communications between our users and studio engineers to provide scalable end-to-end solutions</li>\n<li>Promote strategies to improve our data modelling, quality and architecture</li>\n<li>Participate in code reviews, mentor junior engineers, and contribute to team knowledge sharing</li>\n</ul>\n<p>Required Qualifications:</p>\n<ul>\n<li>4+ years relevant industry experience in a data engineering role and graduate degree in Computer Science, Statistics, Informatics, Information Systems or another quantitative field</li>\n<li>Proficiency in writing SQL queries and knowledge of cloud-based databases like Snowflake, Redshift, BigQuery or other big data solutions</li>\n<li>Experience in data modelling and tools such as dbt, ETL processes, and data warehousing</li>\n<li>Experience with at least one of the programming languages like Python, Java</li>\n<li>Experience with version control and code review tools such as Git</li>\n<li>Knowledge of latest data pipeline orchestration tools such as Airflow</li>\n<li>Experience with cloud platforms (AWS, GCP, or Azure) and infrastructure-as-code tools (e.g., Docker, Terraform, 
CloudFormation)</li>\n</ul>\n<p>Nice to Have:</p>\n<ul>\n<li>Experience in gaming and working with its telemetry data or data from similar sources</li>\n<li>Experience with big data platforms and technologies such as EMR, Databricks, Kafka, Spark, Iceberg</li>\n<li>Experience in developing engineering solutions based on near real-time/streaming dataset</li>\n<li>Exposure to AI/ML, MLOps concepts and collaboration with data science or AI teams.</li>\n</ul>\n<p>Pay Transparency - North America</p>\n<p>The ranges listed below are what EA in good faith expects to pay applicants for this role in these locations at the time of this posting. If you reside in a different location, a recruiter will advise on the applicable range and benefits. Pay offered will be determined based on a number of relevant business and candidate factors (e.g. education, qualifications, certifications, experience, skills, geographic location, or business needs).</p>\n<p>Pay Ranges: $100,000 - $139,500 CAD</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_bb7bb8e9-e31","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Electronic Arts","sameAs":"https://jobs.ea.com","logo":"https://logos.yubhub.co/jobs.ea.com.png"},"x-apply-url":"https://jobs.ea.com/en_US/careers/JobDetail/Data-Engineer-12-month-TFT/212451","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"temporary","x-salary-range":"$100,000 - $139,500 CAD","x-skills-required":["SQL","cloud-based databases","data modelling","ETL processes","data warehousing","Python","Java","Git","Airflow","cloud platforms","infrastructure-as-code tools"],"x-skills-preferred":["gaming telemetry data","big data 
platforms","EMR","Databricks","Kafka","Spark","Iceberg","AI/ML","MLOps"],"datePosted":"2026-03-09T10:58:20.588Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Vancouver"}},"employmentType":"TEMPORARY","occupationalCategory":"Engineering","industry":"Technology","skills":"SQL, cloud-based databases, data modelling, ETL processes, data warehousing, Python, Java, Git, Airflow, cloud platforms, infrastructure-as-code tools, gaming telemetry data, big data platforms, EMR, Databricks, Kafka, Spark, Iceberg, AI/ML, MLOps","baseSalary":{"@type":"MonetaryAmount","currency":"CAD","value":{"@type":"QuantitativeValue","minValue":100000,"maxValue":139500,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_cbafe9f0-ca5"},"title":"Merchant Intelligence Team Member","description":"<p><strong>About the Role</strong></p>\n<p>We&#39;re looking for a skilled Merchant Intelligence Team Member to join our team at Constructor. 
As a key member of our Merchant Intelligence team, you will play a critical role in helping merchandizers achieve their ecommerce goals, increasing satisfaction with their sites and retention of customers.</p>\n<p><strong>Key Responsibilities</strong></p>\n<ul>\n<li>Enhance the dashboard experience for merchandizers by building analytics that provide actionable insights to improve e-commerce KPIs.</li>\n<li>Perform data exploration and research user behavior.</li>\n<li>Implement end-to-end data pipelines to support real-time analytics for essential business metrics.</li>\n<li>Take part in product research and development, iterate with prototypes and customer product interviews.</li>\n</ul>\n<p><strong>Requirements</strong></p>\n<ul>\n<li>You are proficient in BI tools (both data analysis and building dashboards for engineers and non-technical folks).</li>\n<li>You are keen to not just prepare more data, but to think critically about how that data can deliver valuable insights and drive impactful decisions for non-technical end users.</li>\n<li>You are an excellent communicator with the ability to translate business requests into technical language and vice versa.</li>\n<li>You are excited to leverage massive amounts of data to drive product innovation and deliver business value.</li>\n<li>You are proficient at SQL (any variant) and well-versed in exploratory data analysis with Python (pandas &amp; numpy, data visualization libraries)</li>\n<li>Practical familiarity with the big data stack (Spark, Hive, Databricks).</li>\n<li>You are adept at fast prototyping and providing analytical support for initiatives in the e-commerce space by identifying and focusing on relevant features and metrics.</li>\n<li>You are willing to develop and maintain effective communication tools to report business performance and inform decision-making at a cross-functional level.</li>\n<li>You get excited to hear feedback from customers and figure out potential 
solutions.</li>\n<li>You&#39;re familiar with math statistics.</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<ul>\n<li>Unlimited vacation time - we strongly encourage all of our employees to take at least 3 weeks per year</li>\n<li>Fully remote team - choose where you live</li>\n<li>Work from home stipend! We want you to have the resources you need to set up your home office</li>\n<li>Apple laptops provided for new employees</li>\n<li>Training and development budget for every employee, refreshed each year</li>\n<li>Maternity &amp; Paternity leave for qualified employees</li>\n<li>Work with smart people who will help you grow and make a meaningful impact</li>\n<li>Base salary: $80k–$120k USD, depending on knowledge, skills, experience, and interview results</li>\n<li>Stock options - offered in addition to the base salary</li>\n<li>Regular team offsites to connect and collaborate</li>\n</ul>\n<p><strong>Diversity, Equity, and Inclusion at Constructor</strong></p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_cbafe9f0-ca5","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Constructor","sameAs":"https://apply.workable.com","logo":"https://logos.yubhub.co/j.com.png"},"x-apply-url":"https://apply.workable.com/j/84CF443B3D","x-work-arrangement":"remote","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":"$80k–$120k USD","x-skills-required":["BI tools","SQL","Python","pandas & numpy","data visualization libraries","Spark","Hive","Databricks"],"x-skills-preferred":["math statistics"],"datePosted":"2026-03-09T10:57:38.303Z","jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"BI tools, SQL, Python, pandas & numpy, data visualization libraries, Spark, Hive, Databricks, math 
statistics","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":80000,"maxValue":120000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_863d1fd7-9ed"},"title":"Business Intelligence Data Analyst","description":"<p>We are seeking a Business Intelligence Data Analyst to join our team. As a Business Intelligence Data Analyst, you will become a subject matter expert for BI (data &amp; analytics) on all intelligence-related tasks. You will work independently and in a team environment performing requirements gathering, data analysis, process mapping, test case definition, development, and collaboration across multiple business units and projects.</p>\n<p><strong>Role Overview</strong></p>\n<p>The Business Intelligence Data Analyst will be responsible for developing and implementing interactive analytic reports and dashboards. You will translate business requirements into ETL and report specifications, analyze and document complex business processes, and develop and implement ETL processes, reports, and queries in support of business analytics.</p>\n<p><strong>Skills Required</strong></p>\n<ul>\n<li>5+ years&#39; experience with Power BI</li>\n<li>Working experience with Microsoft Office 365</li>\n<li>Working experience with Microsoft Power Platform</li>\n<li>Working experience with Google Suite and App Script</li>\n<li>Proficiency with DAX and M languages</li>\n<li>Proficiency in SQL Queries</li>\n<li>Excellent written and oral communication skills</li>\n<li>Excellent time management and organizational skills</li>\n<li>Detail oriented but able to understand the big picture</li>\n<li>Experience communicating with business users</li>\n<li>Bachelor&#39;s degree in computer science, Information Technology, or related field. 
Relevant work experience will also be considered</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<p>Our employees are our most valuable resource; therefore, we provide them with a competitive compensation package commensurate with skills and experience, excellent benefits, high level of job satisfaction and a casual and fun work environment.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_863d1fd7-9ed","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Keywords Studios","sameAs":"https://apply.workable.com","logo":"https://logos.yubhub.co/j.com.png"},"x-apply-url":"https://apply.workable.com/j/6FC2E51793","x-work-arrangement":"remote","x-experience-level":"mid","x-job-type":"permanent","x-salary-range":null,"x-skills-required":["Power BI","Microsoft Office 365","Microsoft Power Platform","Google Suite","DAX","M languages","SQL Queries"],"x-skills-preferred":["Python Programming","Statistical computing experience","Cloud Platforms – AWS/Azure, Snowflake, Databricks","DevOps implementation practices","Scrum methodology delivery experience"],"datePosted":"2026-03-09T10:56:36.614Z","jobLocationType":"TELECOMMUTE","occupationalCategory":"Engineering","industry":"Technology","skills":"Power BI, Microsoft Office 365, Microsoft Power Platform, Google Suite, DAX, M languages, SQL Queries, Python Programming, Statistical computing experience, Cloud Platforms – AWS/Azure, Snowflake, Databricks, DevOps implementation practices, Scrum methodology delivery experience"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_42c9cfa4-8e3"},"title":"Data Platform Engineer","description":"<p><strong>Data Platform Engineer</strong></p>\n<p>Join AVL and make a direct impact on shaping the future of Data, AI, and Mobility.</p>\n<p><strong>Your 
Responsibilities:</strong></p>\n<ul>\n<li>Review and stabilise existing platform implementations (Databricks, Foundry – pipelines, Ontology schemas, Workshop applications, Functions, notebooks).</li>\n<li>Identify performance bottlenecks, technical debt, and governance gaps across data pipelines and application layers.</li>\n<li>Lead Ontology governance and design reviews, acting as a gatekeeper for all schema changes (Object Types, Links, Properties, Actions).</li>\n<li>Define and document target data architectures (ingestion, transformation, and consumption layers).</li>\n<li>Establish coding standards, naming conventions, repository structures, and Function versioning policies.</li>\n<li>Enforce code reviews and technical validation before production deployment through Foundry Branching and Proposal workflows.</li>\n<li>Define and implement a structured testing strategy (unit tests for Functions, integration tests, data quality checks, pipeline expectations).</li>\n<li>Design and improve CI/CD pipelines and Dev/Test/Prod promotion processes using Foundry Marketplace/DevOps.</li>\n<li>Automate deployments, rollbacks, and environment configurations.</li>\n<li>Create and maintain architecture documentation (ADRs, data lineage diagrams, Ontology schemas, data flow diagrams).</li>\n<li>Design reusable Workshop component libraries, custom widgets, and Slate application patterns.</li>\n<li>Design and validate new platform solutions aligned with strategy, security, and governance requirements.</li>\n<li>Mentor the development team on architectural thinking and platform best practices (40% hands-on coding, 60% architecture/leadership).</li>\n</ul>\n<p><strong>Your Profile:</strong></p>\n<ul>\n<li>Master’s degree in Computer Science, Data Engineering, or a related field.</li>\n<li>5+ years of experience in data engineering or platform architecture roles.</li>\n<li>Strong expertise in modern data platforms (Databricks, Snowflake, AWS Glue, Azure Synapse, or similar). 
Foundry experience is strongly preferred but not required.</li>\n<li>Advanced skills in Python (PySpark), SQL (Spark SQL), and TypeScript for backend logic and application development.</li>\n<li>Experience with distributed data processing (Spark architecture, partitioning strategies, performance optimisation).</li>\n<li>Strong understanding of relational databases (PostgreSQL, Oracle, or similar).</li>\n<li>Experience with CI/CD workflows, Git branching strategies, and automated testing in data environments.</li>\n<li>Solid experience in end-to-end ETL and data transformation processes.</li>\n<li>Proven experience in performance optimisation and scalable architecture design.</li>\n<li>Experience in defining development standards, interface contracts, and engineering best practices.</li>\n<li>Hands-on coding mindset: must write production code daily, not only review or document.</li>\n<li>Structured, analytical, and documentation-oriented approach.</li>\n<li>Strong communication and technical leadership skills, with very good proficiency in English and French.</li>\n</ul>\n<p><strong>Benefits:</strong></p>\n<ul>\n<li>A role with true technical ownership: architecture, scaling, and governance decisions that directly impact production AI solutions.</li>\n<li>Complex projects that go beyond “just pipelines” – covering big data processing and large-scale ML/DL deployment.</li>\n<li>Opportunities to deepen your expertise in Databricks, cloud-native ML, and MLOps.</li>\n<li>A team where your input and technical decisions truly matter.</li>\n<li>A competitive package and benefits.</li>\n</ul>\n<p><strong>How to Apply:</strong></p>\n<p>If you have these qualifications and are looking for a new challenge, we encourage you to apply to discuss it further!</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_42c9cfa4-8e3","directApply":true,"hiringOrganization":{"@type":"Organization","name":"AVL","sameAs":"https://jobs.avl.com","logo":"https://logos.yubhub.co/jobs.avl.com.png"},"x-apply-url":"https://jobs.avl.com/job/Sala-Al-Jadida-Data-Platform-Engineer/1365823133/","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Databricks","Foundry","Python","SQL","TypeScript","Spark","PostgreSQL","CI/CD","Git","ETL","performance optimisation","scalable architecture design"],"x-skills-preferred":["cloud-native ML","MLOps"],"datePosted":"2026-03-09T09:29:56.618Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Sala Al Jadida, MA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Automotive","skills":"Databricks, Foundry, Python, SQL, TypeScript, Spark, PostgreSQL, CI/CD, Git, ETL, performance optimisation, scalable architecture design, cloud-native ML, MLOps"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_672557eb-bee"},"title":"Engineering Manager, Data Platform","description":"<p><strong>Engineering Manager, Data Platform</strong></p>\n<p>We&#39;re looking for an experienced Engineering Manager to lead our Data Interfaces team, responsible for enabling users and systems to leverage our core data platform. 
The team owns the collection of operational telemetry data, the UI for interacting with the Data Platform, as well as APIs and plugins for querying data out of the Data Platform for visualization, alerting, and integration into internal services.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Lead, mentor, and grow a team of senior and principal engineers</li>\n<li>Foster an inclusive, collaborative, and feedback-driven engineering culture</li>\n<li>Drive continuous improvement in the team&#39;s processes, delivery, and impact</li>\n<li>Collaborate with stakeholders in engineering, data science, and analytics to shape and communicate the team&#39;s vision, strategy, and roadmap</li>\n<li>Bridge strategic vision and tactical execution by breaking down long-term goals into achievable, well-scoped iterations that deliver continuous value</li>\n<li>Ensure high standards in system architecture, code quality, and operational excellence</li>\n</ul>\n<p><strong>Requirements</strong></p>\n<ul>\n<li>3+ years of engineering management experience leading high-performing teams in data platform or infrastructure environments</li>\n<li>Proven track record navigating complex systems, ambiguous requirements, and high-pressure situations with confidence and clarity</li>\n<li>Deep experience in architecting, building, and operating scalable, distributed data platforms</li>\n<li>Strong technical leadership skills, including the ability to review architecture/design documents and provide actionable feedback on code and systems</li>\n<li>Ability to engage deeply in technical discussions, review architecture and design documents, evaluate pull requests, and step in during high-priority incidents when needed — even if hands-on coding isn’t a part of the day-to-day</li>\n<li>Hands-on experience with distributed event streaming systems like Apache Kafka</li>\n<li>Familiarity with OLAP databases such as Apache Pinot or ClickHouse</li>\n<li>Proficient in modern data lake and 
warehouse tools such as S3, Databricks, or Snowflake</li>\n<li>Strong foundation in the .NET ecosystem, container orchestration with Kubernetes, and cloud platforms, especially AWS</li>\n<li>Experience with distributed data processing engines like Apache Flink or Apache Spark is nice to have</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<p>Epic Games offers a comprehensive benefits package, including:</p>\n<ul>\n<li>100% coverage of medical, dental, and vision premiums for you and your dependents</li>\n<li>Long-term disability and life insurance</li>\n<li>401k with competitive match</li>\n<li>Unlimited PTO and sick time</li>\n<li>Paid sabbatical after 7 years of employment</li>\n<li>Robust mental well-being program through Modern Health</li>\n<li>Company-wide paid breaks and events throughout the year</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_672557eb-bee","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Epic Games","sameAs":"https://www.epicgames.com","logo":"https://logos.yubhub.co/epicgames.com.png"},"x-apply-url":"https://www.epicgames.com/en-US/careers/jobs/5818031004","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["engineering management","data platform","distributed event streaming systems","OLAP databases","modern data lake and warehouse tools",".NET ecosystem","container orchestration","cloud platforms"],"x-skills-preferred":["Apache Kafka","Apache Pinot","ClickHouse","S3","Databricks","Snowflake","Kubernetes","AWS","Apache Flink","Apache Spark"],"datePosted":"2026-03-08T22:16:11.037Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Cary"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"engineering management, data platform, 
distributed event streaming systems, OLAP databases, modern data lake and warehouse tools, .NET ecosystem, container orchestration, cloud platforms, Apache Kafka, Apache Pinot, ClickHouse, S3, Databricks, Snowflake, Kubernetes, AWS, Apache Flink, Apache Spark"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_c4eae45a-3da"},"title":"Abuse Investigator - Child Safety","description":"<p><strong>Compensation</strong></p>\n<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>\n<ul>\n<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>\n</ul>\n<ul>\n<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>\n</ul>\n<ul>\n<li>401(k) retirement plan with employer match</li>\n</ul>\n<ul>\n<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>\n</ul>\n<ul>\n<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>\n</ul>\n<ul>\n<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>\n</ul>\n<ul>\n<li>Mental health and wellness support</li>\n</ul>\n<ul>\n<li>Employer-paid basic life and disability coverage</li>\n</ul>\n<ul>\n<li>Annual learning and development stipend to fuel your 
professional growth</li>\n</ul>\n<ul>\n<li>Daily meals in our offices, and meal delivery credits as eligible</li>\n</ul>\n<ul>\n<li>Relocation support for eligible employees</li>\n</ul>\n<ul>\n<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>\n</ul>\n<p><strong>About the Team</strong></p>\n<p>OpenAI’s mission is to ensure that general-purpose artificial intelligence benefits all of humanity. We believe achieving this goal requires real-world deployment and continuous iteration based on how our products are used—and misused—in practice.</p>\n<p>The Intelligence and Investigations team supports this mission by identifying, analyzing, and investigating misuse of our products, particularly novel or emerging abuse patterns. Our work enables partner teams to develop data-backed product policies and build scalable safety mitigations. By precisely understanding abuse, we help ensure OpenAI’s products can be used safely to build meaningful, legitimate applications.</p>\n<p><strong>About the Role</strong></p>\n<p>As a Child Safety Investigator on the Intelligence &amp; Investigations team, you will identify and disrupt actors attempting to use OpenAI’s products to sexually exploit minors both online and in the real world. OpenAI maintains strict prohibitions in this area and reports apparent CSAM and other credible child sexual exploitation threats to the National Center for Missing and Exploited Children (NCMEC), consistent with applicable law and our policies.</p>\n<p>This role requires domain-specific expertise, technical fluency, and the ability to operate in ambiguous, high-impact situations. 
You will conduct in-depth investigations into user behavior, analyze product data, identify emerging threat patterns, and support enforcement actions — including escalations requiring legal review and external reporting.</p>\n<p>You will also help develop detection strategies that proactively surface high-risk behavior, especially cases that evade existing safeguards. This role includes responding to time-sensitive escalations. Investigations may involve exposure to sensitive and disturbing material, including sexual or violent content.</p>\n<p><strong>In this role, you will:</strong></p>\n<ul>\n<li>Investigate high-severity child safety violations and disrupt malicious actors in partnership with Policy, Legal, Integrity, Global Affairs, Security, and Engineering teams, including through cross-platform and cross-internet research</li>\n</ul>\n<ul>\n<li>Support investigations across other high-risk harm areas where child safety concerns intersect</li>\n</ul>\n<ul>\n<li>Conduct open-source and cross-platform research to contextualize actors and abuse networks</li>\n</ul>\n<ul>\n<li>Develop detection signals, behavioral heuristics, and tracking strategies to proactively identify high-risk users using tools such as SQL, Databricks, and Python</li>\n</ul>\n<ul>\n<li>Communicate investigation findings clearly and effectively to internal stakeholders through written briefs, data-backed recommendations, and escalation summaries</li>\n</ul>\n<ul>\n<li>Develop a deep, working understanding of OpenAI’s products, internal data systems, and enforcement mechanisms</li>\n</ul>\n<ul>\n<li>Collaborate with engineering and data partners to improve investigative tooling, data quality, and analyst workflows</li>\n</ul>\n<ul>\n<li>Support time-sensitive escalations and high-priority investigations requiring rapid analysis and sound judgment</li>\n</ul>\n<ul>\n<li>Represent investigative findings and work externally with the press, governments, NGOs, and law enforcement 
agencies</li>\n</ul>\n<ul>\n<li>Participate in a rotating on-call schedule to support timely response to high-priority safety incidents and sensitive investigations</li>\n</ul>\n<p><strong>You might thrive in this role if you:</strong></p>\n<ul>\n<li>Have deep expertise in online child safety and child exploitation threats</li>\n</ul>\n<ul>\n<li>Have familiarity or proficiency with technical investigations, especially using SQL, Python, notebooks, and scripts in a government, law enforcement, and/or tech-company setting</li>\n</ul>\n<ul>\n<li>Speak one or more languages in addition to English</li>\n</ul>\n<ul>\n<li>Have 5+ years of experience tracking threat actors in abuse domains</li>\n</ul>\n<ul>\n<li>Have worked on time-sensitive escalations involving high-risk harm</li>\n</ul>\n<ul>\n<li>Have presented analytic findings to senior stakeholders or external partners</li>\n</ul>\n<ul>\n<li>Have experience scaling and automating processes, especially with language models</li>\n</ul>\n<p><strong>About OpenAI</strong></p>\n<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. 
AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.</p>\n<p>We are an equal opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or other applicable legally protected characteristic.</p>\n<p>For additional information, please see <a href=\"https://cdn.openai.com/policies/eeo-policy-statement.pdf\">OpenAI’s Affirmative Action and Equal Employment Opportunity Policy Statement</a>.</p>\n<p>Background checks for applicants will be administered in accordance with applicable law, and qualified applicants with arrest or conviction records will be considered for employment consistent with those laws, including the San Francisco Fair Chance Ordinance, the Los Angeles County Fair Chance Ordinance for Employers, and the California Fair Chance Act, for US-based candidates. For unincorporated Los Angeles County workers: we reasonably believe that criminal history may have a direct, adverse and negative relationship with the following job duties, potentially resulting in the withdrawal of a conditional offer of employment: protect computer hardware entrusted to you from theft, loss or damage; return all computer hardware in your possession (including the data contained therein) upon termination of employment or end of assignment; and maintain the confidentiality of proprietary, confidential, and non-public information. 
In addition, job duties require access to secure and protected information technology systems</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_c4eae45a-3da","directApply":true,"hiringOrganization":{"@type":"Organization","name":"OpenAI","sameAs":"https://jobs.ashbyhq.com","logo":"https://logos.yubhub.co/openai.com.png"},"x-apply-url":"https://jobs.ashbyhq.com/openai/19b9af1a-6a6e-42e3-824b-a9f3794fef2b","x-work-arrangement":"Hybrid","x-experience-level":"senior","x-job-type":"Full time","x-salary-range":"$158.4K – $425K","x-skills-required":["SQL","Python","Databricks","Notebooks","Scripts"],"x-skills-preferred":["Language models","Technical investigations","Child safety and child exploitation threats"],"datePosted":"2026-03-08T22:16:05.502Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco; Remote - US"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"SQL, Python, Databricks, Notebooks, Scripts, Language models, Technical investigations, Child safety and child exploitation threats","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":158400,"maxValue":425000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_015e5c6d-a31"},"title":"Senior Data Engineer","description":"<p><strong>Why Valvoline Global Operations?</strong></p>\n<p>At Valvoline Global Operations, we&#39;re proud to be The Original Motor Oil, but we&#39;ve never rested on being first. 
Founded in 1866, we introduced the world&#39;s first branded motor oil, staking our claim as a pioneer in the automotive and industrial solutions industry.</p>\n<p><strong>Job Purpose</strong></p>\n<p>We are seeking a highly skilled and motivated Data Engineer to join our growing data and analytics team. The ideal candidate will have strong experience designing and developing scalable data pipelines, integrating complex systems, and optimizing data workflows. Proficiency in Databricks and SAP Datasphere is preferred, as these platforms are central to our data ecosystem.</p>\n<p><strong>How You Make an Impact (Job Accountabilities)</strong></p>\n<ul>\n<li>Design, build, and maintain robust, scalable, and high-performance data pipelines using Databricks and SAP Datasphere.</li>\n<li>Collaborate with data architects, analysts, data scientists, and business stakeholders to gather requirements and deliver data solutions aligned with stakeholders&#39; goals.</li>\n<li>Integrate diverse data sources (e.g., SAP, APIs, flat files, cloud storage) into the enterprise data platforms</li>\n<li>Ensure high standards of data quality and implement data governance practices. 
Stay current with emerging trends and technologies in cloud computing, big data, and data engineering.</li>\n<li>Provide ongoing support for the platform, troubleshoot any issues that arise, and ensure high availability and reliability of data infrastructure.</li>\n<li>Create documentation for the platform infrastructure and processes, and train other team members and users to use the platform effectively.</li>\n</ul>\n<p><strong>What You Bring to the Role (Job Qualifications / Education / Skills / Requirements / Capabilities)</strong></p>\n<ul>\n<li>Bachelor&#39;s or Master’s degree in Computer Science, Data Engineering, Information Systems, or a related field.</li>\n<li>5-7+ years of experience in a data engineering or related role.</li>\n<li>Strong knowledge of data engineering principles, data warehousing concepts, and modern data architecture.</li>\n<li>Proficiency in SQL and at least one programming language (e.g., Python, Scala).</li>\n<li>Experience with cloud platforms (e.g., Azure, AWS, or GCP), particularly in data services.</li>\n<li>Familiarity with data orchestration tools (e.g., PySpark, Airflow, Azure Data Factory) and CI/CD pipelines.</li>\n</ul>\n<p><strong>Competencies Desired</strong></p>\n<ul>\n<li>Hands-on experience with Databricks (including Spark/PySpark, Delta Lake, MLflow, Unity Catalog, etc.).</li>\n<li>Practical experience working with SAP Datasphere (or SAP Data Warehouse Cloud) in data modeling and data integration scenarios.</li>\n<li>SAP BW or SAP HANA experience is a plus.</li>\n<li>Experience with BI tools like Power BI or Tableau.</li>\n<li>Understanding of data governance frameworks and data security best practices.</li>\n<li>Exposure to data lakehouse architecture and real-time streaming data pipelines.</li>\n<li>Certifications in Databricks, SAP, or cloud platforms are advantageous.</li>\n</ul>\n<p><strong>Working Conditions / Physical Requirements / Travel Requirements</strong></p>\n<ul>\n<li>Normal office 
environment.</li>\n<li>Prolonged periods of computer use and frequent participation in meetings</li>\n<li>Occasional walking, standing, and light lifting (up to 10 lbs)</li>\n</ul>\n<ul>\n<li>Minimal travel required.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_015e5c6d-a31","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Valvoline Global Operations","sameAs":"https://jobs.valvolineglobal.com","logo":"https://logos.yubhub.co/jobs.valvolineglobal.com.png"},"x-apply-url":"https://jobs.valvolineglobal.com/job/Senior-Data-Engineer/1316654400/","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["data engineering","Databricks","SAP Datasphere","SQL","Python","Scala","cloud platforms","data orchestration tools","CI/CD pipelines"],"x-skills-preferred":["Databricks","SAP Datasphere","SAP BW","SAP HANA","Power BI","Tableau","data governance frameworks","data security best practices","data lakehouse architecture","real-time streaming data pipelines"],"datePosted":"2026-03-08T22:14:37.507Z","jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Automotive","skills":"data engineering, Databricks, SAP Datasphere, SQL, Python, Scala, cloud platforms, data orchestration tools, CI/CD pipelines, Databricks, SAP Datasphere, SAP BW, SAP HANA, Power BI, Tableau, data governance frameworks, data security best practices, data lakehouse architecture, real-time streaming data pipelines"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_a14c543b-08b"},"title":"Partner Strategy & Operations","description":"<p><strong>Job Posting</strong></p>\n<p><strong>Location</strong></p>\n<p>San Francisco</p>\n<p><strong>Employment 
Type</strong></p>\n<p>Full time</p>\n<p><strong>Department</strong></p>\n<p>Partner Strategy, Operations &amp; Programs</p>\n<p><strong>Compensation</strong></p>\n<ul>\n<li>$239K – $265K • Offers Equity</li>\n</ul>\n<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>\n<ul>\n<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>\n</ul>\n<ul>\n<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>\n</ul>\n<ul>\n<li>401(k) retirement plan with employer match</li>\n</ul>\n<ul>\n<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>\n</ul>\n<ul>\n<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>\n</ul>\n<ul>\n<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>\n</ul>\n<ul>\n<li>Mental health and wellness support</li>\n</ul>\n<ul>\n<li>Employer-paid basic life and disability coverage</li>\n</ul>\n<ul>\n<li>Annual learning and development stipend to fuel your professional growth</li>\n</ul>\n<ul>\n<li>Daily meals in our offices, and meal delivery credits as eligible</li>\n</ul>\n<ul>\n<li>Relocation support for eligible employees</li>\n</ul>\n<ul>\n<li>Additional taxable fringe benefits, such as charitable donation matching and wellness 
stipends, may also be provided.</li>\n</ul>\n<p>More details about our benefits are available to candidates during the hiring process.</p>\n<p>This role is at-will and OpenAI reserves the right to modify base pay and other compensation components at any time based on individual performance, team or company results, or market conditions.</p>\n<p><strong>About the Team</strong></p>\n<p>OpenAI’s mission is to build safe artificial general intelligence (AGI) that benefits all of humanity. This long-term undertaking brings the world’s best scientists, engineers, and business professionals into one lab to accomplish it.</p>\n<p>OpenAI’s Go-To-Market (GTM) Partnerships team builds a strategic, global partner ecosystem designed to accelerate customer success, enable AI adoption, and drive growth in support of our mission. We collaborate closely across internal teams to ensure unified strategy and seamless execution.</p>\n<p><strong>About the Role</strong></p>\n<p>As part of the Partner Strategy, Operations &amp; Programs team, this role will serve as a trusted advisor to GTM Partnerships leadership. You will drive core operating rhythms and lead high-impact projects that strengthen how we define and measure partner performance, shape targets, and report progress. 
You’ll collaborate cross-functionally with Revenue Operations, Finance, Data, Growth, and Systems teams to align definitions and reporting, drive efficiency, and help Partner Directors, Partner Technical Success, and our partner ecosystem execute at scale.</p>\n<p>We’re looking for someone who can pair sharp analytical judgment with strong operational instincts, and who can lead cross-functional work that improves how the business runs.</p>\n<p><strong>In this role you will:</strong></p>\n<ul>\n<li>Build and drive the Partnerships operating cadence and collaborate with Revenue Operations to bring partner performance into core GTM rhythms (forecasting, pipeline, business reviews)</li>\n</ul>\n<ul>\n<li>Define, build, and maintain partner performance reporting (KPIs, dashboards, scorecards, deep dives) that leaders and field teams use to manage performance and drive decisions, and build or extend select views into partner-facing reporting on partner platforms</li>\n</ul>\n<ul>\n<li>Partner with GTM Planning and Strategic Finance to shape targets, and own target allocation and rollout so partner teams have clear ownership and understand how success is measured</li>\n</ul>\n<ul>\n<li>Own and evolve the partnerships data and attribution model, driving the process and system changes needed to accurately capture partner impact across the customer lifecycle</li>\n</ul>\n<ul>\n<li>Lead strategic, cross-functional initiatives that improve efficiency and effectiveness across Partnerships and the broader GTM organization, including driving automation that reduces manual overhead</li>\n</ul>\n<p><strong>You might thrive in this role if you have:</strong></p>\n<ul>\n<li>7+ years of experience in GTM Strategy &amp; Operations at a high-growth technology company, with meaningful exposure driving partner / channel / ecosystem motions</li>\n</ul>\n<ul>\n<li>Proficiency in Salesforce, data analysis in Sheets/Excel, querying and validating data using SQL (e.g., in Databricks), and 
building dashboards in BI tools (e.g., Tableau)</li>\n</ul>\n<ul>\n<li>Deep understanding of partnerships data models and KPIs, including how partner impact is tracked and reported across deals</li>\n</ul>\n<ul>\n<li>A track record supporting and driving GTM operating cadences (forecasting, pipeline reviews, business reviews) with cross-functional stakeholders and partnerships leadership</li>\n</ul>\n<ul>\n<li>Experience partnering with GTM Planning and Strategic Finance to model targets, owning quota setting for partner teams, and operationalizing coverage models with technical teams</li>\n</ul>\n<ul>\n<li>Exceptional analytical judgment and business acumen, with a track record of solving complex problems through thoughtful analysis and strategic execution</li>\n</ul>\n<ul>\n<li>Strong communication and project management skills, with the ability to engage, influence, and align cross-functional teams</li>\n</ul>\n<ul>\n<li>Ability to operate autonomously and maintain momentum in a rapidly evolving environment</li>\n</ul>\n<ul>\n<li>Strong organizational skills, with the discipline to prioritize and manage multiple high-impact projects simultaneously</li>\n</ul>\n<ul>\n<li>Understanding of the AI landscape and how AI solutions solve real-world customer problems</li>\n</ul>\n<p><strong>Experience Level</strong></p>\n<p>Senior</p>\n<p><strong>Employment Type</strong></p>\n<p>Full time</p>\n<p><strong>Workplace Type</strong></p>\n<p>Hybrid</p>\n<p><strong>Category</strong></p>\n<p>Engineering</p>\n<p><strong>Industry</strong></p>\n<p>Technology</p>\n<p><strong>Salary Range</strong></p>\n<p>$239K – $265K</p>\n<p><strong>Required Skills</strong></p>\n<ul>\n<li>Salesforce</li>\n<li>Data analysis in Sheets/Excel</li>\n<li>SQL (e.g., in Databricks)</li>\n<li>BI tools (e.g., Tableau)</li>\n<li>Partnerships data models and KPIs</li>\n<li>GTM operating cadences</li>\n<li>Strategic Finance</li>\n<li>Technical teams</li>\n<li>AI landscape</li>\n</ul>\n<p><strong>Preferred 
Skills</strong></p>\n<ul>\n<li>GTM Strategy &amp; Operations</li>\n<li>High-growth technology company</li>\n<li>Partner / channel / ecosystem motions</li>\n<li>Business acumen</li>\n<li>Analytical judgment</li>\n<li>Project management</li>\n<li>Cross-functional teams</li>\n<li>Autonomy</li>\n<li>Organizational skills</li>\n<li>AI solutions</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_a14c543b-08b","directApply":true,"hiringOrganization":{"@type":"Organization","name":"OpenAI","sameAs":"https://jobs.ashbyhq.com","logo":"https://logos.yubhub.co/openai.com.png"},"x-apply-url":"https://jobs.ashbyhq.com/openai/3fd0ca32-e787-437d-985a-b9f7e3299ef4","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$239K – $265K","x-skills-required":["Salesforce","Data analysis in Sheets/Excel","SQL (e.g., in Databricks)","BI tools (e.g., Tableau)","Partnerships data models and KPIs","GTM operating cadences","Strategic Finance","Technical teams","AI landscape"],"x-skills-preferred":["GTM Strategy & Operations","High-growth technology company","Partner / channel / ecosystem motions","Business acumen","Analytical judgment","Project management","Cross-functional teams","Autonomy","Organizational skills","AI solutions"],"datePosted":"2026-03-06T18:40:20.397Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco"}},"employmentType":"FULL_TIME","occupationalCategory":"engineering","industry":"technology","skills":"Salesforce, Data analysis in Sheets/Excel, SQL (e.g., in Databricks), BI tools (e.g., Tableau), Partnerships data models and KPIs, GTM operating cadences, Strategic Finance, Technical teams, AI landscape, GTM Strategy & Operations, High-growth technology company, Partner / channel / ecosystem motions, Business acumen, Analytical judgment, Project 
management, Cross-functional teams, Autonomy, Organizational skills, AI solutions","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":239000,"maxValue":265000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_e9e336c5-ad3"},"title":"Software Engineer, Privacy Infrastructure","description":"<p><strong>Software Engineer, Privacy Infrastructure</strong></p>\n<p><strong>Location</strong></p>\n<p>San Francisco</p>\n<p><strong>Employment Type</strong></p>\n<p>Full time</p>\n<p><strong>Location Type</strong></p>\n<p>Hybrid</p>\n<p><strong>Department</strong></p>\n<p>Security</p>\n<p><strong>Compensation</strong></p>\n<ul>\n<li>$230K – $325K • Offers Equity</li>\n</ul>\n<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. 
In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>\n<p><strong>Benefits</strong></p>\n<ul>\n<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>\n</ul>\n<ul>\n<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>\n</ul>\n<ul>\n<li>401(k) retirement plan with employer match</li>\n</ul>\n<ul>\n<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>\n</ul>\n<ul>\n<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>\n</ul>\n<ul>\n<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>\n</ul>\n<ul>\n<li>Mental health and wellness support</li>\n</ul>\n<ul>\n<li>Employer-paid basic life and disability coverage</li>\n</ul>\n<ul>\n<li>Annual learning and development stipend to fuel your professional growth</li>\n</ul>\n<ul>\n<li>Daily meals in our offices, and meal delivery credits as eligible</li>\n</ul>\n<ul>\n<li>Relocation support for eligible employees</li>\n</ul>\n<ul>\n<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>\n</ul>\n<p><strong>About the Team</strong></p>\n<p>OpenAI’s Privacy Engineering team sits at the intersection of Security, Privacy, Legal, and Core Infrastructure. Our mission is to build data infrastructure and systems to support our privacy, legal, and security teams—securely, quickly, and at scale. 
Our guiding principles include: defensibility by default, enabling researchers, preparing for future transformative technologies, and building a robust security culture.</p>\n<p><strong>About the Role</strong></p>\n<p>We’re looking for a Software Engineer who can design and operate technical systems that support legal compliance workflows, including secure data processing and document review. You’ll partner daily with Legal, Security, IT, and partner engineering teams to turn legal processes into concrete technical workflows. This role is ideal for an engineer who loves large-scale data problems and understands the rigor required when the results may be scrutinized.</p>\n<p>This position is located in San Francisco. Relocation assistance is available.</p>\n<p><strong>In this role, you will:</strong></p>\n<ul>\n<li>Design and operate data storage pipelines that run at scale.</li>\n</ul>\n<ul>\n<li>Build search &amp; discovery services (e.g., Spark/Databricks, index layers, metadata catalogs) based on the needs of partner teams.</li>\n</ul>\n<ul>\n<li>Automate secure data transfers—encrypting, checksumming, and auditing exports to reviewers.</li>\n</ul>\n<ul>\n<li>Stand up locked-down compute environments that balance usability with security controls.</li>\n</ul>\n<ul>\n<li>Instrument monitoring and KPIs that maintain accountability of data holds and productions.</li>\n</ul>\n<ul>\n<li>Collaborate cross-functionally to codify SOPs, threat models, and chain-of-custody documentation that withstand scrutiny.</li>\n</ul>\n<p><strong>You might thrive in this role if you:</strong></p>\n<ul>\n<li>Have hands-on experience building or operating large-scale data-lake or backup systems (Azure, AWS, GCP).</li>\n</ul>\n<ul>\n<li>Know your way around Terraform or Pulumi, CI/CD, and can turn ad-hoc legal requests into repeatable pipelines.</li>\n</ul>\n<ul>\n<li>Are comfortable working with discovery workflows (legal holds, enterprise document collections, secure review) or 
eager to build expertise quickly.</li>\n</ul>\n<ul>\n<li>Are able to communicate technical concepts — from storage governance to block-ID APIs — clearly to teams such as Legal, Engineering, and others.</li>\n</ul>\n<ul>\n<li>Have shipped secure solutions that balance speed, cost, and evidentiary defensibility—and can articulate the trade-offs.</li>\n</ul>\n<ul>\n<li>Communicate crisply, document rigorously, and enjoy working across disciplines under tight deadlines.</li>\n</ul>\n<p><strong>About OpenAI</strong></p>\n<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.</p>","url":"https://yubhub.co/jobs/job_e9e336c5-ad3","directApply":true,"hiringOrganization":{"@type":"Organization","name":"OpenAI","sameAs":"https://jobs.ashbyhq.com","logo":"https://logos.yubhub.co/openai.com.png"},"x-apply-url":"https://jobs.ashbyhq.com/openai/07153f7c-7e8b-4283-a879-cb07a224e083","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":"$230K – $325K • Offers Equity","x-skills-required":["Terraform","Pulumi","CI/CD","Spark/Databricks","index layers","metadata catalogs","Azure","AWS","GCP","large-scale data-lake or backup systems","secure data transfers","compute environments","monitoring and KPIs","SOPs","threat models","chain-of-custody documentation"],"x-skills-preferred":["hands-on experience building or operating large-scale data-lake or backup 
systems","comfortable working with discovery workflows","able to communicate technical concepts clearly to teams such as Legal, Engineering, and others","have shipped secure solutions that balance speed, cost, and evidentiary defensibility"],"datePosted":"2026-03-06T18:29:40.108Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Terraform, Pulumi, CI/CD, Spark/Databricks, index layers, metadata catalogs, Azure, AWS, GCP, large-scale data-lake or backup systems, secure data transfers, compute environments, monitoring and KPIs, SOPs, threat models, chain-of-custody documentation, hands-on experience building or operating large-scale data-lake or backup systems, comfortable working with discovery workflows, able to communicate technical concepts clearly to teams such as Legal, Engineering, and others, have shipped secure solutions that balance speed, cost, and evidentiary defensibility","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":230000,"maxValue":325000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_52a702d2-d7b"},"title":"Sales Excellence Manager (Ads)","description":"<p><strong>Summary</strong></p>\n<p>Microsoft Advertising are looking for a talented Sales Excellence Manager (Ads) at their Beijing office. This role sits at the heart of strategic decision-making, turning market data into actionable insights for a company that&#39;s revolutionising the online advertising ecosystem. You&#39;ll work directly with leadership to shape the company&#39;s direction in the digital marketing space.</p>\n<p><strong>About the Role</strong></p>\n<p>In this role, you will be a trusted advisor to the Regional Vice President of Sales and other Sales Directors. 
You will need to understand all aspects of their business, including the industry and competitive landscape in their verticals, the playbook for winning market and mindshare, and the resources needed to drive this plan. This role enables Sales with programs, tools, processes, and insights as a competitive differentiator and driver of growth. This opportunity will allow you to balance leading, doing, and influencing, in close partnership with a matrixed partner set.</p>\n<p><strong>Accountabilities</strong></p>\n<ul>\n<li>Drives sales growth through business and portfolio planning.</li>\n<li>Enables sales teams to execute strategies for opportunity management (e.g., up-sell, cross-sell, renewal, recapture).</li>\n<li>Coaches and builds relationships with sales teams on executing key priorities.</li>\n<li>Contributes to analytics on key revenue drivers (e.g., by vertical/product/geo).</li>\n<li>Instills process discipline, consistency, adherence to standards and excellence in execution, and shares best practices within their team.</li>\n</ul>\n<p><strong>The Candidate we&#39;re looking for</strong></p>\n<p><strong>Experience:</strong></p>\n<ul>\n<li>8+ years’ experience in sales, sales operations/management, account management, program management, business development, marketing, consulting, or a related field.</li>\n</ul>\n<p><strong>Technical skills:</strong></p>\n<ul>\n<li>Proficient in SQL, Excel, PowerPoint, PBI, Databricks, and other data analysis tools.</li>\n</ul>\n<p><strong>Personal attributes:</strong></p>\n<ul>\n<li>Strong understanding of the online advertising ecosystem.</li>\n<li>Proficient English and Chinese language skills.</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<ul>\n<li>Competitive salary and bonus structure.</li>\n<li>Comprehensive health and wellbeing benefits.</li>\n<li>Professional development opportunities.</li>\n<li>Financial benefits (bonus, equity, pension, etc.).</li>\n<li>Cultural perks (team events, office amenities, 
etc.).</li>\n</ul>","url":"https://yubhub.co/jobs/job_52a702d2-d7b","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Microsoft Advertising","sameAs":"https://microsoft.ai","logo":"https://logos.yubhub.co/microsoft.ai.png"},"x-apply-url":"https://microsoft.ai/job/sales-excellence-managerads/","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"Competitive salary and bonus structure.","x-skills-required":["Sales","Sales Operations","Account Management","Program Management","Business Development","Marketing","Consulting"],"x-skills-preferred":["SQL","Excel","PowerPoint","PBI","Databricks","Data Analysis"],"datePosted":"2026-03-06T07:29:33.345Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Beijing"}},"employmentType":"FULL_TIME","occupationalCategory":"Sales","industry":"Technology","skills":"Sales, Sales Operations, Account Management, Program Management, Business Development, Marketing, Consulting, SQL, Excel, PowerPoint, PBI, Databricks, Data Analysis"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_a4ac455a-c22"},"title":"Senior Applied Scientist","description":"<p><strong>Summary</strong></p>\n<p>Microsoft AI are looking for a talented Senior Applied Scientist at their Redmond office. This role sits at the heart of strategic decision-making, transforming complex data into actionable insights for a company that&#39;s revolutionising the technology industry. 
You&#39;ll work directly with leadership to shape the company&#39;s direction in the development of large-scale, Azure-based intelligence platforms.</p>\n<p><strong>About the Role</strong></p>\n<p>As a Senior Applied Scientist, you will lead the development of machine learning solutions leveraging SOTA technologies in GenAI to build predictive models for generating recommendations, detecting anomalies, generating automated insights with reasoning, and ensuring the platform delivers accurate, actionable intelligence at scale. You will drive experimentation and validation of models, mentor junior scientists, and contribute to model governance and Responsible AI practices.</p>\n<p><strong>Accountabilities</strong></p>\n<ul>\n<li>Lead the design and implementation of machine learning models for recommendations, anomaly detection, and actionable insights.</li>\n<li>Drive experimentation and validation of models.</li>\n<li>Mentor junior scientists and contribute to model governance and Responsible AI practices.</li>\n<li>Partner with engineering and BI teams to operationalize insights into dashboards and alerting systems.</li>\n</ul>\n<p><strong>The Candidate we&#39;re looking for</strong></p>\n<p><strong>Experience:</strong></p>\n<ul>\n<li>4+ years related experience (e.g., statistics, predictive analytics, research) OR Master&#39;s Degree in Statistics, Econometrics, Computer Science, Electrical or Computer Engineering, or related field AND 3+ years related experience (e.g., statistics, predictive analytics, research).</li>\n</ul>\n<p><strong>Technical skills:</strong></p>\n<ul>\n<li>Proficiency in programming for data science (e.g., using Python or R for data analysis and modeling) and experience with data querying languages (e.g., 
SQL).</li>\n<li>Big Data &amp; Distributed Computing: Hands-on experience with large-scale data processing using tools like Apache Spark or Azure Databricks for training and inference workflows.</li>\n</ul>\n<p><strong>Personal attributes:</strong></p>\n<ul>\n<li>Strong communication and collaboration skills.</li>\n<li>Ability to work in a fast-paced environment and adapt to changing priorities.</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<ul>\n<li>Competitive salary.</li>\n<li>Comprehensive benefits package.</li>\n<li>Opportunities for professional growth and development.</li>\n<li>Collaborative and dynamic work environment.</li>\n</ul>","url":"https://yubhub.co/jobs/job_a4ac455a-c22","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Microsoft AI","sameAs":"https://microsoft.ai","logo":"https://logos.yubhub.co/microsoft.ai.png"},"x-apply-url":"https://microsoft.ai/job/senior-applied-scientist-25/","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"USD $119,800 – $234,700 per year","x-skills-required":["machine learning","statistics","data science","programming","data querying languages"],"x-skills-preferred":["big data","distributed computing","Apache Spark","Azure Databricks"],"datePosted":"2026-03-06T07:29:17.636Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Redmond"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"machine learning, statistics, data science, programming, data querying languages, big data, distributed computing, Apache Spark, Azure Databricks","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":119800,"maxValue":234700,"unitText":"YEAR"}}}]}