<?xml version="1.0" encoding="UTF-8"?>
<source>
  <jobs>
    <job>
      <externalid>5ceb4835-0f1</externalid>
      <Title>Manager, Professional Services</Title>
      <Description><![CDATA[<p>As a Manager, Professional Services, you will work with clients on short- to medium-term customer engagements on their big data challenges using the Databricks platform. You will deliver data engineering, data science, and cloud technology projects that require integrating with client systems, training, and other technical tasks to help customers get the most value out of their data.</p>
<p>The impact you will have:</p>
<ul>
<li>You will work on a variety of impactful customer technical big data projects which may include building reference architectures, how-to&#39;s, and production-grade MVPs.</li>
<li>Guide strategic customers as they implement transformational big data projects, 3rd party migrations, including end-to-end design, build, and deployment of industry-leading big data and AI applications.</li>
<li>Consult on architecture and design; bootstrap or implement strategic customer projects which lead to a customer&#39;s successful understanding, evaluation, and adoption of Databricks.</li>
<li>Work with Engineering and Databricks Customer Support to provide product and implementation feedback and to guide rapid resolution for engagement-specific product and support issues.</li>
</ul>
<p>What we look for:</p>
<ul>
<li>10+ years of experience with Big Data Technologies such as Apache Spark, Kafka, Cloud Native, and Data Lakes in a customer-facing post-sales, technical architecture, or consulting role.</li>
<li>4+ years of people management experience, managing a team of Data Engineers, Data Architects, etc.</li>
<li>6+ years of experience working on Big Data Architectures independently.</li>
<li>Experience working across Cloud Platforms (GCP/AWS/Azure).</li>
<li>Experience working on Databricks platform is a plus.</li>
<li>Documentation and white-boarding skills.</li>
<li>Build skills in technical areas which support the deployment and integration of Databricks-based solutions to complete customer projects.</li>
<li>Willingness to travel for onsite customer engagements within India.</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Apache Spark, Kafka, Cloud Native, Data Lakes, Big Data Technologies, Data Engineering, Data Science, Cloud Technology, People Management, Team Leadership, Databricks, GCP, AWS, Azure, Documentation, White-boarding</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified and democratized data, analytics, and AI platform to over 10,000 organizations worldwide.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8503068002</Applyto>
      <Location>Remote - India</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>a7d0cf0f-a3a</externalid>
      <Title>Senior Engineer- Data Platforms</Title>
      <Description><![CDATA[<p>The Data Platform Team serves as the experts on managing data infrastructure for CoreWeave. Our data infrastructure includes managed databases, data ingestion, data flow, data lakes, and other data retrieval systems for CoreWeave and its customers.</p>
<p>We are seeking senior software engineers specializing in database and stream processing who can help us fulfill the goal of our global datastore strategy and establish communication models for our data flow. This individual will work with a team of engineers with mixed skill sets and have the opportunity to work on the full range of rewarding challenges that come with the business of building a cloud in a communicative, supportive, and high-performing environment.</p>
<p>As a member of the Data Platform Team you will have the opportunity to:</p>
<ul>
<li>Design and implement the platform to deliver data to teams with a focus on providing managed solutions through APIs</li>
<li>Participate in operations and scaling of relational data platforms</li>
<li>Develop a stream processing architecture and solve for scalability and reliability</li>
<li>Improve the performance, security, reliability, and scalability of our data platforms and related services, and participate in the team’s on-call rotation</li>
<li>Establish guidelines and guard rails for data access and storage for stakeholder teams</li>
<li>Ensure compliance with data protection regulations and standards</li>
<li>Grow, change, invest in your teammates, be invested in, share your ideas, listen to others, be curious, have fun, and, above all, be yourself</li>
</ul>
<p>The ideal candidate will have 5+ years of experience in software or infrastructure engineering, with experience operating services in production and at scale, and familiarity with reliability engineering concepts such as different types of testing, progressive deployments, error budgets, observability, and fault-tolerant design.</p>
<p>The base salary range for this role is $175,000 to $210,000. The starting salary will be determined based on job-related knowledge, skills, experience, and market location.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$175,000 to $210,000</Salaryrange>
      <Skills>database and stream processing, managed databases, data ingestion, data flow, data lakes, APIs, operational experience, reliability engineering, testing, progressive deployments, error budgets, observability, fault-tolerant design, Kubernetes, Go, Linux distributions, shell scripting, Linux storage and networking stacks</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave is a cloud computing company that provides a platform for building and scaling AI. It was founded in 2017 and became a publicly traded company in March 2025.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4562276006</Applyto>
      <Location>Bellevue, WA / Sunnyvale, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>760c3e88-e35</externalid>
      <Title>Senior Product Manager, Data</Title>
      <Description><![CDATA[<p>Job Title: Senior Product Manager, Data</p>
<p>We are seeking a Senior Product Manager to support the development of CoreWeave&#39;s Enterprise Data Platform within the CIO organization. This role will contribute to building a scalable, high-performance data lake and data architecture, integrating data from key sources across Operations, Engineering, Sales, Finance, and other IT partners.</p>
<p>As a Senior Product Manager for Data Infrastructure and Analytics, you will help drive data ingestion, transformation, governance, and analytics enablement. You will collaborate with engineering, analytics, finance, and business teams to help deliver data lake and pipeline orchestration solutions, ensuring accessible data for business insights.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Own and evangelize Data Platform and Business Analytics roadmap and strategy across CoreWeave</li>
<li>Assist with the execution of CoreWeave&#39;s enterprise data architecture, helping enable the data lake and domain-driven data layer</li>
<li>Support the development and enhancement of data ingestion, transformation, and orchestration pipelines for scalability, efficiency, and reliability</li>
<li>Work with the Engineering and Data teams to maintain and enhance data pipelines for both structured and unstructured data, enabling efficient data movement across the organization</li>
<li>Collaborate with Finance, GTM, Infrastructure, Data Center, and Supply Chain teams to help unify and model data from core systems (ERP, CRM, Asset Mgmt, Supply Chain systems, etc.)</li>
<li>Contribute to data governance and quality initiatives, focusing on data consistency, lineage tracking, and compliance with security standards</li>
<li>Support the BI and analytics layer by partnering with stakeholders to enable data products, dashboards, and reporting capabilities</li>
<li>Help prioritize data-driven initiatives, ensuring alignment with business goals and operational needs in coordination with leadership</li>
</ul>
<p>Requirements:</p>
<ul>
<li>5+ years of experience in data product management, data architecture, or enterprise data engineering roles</li>
<li>Familiarity with data lakes, data warehouses, ETL/ELT and streaming pipelines, and data governance frameworks</li>
<li>Hands-on experience with modern data stack technologies (such as Snowflake, BigQuery, Databricks, Apache Spark, Airflow, DBT, Kafka)</li>
<li>Understanding of data modeling, domain-driven design, and creating scalable data platforms</li>
<li>Experience supporting the end-to-end data product lifecycle, including requirements gathering and implementation</li>
<li>Strong collaboration skills with engineering, analytics, and business teams to help deliver data initiatives</li>
<li>Awareness of data security, compliance, and governance best practices</li>
<li>Understanding of BI and analytics platforms (such as Tableau, Looker, Power BI) and supporting self-service analytics</li>
</ul>
<p>Why CoreWeave?</p>
<p>At CoreWeave, we work hard, have fun, and move fast! We&#39;re in an exciting stage of hyper-growth that you will not want to miss out on. We&#39;re not afraid of a little chaos, and we&#39;re constantly learning. Our team cares deeply about how we build our product and how we work together, which is represented through our core values:</p>
<ul>
<li>Be Curious at Your Core</li>
<li>Act Like an Owner</li>
<li>Empower Employees</li>
<li>Deliver Best-in-Class Client Experiences</li>
<li>Achieve More Together</li>
</ul>
<p>We support and encourage an entrepreneurial outlook and independent thinking. We foster an environment that encourages collaboration and provides the opportunity to develop innovative solutions to complex problems. As we get set for takeoff, the growth opportunities within the organization are constantly expanding. You will be surrounded by some of the best talent in the industry, who will want to learn from you, too. Come join us!</p>
<p>Salary Range: $143,000 to $210,000</p>
<p>Benefits:</p>
<ul>
<li>Medical, dental, and vision insurance - 100% paid for by CoreWeave</li>
<li>Company-paid Life Insurance</li>
<li>Voluntary supplemental life insurance</li>
<li>Short and long-term disability insurance</li>
<li>Flexible Spending Account</li>
<li>Health Savings Account</li>
<li>Tuition Reimbursement</li>
<li>Ability to Participate in Employee Stock Purchase Program (ESPP)</li>
<li>Mental Wellness Benefits through Spring Health</li>
<li>Family-Forming support provided by Carrot</li>
<li>Paid Parental Leave</li>
<li>Flexible, full-service childcare support with Kinside</li>
<li>401(k) with a generous employer match</li>
<li>Flexible PTO</li>
<li>Catered lunch each day in our office and data center locations</li>
<li>A casual work environment</li>
<li>A work culture focused on innovative disruption</li>
</ul>
<p>Workplace:</p>
<p>While we prioritize a hybrid work environment, remote work may be considered for candidates located more than 30 miles from an office, based on role requirements for specialized skill sets. New hires will be invited to attend onboarding at one of our hubs within their first month. Teams also gather quarterly to support collaboration.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$143,000 to $210,000</Salaryrange>
      <Skills>data product management, data architecture, enterprise data engineering, data lakes, data warehouses, ETL/ELT and streaming pipelines, data governance frameworks, modern data stack technologies, Snowflake, BigQuery, Databricks, Apache Spark, Airflow, DBT, Kafka, data modeling, domain-driven design, scalable data platforms, BI and analytics platforms, Tableau, Looker, Power BI</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave is a cloud-based platform that enables innovators to build and scale AI with confidence.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4649824006</Applyto>
      <Location>Livingston, NJ / New York, NY / Sunnyvale, CA / Bellevue, WA / San Francisco, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>03807164-210</externalid>
      <Title>Resident Solutions Architect</Title>
      <Description><![CDATA[<p>As a Resident Solutions Architect in our Professional Services team, you will work with clients on short- to medium-term customer engagements on their big data challenges using the Databricks platform.</p>
<p>You will deliver data engineering, data science, and cloud technology projects that require integrating with client systems, training, and other technical tasks to help customers get the most value out of their data.</p>
<p>RSAs are billable and know how to complete projects according to specification with excellent customer service.</p>
<p>You will report to the Manager, Professional Services.</p>
<p>The impact you will have:</p>
<ul>
<li>You will work on a variety of impactful customer technical big data projects which may include building reference architectures, how-to&#39;s, and production-grade MVPs.</li>
<li>Guide strategic customers as they implement transformational big data projects, 3rd party migrations, including end-to-end design, build, and deployment of industry-leading big data and AI applications.</li>
<li>Consult on architecture and design; bootstrap or implement strategic customer projects which lead to a customer&#39;s successful understanding, evaluation, and adoption of Databricks.</li>
<li>Work with Engineering and Databricks Customer Support to provide product and implementation feedback and to guide rapid resolution for engagement-specific product and support issues.</li>
</ul>
<p>What we look for:</p>
<ul>
<li>10+ years of experience with Big Data Technologies such as Apache Spark, Kafka, Cloud Native, and Data Lakes in a customer-facing post-sales, technical architecture, or consulting role.</li>
<li>6+ years of experience working on Big Data Architectures independently.</li>
<li>Strong experience working in the Databricks ecosystem.</li>
<li>Comfortable writing code in either Python or Scala.</li>
<li>Experience working across Cloud Platforms (GCP/AWS/Azure).</li>
<li>Documentation and white-boarding skills.</li>
<li>Build skills in technical areas that support the deployment and integration of Databricks-based solutions to complete customer projects.</li>
<li>Willingness to travel for onsite customer engagements within India.</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Apache Spark, Kafka, Cloud Native, Data Lakes, Python, Scala, GCP, AWS, Azure</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified and democratized data, analytics, and AI platform to over 10,000 organizations worldwide.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8081658002</Applyto>
      <Location>India</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>e7613e05-073</externalid>
      <Title>Customer Enablement Specialist</Title>
      <Description><![CDATA[<p>Job Title: Customer Enablement Specialist</p>
<p>Location: Bellevue, Washington</p>
<p>Department: Education &amp; Training</p>
<p>CSQ227R234</p>
<p><strong>About the Role</strong></p>
<p>This role is required to work in a hybrid office setting in our Bellevue, WA office.</p>
<p><strong>The Opportunity</strong></p>
<p>Databricks runs some of the largest customer enablement programs in the industry: workshops, digital courses, labs, and webinars that reach thousands of users. The Customer Enablement Specialist turns that reach into results. You connect engaged learners to structured training plans that drive product adoption, customer success, and measurable business impact.</p>
<p>This isn’t a sales or business development role; every conversation begins with an existing Databricks user or program participant. Your focus is on helping those customers move from initial interest to tangible capability: skilled teams, completed training milestones, and activated use cases.</p>
<p>You’ll manage a broad portfolio of accounts, supporting new and emerging personas (business users, analysts, and app developers) and helping them succeed with Databricks’ latest innovations in AI/BI, Databricks Apps, and agent-based development.</p>
<p><strong>What You&#39;ll Do</strong></p>
<ul>
<li>Convert participation in Databricks’ scale programs (webinars, workshops, digital learning) into structured training engagements.</li>
<li>Own a high-volume enablement pipeline: identifying learner needs, recommending tailored paths, and tracking adoption progress.</li>
<li>Deliver engaging L100–L200 sessions and demos to help new personas understand what’s possible with Databricks.</li>
<li>Build enablement plans for each account, tracking trained users, completion rates, and milestone achievement.</li>
<li>Partner with Customer Success Managers (CSMs), Account Executives (AEs), and senior CEAs to align training with customer goals and renewal cycles.</li>
<li>Report key metrics (trained accounts, learner growth, conversion rates, and training revenue), using data to guide your priorities.</li>
<li>Provide structured feedback to program and curriculum teams to sharpen future customer learning experiences.</li>
</ul>
<p><strong>What You Bring</strong></p>
<ul>
<li>2–4 years in a technical, customer-facing role; technical training, pre-sales, enablement, or customer success preferred.</li>
<li>Hands-on familiarity with modern data and analytics platforms (Databricks, cloud SQL, BI tools, or data lakes).</li>
<li>Confidence delivering introductory technical content to non-expert audiences.</li>
<li>Working knowledge of AI/ML concepts; able to explain how Databricks enables practical use cases.</li>
<li>Strong communication skills and a consultative approach: discover needs, recommend paths, and gain commitment.</li>
<li>A data-driven mindset with strong organisational habits and comfort managing many concurrent accounts.</li>
<li>Team-first attitude; a proactive collaborator who knows when to escalate for deeper technical support.</li>
</ul>
<p><strong>Bonus Points</strong></p>
<ul>
<li>Databricks certifications (e.g., Data Engineer Associate) or willingness to obtain them within 6 months.</li>
<li>Background in SaaS, cloud, or data platforms; familiarity with BI or AI/BI tools (Databricks Genie, Tableau, Power BI).</li>
<li>Exposure to Databricks Apps, REST APIs, or AI agent concepts.</li>
<li>Experience in a role with enablement or training-related revenue metrics.</li>
</ul>
<p><strong>Why This Role, Why Now</strong></p>
<p>New products create new skill gaps. As Databricks expands into AI/BI, Databricks Apps, and agent-based development, a new wave of users (business analysts, app builders, domain experts) needs to get skilled up quickly. The depth CEA team focuses on the complex, strategic, and deeply technical. This role focuses on the broad middle: high volume, new personas, and the scale-to-commitment motion that turns digital participation into real adoption. It is a high-visibility, high-impact position with a clear growth path into senior CEA work as you build depth and track record.</p>
<p>Pay Range Transparency</p>
<p>Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role is listed below and represents the expected salary range for non-commissionable roles or on-target earnings for commissionable roles. Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location. Based on the factors above, Databricks anticipates utilizing the full width of the range. The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above. For more information regarding which range your location is in visit our page here.</p>
<p>Zone 2 Pay Range $86,600-$119,150 USD</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$86,600-$119,150 USD</Salaryrange>
      <Skills>data and analytics platforms, cloud SQL, BI tools, data lakes, AI/ML concepts, Databricks Apps, REST APIs, AI agent concepts, Databricks certifications, SaaS, cloud, data platforms, BI or AI/BI tools</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a data intelligence platform to unify and democratize data, analytics, and AI.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8431935002</Applyto>
      <Location>Bellevue, Washington</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>c6d7f1a0-882</externalid>
      <Title>Resident Solutions Architect - Mumbai</Title>
      <Description><![CDATA[<p>We are seeking an experienced Resident Solution Architect (RSA) to join our Professional Services team and work directly with strategic customers on their data and AI transformation initiatives using the Databricks platform.</p>
<p>As an RSA, you will serve as a trusted technical advisor and hands-on expert, guiding customers to solve complex big data challenges using the Databricks platform.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Collaborating with customers to understand their data and AI transformation goals and developing tailored solutions using the Databricks platform</li>
<li>Designing and implementing scalable and secure data architectures using Apache Spark, Delta Lake, and other Databricks technologies</li>
<li>Providing expert-level technical guidance and support to customers during the implementation process</li>
<li>Identifying and addressing potential roadblocks and providing creative solutions to overcome them</li>
</ul>
<p>Requirements include:</p>
<ul>
<li>10+ years of experience with Big Data Technologies such as Apache Spark, Kafka, and Data Lakes in a customer-facing post-sales, technical architecture, or consulting role</li>
<li>4+ years of experience as a Solution Architect creating designs and solving Big Data challenges for customers</li>
<li>Expertise in Apache Spark, distributed computing, and Databricks platform capabilities</li>
<li>Comfortable writing code in Python, PySpark, and Scala</li>
<li>Exceptional SQL, Spark SQL, Spark-streaming skills</li>
<li>Advanced knowledge of Spark optimizations, Delta, Databricks Lakehouse Platforms</li>
<li>Expertise in Azure</li>
<li>Expertise in NoSQL databases (MongoDB, Redis, HBase)</li>
<li>Expertise in data governance and security (Unity Catalog, RBAC)</li>
<li>Ability to work with partner organizations and deliver complex programs</li>
<li>Ability to lead large technical delivery teams</li>
<li>Understanding of the larger competitive landscape, such as EMR, Snowflake, and SageMaker</li>
<li>Experience with migrations from on-prem or cloud platforms to Databricks is a plus</li>
<li>Excellent communication and client-facing consulting skills, with the ability to simplify complex technical concepts</li>
<li>Willingness to travel for onsite customer engagements within India</li>
<li>Documentation and white-boarding skills</li>
</ul>
<p>Good-to-have Skills:</p>
<ul>
<li>Experience with ML libraries/frameworks: Scikit-learn, TensorFlow, PyTorch</li>
<li>Familiarity with MLOps tools and processes, including MLflow for tracking and deployment</li>
<li>Experience delivering LLM and GenAI solutions at scale (RAG architectures, prompt engineering)</li>
<li>Extensive experience with Hadoop, Trino, Ranger, and other open-source technology stacks</li>
<li>Expertise in cloud platforms like AWS and GCP</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Apache Spark, Kafka, Data Lakes, Python, PySpark, Scala, SQL, Spark SQL, Spark-streaming, Azure, NoSQL databases, data governance, security, Unity Catalog, RBAC, ML libraries/frameworks, MLOps tools and processes, LLM and GenAI solutions, Hadoop, Trino, Ranger, AWS, GCP</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data, analytics, and AI. It was founded by the original creators of Apache Spark, Delta Lake, and MLflow.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8107166002</Applyto>
      <Location>Mumbai, India</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>5ec63ea6-5a3</externalid>
      <Title>Data Engineer</Title>
      <Description><![CDATA[<p>At Neighbor, we&#39;re building the largest hyperlocal marketplace the world has ever seen. As a Data Engineer, you will be the core engineering resource responsible for building, scaling, and optimizing the data infrastructure that transforms raw events into high-fidelity, actionable intelligence.</p>
<p>This role is the cornerstone of our data infrastructure, responsible for the extraction, transformation, and loading of the data that powers our nationwide, best-in-class marketplace. By implementing software engineering best practices and scalable solutions, you will empower the CEO, executive team, managers, and individual contributors with the robust and trustworthy intelligence needed to scale and innovate across our marketplace.</p>
<p><strong>Primary Responsibilities</strong></p>
<ul>
<li>Design, implement, and maintain scalable data transformation layers and code-first orchestration frameworks to ensure the delivery of high-fidelity, reusable data models</li>
<li>Design and build robust pipelines to ingest data from diverse sources (APIs, logs, relational DBs)</li>
<li>Ensure the reliable and timely execution of all critical data pipelines (ETLs/ELTs) to maintain data integrity and freshness</li>
<li>Standardize analytics workflows by integrating software engineering best practices, including version control, CI/CD pipelines, and automated data validation protocols</li>
<li>Develop and refine a robust semantic layer to facilitate self-service analytics, enabling stakeholders to derive insights without exposure to underlying architectural complexities</li>
<li>Monitor and optimize cloud compute utilization and data model performance to ensure high availability and low-latency reporting during periods of rapid data scaling</li>
<li>Serve as a strategic technical partner to leadership across Product, Engineering, Marketing, and Finance to align data infrastructure with organizational objectives</li>
<li>Become a subject matter expert on the product ecosystem, user behavior, and marketing life cycles to better translate raw data into business value</li>
<li>Serve as a versatile technical resource capable of stepping into the Data Analyst capacity when necessary, performing deep-dive quantitative analysis and building sophisticated visualizations to support executive decision-making</li>
<li>Mentor the data analytics team on advanced technical methodologies to foster a culture of engineering excellence and data autonomy</li>
</ul>
<p><strong>Qualifications</strong></p>
<ul>
<li>3+ years of experience in data engineering or analytics engineering</li>
<li>Bachelor&#39;s degree in quantitative and/or technical fields (Math, Physics, Statistics, Economics, Computer Science, Engineering, etc.) OR 5+ years work experience as a Data Engineer</li>
<li>Expert-level mastery of SQL, with the ability to write, tune, and optimize complex queries for high-volume environments</li>
<li>Strong command of at least one major programming language used for data processing</li>
<li>Hands-on experience designing and maintaining data lakes or cloud-based data warehouses</li>
<li>Deep understanding of data integration patterns, including data ingestion, transformation, and automated cleansing (ETL/ELT)</li>
<li>Experience applying scientific, mathematical, or statistical techniques to analyze data and build predictive models</li>
<li>Advanced ability to translate complex datasets into actionable narratives using modern business intelligence and reporting tools</li>
<li>A proven track record of using quantitative analysis to solve ambiguous problems and drive strategic decision-making in a fast-paced environment</li>
<li>Exceptional ability to collaborate with non-technical stakeholders, translating business requirements into technical specs and vice versa</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Generous Stock options</li>
<li>Medical, dental, and vision insurance</li>
<li>Generous PTO</li>
<li>11 paid company holidays</li>
<li>Hybrid work model - WFH every Monday</li>
<li>401(k) plan</li>
<li>Infant care leave</li>
<li>On-site gym/showers open 24/7</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>SQL, Programming languages, Data lakes, Cloud-based data warehouses, Data integration patterns, Scientific, mathematical, or statistical techniques, Business intelligence and reporting tools</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Neighbor</Employername>
      <Employerlogo>https://logos.yubhub.co/neighbor.com.png</Employerlogo>
      <Employerdescription>Neighbor is a marketplace for self storage and parking, operating across almost every U.S. city.</Employerdescription>
      <Employerwebsite>https://neighbor.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/neighbor/da1304b7-89ad-4ac0-99e8-9c0cf8284f1c</Applyto>
      <Location>U.S.</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>3d849fbc-058</externalid>
      <Title>Member of Product, Data Platform</Title>
      <Description><![CDATA[<p>At Anchorage Digital, we are building the world’s most advanced digital asset platform for institutions to participate in crypto.</p>
<p>The Data Platform team is the backbone of Anchorage Digital&#39;s information infrastructure. As data becomes the lifeblood of every product, compliance workflow, and client-facing report we produce, this team is responsible for building and operating a unified, scalable, and reliable data platform that serves the entire organization.</p>
<p>As a Data Platform Product Manager, you will own the strategy and execution for centralizing and formalizing the company&#39;s data infrastructure, spanning internal operational data, transaction and blockchain data, customer data, and external data sources.</p>
<p>Your mission is to transform a fragmented data landscape into a single source of truth that powers mission-critical reporting, business insights, and downstream product experiences across every team at Anchorage.</p>
<p>This is a force-multiplier role. Your work will elevate the quality, speed, and reliability of every product and team at the company.</p>
<p>You will define the standards, build the platform, and create the foundation that enables Anchorage to scale with confidence.</p>
<p>If you thrive at the intersection of complex data systems, cross-functional influence, and platform thinking, this is your opportunity to have outsized impact at a category-defining company in digital assets.</p>
<p>Below, we define our Factors of Growth &amp; Impact to help Anchorage Villagers measure their impact and articulate feedback, coaching, and the rich learning that happens while exploring, developing, and mastering capabilities within and beyond the Member of Product, Data Platform role:</p>
<p><strong>Technical Skills:</strong></p>
<ul>
<li>Own the detailed prioritization of the data platform roadmap, balancing foundational infrastructure work, new capabilities, and technical debt.</li>
<li>Demonstrate deep strategic thinking in shaping the platform roadmap, considering the unique data challenges of digital assets, blockchain protocols, and regulated financial services.</li>
<li>Deliver complex, cross-functional projects with multiple dependencies across engineering, analytics, compliance, and operations teams.</li>
<li>Work closely with engineering and data science counterparts to drive product development processes, sprint planning, and architectural decisions.</li>
<li>Ability to understand and reason about system architecture (including data warehousing, ETL/ELT pipelines, streaming vs. batch processing, and modern data stack components) and communicate clear requirements to engineering.</li>
<li>Drive comprehensive go-to-market strategy for internal platform adoption, including defining success metrics, tracking KPIs around data quality and platform usage, and iterating based on data-driven insights.</li>
</ul>
<p><strong>Complexity and Impact of Work:</strong></p>
<ul>
<li>Lead and influence cross-functional teams while maintaining strong stakeholder relationships across the entire organization, from engineering to finance to compliance.</li>
<li>Exercise independent decision-making and take full ownership of data platform strategy and execution.</li>
<li>Contribute strategic insights that significantly impact company direction, operational efficiency, and product quality.</li>
<li>Demonstrate platform leadership that elevates the performance and effectiveness of every team that depends on data.</li>
</ul>
<p><strong>Organizational Knowledge:</strong></p>
<ul>
<li>Develop deep understanding of Anchorage&#39;s business model, product suite, regulatory environment, and organizational structure.</li>
<li>Build and maintain strong relationships with stakeholders across all departments to ensure the data platform serves the company&#39;s most critical needs.</li>
<li>Navigate and improve organizational data practices to enhance efficiency, compliance, and decision-making.</li>
<li>Drive company objectives through strategic data platform decisions and initiatives.</li>
</ul>
<p><strong>Communication and Influence:</strong></p>
<ul>
<li>Effectively influence and motivate teams across the organization to adopt platform standards and invest in data quality, even when those teams do not report to you.</li>
<li>Enable cross-functional collaboration through clear, consistent communication about platform capabilities, timelines, and data governance expectations.</li>
<li>Act as a thoughtful knowledge partner to senior leadership, translating complex data infrastructure topics into clear business impact.</li>
<li>Proactively communicate platform goals, status updates, and data health metrics throughout the organization.</li>
</ul>
<p><strong>You may be a fit for this role if you:</strong></p>
<ul>
<li>5+ years of product management experience, with significant time spent on data platforms, data infrastructure, or data-intensive enterprise products.</li>
<li>Proven experience building or scaling enterprise data platforms, including data warehousing, data lakes, ETL/ELT pipelines, or modern data stack tooling (e.g., Snowflake, Databricks, dbt, Airflow, Spark).</li>
<li>Strong understanding of data modeling, data governance, and data quality frameworks.</li>
<li>Experience working with diverse data types, including transactional data, customer data, financial data, and ideally blockchain or on-chain data.</li>
<li>Track record of driving cross-functional alignment and adoption for internal platform products where you must influence without direct authority.</li>
<li>Exceptional written and verbal communication skills, with the ability to convey complex data architecture concepts to both technical and non-technical audiences.</li>
<li>Your empathy and adaptability not only complement others&#39; working styles but also embody our culture of curiosity, creativity, and shared understanding.</li>
<li>You self-describe as some combination of the following: creative, humble, ambitious, detail-oriented, hard-working, trustworthy, eager to learn, methodical, action-oriented, and tenacious.</li>
</ul>
<p><strong>Although not a requirement, bonus points if you have:</strong></p>
<ul>
<li>Hands-on experience with blockchain data indexing, onchain analytics, or crypto-native data infrastructure.</li>
<li>Experience building data platforms that serve both internal analytics consumers and external client-facing products (reports, statements, dashboards).</li>
<li>Experience supporting clients with data-related issues or concerns.</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>data platforms, data infrastructure, data-intensive enterprise products, data warehousing, data lakes, ETL/ELT pipelines, modern data stack tooling, Snowflake, Databricks, dbt, Airflow, Spark, data modeling, data governance, data quality frameworks, blockchain or on-chain data</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anchorage Digital</Employername>
      <Employerlogo>https://logos.yubhub.co/anchorage.com.png</Employerlogo>
      <Employerdescription>Anchorage Digital is a crypto platform that enables institutions to participate in digital assets through custody, staking, trading, governance, settlement, and the industry&apos;s leading security infrastructure.</Employerdescription>
      <Employerwebsite>https://anchorage.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/anchorage/0e730f61-a2e4-4152-8277-3f6383cc69a6</Applyto>
      <Location>United States</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>2a56a653-c18</externalid>
      <Title>Palantir Engineer Specialist - Sr. Consultant - Principal</Title>
      <Description><![CDATA[<p><strong>Palantir Engineer Specialist</strong></p>
<p><strong>Sr. Consultant - Principal</strong></p>
<p><strong>London</strong></p>
<p>Do you want to boost your career and collaborate with expert, talented colleagues to solve and deliver against our clients&#39; most important challenges? We are growing and are looking for people to join our team. You will be part of an entrepreneurial, high-growth company of 300,000 employees. Our dynamic organisation allows you to work across functional business pillars, contributing your ideas, experiences, diverse thinking, and a strong mindset. Are you ready?</p>
<p><strong>About Your Role</strong></p>
<p>As a <strong>Senior Consultant / Principal Consultant – Palantir Engineer</strong>, you lead and deliver end-to-end, data-driven solutions using <strong>Palantir Foundry</strong> in complex client environments. You operate at the intersection of engineering, data, and consulting, working closely with business and technical stakeholders to translate complex problems into scalable, production-ready solutions. You combine strong hands-on technical skills with a consulting mindset, taking ownership of solution design, implementation, and adoption across organisations.</p>
<p><strong>Your role will include:</strong></p>
<ul>
<li>Own the <strong>end-to-end delivery</strong> of Palantir Foundry–based solutions, from problem definition to production</li>
<li>Design and implement <strong>data pipelines and transformations</strong> across diverse data sources</li>
<li>Model data using <strong>Foundry Ontology</strong> concepts to support analytics and operational use cases</li>
<li>Build scalable, reliable solutions using <strong>Python, SQL, and PySpark</strong> within Foundry</li>
<li>Collaborate closely with business stakeholders to define requirements, success metrics, and roadmaps</li>
<li>Support <strong>prototyping, productionisation, and scaling</strong> of data-driven applications</li>
<li>Ensure solutions meet requirements for <strong>data quality, governance, security, and performance</strong></li>
<li>Act as a technical advisor within project teams and contribute to best practices</li>
</ul>
<p><strong>Requirements</strong></p>
<p><strong>What you bring – required</strong></p>
<p><strong>Experience &amp; Seniority</strong></p>
<ul>
<li>Proven experience as a <strong>Senior Consultant or Principal Consultant</strong> in data, analytics, or platform engineering</li>
<li>Strong experience delivering <strong>client-facing data solutions</strong> in complex environments</li>
<li>Ability to take ownership and work independently in ambiguous problem spaces</li>
</ul>
<p><strong>Core Data &amp; Analytics Technology Skills</strong></p>
<ul>
<li>Strong programming skills in <strong>Python</strong> and <strong>SQL</strong>; <strong>PySpark</strong> experience required</li>
<li>Hands-on experience with <strong>Palantir Foundry</strong>, including:
<ul>
<li>Pipeline Builder / Code Workbook</li>
<li>Data integration and transformation</li>
<li>Ontology modelling and data lineage</li>
</ul>
</li>
<li>Solid understanding of <strong>data architectures</strong>, including data lakes, lakehouses, and data warehouses</li>
<li>Experience working with APIs, databases, and structured / semi-structured data</li>
</ul>
<p><strong>Engineering &amp; Platform Foundations</strong></p>
<ul>
<li>Experience building <strong>scalable ETL/ELT pipelines</strong></li>
<li>Familiarity with <strong>CI/CD concepts</strong>, testing, and production deployments</li>
<li>Strong focus on <strong>solution quality, maintainability, and performance</strong></li>
<li>Bachelor’s or Master’s degree in Computer Science, Engineering, Mathematics, or a related field <strong>or equivalent practical experience</strong></li>
</ul>
<p><strong>Nice to have</strong></p>
<ul>
<li>Experience with <strong>cloud platforms</strong> (AWS, Azure, GCP)</li>
<li>Familiarity with <strong>containerisation</strong> (Docker, Kubernetes)</li>
<li>Prior experience as a <strong>Palantir FDE</strong> or in Foundry-heavy delivery roles</li>
<li>Domain experience in industries such as <strong>Energy, Finance, Public Sector, Healthcare, or Logistics</strong></li>
</ul>
<p><strong>Benefits</strong></p>
<p><strong>About your team</strong></p>
<p>Join our growing Data &amp; Analytics practice and make a difference. In this practice you will be utilizing the most innovative technological solutions in the modern data ecosystem. In this role you’ll be able to see your own ideas transform into breakthrough results in the areas of Data &amp; Analytics Strategy, Data Management &amp; Governance, Data Platforms &amp; Engineering, and Analytics &amp; Data Science.</p>
<p><strong>About Infosys Consulting</strong></p>
<p>Be part of a globally renowned management consulting firm on the front-line of industry disruption and at the cutting edge of technology. We work with market leading brands across sectors. Our culture is inclusive and entrepreneurial. Being a mid-size consultancy within the scale of Infosys gives us the global reach to partner with our clients throughout their transformation journey.</p>
<p>Our core values, IC-LIFE, form a common code that helps us move forward. IC-LIFE stands for Inclusion, Equity and Diversity, Client, Leadership, Integrity, Fairness, and Excellence. To learn more about Infosys Consulting and our values, please visit our careers page.</p>
<p>Within Europe, we are recognised as one of the UK’s top firms by the Financial Times and Forbes due to our client innovations, our cultural diversity, and dedicated training and career paths. Infosys is on Germany’s top employers list for 2023. Management Consulting Magazine named us on their list of Best Firms to Work For. Furthermore, Infosys has been recognised by the Top Employers Institute, a global certification company, for its exceptional standards in employee conditions across Europe for five years in a row.</p>
<p>We offer industry-leading compensation and benefits, along with top training and development opportunities, so that you can grow your career and achieve your personal ambitions. Curious to learn more? We’d love to hear from you. Apply today!</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, SQL, PySpark, Palantir Foundry, Pipeline Builder, Code Workbook, Data integration, Data transformation, Ontology modelling, Data lineage, Data architectures, Data lakes, Lakehouses, Data warehouses, APIs, Databases, Structured data, Semi-structured data, ETL/ELT pipelines, CI/CD concepts, Testing, Production deployments, Solution quality, Maintainability, Performance, Bachelor’s degree, Master’s degree, Computer Science, Engineering, Mathematics, Cloud platforms, Containerisation, Palantir FDE, Foundry-heavy delivery roles, Domain experience in industries such as Energy, Finance, Public Sector, Healthcare, or Logistics</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Infosys Consulting - Europe</Employername>
      <Employerlogo>https://logos.yubhub.co/view.com.png</Employerlogo>
      <Employerdescription>Infosys Consulting - Europe is a globally renowned management consulting firm that works with market leading brands across sectors. The company is a mid-size player within the scale of Infosys, a top-5 powerhouse IT brand.</Employerdescription>
      <Employerwebsite>https://jobs.workable.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.workable.com/view/2A8U1ryerVijb4fFAc6i8u/hybrid-palantir-engineer-specialist---sr.-consultant---principal-in-london-at-infosys-consulting---europe</Applyto>
      <Location>London</Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
    <job>
      <externalid>aafa7b92-fa6</externalid>
      <Title>Senior Consultant - Data Engineering &amp; Data Science (m/w/d)</Title>
      <Description><![CDATA[<p>Are you looking to advance your career and work with experienced, talented colleagues to successfully solve the most important challenges of our clients? We are growing further and looking for enthusiastic individuals to strengthen our team. You will be part of a dynamic, strongly growing company with over 300,000 employees.</p>
<p>Our dynamic organisation allows you to work across topics and bring in your ideas, experiences, creativity, and goal orientation. Are you ready?</p>
<p>As a Consultant/Senior Consultant in the Data Engineering &amp; Data Science field, you will work hands-on on the conception, development, and implementation of modern data and analytics solutions. You will support the entire project lifecycle - from data ingestion and transformation, through analytics and machine learning, to production operation.</p>
<p>You will work closely with data engineers, architects, data scientists, and subject matter experts to implement scalable, reliable, and value-adding solutions in complex customer environments.</p>
<p><strong>Your Tasks</strong></p>
<ul>
<li>Apply data science methods (machine learning, deep learning, GenAI) to solve concrete business questions</li>
<li>Work with structured and semi-structured data in data lakes, lakehouses, and data warehouses</li>
<li>Set up data pipelines for analytical workloads</li>
<li>Support the production deployment of data and ML solutions, including monitoring and optimisation</li>
</ul>
<p><strong>What You Bring - Required</strong></p>
<ul>
<li>At least 3 years of relevant professional experience in the field of data engineering, data science, or analytics</li>
<li>Hands-on experience in implementing data and analytics solutions in (customer) projects</li>
<li>Strong problem-solving skills and a pragmatic, implementation-oriented way of working</li>
</ul>
<p><strong>Data Engineering Fundamentals</strong></p>
<ul>
<li>Experience in setting up data pipelines (ingestion, transformation, storage)</li>
<li>Solid understanding of data modeling, data transformations, and feature engineering</li>
<li>Experience with cloud-based data platforms, such as:
<ul>
<li>Azure, AWS, or GCP</li>
<li>Databricks, Snowflake, BigQuery, Azure Synapse/Microsoft Fabric</li>
</ul>
</li>
<li>Knowledge of CI/CD concepts and production-ready deployments</li>
</ul>
<p><strong>Applied Data Science &amp; Analytics</strong></p>
<ul>
<li>Experience applying GenAI, deep learning, and machine learning methods as well as statistical analyses</li>
<li>Very good programming skills in Python</li>
<li>Very good SQL skills and experience with relational databases</li>
<li>Experience deploying and operating ML models in production</li>
<li>Ability to translate analytical results into business-relevant insights</li>
<li>Bachelor&#39;s or master&#39;s degree in computer science, engineering, mathematics, or a related field, or equivalent practical experience</li>
</ul>
<p><strong>Nice to Have</strong></p>
<ul>
<li>Experience with:
<ul>
<li>Streaming technologies (e.g. Kafka, Azure Event Hubs)</li>
<li>Time series analysis, NLP applications, or system modeling</li>
<li>NoSQL databases (e.g. MongoDB, Cosmos DB)</li>
<li>Docker and Kubernetes</li>
<li>Data visualization tools like Power BI, Tableau</li>
<li>Cloud or architecture certifications</li>
</ul>
</li>
</ul>
<p><strong>Language &amp; Mobility (Germany)</strong></p>
<ul>
<li>Fluent German skills (at least C1) for customer communication in the German-speaking market</li>
<li>Very good English skills</li>
<li>Willingness to travel for projects</li>
</ul>
<p><strong>Your Team</strong></p>
<p>You will become part of our growing Data &amp; Analytics teams. In this area, you will work with modern technologies in modern data ecosystems. You have the opportunity to turn your own ideas into results - in the areas of Data &amp; Analytics Strategy, Data Management &amp; Governance, Data Platforms &amp; Engineering, and Analytics &amp; Data Science.</p>
<p><strong>About Infosys Consulting</strong></p>
<p>You will become an employee of a globally renowned management consulting firm at the forefront of technological innovation and industrial transformation. We work across industries with leading companies. Our culture is inclusive and entrepreneurial. As a mid-sized consulting firm embedded in the scale of Infosys, we can partner with our customers worldwide throughout the entire transformation process.</p>
<p>Our values IC-LIFE - Inclusion, Equity &amp; Diversity, Client, Leadership, Integrity, Fairness, and Excellence - form our compass of values. Further information can be found on our career website.</p>
<p>In Europe, we are recognised by the Financial Times and Forbes as one of the leading consulting firms. Infosys is ranked among the top employers in Germany for 2023 and has been certified by the Top Employers Institute for outstanding working conditions in Europe for five consecutive years.</p>
<p>We offer a market-leading salary, attractive additional benefits, and excellent opportunities for further education and development. Curious? Then we look forward to your application. Apply now!</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Data Science, Machine Learning, Deep Learning, GenAI, Data Engineering, Data Warehousing, Data Lakes, Lakehouses, Data Pipelines, Cloud-based Data Platforms, Azure, AWS, GCP, Databricks, Snowflake, BigQuery, Azure Synapse, Microsoft Fabric, CI/CD, Python, SQL, Relational Databases, Streaming Technologies, Time Series Analysis, NLP Applications, System Modeling, NoSQL Databases, Docker, Kubernetes, Data Visualization Tools, Cloud Certifications, Architecture Certifications</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Infosys Consulting - Europe</Employername>
      <Employerlogo>https://logos.yubhub.co/view.com.png</Employerlogo>
      <Employerdescription>Infosys Consulting is a globally renowned management consulting firm that works with a market-leading brand in every sector, while its parent organization Infosys is a top-5 powerhouse IT brand.</Employerdescription>
      <Employerwebsite>https://jobs.workable.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.workable.com/view/ecAfMkjFkA97qaoimVMGNF/hybrid-(senior)-consultant---data-engineering-%26-data-science-(m%2Fw%2Fd)--deutschlandweit-in-munich-at-infosys-consulting---europe</Applyto>
      <Location>Munich, Bavaria, Germany</Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
    <job>
      <externalid>056148f9-afd</externalid>
      <Title>AI Analyst Intern</Title>
      <Description><![CDATA[<p>We are seeking a dynamic AI Analyst to help drive AI-powered quality initiatives, establish robust data governance frameworks, and develop innovative processes that improve efficiency and increase overall data quality.</p>
<p>Your Contribution:</p>
<ul>
<li>Work with subject matter experts to drive AI technology into business processes</li>
<li>Help establish and maintain data governance programs across enterprise applications</li>
<li>Lay the foundation for data-driven decision-making utilizing AI technologies</li>
<li>Work with a team of highly talented individuals to understand and support the data needs of our business</li>
</ul>
<p>Requirements:</p>
<ul>
<li>Experience building predictive models, especially classification</li>
<li>Excellent understanding of machine learning techniques and AI</li>
<li>Expertise in SQL and Python; experience with NoSQL is a plus</li>
<li>A self-driven ownership mindset with a natural curiosity and excellence in finding solutions to ambiguous problems</li>
<li>Strong analytical skills related to working with unstructured datasets</li>
<li>Experience with enterprise data warehouses (EDWs) or data lakes is a plus</li>
<li>Experience with AWS cloud services</li>
<li>Junior or senior pursuing a degree in Data Science/Analytics, Computer Science (with a focus on AI or machine learning), Information Systems/AI, or a related field</li>
</ul>
<p>Benefits:</p>
<ul>
<li>Flexible work arrangements</li>
<li>Opportunities for professional growth and development</li>
<li>Collaborative and dynamic work environment</li>
<li>Recognition and rewards for outstanding performance</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>entry</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Predictive models, Machine learning techniques, SQL, Python, NoSQL, Data governance, Data lakes, AWS cloud services, EDWs, Data analytics, Computer science</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Logitech</Employername>
      <Employerlogo>https://logos.yubhub.co/logitech.com.png</Employerlogo>
      <Employerdescription>Logitech is a multinational company that designs and manufactures computer peripherals, software, and mobile communication products.</Employerdescription>
      <Employerwebsite>https://logitech.wd5.myworkdayjobs.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://logitech.wd5.myworkdayjobs.com/en-US/Logitech/job/Camas-Washington---USA/AI-Analyst-Intern_145578</Applyto>
      <Location>Camas, Washington</Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
    <job>
      <externalid>9152bb38-f8b</externalid>
      <Title>Global Detection and Response Lead</Title>
      <Description><![CDATA[<p><strong>Global Detection and Response Lead</strong></p>
<p><strong>Location</strong></p>
<p>San Francisco</p>
<p><strong>Employment Type</strong></p>
<p>Full time</p>
<p><strong>Department</strong></p>
<p>Security</p>
<p><strong>Compensation</strong></p>
<ul>
<li>San Francisco $347K – $490K • Offers Equity</li>
</ul>
<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>
<ul>
<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>
<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>
<li>401(k) retirement plan with employer match</li>
<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>
<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>
<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>
<li>Mental health and wellness support</li>
<li>Employer-paid basic life and disability coverage</li>
<li>Annual learning and development stipend to fuel your professional growth</li>
<li>Daily meals in our offices, and meal delivery credits as eligible</li>
<li>Relocation support for eligible employees</li>
<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>
</ul>
<p>More details about our benefits are available to candidates during the hiring process.</p>
<p>This role is at-will and OpenAI reserves the right to modify base pay and other compensation components at any time based on individual performance, team or company results, or market conditions.</p>
<p><strong>About the Team</strong></p>
<p>OpenAI’s Security organization exists to enable safe, responsible innovation at scale. As our systems, infrastructure, and research footprint grow, we invest deeply in world-class security capabilities that protect our people, products, and users without slowing progress.</p>
<p>This organization safeguards OpenAI’s environments by building advanced detection systems, driving real-time response capabilities, scaling telemetry and logging infrastructure, and delivering actionable threat intelligence to stay ahead of adversaries.</p>
<p><strong>About the Role</strong></p>
<p>We are seeking a <strong>Global Detection and Response Lead</strong> to own and scale OpenAI’s cybersecurity detection and response operations. In this role, you will set the strategy and drive execution for security monitoring, incident response, recovery, and post-incident improvements across our global infrastructure.</p>
<p>You will be a hands-on leader with deep technical credibility and strong operational instincts. You will build and mentor high-performing teams, partner closely with Infrastructure, Research, Product Security, Enterprise Security, IT, and Engineering, and ensure that detection and response capabilities are embedded by design into the systems that power OpenAI.</p>
<p>This is a strategic and practical leadership role requiring deep technical credibility, operational rigor, and the ability to build high-performing teams in a fast-moving environment.</p>
<p><strong>In this role, you will:</strong></p>
<ul>
<li>Oversee global detection and response operations, including continuous monitoring, triage, investigation, containment, and remediation of security events across a diverse set of networks and infrastructure.</li>
<li>Lead, mentor, and directly manage several small teams of senior engineers across observability, detection and response, and threat intelligence. Hire and scale these functions deliberately and proportionately as OpenAI’s compute footprint and platform ambitions grow.</li>
<li>Ensure world-class operational rigor and readiness through management of incident playbooks, on-call and escalation paths, tabletop exercises, and continuous improvement of response quality and speed.</li>
<li>Improve detection quality and coverage by partnering with engineering teams to ensure critical telemetry is available, reliable, and actionable across cloud, corporate, and production environments.</li>
<li>Deeply partner across all of OpenAI to evaluate and respond to emergent security concerns in a frontier AI lab environment, such as detection and response strategies for agents operating across infrastructure at scale.</li>
<li>Build a world-class security program capable of withstanding tier-1 adversaries by maximally embracing our own models to solve frontier security problems.</li>
</ul>
<p><strong>You might thrive in this role if you:</strong></p>
<ul>
<li>Have 10+ years in cybersecurity with deep expertise in detection engineering, incident response, and security operations.</li>
<li>Have an active U.S. Government security clearance (Top Secret) or willingness and eligibility to obtain one.</li>
<li>Are mission-oriented, have unimpeachable integrity, and are passionate and motivated to detect and respond to adversaries in a highly complex, fast-paced environment.</li>
<li>Have deep experience building and leading detection and response, instrumentation/observability, and threat intelligence teams across a global footprint, including airgapped and sovereign environments.</li>
<li>Have stellar leadership skills and a demonstrated history of driving durable and continuous improvements to programs, processes, and people.</li>
<li>Have exceptional written and verbal communication skills, can remain calm under pressure, and can effectively run command of security incidents involving numerous stakeholders across a diverse gamut of teams, expertise, and seniority.</li>
<li>Have deep expertise in modern observability stacks (e.g., SIEM, data lakes, EDR, cloud telemetry, logging) and detection primitives.</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$347K – $490K</Salaryrange>
      <Skills>cybersecurity, detection engineering, incident response, security operations, observability, threat intelligence, cloud telemetry, logging, SIEM, data lakes, EDR</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>OpenAI</Employername>
      <Employerlogo>https://logos.yubhub.co/openai.com.png</Employerlogo>
      <Employerdescription>OpenAI is a technology company that specializes in artificial intelligence research and development. It was founded in 2015 and has since grown to become one of the leading AI research organizations in the world.</Employerdescription>
      <Employerwebsite>https://jobs.ashbyhq.com</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>347000</Compensationmin>
      <Compensationmax>490000</Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/openai/c8855563-e744-4fa0-a497-34c8d25d2d76</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
  </jobs>
</source>