<?xml version="1.0" encoding="UTF-8"?>
<source>
  <jobs>
    <job>
      <externalid>fdff2394-c46</externalid>
      <Title>Senior Staff Geospatial Software Engineer</Title>
      <Description><![CDATA[<p>At Bayer, we&#39;re seeking a Senior Staff Geospatial Software Engineer to play a key role in building distributed analytics capabilities and enabling enterprise-wide access to scientific and operational datasets. As a member of our geospatial data engineering team, you will apply strong software craftsmanship with your knowledge of algorithms, data structures, and geospatial data models.</p>
<p>Our mission is to develop agriculture solutions for a sustainable future that help meet the challenges of feeding a global population projected to grow to over 9.6 billion by 2050. We capture petabytes of data in our operational systems, including genome sequencing, manufacturing, supply chain, and finance systems. Extracting meaningful information from that data requires complex analysis, and we build software that helps make decisions at scales never before possible.</p>
<p>Key responsibilities:</p>
<ul>
<li>Play a key senior role on a geospatial data engineering team, building distributed analytics capabilities and enabling enterprise-wide access to scientific and operational datasets;</li>
<li>Apply strong software craftsmanship with your knowledge of algorithms, data structures, and geospatial data models;</li>
<li>Partner with other top-level talent in data engineering, software development, and data science to tackle complex, novel problems and deliver solutions with real-world impact on global food systems;</li>
<li>Mentor and guide other data engineers in your areas of expertise with a focus on geographic information science and systems;</li>
<li>Evaluate, implement, and advocate for FOSS4G technologies, finding the best fit for each use case and integrating them into production-ready solutions;</li>
<li>Lead technical initiatives end-to-end and communicate your technical vision and strategy to the larger organization;</li>
<li>Drive impact across enterprise projects spanning multiple areas of the business, where strength of ideas outweighs position in the organization;</li>
<li>Share our work with the broader geospatial and software engineering community at relevant technical conferences.</li>
</ul>
<p>Requirements:</p>
<ul>
<li>A minimum of a Bachelor&#39;s Degree in a relevant discipline (an additional 2 years of experience will be considered in lieu of a Bachelor&#39;s Degree);</li>
<li>Shipped multiple generations of a software product, demonstrating long-term technical ownership and evolution;</li>
<li>A track record of shipping and maintaining multiple major product releases written in Go or Python;</li>
<li>A track record of designing, building, and maintaining multiple product releases of data-intensive geospatial-centric APIs using a RESTful approach;</li>
<li>Extensive experience with OGC Standards services;</li>
<li>Deep knowledge of geographic science and related technologies including coordinate systems and projections with realizations, global positioning systems, spatial indexing, and spatial topologies;</li>
<li>Experience in the design and implementation of FOSS4G solutions, particularly leveraging GeoServer, PostGIS, and QGIS, with an emphasis on vector data models;</li>
<li>Extensive experience in system design and architecture for large-scale, distributed applications;</li>
<li>Experience with creating and maintaining containerized application deployments;</li>
<li>Familiarity with developing in, deploying to, and working with Kubernetes cluster infrastructure;</li>
<li>Experience with data modeling for large-scale databases;</li>
<li>Proficiency in verbal and written English, with the ability to connect with diverse individuals, actively listen to their needs, and support meaningful analysis for better decision-making.</li>
</ul>
<p>Bonus points for:</p>
<ul>
<li>Experience with the GeoArrow, GeoParquet, and GeoPackage data formats;</li>
<li>Experience with emerging geospatial database management systems such as DuckDB Spatial and SedonaDB;</li>
<li>Experience working with distributed geospatial data warehousing (e.g. BigQuery, Snowflake) and compute (e.g. Spark, Sedona);</li>
<li>Experience implementing H3 geospatial indexing;</li>
<li>Contributions to or implementation of these OSGeo projects: GDAL/OGR, GeoServer, GeoTools, PostGIS, PROJ, QGIS, OpenLayers, Leaflet.</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$123,760.00 - $185,640.00</Salaryrange>
      <Skills>Go, Python, OGC Standards services, GeoServer, PostGIS, QGIS, Kubernetes, containerized application deployments, data modeling for large-scale databases, verbal and written English, GeoArrow, GeoParquet, GeoPackage data formats, DuckDB Spatial, SedonaDB, BigQuery, Snowflake, Spark, Sedona, H3 geospatial indexing, GDAL/OGR, GeoTools, OpenLayers, Leaflet</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Bayer</Employername>
      <Employerlogo>https://logos.yubhub.co/talent.bayer.com.png</Employerlogo>
      <Employerdescription>Bayer is a multinational pharmaceutical and life sciences company that develops agriculture solutions for a sustainable future.</Employerdescription>
      <Employerwebsite>https://talent.bayer.com</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>123760.00</Compensationmin>
      <Compensationmax>185640.00</Compensationmax>
      <Applyto>https://talent.bayer.com/careers/job/562949976931774</Applyto>
      <Location></Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>8b447835-74a</externalid>
      <Title>Senior DataOps Engineer - Revenue Management (all genders)</Title>
      <Description><![CDATA[<p><strong>Your future team</strong></p>
<p>You&#39;ll be part of our new Dynamic Pricing &amp; Revenue Management team, working alongside a Data Scientist and a Data Analyst. Together, you will work towards one core goal: helping hosts improve occupancy and earnings through a smart, dynamic, and data-driven pricing strategy.</p>
<p><strong>Our Tech Stack</strong></p>
<ul>
<li>Data Storage &amp; Querying: S3, Redshift (with decentralized data sharing), Athena, and DuckDB.</li>
<li>ML &amp; Model Serving: MLflow, SageMaker, and deployment APIs for model lifecycle management.</li>
<li>Cloud &amp; DevOps: Terraform, Docker, Jenkins, and AWS EKS (Kubernetes) for scalable, resilient systems.</li>
<li>Monitoring: ELK, Grafana, Looker, OpsGenie, and in-house tools for full visibility.</li>
<li>Ingestion: Kafka-based event systems and tools like Airbyte and Fivetran for smooth third-party integrations.</li>
<li>Automation &amp; AI: Extensive use of AI tools like Claude, Copilot, and Codex.</li>
</ul>
<p><strong>Your role in this journey</strong></p>
<p>As a DataOps Engineer – Revenue Management, you&#39;ll be the engineering backbone that enables our Data Scientists to move from experimentation to production. You bridge the gap between data science models and reliable, scalable production systems.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Support model deployment and serving: help deploy pricing and demand models into production, building and maintaining APIs and serving infrastructure.</li>
<li>Build and operate production pipelines: ensure data flows reliably from source to model to output, with proper monitoring and alerting.</li>
<li>Collaborate cross-functionally: work closely with Data Scientists, Analysts, and Engineering teams to turn prototypes into production-ready solutions.</li>
<li>Own infrastructure and tooling: set up and maintain the environments, CI/CD pipelines, and infrastructure that the team depends on.</li>
<li>Ensure operational excellence by implementing monitoring, automated testing, and observability across the team&#39;s production systems.</li>
<li>Migrate and productionize POCs: turn experimental code into robust, maintainable Python applications.</li>
<li>Ensure data quality, consistency, and documentation across revenue management metrics and datasets.</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Impact: Shape the future of travel with products used by millions of guests and thousands of hosts.</li>
<li>Learning: Grow professionally in a culture that thrives on curiosity and feedback.</li>
<li>Great People: Join a team of smart, motivated, and international colleagues who challenge and support each other.</li>
<li>Technology: Work in a modern tech environment.</li>
<li>Flexibility: Work in a hybrid setup with 50% in-office time for collaboration, and spend up to 8 weeks a year working from other inspiring locations.</li>
<li>Perks on Top: Of course, we also offer travel benefits, gym discounts, and other perks to keep you energized.</li>
</ul>
<p><strong>Experience</strong></p>
<ul>
<li>4+ years of experience in Software Engineering, Data Engineering, DevOps, or MLOps.</li>
<li>Strong hands-on skills in Python: you write clean, production-quality code.</li>
<li>Experience with CI/CD, Docker, and infrastructure-as-code (e.g., Terraform).</li>
<li>Familiarity with cloud platforms (AWS preferred) and deploying services in production.</li>
<li>Exposure to or interest in ML model deployment (MLflow, SageMaker, or similar) is a strong plus.</li>
<li>Desire to learn and use cutting-edge LLM tools and agents to improve your and the entire team&#39;s productivity.</li>
<li>A proactive, hands-on mindset: you take ownership, spot problems, and drive solutions forward.</li>
</ul>
<p><strong>How to apply</strong></p>
<p>If you&#39;re excited about this opportunity, please submit your application on our careers page!</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, CI/CD, Docker, Terraform, Cloud platforms (AWS preferred), ML model deployment (MLflow, SageMaker, or similar), AI tools like Claude, Copilot, and Codex, Data Storage &amp; Querying (S3, Redshift, Athena, DuckDB), ML &amp; Model Serving (MLflow, SageMaker, deployment APIs), Cloud &amp; DevOps (Terraform, Docker, Jenkins, AWS EKS), Monitoring (ELK, Grafana, Looker, OpsGenie, in-house tools), Ingestion (Kafka-based event systems, Airbyte, Fivetran)</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Holidu Hosts GmbH</Employername>
      <Employerlogo>https://logos.yubhub.co/holidu.jobs.personio.com.png</Employerlogo>
      <Employerdescription>Holidu Hosts GmbH is a technology company that provides a platform for hosts to manage their properties and connect with guests.</Employerdescription>
      <Employerwebsite>https://holidu.jobs.personio.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://holidu.jobs.personio.com/job/2597559</Applyto>
      <Location>Munich, Germany</Location>
      <Country>Germany</Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>8f03ad2d-96f</externalid>
      <Title>Software Engineer, Research Data Platform</Title>
      <Description><![CDATA[<p>We&#39;re looking for engineers who love working directly with users and who excel at building data products. The Research Data Platform team builds the tools that Anthropic&#39;s researchers use every day to manage, query, and analyze the data that goes into training and evaluating frontier models.</p>
<p>As a Software Engineer on the Research Data Platform team, you will:</p>
<ul>
<li>Build and operate data pipelines that extract data from research training runs and land it in storage systems that are easy and fast to query</li>
<li>Work closely with researchers to design and build APIs, libraries, and web interfaces that support data management, exploration, and analysis</li>
<li>Develop dataset management, data cataloging, and provenance tooling that researchers use in their day-to-day work</li>
<li>Embed with research teams to understand their workflows, identify high-leverage tooling opportunities, and ship solutions quickly</li>
<li>Collaborate with adjacent teams to build on existing systems rather than reinventing them</li>
</ul>
<p>We do not require prior ML or AI training experience. If you enjoy working closely with technical users, learning new domains quickly, and building tools people actually want to use, you&#39;ll pick up the research context fast.</p>
<p>Strong candidates may also have experience with large-scale ETL, columnar storage formats, and query engines (e.g., Spark, BigQuery, DuckDB, Parquet); high-volume time series data ingestion, storage, and efficient querying; data cataloging, lineage, or metadata management systems; or ML experiment tracking and metrics platforms.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$320,000-$405,000 USD</Salaryrange>
      <Skills>large-scale ETL, columnar storage formats, query engines, high-volume time series data, data cataloging, lineage, metadata management systems, ML experiment tracking, Spark, BigQuery, DuckDB, Parquet</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation that creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>320000</Compensationmin>
      <Compensationmax>405000</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5191226008</Applyto>
      <Location>San Francisco, CA | New York City, NY</Location>
      <Country>United States</Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>d0ee3e8e-4f6</externalid>
      <Title>Staff Software Engineer</Title>
      <Description><![CDATA[<p>About Us</p>
<p>dbt Labs is the pioneer of analytics engineering, helping data teams transform raw data into reliable, actionable insights.</p>
<p>As of February 2025, we&#39;ve surpassed $100 million in annual recurring revenue (ARR) and serve more than 5,400 dbt Platform customers, including AstraZeneca, Sky, Nasdaq, Volvo, JetBlue, and SafetyCulture.</p>
<p>We&#39;re backed by top-tier investors including Andreessen Horowitz, Sequoia Capital, and Altimeter.</p>
<p><strong>About The Team</strong></p>
<p>dbt Fusion is building the next generation of data execution and connectivity infrastructure, enabling dbt workloads to run efficiently across diverse compute engines and data platforms.</p>
<p>As a Senior Engineer on the Fusion Adapters and Connectivity team, you&#39;ll design and ship core abstractions powering how dbt communicates with execution systems, leveraging Rust, Go, Arrow, and emerging open standards.</p>
<p>This is a rare opportunity to work at the intersection of systems programming, database internals, and high-visibility open-source development.</p>
<p>Your work will shape a foundational platform leveraged across the dbt ecosystem and the broader data community.</p>
<p><strong>You are a good fit if you have:</strong></p>
<ul>
<li>Strong programming background in Rust, Go, C++ or similar performance-oriented languages.</li>
<li>Experience designing or maintaining SDKs, libraries, connectors, or compute/data integration codebases.</li>
<li>Exposure to data warehouses, query engines, Arrow/columnar ecosystems, or execution runtimes.</li>
<li>A desire to build foundational platform components that other teams and community members rely on.</li>
<li>Comfort working in public code review loops, async-first communication, and collaborative RFC processes.</li>
<li>A mindset grounded in debuggability, reliability, and ownership in ambiguous problem spaces.</li>
</ul>
<p><strong>In this role, you can expect to:</strong></p>
<ul>
<li>Design, build, and maintain Rust-first connectivity layers, execution APIs, and adapter scaffolding.</li>
<li>Partner with teams building the dbt compiler, semantic layer, and runtime to evolve adapter interfaces and system boundaries.</li>
<li>Contribute to Arrow/ADBC and other open-source specifications or implementations, strengthening the data ecosystem.</li>
<li>Own CI, testing frameworks, profiling, error reporting surfaces, and release readiness for Fusion adapters.</li>
<li>Debug complex interoperability and performance issues across drivers, engines, and compute domains.</li>
<li>Collaborate with internal and community maintainers to review PRs, write RFCs, and evolve public code architectures.</li>
<li>Mentor engineers on systems best practices and contribute to shared patterns around resilience, debuggability, and API clarity.</li>
</ul>
<p><strong>You&#39;ll have an edge if you have:</strong></p>
<ul>
<li>Contributed to or interacted with Arrow, ADBC, DuckDB, Presto, DataFusion, Spark, ClickHouse, or similar engines.</li>
<li>Experience shaping adapter/plugin standards, driver contracts, or architectural interfaces used by others.</li>
<li>Familiarity with Rust async ecosystems (tokio, tower, tracing) or Go concurrency practices.</li>
<li>Prior OSS governance experience: triaging issues, reviewing PRs, or working with community maintainers.</li>
<li>An interest in building developer-experience layers or scaffolding frameworks for adapter authors.</li>
</ul>
<p><strong>Qualifications:</strong></p>
<ul>
<li>6+ years of experience in software engineering, with strong systems-level skills.</li>
<li>2+ years working in open-source, SDK, runtime, or low-level integration environments.</li>
<li>Bachelor&#39;s degree in Computer Science or a related field, or equivalent experience through industry OSS contributions.</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Rust, Go, C++, Arrow, ADBC, DuckDB, Presto, DataFusion, Spark, ClickHouse</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>dbt Labs</Employername>
      <Employerlogo>https://logos.yubhub.co/getdbt.com.png</Employerlogo>
      <Employerdescription>dbt Labs is a leading analytics engineering platform, serving over 5,400 customers and generating $100 million in annual recurring revenue.</Employerdescription>
      <Employerwebsite>https://www.getdbt.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/dbtlabsinc/jobs/4641221005</Applyto>
      <Location>India - Remote</Location>
      <Country>India</Country>
      <Postedate>2026-04-18</Postedate>
    </job>
  </jobs>
</source>