{"version":"0.1","company":{"name":"YubHub","url":"https://yubhub.co","jobsUrl":"https://yubhub.co/jobs/title/data-engineer"},"x-facet":{"type":"title","slug":"data-engineer","display":"Data Engineer","count":18},"x-feed-size-limit":100,"x-feed-sort":"enriched_at desc","x-feed-notice":"This feed contains at most 100 jobs (the most recently enriched). For the full corpus, use the paginated /stats/by-facet endpoint or /search.","x-generator":"yubhub-xml-generator","x-rights":"Free to redistribute with attribution: \"Data by YubHub (https://yubhub.co)\"","x-schema":"Each entry in `jobs` follows https://schema.org/JobPosting. YubHub-native raw fields carry `x-` prefix.","jobs":[{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_21f5f6c3-734"},"title":"Data Engineer","description":"<p>About the Role We are at a pivotal scaling point where our data ambitions have outpaced our current setup, and we need a Data Engineer to architect the professional-grade foundations of our platform.</p>\n<p>This role exists to bridge the gap between &quot;getting data&quot; and &quot;engineering data,&quot; moving us from manual syncs to a fully automated ecosystem. By building custom pipelines and implementing a robust orchestration layer, you will directly enable our Operations teams and leadership to transition from basic reporting to sophisticated, AI-ready data products.</p>\n<p>Your primary focus will be on Infrastructure-as-Code, orchestration, and building a resilient &quot;plumbing&quot; system that serves as the backbone for our entire Product and GTM strategy.</p>\n<p>Your 12-Month Journey During the first 3 months: you will learn about our existing stack (GCP, BigQuery, Airbyte, dbt) and understand the current pain points in our data flow. You will identify and execute &quot;low-hanging fruit&quot; improvements to our product usage analytics, providing immediate value to the Product and GTM teams. 
You’ll begin designing the blueprint for our custom data pipelines and the migration strategy for moving our infrastructure into Terraform.</p>\n<p>Within 6 months: You will have deployed our new orchestration layer (e.g., Airflow or Dagster) and successfully transitioned our first set of custom pipelines to production. Collaborating with the Analytics Engineer, you will enable a unified view of our customer journey by successfully merging product usage data with CRM and billing data. At this point, a significant portion of our data infrastructure will be defined as code, reducing manual overhead and increasing deployment reliability.</p>\n<p>After 1 year: you will take full strategic ownership of the data platform and its long-term architecture. You will act as the go-to technical expert for the leadership team, advising on the scalability of new data-driven features. You will lay the groundwork for AI and Machine Learning initiatives by ensuring our data warehouse has the right quality controls, governance, and low-latency access patterns in place.</p>\n<p>What You’ll Be Doing Architect Scalable Infrastructure-as-Code: Take our existing foundations to the next level by migrating all GCP and BigQuery resources into Terraform. You will establish automated CI/CD patterns to ensure our entire data environment is reproducible, version-controlled, and enterprise-ready.</p>\n<p>Deploy State-of-the-Art Pipelines: Design, deploy, and operate high-quality production ELT pipelines. You will implement a modern orchestration layer (e.g., Airflow or Dagster) to build custom Python-based integrations while maintaining and optimizing our existing syncs.</p>\n<p>Champion Data Quality &amp; Performance: Act as the guardian of our data platform. You will implement rigorous testing and monitoring protocols to ensure data is accurate and timely. 
You will proactively identify BigQuery bottlenecks, optimizing query performance and resource utilization.</p>\n<p>Technical Roadmap &amp; Ownership: scope and architect end-to-end data flows from production source to warehouse. Manage your own technical backlog, prioritizing infrastructure stability over technical debt. You will ensure platform security and SOC2 compliance through PII masking, data contracts, and robust access controls.</p>\n<p>Collaboration: You will work in a tight loop with the Analytics Engineer to turn raw data into actionable products. You will partner daily with DataOps and RevOps to understand business requirements, with occasional strategic syncs with DevOps and R&amp;D to align on production schema changes and global infrastructure standards.</p>\n<p>What You Bring Solid experience in Data Engineering, with a track record of building and evolving data ingestion infrastructure in cloud environments. The Modern Data Stack: Familiarity with dbt and Airbyte/Fivetran. You understand how these tools fit into a broader ecosystem. Expertise in BigQuery (partitioning, clustering, IAM) and the broader GCP ecosystem; Infrastructure-as-Code (Terraform). Hands-on experience with Airflow, Dagster, or similar orchestration tools. You know how to design DAGs that are resilient and easy to debug. DevOps practices in the data context: familiarity with CI/CD best practices as they apply to data (data testing, automated deployments). Programming: Expert-level Python and advanced SQL. You are comfortable writing clean, testable, and modular code. Comfortable in a fast-paced environment Project management skills: capable of managing stakeholders, explaining complicated technical trade-offs to non-technical users, and taking care of own project scoping and backlog management. 
Fluency in English, both written and spoken, at a minimum C1 level</p>\n<p>What We Offer Flexibility to work from home in the Netherlands and from our beautiful canal-side office in Amsterdam A chance to be part of and shape one of the most ambitious scale-ups in Europe Work in a diverse and multicultural team €1,500 annual training budget plus internal training Pension plan, travel reimbursement, and wellness perks 28 paid holiday days + 2 additional days to relax in 2026 Work from anywhere for 4 weeks/year An inclusive and international work environment with a whole lot of fun thrown in! Apple MacBook and tools €200 Home Office budget</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_21f5f6c3-734","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Tellent","sameAs":"https://careers.tellent.com","logo":"https://logos.yubhub.co/careers.tellent.com.png"},"x-apply-url":"https://careers.tellent.com/o/data-engineer","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":"EUR 70000–90000 / year","x-skills-required":["Data Engineering","Cloud environments","dbt","Airbyte/Fivetran","BigQuery","GCP ecosystem","Infrastructure-as-Code","Terraform","Airflow","Dagster","Python","SQL","CI/CD best practices","DevOps practices"],"x-skills-preferred":[],"datePosted":"2026-04-18T22:12:06.548Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Amsterdam"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Data Engineering, Cloud environments, dbt, Airbyte/Fivetran, BigQuery, GCP ecosystem, Infrastructure-as-Code, Terraform, Airflow, Dagster, Python, SQL, CI/CD best practices, DevOps 
practices","baseSalary":{"@type":"MonetaryAmount","currency":"EUR","value":{"@type":"QuantitativeValue","minValue":70000,"maxValue":90000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_09a4d1ce-cde"},"title":"Data Engineer","description":"<p>We are looking for an experienced Data Engineer to partner with our Data Science and Data Infrastructure teams to own and scale our data pipelines. You&#39;ll also work closely with stakeholders across business teams including sales, marketing, and finance to ensure that the data they need arrives promptly and reliably.</p>\n<p>As a Data Engineer at Figma, you will be responsible for building and maintaining scalable data pipelines that connect various cloud data sources. You will develop a deep understanding of Figma&#39;s core data models and optimize data pipelines for scale. You will partner with the Data Science and Data Infrastructure teams to build new foundational data sets that are trusted, well understood, and enable self-service.</p>\n<p>You will work with a wide range of cross-functional stakeholders to derive requirements and architect shared datasets, documenting, simplifying, and explaining complex problems for different types of audiences. 
You will establish best practices for the development of specialized data sets for analytics and modeling.</p>\n<p>We&#39;d love to hear from you if you have:</p>\n<ul>\n<li>4+ years in a relevant field.</li>\n<li>Fluency with both SQL and Python.</li>\n<li>Familiarity with Snowflake, dbt, Dagster, and ETL/reverse ETL tools.</li>\n<li>Excellent judgment and creative problem-solving skills.</li>\n<li>A self-starting mindset along with strong communication and collaboration skills.</li>\n</ul>\n<p>While not required, it&#39;s an added plus if you also have:</p>\n<ul>\n<li>Knowledge of data modeling methodologies to design and build robust data architectures for insightful analytics.</li>\n<li>Experience with business systems such as Salesforce, Customer IO, Stripe, or NetSuite.</li>\n</ul>\n<p>At Figma, one of our values is Grow as you go. We believe in hiring smart, curious people who are excited to learn and develop their skills. If you&#39;re excited about this role but your past experience doesn&#39;t align perfectly with the points outlined in the job description, we encourage you to apply anyway. 
You may be just the right candidate for this or other roles.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_09a4d1ce-cde","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Figma","sameAs":"https://www.figma.com/","logo":"https://logos.yubhub.co/figma.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/figma/jobs/5220003004","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$140,000-$348,000 USD","x-skills-required":["SQL","Python","Snowflake","dbt","Dagster","ETL/reverse ETL tools"],"x-skills-preferred":["data modeling methodologies","business systems such as Salesforce, Customer IO, Stripe, NetSuite"],"datePosted":"2026-04-18T15:51:04.727Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA • New York, NY • United States"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"SQL, Python, Snowflake, dbt, Dagster, ETL/reverse ETL tools, data modeling methodologies, business systems such as Salesforce, Customer IO, Stripe, NetSuite","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":140000,"maxValue":348000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_25f010f0-7d1"},"title":"Data Engineer","description":"<p>Why join us</p>\n<p>Brex is the intelligent finance platform that enables companies to spend smarter and move faster in more than 200 markets. 
By combining global corporate cards and banking with intuitive spend management, bill pay, and travel software, Brex enables founders and finance teams to accelerate operations, gain real-time visibility, and control spend effortlessly.</p>\n<p>Brex’s AI-native automation and world-class service eliminate manual expense and accounting tasks for customers so they can focus on what matters most. Tens of thousands of the world&#39;s best companies run on Brex, including DoorDash, Coinbase, Robinhood, Zoom, Plaid, Reddit, and SeatGeek.</p>\n<p>Working at Brex allows you to push your limits, challenge the status quo, and collaborate with some of the brightest minds in the industry. We’re committed to building a diverse team and inclusive culture and believe your potential should only be limited by how big you can dream. We make this a reality by empowering you with the tools, resources, and support you need to grow your career.</p>\n<p>Data at Brex</p>\n<p>Our Scientists and Engineers work together to make data, and insights derived from data, a core asset across Brex. But it&#39;s more than just crunching numbers. The Data team at Brex develops infrastructure, statistical models, and products using data. Our work is ingrained in Brex&#39;s decision-making process, the efficiency of our operations, our risk management policies, and the unparalleled experience we provide our customers.</p>\n<p>What You’ll Do</p>\n<p>As a Data Engineer at Brex, you will be a core contributor in transforming raw data into actionable insights for various departments across the organization. You&#39;ll collaborate closely with Data Scientists, Software Engineers, and business units to create efficient data models, pipelines, and analytics frameworks that drive the business forward. 
You also play a leading role in the design, implementation, and maintenance of Core Data tables, our high-quality, curated data source for a wide range of analytic applications.</p>\n<p>Where you’ll work</p>\n<p>This role will be based in our San Francisco office. We are a hybrid environment that combines the energy and connections of being in the office with the benefits and flexibility of working from home. We currently require a minimum of two coordinated days in the office per week, Wednesday and Thursday. Starting February 2, 2026, we will require three days per week in office - Monday, Wednesday and Thursday. As a perk, we also have up to four weeks per year of fully remote work!</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Design, build, and maintain data models and pipelines that scale with the growing number of services, products, and changes in the company.</li>\n</ul>\n<ul>\n<li>Collaborate closely with Data Scientists, Data Analysts, and Business teams to understand their data needs, translating them into robust, efficient, scalable data solutions that enable ease of predictive analytics, data analysis, and metrics formulation.</li>\n</ul>\n<ul>\n<li>Maintain data documentation and definitions, building and ensuring that source-of-truth tables remain high quality for data science and reporting applications.</li>\n</ul>\n<ul>\n<li>Develop and enable integration with various data sources, allowing for more data-driven initiatives across the company.</li>\n</ul>\n<ul>\n<li>Apply best practices in data management to ensure the reliability and robustness of data utilized across various analytics applications.</li>\n</ul>\n<ul>\n<li>Set and proliferate company-wide standards for data relating to structure, quality, and expectations.</li>\n</ul>\n<ul>\n<li>Act as a liaison between the technical and non-technical teams, bridging gaps and ensuring that data solutions align with business objectives.</li>\n</ul>\n<p>Requirements:</p>\n<ul>\n<li>3+ years of experience 
in Data Engineering, Data Analytics, or a related field such as Analytics Engineering.</li>\n</ul>\n<ul>\n<li>2+ years of experience working with modern data transformation tools like DBT.</li>\n</ul>\n<ul>\n<li>Advanced knowledge of databases and SQL with the ability to efficiently stage, process, and transform data.</li>\n</ul>\n<ul>\n<li>Experience integrating and orchestrating data workflows with various modern data tools and systems.</li>\n</ul>\n<ul>\n<li>Experience with data modeling, ETL/ELT processes, and data warehousing solutions.</li>\n</ul>\n<ul>\n<li>Experience working with a data warehouse such as Snowflake.</li>\n</ul>\n<ul>\n<li>Experience with a data workflow orchestrator tool such as Airflow.</li>\n</ul>\n<ul>\n<li>Experience with a programming language such as Python.</li>\n</ul>\n<ul>\n<li>Familiarity with BI tools such as Looker, Tableau, or similar platforms is a plus.</li>\n</ul>\n<ul>\n<li>Exceptional quantitative and analytical skills.</li>\n</ul>\n<ul>\n<li>Strong communication skills and ability to collaborate with various stakeholders, both technical and non-technical.</li>\n</ul>\n<p>Compensation:</p>\n<p>The expected salary range for this role is $120,800 - $151,000. However, the starting base pay will depend on a number of factors including the candidate’s location, skills, experience, market demands, and internal pay parity. 
Depending on the position offered, equity and other forms of compensation may be provided as part of a total compensation package.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_25f010f0-7d1","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Brex","sameAs":"https://brex.com/","logo":"https://logos.yubhub.co/brex.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/brex/jobs/8366850002","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":"$120,800 - $151,000","x-skills-required":["DBT","databases","SQL","data modeling","ETL/ELT processes","data warehousing solutions","Snowflake","Airflow","Python","BI tools","Looker","Tableau"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:46:18.514Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, California, United States"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"DBT, databases, SQL, data modeling, ETL/ELT processes, data warehousing solutions, Snowflake, Airflow, Python, BI tools, Looker, Tableau","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":120800,"maxValue":151000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_1d204fa1-067"},"title":"Data Engineer","description":"<p>Why join us</p>\n<p>Brex is the intelligent finance platform that enables companies to spend smarter and move faster in more than 200 markets. 
By combining global corporate cards and banking with intuitive spend management, bill pay, and travel software, Brex enables founders and finance teams to accelerate operations, gain real-time visibility, and control spend effortlessly.</p>\n<p>Data at Brex</p>\n<p>Our Scientists and Engineers work together to make data, and insights derived from data, a core asset across Brex. But it&#39;s more than just crunching numbers. The Data team at Brex develops infrastructure, statistical models, and products using data. Our work is ingrained in Brex&#39;s decision-making process, the efficiency of our operations, our risk management policies, and the unparalleled experience we provide our customers.</p>\n<p>What You’ll Do</p>\n<p>As a Data Engineer at Brex, you will be a core contributor in transforming raw data into actionable insights for various departments across the organization. You&#39;ll collaborate closely with Data Scientists, Software Engineers, and business units to create efficient data models, pipelines, and analytics frameworks that drive the business forward. You also play a leading role in the design, implementation, and maintenance of Core Data tables, our high-quality, curated data source for a wide range of analytic applications.</p>\n<p>Where you’ll work</p>\n<p>This role will be based in our Seattle office. We are a hybrid environment that combines the energy and connections of being in the office with the benefits and flexibility of working from home. We currently require a minimum of two coordinated days in the office per week, Wednesday and Thursday. Starting February 2, 2026, we will require three days per week in office - Monday, Wednesday and Thursday. 
As a perk, we also have up to four weeks per year of fully remote work!</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Design, build, and maintain data models and pipelines that scale with the growing number of services, products, and changes in the company.</li>\n</ul>\n<ul>\n<li>Collaborate closely with Data Scientists, Data Analysts, and Business teams to understand their data needs, translating them into robust, efficient, scalable data solutions that enable ease of predictive analytics, data analysis, and metrics formulation.</li>\n</ul>\n<ul>\n<li>Maintain data documentation and definitions, building and ensuring that source-of-truth tables remain high quality for data science and reporting applications.</li>\n</ul>\n<ul>\n<li>Develop and enable integration with various data sources, allowing for more data-driven initiatives across the company.</li>\n</ul>\n<ul>\n<li>Apply best practices in data management to ensure the reliability and robustness of data utilized across various analytics applications.</li>\n</ul>\n<ul>\n<li>Set and proliferate company-wide standards for data relating to structure, quality, and expectations.</li>\n</ul>\n<ul>\n<li>Act as a liaison between the technical and non-technical teams, bridging gaps and ensuring that data solutions align with business objectives.</li>\n</ul>\n<p>Requirements:</p>\n<ul>\n<li>3+ years of experience in Data Engineering, Data Analytics, or a related field such as Analytics Engineering.</li>\n</ul>\n<ul>\n<li>2+ years of experience working with modern data transformation tools like DBT.</li>\n</ul>\n<ul>\n<li>Advanced knowledge of databases and SQL with the ability to efficiently stage, process, and transform data.</li>\n</ul>\n<ul>\n<li>Experience integrating and orchestrating data workflows with various modern data tools and systems.</li>\n</ul>\n<ul>\n<li>Experience with data modeling, ETL/ELT processes, and data warehousing solutions.</li>\n</ul>\n<ul>\n<li>Experience working with a data warehouse such as 
Snowflake.</li>\n</ul>\n<ul>\n<li>Experience with a data workflow orchestrator tool such as Airflow.</li>\n</ul>\n<ul>\n<li>Experience with a programming language such as Python.</li>\n</ul>\n<ul>\n<li>Familiarity with BI tools such as Looker, Tableau, or similar platforms is a plus.</li>\n</ul>\n<ul>\n<li>Exceptional quantitative and analytical skills.</li>\n</ul>\n<ul>\n<li>Strong communication skills and ability to collaborate with various stakeholders, both technical and non-technical.</li>\n</ul>\n<p>Compensation:</p>\n<p>The expected salary range for this role is $120,800 - $151,000. However, the starting base pay will depend on a number of factors including the candidate’s location, skills, experience, market demands, and internal pay parity. Depending on the position offered, equity and other forms of compensation may be provided as part of a total compensation package.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_1d204fa1-067","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Brex","sameAs":"https://brex.com/","logo":"https://logos.yubhub.co/brex.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/brex/jobs/8510493002","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":"$120,800 - $151,000","x-skills-required":["DBT","databases","SQL","data modeling","ETL/ELT processes","data warehousing solutions","Snowflake","Airflow","Python","BI tools","Looker","Tableau"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:46:02.393Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Seattle, Washington, United States"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Finance","skills":"DBT, databases, SQL, data modeling, ETL/ELT processes, data warehousing solutions, Snowflake, Airflow, Python, BI tools, 
Looker, Tableau","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":120800,"maxValue":151000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_5ec63ea6-5a3"},"title":"Data Engineer","description":"<p>At Neighbor, we&#39;re building the largest hyperlocal marketplace the world has ever seen. As a Data Engineer, you will be the core engineering resource responsible for building, scaling, and optimizing the data infrastructure that transforms raw events into high-fidelity, actionable intelligence.</p>\n<p>This engineering resource will be the cornerstone of our data infrastructure, responsible for extraction, transform, and load of the data that powers our nation-wide, best-in-class marketplace. By implementing software engineering best practices and scalable solutions, this role is critical in empowering the CEO, executive team, managers, and individual contributors with the robust and trustworthy intelligence needed to scale and innovate across our marketplace.</p>\n<p><strong>Primary Responsibilities</strong></p>\n<ul>\n<li>Design, implement, and maintain scalable data transformation layers and code-first orchestration frameworks to ensure the delivery of high-fidelity, reusable data models</li>\n<li>Design and build robust pipelines to ingest data from diverse sources (APIs, logs, relational DBs)</li>\n<li>Ensure the reliable and timely execution of all critical data pipelines (ETLs/ELTs) to maintain data integrity and freshness</li>\n<li>Standardize analytics workflows by integrating software engineering best practices, including version control, CI/CD pipelines, and automated data validation protocols</li>\n<li>Develop and refine a robust semantic layer to facilitate self-service analytics, enabling stakeholders to derive insights without exposure to underlying architectural complexities</li>\n<li>Monitor and optimize 
cloud compute utilization and data model performance to ensure high availability and low-latency reporting during periods of rapid data scaling</li>\n<li>Serve as a strategic technical partner to leadership across Product, Engineering, Marketing, and Finance to align data infrastructure with organizational objectives</li>\n<li>Become a subject matter expert on the product ecosystem, user behavior, and marketing life cycles to better translate raw data into business value</li>\n<li>Serve as a versatile technical resource capable of stepping into the Data Analyst capacity when necessary, performing deep-dive quantitative analysis and building sophisticated visualizations to support executive decision-making</li>\n<li>Mentor the data analytics team on advanced technical methodologies to foster a culture of engineering excellence and data autonomy</li>\n</ul>\n<p><strong>Qualifications</strong></p>\n<ul>\n<li>3+ years of experience in data engineering or analytics engineering</li>\n<li>Bachelor&#39;s degree in quantitative and/or technical fields (Math, Physics, Statistics, Economics, Computer Science, Engineering, etc.) 
OR 5+ years work experience as a Data Engineer</li>\n<li>Expert-level mastery of SQL, with the ability to write, tune, and optimize complex queries for high-volume environments</li>\n<li>Strong command of at least one major programming language used for data processing</li>\n<li>Hands-on experience designing and maintaining data lakes or cloud-based data warehouses</li>\n<li>Deep understanding of data integration patterns, including data ingestion, transformation, and automated cleansing (ETL/ELT)</li>\n<li>Experience applying scientific, mathematical, or statistical techniques to analyze data and build predictive models</li>\n<li>Advanced ability to translate complex datasets into actionable narratives using modern business intelligence and reporting tools</li>\n<li>A proven track record of using quantitative analysis to solve ambiguous problems and drive strategic decision-making in a fast-paced environment</li>\n<li>Exceptional ability to collaborate with non-technical stakeholders, translating business requirements into technical specs and vice versa</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<ul>\n<li>Generous Stock options</li>\n<li>Medical, dental, and vision insurance</li>\n<li>Generous PTO</li>\n<li>11 paid company holidays</li>\n<li>Hybrid work model - WFH every Monday</li>\n<li>401(k) plan</li>\n<li>Infant care leave</li>\n<li>On-site gym/showers open 24/7</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_5ec63ea6-5a3","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Neighbor","sameAs":"https://neighbor.com","logo":"https://logos.yubhub.co/neighbor.com.png"},"x-apply-url":"https://jobs.lever.co/neighbor/da1304b7-89ad-4ac0-99e8-9c0cf8284f1c","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["SQL","Programming languages","Data 
lakes","Cloud-based data warehouses","Data integration patterns","Scientific, mathematical, or statistical techniques","Business intelligence and reporting tools"],"x-skills-preferred":[],"datePosted":"2026-04-17T12:48:23.740Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"U.S."}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"SQL, Programming languages, Data lakes, Cloud-based data warehouses, Data integration patterns, Scientific, mathematical, or statistical techniques, Business intelligence and reporting tools"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_58df2f04-af4"},"title":"Data Engineer","description":"<p>We are looking for a Data Engineer to join our Data Platform team to partner with our product and business stakeholders across risk, operations, and other domains. As a Data Engineer, you will be responsible for building robust data pipelines and engineering foundations by ingesting data from disparate sources, ensuring data quality and consistency, and enabling better business decisions through reliable data infrastructure across core product areas.</p>\n<p>Your primary focus will be on building scalable data pipelines using Airflow to orchestrate data workflows that ingest, transform, and deliver data from various sources into Snowflake and Databricks. You will also design and implement data models in Snowflake that support analytics, reporting, and ML use cases with a focus on performance, reliability, and scalability.</p>\n<p>In addition, you will develop infrastructure as code using Terraform to automate and manage cloud resources in AWS, ensuring consistent and reproducible deployments. 
You will monitor data pipeline health and implement data quality checks to ensure accuracy, completeness, and timeliness of data as business needs evolve.</p>\n<p>You will also optimize data processing workflows to improve performance, reduce costs, and handle growing data volumes efficiently. Troubleshooting and resolving data pipeline issues, working through ambiguity to get to the root cause and implementing long-term fixes will be a key part of your role.</p>\n<p>As a Data Engineer, you will bridge gaps between data and the business by working with cross-functional teams across the US and India office to understand requirements and translate them into robust technical solutions. You will create comprehensive documentation on data pipelines, data models, and infrastructure, keeping documentation up to date and facilitating knowledge transfer across the team.</p>\n<p><strong>Requirements:</strong></p>\n<ul>\n<li>2+ years of data engineering experience with strong technical skills and the ability to architect scalable data solutions.</li>\n</ul>\n<ul>\n<li>Hands-on experience with Python for data processing, automation, and building data pipelines.</li>\n</ul>\n<ul>\n<li>Proficiency with workflow orchestration tools, preferably Airflow, including DAG development, task dependencies, and monitoring.</li>\n</ul>\n<ul>\n<li>Strong SQL skills and experience with cloud data warehouses like Snowflake, including performance optimization and data modeling.</li>\n</ul>\n<ul>\n<li>Experience with cloud platforms, preferably AWS (S3, Lambda, EC2, IAM, etc.), and understanding of cloud-based data architectures.</li>\n</ul>\n<ul>\n<li>Experience working cross-functionally with data analysts, analytics engineers, data scientists, and business stakeholders to understand requirements and deliver solutions.</li>\n</ul>\n<ul>\n<li>An ownership mentality – this engineer will be responsible for the reliability and performance of their data pipelines and expected to fully understand 
data flows, dependencies, and their implications on downstream users.</li>\n</ul>\n<p><strong>Nice to have:</strong></p>\n<ul>\n<li>Experience with dbt for transformation logic and analytics engineering workflows integrated with data pipelines.</li>\n</ul>\n<ul>\n<li>Familiarity with Databricks for large-scale data processing, including Spark optimization and Delta Lake.</li>\n</ul>\n<ul>\n<li>Experience with Infrastructure as Code (IaC) tools like Terraform for managing cloud resources and data infrastructure.</li>\n</ul>\n<ul>\n<li>Knowledge of data modeling concepts (e.g., dimensional modeling, star/snowflake schemas, slowly changing dimensions).</li>\n</ul>\n<ul>\n<li>Experience with CI/CD practices for data pipelines and automated testing frameworks.</li>\n</ul>\n<ul>\n<li>Experience with streaming data and real-time processing frameworks</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_58df2f04-af4","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Greenlight","sameAs":"https://www.greenlight.com/","logo":"https://logos.yubhub.co/greenlight.com.png"},"x-apply-url":"https://jobs.lever.co/greenlight/e98d9733-8b8c-4ce4-997d-6cf14e35b2f3","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Airflow","Python","SQL","Snowflake","Databricks","AWS","Terraform","data engineering","data pipelines","data modeling"],"x-skills-preferred":["dbt","Infrastructure as Code","CI/CD","streaming data","real-time processing"],"datePosted":"2026-04-17T12:36:30.660Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Bengaluru"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Finance","skills":"Airflow, Python, SQL, Snowflake, Databricks, AWS, Terraform, data engineering, data pipelines, data 
modeling, dbt, Infrastructure as Code, CI/CD, streaming data, real-time processing"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_51fb35f8-ae2"},"title":"Data Engineer","description":"<p>We are seeking an experienced Data Engineer to join our team. As a Data Engineer, you will be responsible for designing, developing, and maintaining large-scale data systems and pipelines. You will work closely with cross-functional teams to ensure seamless integration with existing systems and to drive business growth through data-driven insights.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Design and develop scalable data architectures using cloud-based technologies such as AWS and Azure</li>\n<li>Develop and maintain ETL processes to extract, transform, and load data from various sources</li>\n<li>Collaborate with data scientists to develop and deploy machine learning models</li>\n<li>Ensure data quality, security, and compliance with regulatory requirements</li>\n<li>Work with stakeholders to identify business needs and develop data solutions</li>\n</ul>\n<p>Requirements:</p>\n<ul>\n<li>Bachelor&#39;s degree in Computer Science, Engineering, or related field</li>\n<li>3+ years of experience in data engineering or a related field</li>\n<li>Strong understanding of data architecture, design patterns, and best practices</li>\n<li>Experience with cloud-based technologies such as AWS and Azure</li>\n<li>Proficiency in programming languages such as Python, Java, or C++</li>\n<li>Excellent problem-solving skills and attention to detail</li>\n</ul>\n<p>Preferred Qualifications:</p>\n<ul>\n<li>Master&#39;s degree in Computer Science, Engineering, or related field</li>\n<li>Experience with big data technologies such as Hadoop, Spark, or NoSQL databases</li>\n<li>Familiarity with data visualization tools such as Tableau, Power BI, or D3.js</li>\n<li>Certification in data engineering or a related 
field</li>\n</ul>\n<p>Benefits:</p>\n<ul>\n<li>Competitive salary and benefits package</li>\n<li>Opportunity to work with a leading technology business</li>\n<li>Collaborative and dynamic work environment</li>\n<li>Professional development opportunities</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_51fb35f8-ae2","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Williams Advanced Engineering","sameAs":"https://www.williamsadvancedengineering.com/","logo":"https://logos.yubhub.co/williamsadvancedengineering.com.png"},"x-apply-url":"https://careers.williamsf1.com/job/trackside-operations-lead-hospitality-in-london-jid-494","x-work-arrangement":"onsite","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["AWS","Azure","Python","Java","C++","ETL","data architecture","data design patterns","data quality","data security","regulatory compliance"],"x-skills-preferred":["Hadoop","Spark","NoSQL databases","Tableau","Power BI","D3.js"],"datePosted":"2026-03-12T12:01:28.538Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Grove"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"AWS, Azure, Python, Java, C++, ETL, data architecture, data design patterns, data quality, data security, regulatory compliance, Hadoop, Spark, NoSQL databases, Tableau, Power BI, D3.js"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_da3bf72e-353"},"title":"Data Engineer","description":"<p><strong>Data Engineer at Quantexa</strong></p>\n<p><strong>What we&#39;re all about.</strong></p>\n<p>It isn&#39;t often you get to be part of a tech company that has been innovating the data analytics market in ways no-one else can. 
Our technology started out in FinTech, helping tackle serious criminal activity. Now, its potential is virtually limitless. Working at Quantexa isn&#39;t just intellectually stimulating. We&#39;re a real team. Collaborating and constantly engineering better and better solutions. We&#39;re ambitious, we think things through and we&#39;re on a mission to discover just how far we can go.</p>\n<p><strong>The opportunity.</strong></p>\n<p>Our Quantexa Delivery team is all about contextualizing data. As a data engineer, you bring it all together. Working within a fast-paced team, you&#39;ll implement Quantexa&#39;s innovative technology for an ever-expanding list of domains including banking, insurance, government, and healthcare. From building an end-to-end data pipeline that uses our award-winning software, to configuring our decision-making platform to detect key insights, there&#39;s always a new challenge around the corner.</p>\n<p><strong>What you&#39;ll be doing.</strong></p>\n<ul>\n<li>Writing defensive, fault-tolerant and efficient code for production-level data processing systems.</li>\n<li>Configuring and deploying Quantexa software using tools such as Spark, Hadoop, Scala, Elasticsearch, with our platform being hosted on both private and public virtual clouds, such as Google Cloud, Microsoft Azure, and Amazon.</li>\n<li>You&#39;ll be a trusted source of knowledge for your clients. And you&#39;ll articulate technical concepts to a non-technical audience so they can make key decisions.</li>\n<li>Collaborate with both our solution architects and our R&amp;D engineers to champion solutions and standards for complex big data challenges. 
You proactively promote knowledge sharing and ensure best practice is followed.</li>\n</ul>\n<p><strong>Requirements</strong></p>\n<p><strong>What you&#39;ll bring.</strong></p>\n<ul>\n<li>You&#39;ll have a background in hands-on technical development, with at least 18 months of industry experience in a data engineering role or equivalent, and preferably some software industry experience.</li>\n<li>Proficiency in Scala, Java, Python, or a programming language associated with data engineering. Our primary language is Scala, but don&#39;t worry if that&#39;s not currently your strongest language. We believe that strong engineering principles are universal and transferable.</li>\n<li>As an expert in building and deploying production-level data processing batch systems, you&#39;ll share an appreciation of what makes a high-quality, operationally stable system and how to streamline all areas of development, release, and operations to achieve this.</li>\n<li>Experience with a variety of modern development tooling (e.g. Git, Gradle, Nexus) and technologies supporting automation and DevOps (e.g. Jenkins, Docker and a little bit of good old Bash scripting). You&#39;ll be familiar with developing within a version-controlled process that regularly makes use of these tools and technologies.</li>\n<li>A strong technical communication ability with demonstrable experience of working in rapidly changing client environments.</li>\n<li>Knowledge of testing libraries of common programming languages (such as ScalaTest or equivalent). 
Importantly, you&#39;ll know the difference between varying test types (unit test, integration test) and can cite specific examples of tests you have written yourself.</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<p><strong>Our perks and quirks.</strong></p>\n<p>What makes you Q will help you to realize your full potential, flourish and enjoy what you do, while being recognized and rewarded with our broad range of benefits.</p>\n<ul>\n<li>Competitive salary</li>\n<li>Company bonus</li>\n<li>Annual leave, plus national holidays + your birthday off!</li>\n<li>Regularly bench-marked salary rates</li>\n<li>Well-being days</li>\n<li>Volunteer Day off</li>\n<li>Work from Home Equipment</li>\n<li>Free Calm App Subscription: the #1 app for meditation, relaxation and sleep</li>\n<li>Continuous Training and Development, including access to Udemy Business</li>\n<li>Spend up to 2 months working outside of your country of employment over a rolling 12-month period with our &#39;Work from Anywhere&#39; policy</li>\n<li>Employee Referral Program</li>\n<li>Team Social Budget &amp; Company-wide Socials</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_da3bf72e-353","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Quantexa","sameAs":"https://jobs.workable.com","logo":"https://logos.yubhub.co/view.com.png"},"x-apply-url":"https://jobs.workable.com/view/jUWNyFSzoRT8M2oK3WR2cQ/hybrid-data-engineer-in-kuala-lumpur-at-quantexa","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Scala","Java","Python","Spark","Hadoop","Elasticsearch","Git","Gradle","Nexus","Jenkins","Docker","Bash scripting"],"x-skills-preferred":["Scala","Java","Python"],"datePosted":"2026-03-09T17:06:37.471Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Kuala 
Lumpur"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Scala, Java, Python, Spark, Hadoop, Elasticsearch, Git, Gradle, Nexus, Jenkins, Docker, Bash scripting, Scala, Java, Python"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_90297dff-291"},"title":"Data Engineer","description":"<p><strong>Data Engineer at Quantexa</strong></p>\n<p><strong>What we&#39;re all about.</strong></p>\n<p>It isn&#39;t often you get to be part of a tech company that has been innovating the data analytics market in ways no-one else can. Our technology started out in FinTech, helping tackle serious criminal activity. Now, its potential is virtually limitless. Working at Quantexa isn&#39;t just intellectually stimulating. We&#39;re a real team. Collaborating and constantly engineering better and better solutions. We&#39;re ambitious, we think things through and we&#39;re on a mission to discover just how far we can go.</p>\n<p><strong>The opportunity.</strong></p>\n<p>Our Quantexa Delivery team is all about contextualizing data. As a data engineer, you bring it all together. Working within a fast-paced team, you&#39;ll implement Quantexa&#39;s innovative technology for an ever-expanding list of domains including banking, insurance, government, healthcare. 
From building an end-to-end data pipeline that uses our award-winning software, to configuring our decision-making platform to detect key insights, there&#39;s always a new challenge around the corner.</p>\n<p><strong>What you&#39;ll be doing.</strong></p>\n<ul>\n<li>Writing defensive, fault-tolerant and efficient code for production-level data processing systems.</li>\n<li>Configuring and deploying Quantexa software using tools such as Spark, Hadoop, Scala, Elasticsearch, with our platform being hosted on both private and public virtual clouds, such as Google Cloud, Microsoft Azure, and Amazon.</li>\n<li>You&#39;ll be a trusted source of knowledge for your clients. And you&#39;ll articulate technical concepts to a non-technical audience so they can make key decisions.</li>\n<li>Collaborate with both our solution architects and our R&amp;D engineers to champion solutions and standards for complex big data challenges. You proactively promote knowledge sharing and ensure best practice is followed.</li>\n</ul>\n<p><strong>Requirements</strong></p>\n<p><strong>What you&#39;ll bring.</strong></p>\n<ul>\n<li>You&#39;ll have a background in hands-on technical development, with at least 18 months of industry experience in a data engineering role or equivalent, and preferably some software industry experience.</li>\n<li>Proficiency in Scala, Java, Python, or a programming language associated with data engineering. Our primary language is Scala, but don&#39;t worry if that&#39;s not currently your strongest language. We believe that strong engineering principles are universal and transferable.</li>\n<li>As an expert in building and deploying production-level data processing batch systems, you&#39;ll share an appreciation of what makes a high-quality, operationally stable system and how to streamline all areas of development, release, and operations to achieve this.</li>\n<li>Experience with a variety of modern development tooling (e.g. 
Git, Gradle, Nexus) and technologies supporting automation and DevOps (e.g. Jenkins, Docker and a little bit of good old Bash scripting). You&#39;ll be familiar with developing within a version-controlled process that regularly makes use of these tools and technologies.</li>\n<li>A strong technical communication ability with demonstrable experience of working in rapidly changing client environments.</li>\n<li>Knowledge of testing libraries of common programming languages (such as ScalaTest or equivalent). Importantly, you&#39;ll know the difference between varying test types (unit test, integration test) and can cite specific examples of tests you have written yourself.</li>\n<li>Due to the nature of our client projects, candidates are required to be native or fluent in either French or Dutch.</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<p><strong>Our perks and quirks.</strong></p>\n<p>What makes you Q will help you to realize your full potential, flourish and enjoy what you do, while being recognized and rewarded with our broad range of benefits.</p>\n<p><strong>We offer:</strong></p>\n<ul>\n<li>Competitive salary</li>\n<li>Company bonus</li>\n<li>20 days annual leave (if you worked the previous year January – December), 12 compensation days, plus national holidays + your birthday off!</li>\n<li>Pension scheme</li>\n<li>Private Healthcare with DKV</li>\n<li>Death in Service and Income Protection</li>\n<li>Work from Home Allowance</li>\n<li>Eco Vouchers</li>\n<li>Meal Vouchers</li>\n<li>Free Calm App Subscription: the #1 app for meditation, relaxation and sleep</li>\n<li>Continuous Training and Development, including access to Udemy Business</li>\n<li>Spend up to 2 months working outside of your country of employment over a rolling 12-month period with our ‘Work from Anywhere’ policy</li>\n<li>Employee Referral Program</li>\n<li>Team Social Budget &amp; Company-wide Socials</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping 
automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_90297dff-291","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Quantexa","sameAs":"https://jobs.workable.com","logo":"https://logos.yubhub.co/view.com.png"},"x-apply-url":"https://jobs.workable.com/view/bU9LVK3n4PCQuGu6MtoceK/hybrid-data-engineer-in-brussels-at-quantexa","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Scala","Java","Python","Spark","Hadoop","Elasticsearch","Git","Gradle","Nexus","Jenkins","Docker","Bash scripting","ScalaTest"],"x-skills-preferred":[],"datePosted":"2026-03-09T17:02:53.548Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Brussels"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Scala, Java, Python, Spark, Hadoop, Elasticsearch, Git, Gradle, Nexus, Jenkins, Docker, Bash scripting, ScalaTest"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_c54b4db0-c3a"},"title":"Data Engineer","description":"<p><strong>Data Engineer at Quantexa</strong></p>\n<p><strong>What we&#39;re all about.</strong></p>\n<p>It isn&#39;t often you get to be part of a tech company that has been innovating the data analytics market in ways no-one else can. Our technology started out in FinTech, helping tackle serious criminal activity. Now, its potential is virtually limitless. Working at Quantexa isn&#39;t just intellectually stimulating. We&#39;re a real team. Collaborating and constantly engineering better and better solutions. We&#39;re ambitious, we think things through and we&#39;re on a mission to discover just how far we can go.</p>\n<p><strong>The opportunity.</strong></p>\n<p>Our Quantexa Delivery team is all about contextualizing data. As a Data Engineer, you bring it all together. 
Working within a fast-paced team, you&#39;ll implement Quantexa&#39;s innovative technology for an ever-expanding list of domains including banking, insurance, government, and healthcare. From building an end-to-end data pipeline that uses our award-winning software, to configuring our decision-making platform to detect key insights, there&#39;s always a new challenge around the corner.</p>\n<p><strong>What you&#39;ll be doing.</strong></p>\n<ul>\n<li>Writing defensive, fault-tolerant and efficient code for production-level data processing systems.</li>\n<li>Configuring and deploying Quantexa software using tools such as Spark, Hadoop, Scala, Elasticsearch, with our platform being hosted on both private and public virtual clouds, such as Google Cloud, Microsoft Azure, and Amazon.</li>\n<li>You&#39;ll be a trusted source of knowledge for your clients. And you&#39;ll articulate technical concepts to a non-technical audience so they can make key decisions.</li>\n<li>Collaborate with both our solution architects and our R&amp;D engineers to champion solutions and standards for complex big data challenges. 
You proactively promote knowledge sharing and ensure best practice is followed.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_c54b4db0-c3a","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Quantexa","sameAs":"https://jobs.workable.com","logo":"https://logos.yubhub.co/view.com.png"},"x-apply-url":"https://jobs.workable.com/view/eBP5YPZrqR6AJqma3gpwhQ/hybrid-data-engineer-in-tokyo-at-quantexa","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Scala","Java","Python","Spark","Hadoop","Elasticsearch","Google Cloud","Microsoft Azure","Amazon"],"x-skills-preferred":["Git","Gradle","Nexus","Jenkins","Docker","Bash scripting"],"datePosted":"2026-03-09T17:02:29.472Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Tokyo"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Scala, Java, Python, Spark, Hadoop, Elasticsearch, Google Cloud, Microsoft Azure, Amazon, Git, Gradle, Nexus, Jenkins, Docker, Bash scripting"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_6d5e164b-74d"},"title":"Data Engineer","description":"<p><strong>Data Engineer</strong></p>\n<p>Are you ready to contribute to the evolution of our data pipelines for our B2C division? We are transforming our data-driven decision-making processes and we are looking for a passionate and experienced Data Engineer to join us. This is an exciting opportunity for someone who thrives in a creative environment and enjoys solving complex data challenges. 
You&#39;ll report into the Lead Data Engineer for this position and sit within the wider Data Engineering team.</p>\n<p>The Data &amp; Business Intelligence team guides our organisation to become more data-driven. Our responsiveness to market changes gives us a competitive edge. By ensuring visibility of objective performance data, we empower our teams to make rapid, informed decisions that enhance overall performance.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Maintain new/current features of the data platform.</li>\n<li>Responsible for delivery of development projects.</li>\n<li>Utilise established software engineering practices and principles.</li>\n<li>Take ownership of BAU processes, develop area-specific domain mastery.</li>\n<li>Ensure compliance requirements are followed.</li>\n<li>Utilise CI/CD and infrastructure as code (Terraform) for rapid deployment of changes.</li>\n</ul>\n<p><strong>Experience</strong></p>\n<ul>\n<li>Experience using Python on Google Cloud Platform for Big Data projects, BigQuery, DataFlow (Apache Beam), Cloud Run Functions, Cloud Run, Cloud Workflows, Cloud Composer.</li>\n<li>SQL development skills.</li>\n<li>Demonstrated strength in data modelling, ETL development, and data warehousing.</li>\n<li>Knowledge of data management fundamentals and data storage principles.</li>\n<li>Familiarity with statistical models or data mining algorithms and practical experience applying these to business problems.</li>\n</ul>\n<p><strong>What&#39;s in it for you</strong></p>\n<p>The expected range for this role is £45,000 - £50,000. This is a Hybrid role from our Bath Office, working three days from the office, two from home. 
Plus more great perks, which include:</p>\n<ul>\n<li>Uncapped leave, because we trust you to manage your workload and time.</li>\n<li>When we hit our targets, enjoy a share of our profits with a bonus.</li>\n<li>Refer a friend and get rewarded when they join Future.</li>\n<li>Wellbeing support with access to our Colleague Assistance Programmes.</li>\n<li>Opportunity to purchase shares in Future, with our Share Incentive Plan.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_6d5e164b-74d","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Future","sameAs":"https://apply.workable.com","logo":"https://logos.yubhub.co/j.com.png"},"x-apply-url":"https://apply.workable.com/j/BDB1B6F4CF","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":"£45,000 - £50,000","x-skills-required":["Python","Google Cloud Platform","BigQuery","DataFlow","Apache Beam","Cloud Run Functions","Cloud Run","Cloud Workflows","Cloud Composer","SQL","data modelling","ETL development","data warehousing","data management fundamentals","data storage principles","statistical models","data mining algorithms"],"x-skills-preferred":[],"datePosted":"2026-03-09T16:19:49.877Z","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, Google Cloud Platform, BigQuery, DataFlow, Apache Beam, Cloud Run Functions, Cloud Run, Cloud Workflows, Cloud Composer, SQL, data modelling, ETL development, data warehousing, data management fundamentals, data storage principles, statistical models, data mining 
algorithms","baseSalary":{"@type":"MonetaryAmount","currency":"GBP","value":{"@type":"QuantitativeValue","minValue":45000,"maxValue":50000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_ecdc5591-27d"},"title":"Data Engineer","description":"<p>We are seeking a highly skilled Data Engineer to join our team. As a Data Engineer, you will play a key role in the development and maintenance of our data infrastructure, ensuring that our data is accurate, reliable, and secure.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Design, develop, and maintain data pipelines and architectures to support our data-driven decision-making processes</li>\n<li>Collaborate with our data scientists and analysts to understand their data requirements and develop solutions to meet those needs</li>\n<li>Work closely with our IT team to ensure that our data systems are integrated with our existing infrastructure</li>\n<li>Develop and maintain data quality and governance processes to ensure that our data is accurate and reliable</li>\n<li>Participate in the development and maintenance of our data architecture roadmap</li>\n</ul>\n<p><strong>Requirements</strong></p>\n<ul>\n<li>Bachelor&#39;s degree in Computer Science, Mathematics, or a related field</li>\n<li>2+ years of experience in data engineering or a related field</li>\n<li>Strong understanding of data engineering principles and practices</li>\n<li>Experience with data warehousing and business intelligence tools</li>\n<li>Strong programming skills in languages such as Python, Java, or C++</li>\n<li>Experience with cloud-based data platforms such as AWS or GCP</li>\n<li>Strong communication and collaboration skills</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<ul>\n<li>Competitive salary and benefits package</li>\n<li>Opportunity to work with a leading Formula One racing team</li>\n<li>Collaborative and dynamic work 
environment</li>\n<li>Professional development and growth opportunities</li>\n<li>Access to state-of-the-art technology and tools</li>\n<li>Flexible working hours and remote work options</li>\n</ul>\n<p>Note: The salary range for this position is competitive and will be discussed during the interview process.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_ecdc5591-27d","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Williams Racing","sameAs":"https://careers.williamsf1.com","logo":"https://logos.yubhub.co/careers.williamsf1.com.png"},"x-apply-url":"https://careers.williamsf1.com/job/trackside-operations-lead-hospitality-in-london-jid-487","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":"Competitive and will be discussed during the interview process","x-skills-required":["data engineering","data warehousing","business intelligence","Python","Java","C++","AWS","GCP"],"x-skills-preferred":["cloud computing","data architecture","data governance"],"datePosted":"2026-03-09T11:13:44.313Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Grove"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Motorsport","skills":"data engineering, data warehousing, business intelligence, Python, Java, C++, AWS, GCP, cloud computing, data architecture, data governance"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_67298629-048"},"title":"Data Engineer","description":"<p><strong>Tasks</strong></p>\n<p>The Data Engineer role at Porsche Engineering Romania is essential to digitalization transformation, providing the technical foundation for AI solutions. 
Working in a SAFe environment, the role supports digital transformation by designing data models, building reliable data pipelines, and ensuring high data quality.</p>\n<p>You will design and implement application data models, integrate and process data from various sources, develop automated data extraction, transformation, and integration pipelines, ensure data quality and lineage, and orchestrate data architectures for cross-functional teams. You will collaborate with international project teams.</p>\n<p><strong>Qualifications</strong></p>\n<ul>\n<li>You have a Bachelor’s or Master’s degree in Information Technology or an equivalent education.</li>\n<li>You have experience with SQL.</li>\n<li>You have knowledge of software development for data pipelines and visualization.</li>\n<li>You are familiar with data management and pipeline deployment.</li>\n<li>You have programming experience (Python).</li>\n<li>You are interested in building cloud-based data transformation pipelines.</li>\n<li>You want to develop data-lake platform skills (Azure Databricks).</li>\n<li>You have knowledge of CI/CD and containerization tools (e.g., GitHub Actions).</li>\n<li>You speak English fluently; knowledge of German is considered a plus.</li>\n<li>You’re a team player with a service-oriented working style, and as a self-confident person, you always have an eye for the essentials.</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<ul>\n<li>Knowledge-Sharing: Working with new technologies and employing innovative methodologies has always been a part of our day-to-day operations; therefore, knowledge sharing is an essential part of our work culture. 
We regularly organize internal technical meetups to build bridges between departments, share best practices, and learn about new ways of thinking.</li>\n<li>Collaboration with Universities and Master’s Program: On average, our company offers 12 master&#39;s thesis topics per year, based on allocation interviews for the top-performing students, and 10 Porsche Engineering scholarships for master&#39;s program students with outstanding technical and soft skills.</li>\n<li>Expanding our Know-How: It is important to stay ahead of technological trends in order to respond to ever-changing consumer needs, such as making cars safer and more enjoyable to drive. Based on project demands, we provide internal and external soft and technical skill trainings to meet these requirements.</li>\n<li>Performance Running in our Blood: We are part of the Porsche family that specializes in high-performance sports cars, so getting involved in various sporting events and supporting our employees in their quest to become better athletes comes naturally to us.</li>\n<li>Community Support: As part of a very active social community, we take our commitment seriously when it comes to supporting programs and initiatives that make lives better and provide renewed opportunities for children and adults alike. 
Giving back to the community, whether through donations or employee volunteering, through the most appropriate social causes that reflect our values and culture, is our primary focus.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_67298629-048","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Porsche Engineering Romania","sameAs":"https://jobs.porsche.com","logo":"https://logos.yubhub.co/jobs.porsche.com.png"},"x-apply-url":"https://jobs.porsche.com/index.php?ac=jobad&id=19822","x-work-arrangement":"onsite","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["SQL","Python","Azure Databricks","GitHub Actions","CI/CD","containerization"],"x-skills-preferred":["cloud-based data transformation pipelines","data-lake platforms"],"datePosted":"2026-03-09T11:10:48.311Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Cluj-Napoca"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Automotive","skills":"SQL, Python, Azure Databricks, GitHub Actions, CI/CD, containerization, cloud-based data transformation pipelines, data-lake platforms"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_b017ec97-c21"},"title":"Data Engineer","description":"<p><strong>Data Engineer Role</strong></p>\n<p>The Data Engineer role at Porsche Engineering Romania is essential to digitalization transformation, providing the technical foundation for AI solutions. 
Working in a SAFe environment, the role supports digital transformation by designing data models, building reliable data pipelines, and ensuring high data quality.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Design and implement application data models</li>\n<li>Integrate and process data from various sources</li>\n<li>Develop automated data extraction, transformation, and integration pipelines</li>\n<li>Ensure data quality and lineage</li>\n<li>Orchestrate data architectures for cross-functional teams</li>\n<li>Collaborate with international project teams</li>\n</ul>\n<p><strong>Qualifications</strong></p>\n<ul>\n<li>Bachelor’s or Master’s degree in Information Technology or an equivalent education</li>\n<li>Experience with SQL</li>\n<li>Knowledge of software development for data pipelines and visualization</li>\n<li>Familiarity with data management and pipeline deployment</li>\n<li>Programming experience (Python)</li>\n<li>Interest in building cloud-based data transformation pipelines</li>\n<li>Skills in developing data-lake platforms (Azure Databricks)</li>\n<li>Knowledge of CI/CD and containerization tools (e.g., GitHub Actions)</li>\n<li>Fluent English; knowledge of German is considered a plus</li>\n<li>Team player with a service-oriented working style and the self-confidence to keep an eye on the essentials</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<ul>\n<li>Knowledge-Sharing: Working with new technologies and employing innovative methodologies has always been part of our day-to-day operations; knowledge sharing is therefore an essential part of our work culture.</li>\n<li>Collaboration with Universities and Master’s Program: On average, our company offers 12 master&#39;s thesis topics per year, based on allocation interviews for the top-performing students, and 10 Porsche Engineering scholarships for master&#39;s program students with outstanding technical and soft skills.</li>\n<li>Expanding our Know-How: It is 
important to stay ahead of technological trends in order to respond to ever-changing consumer needs, such as making cars safer and more enjoyable to drive.</li>\n<li>Performance Running in our Blood: We are part of the Porsche family that specializes in high-performance sports cars, so getting involved in various sporting events and supporting our employees in their quest to become better athletes comes naturally to us.</li>\n<li>Community Support: As part of a very active social community, we take our commitment seriously when it comes to supporting programs and initiatives that make lives better and provide renewed opportunities for children and adults alike.</li>\n</ul>","url":"https://yubhub.co/jobs/job_b017ec97-c21","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Porsche Engineering Romania","sameAs":"https://jobs.porsche.com","logo":"https://logos.yubhub.co/jobs.porsche.com.png"},"x-apply-url":"https://jobs.porsche.com/index.php?ac=jobad&id=19821","x-work-arrangement":"onsite","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["SQL","software development for data pipelines and visualization","data management and pipeline deployment","Python","Azure Databricks","CI/CD and containerization tools (e.g., GitHub Actions)"],"x-skills-preferred":["cloud-based data transformation pipelines","data-lake platforms skills"],"datePosted":"2026-03-09T11:07:37.017Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Cluj-Napoca"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Automotive","skills":"SQL, software development for data pipelines and visualization, data management and pipeline deployment, Python, Azure Databricks, CI/CD and containerization tools (e.g., GitHub Actions), cloud-based data 
transformation pipelines, data-lake platforms skills"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_6ea8846c-bf3"},"title":"Data Engineer","description":"<p><strong>Data Engineer</strong></p>\n<p>We are seeking a highly skilled Data Engineer to join our Data and Analytics team. As a Data Engineer, you will play a key role in the development and maintenance of our data infrastructure, ensuring that our data is accurate, reliable, and easily accessible to our teams.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Design, develop, and maintain data pipelines and architectures to support our data-driven decision-making processes</li>\n<li>Collaborate with cross-functional teams to identify data requirements and develop solutions to meet those needs</li>\n<li>Work closely with our data scientists to ensure that our data is accurate, complete, and easily accessible</li>\n<li>Develop and maintain data visualizations and reports to support our business needs</li>\n<li>Troubleshoot data-related issues and implement solutions to prevent future occurrences</li>\n</ul>\n<p><strong>Requirements</strong></p>\n<ul>\n<li>Bachelor&#39;s degree in Computer Science, Mathematics, or a related field</li>\n<li>2+ years of experience in data engineering or a related field</li>\n<li>Strong understanding of data structures, algorithms, and software design patterns</li>\n<li>Experience with data warehousing and business intelligence tools</li>\n<li>Strong programming skills in languages such as Python, Java, or C++</li>\n<li>Experience with cloud-based data platforms such as AWS or GCP</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<ul>\n<li>Competitive salary and benefits package</li>\n<li>Opportunity to work with a leading Formula One team</li>\n<li>Collaborative and dynamic work environment</li>\n<li>Professional development opportunities</li>\n</ul>\n<p><strong>How to Apply</strong></p>\n<p>If you are a motivated and experienced Data Engineer looking for a new challenge, please submit your application, including your CV and a cover letter, to [insert contact email or link to application portal].</p>","url":"https://yubhub.co/jobs/job_6ea8846c-bf3","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Williams Racing","sameAs":"https://careers.williamsf1.com","logo":"https://logos.yubhub.co/careers.williamsf1.com.png"},"x-apply-url":"https://careers.williamsf1.com/job/executive-office-coordinator-in-grove-wantage-jid-491","x-work-arrangement":"onsite","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["data engineering","data pipelines","data architecture","data visualization","data warehousing","business intelligence","Python","Java","C++","AWS","GCP"],"x-skills-preferred":["cloud computing","big data","machine learning"],"datePosted":"2026-03-09T10:12:09.144Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Grove"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Motorsport","skills":"data engineering, data pipelines, data architecture, data visualization, data warehousing, business intelligence, Python, Java, C++, AWS, GCP, cloud computing, big data, machine learning"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_901a6402-db5"},"title":"Data Engineer","description":"<p>Join Razer to help build and optimize data pipelines and data platforms that support 
analytics, product improvements, and foundational AI/ML data needs. Collaborate with cross-functional teams to ensure data is reliable, accessible, and governed. Tech stack includes Redshift, Airflow, and DBT.</p>\n<p><strong>What you need</strong></p>\n<ul>\n<li>Strong Python and SQL</li>\n<li>Hands-on experience with Redshift, Airflow, DBT</li>\n<li>Mandatory hands-on experience with Apache Spark (batch and/or structured processing)</li>\n</ul>","url":"https://yubhub.co/jobs/job_901a6402-db5","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Razer","sameAs":"https://razer.wd3.myworkdayjobs.com","logo":"https://logos.yubhub.co/razer.com.png"},"x-apply-url":"https://razer.wd3.myworkdayjobs.com/en-US/Careers/job/Chengdu/Data-Engineer_JR2025006594","x-work-arrangement":"onsite","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Python","SQL","Redshift","Airflow","DBT","Apache Spark"],"x-skills-preferred":["Apache Flink","Apache Kafka","Hadoop ecosystem components","ETL design patterns","performance tuning"],"datePosted":"2025-12-26T10:57:30.602Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Chengdu"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, SQL, Redshift, Airflow, DBT, Apache Spark, Apache Flink, Apache Kafka, Hadoop ecosystem components, ETL design patterns, performance 
tuning"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_e330a898-308"},"title":"Data Engineer","description":"<p><strong>What you&#39;ll do</strong></p>\n<p>At Porsche Engineering Romania, we drive innovation in mobility systems through advanced data solutions. We are looking for a Data Engineer to design and optimize data pipelines, integrate IoT and telemetry data, and ensure compliance with performance KPIs.</p>\n<ul>\n<li>Design and implement ETL/ELT processes for mobility data streams using AWS services.</li>\n<li>Integrate data from multiple sources (IoT, telemetry, infrastructure systems).</li>\n<li>Implement data models aligned with KPI monitoring requirements.</li>\n<li>Ensure data accuracy, consistency, and compliance with security standards.</li>\n<li>Implement audit and logging mechanisms for sensitive data.</li>\n<li>Document data flows, architecture, and operational procedures.</li>\n<li>Collaborate with international project teams.</li>\n</ul>\n<p><strong>What you need</strong></p>\n<ul>\n<li>Bachelor’s or Master’s degree in Information Technology or an equivalent education.</li>\n<li>You have 3+ years of proven experience in data engineering projects.</li>\n<li>You have strong skills in Python, SQL, and PySpark.</li>\n<li>You have experience with data modeling and KPI reporting using tools like Power BI, Tableau, or Qlik.</li>\n<li>You have hands-on knowledge of AWS services (S3, Glue, Lambda, Flink, Kinesis, CloudWatch, Step Functions, Athena, ECS).</li>\n<li>You are familiar with monitoring frameworks (OpenTelemetry, NewRelic).</li>\n<li>You have a good understanding of data security and compliance for sensitive information.</li>\n<li>You have knowledge of DevOps practices for data solutions (Terraform, CI/CD, monitoring).</li>\n<li>Experience with SAP HANA, Java, and IoT in the automotive domain (e.g., ECU data) is 
considered a plus.</li>\n</ul>","url":"https://yubhub.co/jobs/job_e330a898-308","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Porsche Engineering Services GmbH","sameAs":"https://jobs.porsche.com","logo":"https://logos.yubhub.co/jobs.porsche.com.png"},"x-apply-url":"https://jobs.porsche.com/index.php?ac=jobad&id=18980","x-work-arrangement":"onsite","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Python","SQL","PySpark","AWS services","data modeling","KPI reporting","data security","DevOps practices"],"x-skills-preferred":["SAP HANA","Java","IoT in the automotive domain"],"datePosted":"2025-12-08T16:38:07.363Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Timisoara"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, SQL, PySpark, AWS services, data modeling, KPI reporting, data security, DevOps practices, SAP HANA, Java, IoT in the automotive domain"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_a0ca0eaa-e37"},"title":"Data Engineer","description":"<p><strong>What you&#39;ll do</strong></p>\n<p>At Porsche Engineering Romania, we drive innovation in mobility systems through advanced data solutions. 
We are looking for a Data Engineer to design and optimize data pipelines, integrate IoT and telemetry data, and ensure compliance with performance KPIs.</p>\n<ul>\n<li>Design and implement ETL/ELT processes for mobility data streams using AWS services.</li>\n<li>Integrate data from multiple sources (IoT, telemetry, infrastructure systems).</li>\n<li>Implement data models aligned with KPI monitoring requirements.</li>\n<li>Ensure data accuracy, consistency, and compliance with security standards.</li>\n<li>Implement audit and logging mechanisms for sensitive data.</li>\n<li>Document data flows, architecture, and operational procedures.</li>\n<li>Collaborate with international project teams.</li>\n</ul>\n<p><strong>What you need</strong></p>\n<ul>\n<li>Bachelor’s or Master’s degree in Information Technology or an equivalent education.</li>\n<li>You have 3+ years of proven experience in data engineering projects.</li>\n<li>You have strong skills in Python, SQL, and PySpark.</li>\n<li>You have experience with data modeling and KPI reporting using tools like Power BI, Tableau, or Qlik.</li>\n<li>You have hands-on knowledge of AWS services (S3, Glue, Lambda, Flink, Kinesis, CloudWatch, Step Functions, Athena, ECS).</li>\n<li>You are familiar with monitoring frameworks (OpenTelemetry, NewRelic).</li>\n<li>You have a good understanding of data security and compliance for sensitive information.</li>\n<li>You have knowledge of DevOps practices for data solutions (Terraform, CI/CD, monitoring).</li>\n<li>Experience with SAP HANA, Java, and IoT in the automotive domain (e.g., ECU data) is considered a plus.</li>\n</ul>","url":"https://yubhub.co/jobs/job_a0ca0eaa-e37","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Porsche Engineering Services GmbH","sameAs":"https://jobs.porsche.com","logo":"https://logos.yubhub.co/jobs.porsche.com.png"},"x-apply-url":"https://jobs.porsche.com/index.php?ac=jobad&id=18979","x-work-arrangement":"onsite","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Python","SQL","PySpark","data modeling","KPI reporting","AWS services","monitoring frameworks","data security","DevOps practices"],"x-skills-preferred":["SAP HANA","Java","IoT"],"datePosted":"2025-12-08T16:37:58.711Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Cluj"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, SQL, PySpark, data modeling, KPI reporting, AWS services, monitoring frameworks, data security, DevOps practices, SAP HANA, Java, IoT"}]}