{"version":"0.1","company":{"name":"YubHub","url":"https://yubhub.co","jobsUrl":"https://yubhub.co/jobs/skill/pyspark"},"x-facet":{"type":"skill","slug":"pyspark","display":"PySpark","count":22},"x-feed-size-limit":100,"x-feed-sort":"enriched_at desc","x-feed-notice":"This feed contains at most 100 jobs (the most recently enriched). For the full corpus, use the paginated /stats/by-facet endpoint or /search.","x-generator":"yubhub-xml-generator","x-rights":"Free to redistribute with attribution: \"Data by YubHub (https://yubhub.co)\"","x-schema":"Each entry in `jobs` follows https://schema.org/JobPosting. YubHub-native raw fields carry `x-` prefix.","jobs":[{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_bed89d15-812"},"title":"Power BI Developer","description":"<p>Design, develop, and maintain Power BI dashboards and reports to support business decision-making.</p>\n<p>Translate business requirements into data models, metrics, and visualizations.</p>\n<p>Build and optimize Power BI data models using DAX, Power Query, and best practices.</p>\n<p>Ensure performance optimization and scalability of Power BI datasets and reports.</p>\n<p>Work with data platforms (e.g., Databricks, Microsoft Fabric, or similar) to prepare and transform data for analytics.</p>\n<p>Collaborate with data engineers, analysts, and business stakeholders to deliver end-to-end analytics solutions.</p>\n<p>Implement data quality checks and documentation for datasets and reporting solutions.</p>\n<p>As a Power BI Developer at MHP, you will continuously grow with your projects and objectives in an innovative and supportive environment. You will work with a team of experts to deliver high-quality analytics solutions to our customers.</p>\n<p>The ideal candidate will have strong experience with Power BI development, proficiency in SQL, and hands-on experience with modern data platforms such as Databricks or Microsoft Fabric. 
They will also have practical knowledge of Python or PySpark for data processing and experience building end-to-end reporting solutions from raw data to dashboards.</p>\n<p>We value the authenticity that comes from bringing your individual strengths into the team. Diversity plays a key role in our culture, and it brings different visions &amp; flavors into the mix.</p>\n<p>We all share a strong team spirit. Every win, big or small, belongs to all of us.</p>\n<p>We always welcome curiosity, creativity, and unconventional thinking patterns.</p>\n<p>We recognize the importance of healthy, tight-knit communities and sustainable environmental changes, and we strive to enact positive change in any form within our reach.</p>\n<p>We’re here to co-create your ideal career growth plan tailored to your professional aspirations.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_bed89d15-812","directApply":true,"hiringOrganization":{"@type":"Organization","name":"MHP","sameAs":"http://www.mhp.com/","logo":"https://logos.yubhub.co/mhp.com.png"},"x-apply-url":"https://jobs.porsche.com/index.php?ac=jobad&id=20076","x-work-arrangement":"onsite","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Power BI","DAX","Power Query","SQL","Python","PySpark","Databricks","Microsoft Fabric"],"x-skills-preferred":[],"datePosted":"2026-04-22T17:26:30.097Z","employmentType":"FULL_TIME","occupationalCategory":"IT","industry":"Consulting","skills":"Power BI, DAX, Power Query, SQL, Python, PySpark, Databricks, Microsoft Fabric"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_e58b08f7-c31"},"title":"Senior Data Engineer","description":"<p>As a Senior Data Engineer on the Analytics Team, you will collaborate with stakeholders across the company to design, build 
and implement data pipelines and models that enable our next generation of technology to be deployed around the world. You will have a hand in helping shape the data platform vision at Anduril.</p>\n<p>We&#39;re looking for software and data engineers who are seeking high impact collaborative roles focused on driving operational execution. Ideally you are looking to learn what it takes to build the next generation of defence technology.</p>\n<p>Your responsibilities will include leading the design and roadmap for our data platform, partnering with operations, product, and engineering to advocate best practices and build supporting systems and infrastructure for the various data needs, owning the ingest and egress frameworks for data pipelines that stitch together various data sources in order to produce valuable data products that drive the business, and managing a large user base and providing true data self-service at scale.</p>\n<p>We use Palantir Foundry as our central hub for data-driven applications, visualizations and large-scale data analysis across the Anduril org. 
We also use SQLMesh for data transformations, Athena for querying data, Apache Iceberg as our table format, and Flyte for orchestration.</p>\n<p>Required qualifications include 5+ years of experience in a data engineering role building products, ideally in a fast-paced environment, good foundations in Python or another language, experience with Spark, PySpark, SQL and dbt, experience with Enterprise Data Systems like Palantir Foundry, and experience with or interest in learning how to develop data services and data products.</p>\n<p>The salary range for this role is $166,000-$220,000 USD.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_e58b08f7-c31","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anduril","sameAs":"https://www.anduril.com/","logo":"https://logos.yubhub.co/anduril.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/andurilindustries/jobs/4587312007","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$166,000-$220,000 USD","x-skills-required":["Python","Spark","PySpark","SQL","dbt","Palantir Foundry","SQLMesh","Athena","Apache Iceberg","Flyte"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:58:44.003Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Costa Mesa, California, United States"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, Spark, PySpark, SQL, dbt, Palantir Foundry, SQLMesh, Athena, Apache Iceberg, Flyte","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":166000,"maxValue":220000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_62900fcd-562"},"title":"Security Engineer - Offensive 
Security","description":"<p>As an Offensive Security Engineer on the Proactive Threat team at Stripe, you will simulate the tactics, techniques, and procedures (TTPs) of real-world adversaries to uncover security risks across Stripe&#39;s products and infrastructure.</p>\n<p>You&#39;ll conduct hands-on penetration testing, lead red team engagements, and collaborate with blue team counterparts to validate and improve detection and response capabilities. Your work will directly influence how Stripe builds, ships, and secures financial infrastructure used by millions of businesses worldwide.</p>\n<p>Responsibilities:</p>\n<p>Conduct comprehensive penetration tests across web applications, APIs, cloud environments (AWS/GCP/Azure), mobile applications, and internal infrastructure.</p>\n<p>Plan and execute red team engagements that emulate the TTPs of cyber and criminal threat actors targeting financial services, including initial access, lateral movement, persistence, and data exfiltration scenarios.</p>\n<p>Perform assumed-breach and objective-based assessments to test detection and response capabilities in coordination with defensive teams.</p>\n<p>Partner with detection engineering, threat intelligence, and incident response teams to validate security controls, identify coverage gaps, and improve detection fidelity.</p>\n<p>Contribute adversary tradecraft insights to inform detection rule development, threat hunting hypotheses, and incident response playbooks.</p>\n<p>Support incident investigations by providing offensive expertise, log analysis, and root cause analysis when required.</p>\n<p>Design, develop, and maintain custom offensive tools, scripts, and automation frameworks to enhance assessment efficiency and coverage.</p>\n<p>Build internal platforms and workflows that enable scalable, repeatable offensive operations.</p>\n<p>Contribute to internal security tooling repositories and champion engineering best practices within the team.</p>\n<p>Automate 
repetitive testing tasks, payload generation, and reporting workflows using modern development practices.</p>\n<p>Produce clear, actionable reports that communicate technical findings, business risk, and remediation guidance to both technical and non-technical stakeholders.</p>\n<p>Act as a subject-matter expert and primary point of contact for stakeholder teams engaged in offensive security programs and Stripe-wide security initiatives.</p>\n<p>Lead offensive security projects end-to-end, mentor junior team members, and foster a culture of continuous learning and knowledge sharing.</p>\n<p>Stay current with emerging threats, vulnerabilities, and attack techniques; share research internally and contribute to the broader security community.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_62900fcd-562","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Stripe","sameAs":"https://stripe.com/","logo":"https://logos.yubhub.co/stripe.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/stripe/jobs/7820898","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Python","Go","Web application security","Cloud platforms (AWS, Azure, or GCP)","Offensive tooling (Burp Suite, Cobalt Strike, Mythic, Sliver, BloodHound)","Adversary tradecraft and frameworks (MITRE ATT&CK)","Excellent written and verbal communication skills"],"x-skills-preferred":["Experience conducting offensive security in fintech, financial services, or other highly regulated environments","Background in vulnerability research, exploit development, or CVE discovery","Experience collaborating with threat intelligence, detection engineering, or incident response teams (purple team operations)","Familiarity with big data and log analysis tools (Splunk, Databricks, PySpark, osquery, etc.) 
for threat hunting or investigative support","Proficiency with AI/LLM-assisted development tools (e.g., Claude Code, Cursor, GitHub Copilot) and experience applying them to offensive security workflows"],"datePosted":"2026-04-18T15:51:01.913Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Ireland"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, Go, Web application security, Cloud platforms (AWS, Azure, or GCP), Offensive tooling (Burp Suite, Cobalt Strike, Mythic, Sliver, BloodHound), Adversary tradecraft and frameworks (MITRE ATT&CK), Excellent written and verbal communication skills, Experience conducting offensive security in fintech, financial services, or other highly regulated environments, Background in vulnerability research, exploit development, or CVE discovery, Experience collaborating with threat intelligence, detection engineering, or incident response teams (purple team operations), Familiarity with big data and log analysis tools (Splunk, Databricks, PySpark, osquery, etc.) 
for threat hunting or investigative support, Proficiency with AI/LLM-assisted development tools (e.g., Claude Code, Cursor, GitHub Copilot) and experience applying them to offensive security workflows"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_e2f537b7-0f0"},"title":"Delivery Solutions Architect","description":"<p>At Databricks, we are on a mission to empower our customers to solve the world&#39;s toughest data problems with the Databricks Data Intelligence Platform.</p>\n<p>As a Delivery Solutions Architect (DSA), you are a trusted technical advisor to key customers, providing expert guidance that translates data, analytics, and AI challenges into high-impact business value.</p>\n<p>You help design, implement, and scale data and AI solutions, focusing on architecture, operational excellence, and customer enablement.</p>\n<p>Internally, you will collaborate with our sales and field engineering teams to accelerate the adoption and growth of the Databricks Platform in your customers.</p>\n<p>Delivery Solutions Architects (DSAs) are trusted technical advisors embedded within the customer organization, providing expert guidance that translates data and AI challenges into high-impact business value.</p>\n<p>They help you design, implement, and scale data and AI solutions, focusing on architecture, operational excellence, and team enablement.</p>\n<p>DSAs focus on:</p>\n<ul>\n<li>Designing secure, scalable architecture</li>\n</ul>\n<ul>\n<li>Aligning people, processes, and technology</li>\n</ul>\n<ul>\n<li>Establishing trusted advisor relationships</li>\n</ul>\n<ul>\n<li>Leveraging the broader ecosystem of Databricks experts</li>\n</ul>\n<p>This is a hybrid technical and commercial role.</p>\n<p>Technically, the expectations are that you become the post-sales technical lead and trusted advisor across all Databricks products for the customer&#39;s top priority use cases.</p>\n<p>This requires 
you to use your technical skills and credibility to engage and communicate with technical/technical leadership stakeholders in our customer organizations, do architecture reviews, help with performance and cost optimizations, demonstrate new capabilities, remove blockers, etc.</p>\n<p>In parallel, it is commercial in the sense that you will drive growth in your assigned customers and use cases through leading your customers&#39; stakeholders, building executive relationships, orchestrating other focused/specialized teams within Databricks, and creating and driving onboarding plans.</p>\n<p>While not a hands-on-keyboard role, this is a highly technical position where architectural skills in fields such as Data Architecture, Data Engineering, Data Warehousing, or Data Science are essential.</p>\n<p>You will report directly to a DSA Manager within the Field Engineering organization.</p>\n<p>The impact you will have:</p>\n<ul>\n<li>Be the Databricks Architect working with customer technical teams working on use cases/data products, from development to go-live, addressing any technical challenges and blockers and providing guidance, best practices, and enablement</li>\n</ul>\n<ul>\n<li>Lead the post-technical win technical account strategy and execution plan for the majority of Databricks use cases within our most strategic accounts</li>\n</ul>\n<ul>\n<li>Be the internal point of contact for any questions related to production/go live status of agreed-upon use cases within an account, often for multiple use cases within the largest and most complex organizations</li>\n</ul>\n<ul>\n<li>Leverage both Shared Services, User Education, Onboarding/Technical Services, and Support resources, along with escalating to expert-level technical teams to address the tasks that are beyond your scope of activities or expertise</li>\n</ul>\n<ul>\n<li>Create and execute a point-of-view as to how key use cases can be accelerated into production, coordinating with Professional Services (PS) 
resources on the delivery of PS Engagement proposals</li>\n</ul>\n<ul>\n<li>Navigate Databricks Product and Engineering teams for new product innovations, private previews, and upgrade needs, presenting them to the customers when applicable for their ongoing developments</li>\n</ul>\n<ul>\n<li>Develop an execution plan that covers all activities of all customer-facing technical roles and teams to cover the below work streams:</li>\n</ul>\n<ul>\n<li>Main use cases moving from &#39;win&#39; to production</li>\n</ul>\n<ul>\n<li>Enablement/user growth plan</li>\n</ul>\n<ul>\n<li>Product adoption (strategy and activities to increase adoption of Databricks&#39; Lakehouse vision)</li>\n</ul>\n<ul>\n<li>Organic needs for current investment (e.g., cloud cost control, tuning &amp; optimization)</li>\n</ul>\n<ul>\n<li>Executive and operational governance</li>\n</ul>\n<ul>\n<li>Provide internal and external updates and KPI reporting on the status of usage and customer health (covering investment status, important risks and blockers, product adoption, and use case progression) to your Technical GM and Field Engineering leadership</li>\n</ul>\n<p>What we look for:</p>\n<ul>\n<li>6-10 years of experience where you have been accountable for delivery of projects in Data, Analytics, or AI and where you can contribute to technical debate and design choices with customers</li>\n</ul>\n<ul>\n<li>Programming experience in PySpark, SQL, or Scala</li>\n</ul>\n<ul>\n<li>Understanding and hands-on experience of solution architecture-related distributed data and analytics 
systems</li>\n</ul>\n<ul>\n<li>Experience in customer-facing pre-sales, technical architecture, customer success, or consulting roles</li>\n</ul>\n<ul>\n<li>Understanding of how to attribute business value and outcomes to specific project deliverables</li>\n</ul>\n<ul>\n<li>Technical program coordination including account and stakeholder management</li>\n</ul>\n<ul>\n<li>Experience resolving complex and important escalations with senior customer technical stakeholders</li>\n</ul>\n<ul>\n<li>Track record of overachievement against quota, goals, or similar objective targets</li>\n</ul>\n<ul>\n<li>Bachelor&#39;s degree in Computer Science, Information Systems, Engineering, or equivalent work experience</li>\n</ul>\n<ul>\n<li>Can travel up to 30%</li>\n</ul>\n<p>About Databricks</p>\n<p>Databricks is the data and AI company. More than 10,000 organizations worldwide, including Comcast, Condé Nast, Grammarly, and over 50% of the Fortune 500, rely on the Databricks Data Intelligence Platform to unify and democratize data, analytics, and AI.</p>\n<p>Databricks is headquartered in San Francisco, with offices around the globe, and was founded by the original creators of Lakehouse, Apache Spark™, Delta Lake, and MLflow.</p>\n<p>To learn more, follow Databricks on Twitter, LinkedIn, and Facebook.</p>\n<p>Benefits</p>\n<p>At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of our employees. For specific details on the benefits offered in your region click here.</p>\n<p>Our Commitment to Diversity and Inclusion</p>\n<p>At Databricks, we are committed to fostering a diverse and inclusive culture where everyone can excel. We take great care to ensure that our hiring practices are inclusive and meet equal employment opportunity standards. 
Individuals looking for employment at Databricks are considered without regard to age, color, disability, ethnicity, family or marital status, gender identity or expression, language, national origin, physical and mental ability, political affiliation, race, religion, sexual orientation, socio-economic status, veteran status, and other protected characteristics.</p>\n<p>Compliance</p>\n<p>If access to export-controlled technology or source code is required for performance of job duties, it is within Employer&#39;s discretion whether to apply for a U.S. government license for such positions, and Employer may decline to proceed with an applicant on this basis alone.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_e2f537b7-0f0","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8368003002","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["PySpark","SQL","Scala","Data Architecture","Data Engineering","Data Warehousing","Data Science"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:50:15.902Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Remote - Italy"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"PySpark, SQL, Scala, Data Architecture, Data Engineering, Data Warehousing, Data Science"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_0a3dc5a7-8d9"},"title":"Senior Analytics Engineer","description":"<p>We are seeking a Senior Analytics Engineer to support the Enterprise by building 
reliable, well-modeled, and trusted data for reporting, decision-making, and emerging AI use cases.</p>\n<p>As a Senior Analytics Engineer, you will design scalable data models, define consistent business logic, and help establish a strong semantic foundation that enables both human analytics and machine-driven intelligence.</p>\n<p>You will partner closely with Finance, People and Company Operations stakeholders, Data Analysts, and Data Engineers to ensure data is accurate, consistent, and easy to consume; whether through dashboards, self-service exploration, or AI-powered workflows.</p>\n<p>Responsibilities:</p>\n<p>Data Modeling &amp; Semantics</p>\n<ul>\n<li>Design, build, and maintain scalable data models using dbt and Snowflake</li>\n<li>Define and standardize core Finance, HR and Enterprise level metrics (e.g., revenue, ARR, billing, Attrition, Executive Insights, Security) with clear, governed logic</li>\n<li>Establish consistent modeling patterns, naming conventions, and semantic clarity across datasets</li>\n<li>Contribute to a shared semantic layer that supports both analytics and AI use cases</li>\n</ul>\n<p>AI-Ready Data &amp; Snowflake Ecosystem</p>\n<ul>\n<li>Prepare high-quality, well-governed datasets for use with Snowflake Cortex and Snowflake Intelligence</li>\n<li>Enable structured data foundations that support LLM-powered use cases, semantic querying, and intelligent applications</li>\n<li>Ensure data is context-rich, well-documented, and aligned with business meaning to improve AI accuracy and trust</li>\n</ul>\n<p>Data Quality, Governance &amp; Trust</p>\n<ul>\n<li>Implement robust testing, validation, and documentation practices in dbt</li>\n<li>Ensure consistency across reports and dashboards through shared definitions and reusable models</li>\n<li>Apply data governance best practices, including access controls, lineage, and auditability</li>\n<li>Partner across teams to establish clear ownership and accountability for data 
assets</li>\n</ul>\n<p>Collaboration &amp; Delivery</p>\n<ul>\n<li>Partner with Finance, Analysts, and cross-functional stakeholders to translate business needs into data solutions</li>\n<li>Support self-service analytics by building intuitive, reusable datasets</li>\n<li>Contribute to scalable data workflows that balance immediate business needs with long-term maintainability</li>\n<li>Work within an agile environment, contributing to planning, prioritization, and continuous improvement</li>\n</ul>\n<p>AI and Data Mindset</p>\n<ul>\n<li>Demonstrate an AI-first mindset, thinking beyond data models and dashboards to how data can power intelligent systems and decision-making</li>\n<li>Understand the importance of well-modeled, well-documented, and semantically clear data for AI and LLM-based use cases</li>\n<li>A level of comfort leveraging AI-assisted workflows to improve productivity, code quality, and consistency</li>\n<li>Curiosity for emerging capabilities in platforms like Snowflake Cortex and Snowflake Intelligence, and how they can be applied to Enterprise analytics</li>\n</ul>\n<p>Requirements:</p>\n<ul>\n<li>5–8+ years of experience in Analytics Engineering, Data Engineering, or similar roles</li>\n<li>Strong SQL skills and experience building analytics-ready data models</li>\n<li>Mentorship &amp; Engineering Excellence: Mentorship, raising the technical bar, establishing organization-wide standards for dbt/SQL quality and CI/CD</li>\n<li>Hands-on experience with dbt and Snowflake or other ETL, Modeling and database platforms</li>\n<li>Solid understanding of data modeling principles, including dimensional modeling and semantic design</li>\n<li>Ability to navigate highly ambiguous business challenges, translating vague, complex, or competing goals from executive stakeholders into clear, actionable, and robust data solutions</li>\n<li>Experience translating business requirements into clear, maintainable data logic</li>\n<li>Familiarity with SaaS metrics and 
Finance and People data (e.g., ARR, revenue recognition, billing, attrition etc.)</li>\n<li>Experience with data quality, testing, and documentation best practices</li>\n<li>Exposure to Python, R, or data processing frameworks (e.g., PySpark) is a plus</li>\n<li>Experience with BI tools such as Tableau or Looker</li>\n<li>Strong communication skills and ability to work across technical and business teams</li>\n</ul>\n<p>What you can look forward to as an Okta employee!</p>\n<ul>\n<li>Amazing Benefits</li>\n<li>Making Social Impact</li>\n<li>Fostering Diversity, Equity, Inclusion and Belonging at Okta</li>\n<li>Okta cultivates a dynamic work environment, providing the best tools, technology and benefits to empower our employees to work productively in a setting that best and uniquely suits their needs. Each organization is unique in the degree of flexibility and mobility in which they work so that all employees are enabled to be their most creative and successful versions of themselves, regardless of where they live.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_0a3dc5a7-8d9","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Okta","sameAs":"https://www.okta.com/","logo":"https://logos.yubhub.co/okta.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/okta/jobs/7818510","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["dbt","Snowflake","SQL","data modeling","dimensional modeling","semantic design","ETL","data quality","testing","documentation","Python","R","PySpark","Tableau","Looker"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:46:30.556Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Bellevue, Washington; Chicago, Illinois; San Francisco, 
California"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"dbt, Snowflake, SQL, data modeling, dimensional modeling, semantic design, ETL, data quality, testing, documentation, Python, R, PySpark, Tableau, Looker"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_fe828503-8d1"},"title":"Senior Delivery Solutions Architect","description":"<p>We are seeking a Senior Delivery Solutions Architect to join our Field Engineering team in Paris. As a Senior Delivery Solutions Architect, you will be a trusted technical advisor to key customers, providing expert guidance that translates data, analytics, and AI challenges into high-impact business value.</p>\n<p>You will help design, implement, and scale data and AI solutions, focusing on architecture, operational excellence, and customer enablement. Internally, you will collaborate with our sales and field engineering teams to accelerate the adoption and growth of the Databricks Platform in your customers.</p>\n<p>Key responsibilities include:</p>\n<ul>\n<li>Designing secure, scalable architecture</li>\n<li>Aligning people, processes, and technology</li>\n<li>Establishing trusted advisor relationships</li>\n<li>Leveraging the broader ecosystem of Databricks experts</li>\n</ul>\n<p>This is a hybrid technical and commercial role. Technically, the expectations are that you become the post-sales technical lead and trusted advisor across all Databricks products for the customer&#39;s top priority use cases. 
This requires you to use your technical skills and credibility to engage and communicate with technical/technical leadership stakeholders in our customer organizations, do architecture reviews, help with performance and cost optimizations, demonstrate new capabilities, remove blockers, etc.</p>\n<p>In parallel, it is commercial in the sense that you will drive growth in your assigned customers and use cases through leading your customers&#39; stakeholders, building executive relationships, orchestrating other focused/specialized teams within Databricks, and creating and driving onboarding plans.</p>\n<p>While not a hands-on-keyboard role, this is a highly technical position where architectural skills in fields such as Data Architecture, Data Engineering, Data Warehousing, or Data Science are essential.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_fe828503-8d1","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8298587002","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Programming experience in PySpark, SQL, or Scala","Understanding and hands-on experience of solution architecture-related distributed data and analytics systems","10+ years of experience where you have been accountable for delivery of projects in Data, Analytics, or AI","Experience in a customer-facing pre-sales, technical architecture, customer success, or consulting roles","Understanding of how to attribute business value and outcomes to specific project 
deliverables"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:45:11.017Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Paris, France"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Programming experience in PySpark, SQL, or Scala, Understanding and hands-on experience of solution architecture-related distributed data and analytics systems, 10+ years of experience where you have been accountable for delivery of projects in Data, Analytics, or AI, Experience in a customer-facing pre-sales, technical architecture, customer success, or consulting roles, Understanding of how to attribute business value and outcomes to specific project deliverables"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_c6d7f1a0-882"},"title":"Resident Solutions Architect - Mumbai","description":"<p>We are seeking an experienced Resident Solution Architect (RSA) to join our Professional Services team and work directly with strategic customers on their data and AI transformation initiatives using the Databricks platform.</p>\n<p>As an RSA, you will serve as a trusted technical advisor and hands-on expert, guiding customers to solve complex big data challenges using the Databricks platform.</p>\n<p>Key responsibilities include:</p>\n<ul>\n<li>Collaborating with customers to understand their data and AI transformation goals and developing tailored solutions using the Databricks platform</li>\n<li>Designing and implementing scalable and secure data architectures using Apache Spark, Delta Lake, and other Databricks technologies</li>\n<li>Providing expert-level technical guidance and support to customers during the implementation process</li>\n<li>Identifying and addressing potential roadblocks and providing creative solutions to overcome them</li>\n</ul>\n<p>Requirements include:</p>\n<ul>\n<li>10+ years of experience with Big 
Data Technologies such as Apache Spark, Kafka, and Data Lakes in a customer-facing post-sales, technical architecture, or consulting role</li>\n<li>4+ years of experience as a Solution Architect creating designs, solving Big Data challenges for customers</li>\n<li>Expertise in Apache Spark, distributed computing, and Databricks platform capabilities</li>\n<li>Comfortable writing code in Python, PySpark, and Scala</li>\n<li>Exceptional SQL, Spark SQL, Spark-streaming skills</li>\n<li>Advanced knowledge of Spark optimizations, Delta, Databricks Lakehouse Platforms</li>\n<li>Expertise in Azure</li>\n<li>Expertise in NoSQL databases (MongoDB, Redis, HBase)</li>\n<li>Expertise in data governance and security (Unity Catalog, RBAC)</li>\n<li>Ability to work with Partner Organization and deliver complex programs</li>\n<li>Ability to lead large technical delivery teams</li>\n<li>Understands the larger competitive landscape, such as EMR, Snowflake, and Sagemaker</li>\n<li>Experience of migration from On-prem / Cloud to Databricks is a plus</li>\n<li>Excellent communication and client-facing consulting skills, with the ability to simplify complex technical concepts</li>\n<li>Willingness to travel for onsite customer engagements within India</li>\n<li>Documentation and white-boarding skills</li>\n</ul>\n<p>Good-to-have Skills:</p>\n<ul>\n<li>Experience with ML libraries/frameworks: Scikit-learn, TensorFlow, PyTorch</li>\n<li>Familiarity with MLOps tools and processes, including MLflow for tracking and deployment</li>\n<li>Experience delivering LLM and GenAI solutions at scale (RAG architectures, prompt engineering)</li>\n<li>Extensive experience on Hadoop, Trino, Ranger and other open-source technology stack</li>\n<li>Expertise on cloud platforms like AWS and GCP</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_c6d7f1a0-882","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8107166002","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Apache Spark","Kafka","Data Lakes","Python","PySpark","Scala","SQL","Spark SQL","Spark-streaming","Azure","NoSQL databases","data governance","security","Unity Catalog","RBAC"],"x-skills-preferred":["ML libraries/frameworks","MLOps tools and processes","LLM and GenAI solutions","Hadoop","Trino","Ranger","AWS","GCP"],"datePosted":"2026-04-18T15:45:04.317Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Mumbai, India"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Apache Spark, Kafka, Data Lakes, Python, PySpark, Scala, SQL, Spark SQL, Spark-streaming, Azure, NoSQL databases, data governance, security, Unity Catalog, RBAC, ML libraries/frameworks, MLOps tools and processes, LLM and GenAI solutions, Hadoop, Trino, Ranger, AWS, GCP"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_62b2a5a2-9bd"},"title":"Big Data Solutions Architect (Professional Services)","description":"<p>As a Big Data Solutions Architect in our Professional Services team, you will work with clients on short- to medium-term customer engagements on their big data challenges using the Databricks platform.</p>\n<p>You will deliver data engineering, data science, and cloud technology projects which require integrating with client systems, training, and other technical tasks to help customers get the most value out of their data.</p>\n<p>RSAs are billable and know how to complete 
projects according to specification with excellent customer service.</p>\n<p>You will report to the regional Manager/Lead.</p>\n<p>Key responsibilities include:</p>\n<ul>\n<li>Working on a variety of impactful customer technical projects which may include designing and building reference architectures, creating how-to&#39;s and productionalizing customer use cases</li>\n</ul>\n<ul>\n<li>Working with engagement managers to scope variety of professional services work with input from the customer</li>\n</ul>\n<ul>\n<li>Guiding strategic customers as they implement transformational big data projects, 3rd party migrations, including end-to-end design, build and deployment of industry-leading big data and AI applications</li>\n</ul>\n<ul>\n<li>Consulting on architecture and design; bootstrapping or implementing customer projects which leads to a customers&#39; successful understanding, evaluation and adoption of Databricks</li>\n</ul>\n<ul>\n<li>Providing an escalated level of support for customer operational issues</li>\n</ul>\n<ul>\n<li>Working with the Databricks technical team, Project Manager, Architect and Customer team to ensure the technical components of the engagement are delivered to meet customer&#39;s needs</li>\n</ul>\n<ul>\n<li>Working with Engineering and Databricks Customer Support to provide product and implementation feedback and to guide rapid resolution for engagement specific product and support issues</li>\n</ul>\n<p>What we look for:</p>\n<ul>\n<li>6+ years experience in data engineering, data platforms &amp; analytics</li>\n</ul>\n<ul>\n<li>Strong expertise in data warehousing concepts, architecture, and migration strategies</li>\n</ul>\n<ul>\n<li>Comfortable writing code in either Python, Pyspark or Scala</li>\n</ul>\n<ul>\n<li>Working knowledge of two or more common Cloud ecosystems (AWS, Azure, GCP) with expertise in at least one</li>\n</ul>\n<ul>\n<li>Deep experience with distributed computing with Apache Spark™ and knowledge of Spark runtime 
internals</li>\n</ul>\n<ul>\n<li>Familiarity with CI/CD for production deployments</li>\n</ul>\n<ul>\n<li>Working knowledge of MLOps</li>\n</ul>\n<ul>\n<li>Design and deployment of performant end-to-end data architectures</li>\n</ul>\n<ul>\n<li>Experience with technical project delivery - managing scope and timelines</li>\n</ul>\n<ul>\n<li>Documentation and white-boarding skills</li>\n</ul>\n<ul>\n<li>Experience working with clients and managing conflicts</li>\n</ul>\n<ul>\n<li>Build skills in technical areas which support the deployment and integration of Databricks-based solutions to complete customer projects</li>\n</ul>\n<ul>\n<li>Data Science expertise is a nice-to-have</li>\n</ul>\n<ul>\n<li>Travel to customers 10-20% of the time</li>\n</ul>\n<ul>\n<li>Databricks Certification</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_62b2a5a2-9bd","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8482697002","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["data engineering","data science","cloud technology","Apache Spark","CI/CD","MLOps","data warehousing","migration strategies","Python","Pyspark","Scala","AWS","Azure","GCP"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:43:16.680Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Paris, France"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"data engineering, data science, cloud technology, Apache Spark, CI/CD, MLOps, data warehousing, migration strategies, Python, Pyspark, Scala, AWS, Azure, 
GCP"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_b23d070f-1b3"},"title":"FBS Sr Data Engineer (Insurance Experience)","description":"<p>FBS – Farmer Business Services is part of Farmers operations. We&#39;re building a global approach to identifying, recruiting, hiring, and retaining top talent. Our goal is to create diverse and high-performing teams that thrive in today&#39;s competitive marketplace.</p>\n<p>We believe that the foundation of every successful business lies in having the right people with the right skills. That&#39;s where we come in—helping Farmers build a winning team that delivers consistent and sustainable results.</p>\n<p>As a Data Engineer with a strong P&amp;C insurance background, you&#39;ll analyse business data stories and translate them into technical requirements. You&#39;ll design, build, test, and implement data products of various complexity with minimal guidance.</p>\n<p><strong>Responsibilities:</strong></p>\n<ul>\n<li>Acquire, curate, and publish data for analytical or operational uses</li>\n<li>Ensure data is in a ready-to-use form that creates a single version of the truth across all data consumers</li>\n<li>Translate business analytic requests/requirements into design, development, testing, deployment, and production maintenance tasks</li>\n<li>Work independently with various technologies from big data, relational and non-relational databases, cloud environments, different programming languages, and various reporting tools</li>\n</ul>\n<p><strong>Requirements:</strong></p>\n<ul>\n<li>4-6 years of experience as a Data Engineer with ETL using SQL</li>\n<li>At least 2 years Insurance Background – Ideally P&amp;C, flexible for insurance in other areas Health, Life if upskilling is possible</li>\n<li>Advanced SQL skills</li>\n<li>Advanced DBT/ Informatica skills</li>\n<li>Intermediate Snowflake skills</li>\n<li>Intermediate Python/ PySpark 
skills</li>\n<li>Intermediate Shell Scripting skills</li>\n<li>Intermediate Power BI skills</li>\n<li>Strong communication and problem-solving skills</li>\n</ul>\n<p><strong>Benefits:</strong></p>\n<ul>\n<li>Competitive salary and performance-based bonuses</li>\n<li>Comprehensive benefits package</li>\n<li>Flexible work arrangements (remote and/or office-based)</li>\n<li>Private Health Insurance</li>\n<li>Paid Time Off</li>\n<li>Training &amp; Development opportunities in partnership with renowned companies</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_b23d070f-1b3","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Capgemini","sameAs":"https://jobs.workable.com","logo":"https://logos.yubhub.co/view.com.png"},"x-apply-url":"https://jobs.workable.com/view/5qzBuoS2KyVBegQHDpzsPN/remote-fbs-sr-data-engineer-(insurance-experience)-in-mexico-at-capgemini","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["SQL","DBT/ Informatica","Snowflake","Python/ PySpark","Shell Scripting","Power BI"],"x-skills-preferred":["Agile methodology","Software development","data development","Commercial / Business Insurance"],"datePosted":"2026-03-09T17:07:40.702Z","jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Finance","skills":"SQL, DBT/ Informatica, Snowflake, Python/ PySpark, Shell Scripting, Power BI, Agile methodology, Software development, data development, Commercial / Business Insurance"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_2a56a653-c18"},"title":"Palantir Engineer Specialist - Sr. Consultant - Principal","description":"<p><strong>Palantir Engineer Specialist</strong></p>\n<p><strong>Sr. 
Consultant - Principal</strong></p>\n<p><strong>London</strong></p>\n<p>Do you want to boost your career and collaborate with expert, talented colleagues to solve and deliver against our clients&#39; most important challenges? We are growing and are looking for people to join our team. You will be part of an entrepreneurial, high-growth environment of 300,000 employees. Our dynamic organisation allows you to work across functional business pillars, contributing your ideas, experiences, diverse thinking, and a strong mindset. Are you ready?</p>\n<p><strong>About Your Role</strong></p>\n<p>As a <strong>Senior Consultant / Principal Consultant – Palantir Engineer</strong>, you lead and deliver end-to-end, data-driven solutions using <strong>Palantir Foundry</strong> in complex client environments. You operate at the intersection of engineering, data, and consulting, working closely with business and technical stakeholders to translate complex problems into scalable, production-ready solutions. 
You combine strong hands-on technical skills with a consulting mindset, taking ownership of solution design, implementation, and adoption across organisations.</p>\n<p><strong>Your role will include:</strong></p>\n<ul>\n<li>Own the <strong>end-to-end delivery</strong> of Palantir Foundry–based solutions, from problem definition to production</li>\n<li>Design and implement <strong>data pipelines and transformations</strong> across diverse data sources</li>\n<li>Model data using <strong>Foundry Ontology</strong> concepts to support analytics and operational use cases</li>\n<li>Build scalable, reliable solutions using <strong>Python, SQL, and PySpark</strong> within Foundry</li>\n<li>Collaborate closely with business stakeholders to define requirements, success metrics, and roadmaps</li>\n<li>Support <strong>prototyping, productionisation, and scaling</strong> of data-driven applications</li>\n<li>Ensure solutions meet requirements for <strong>data quality, governance, security, and performance</strong></li>\n<li>Act as a technical advisor within project teams and contribute to best practices</li>\n</ul>\n<p><strong>Requirements</strong></p>\n<p><strong>What you bring – required</strong></p>\n<p><strong>Experience &amp; Seniority</strong></p>\n<ul>\n<li>Proven experience as a <strong>Senior Consultant or Principal Consultant</strong> in data, analytics, or platform engineering</li>\n<li>Strong experience delivering <strong>client-facing data solutions</strong> in complex environments</li>\n<li>Ability to take ownership and work independently in ambiguous problem spaces</li>\n</ul>\n<p><strong>Core Data &amp; Analytics Technology Skills</strong></p>\n<ul>\n<li>Strong programming skills in <strong>Python</strong> and <strong>SQL</strong>; <strong>PySpark</strong> experience required</li>\n<li>Hands-on experience with <strong>Palantir Foundry</strong>, including:</li>\n<li>Pipeline Builder / Code Workbook</li>\n<li>Data integration and transformation</li>\n<li>Ontology 
modelling and data lineage</li>\n<li>Solid understanding of <strong>data architectures</strong>, including data lakes, lakehouses, and data warehouses</li>\n<li>Experience working with APIs, databases, and structured / semi-structured data</li>\n</ul>\n<p><strong>Engineering &amp; Platform Foundations</strong></p>\n<ul>\n<li>Experience building <strong>scalable ETL/ELT pipelines</strong></li>\n<li>Familiarity with <strong>CI/CD concepts</strong>, testing, and production deployments</li>\n<li>Strong focus on <strong>solution quality, maintainability, and performance</strong></li>\n<li>Bachelor’s or Master’s degree in Computer Science, Engineering, Mathematics, or a related field <strong>or equivalent practical experience</strong></li>\n</ul>\n<p><strong>Nice to have</strong></p>\n<ul>\n<li>Experience with <strong>cloud platforms</strong> (AWS, Azure, GCP)</li>\n<li>Familiarity with <strong>containerisation</strong> (Docker, Kubernetes)</li>\n<li>Prior experience as a <strong>Palantir FDE</strong> or in Foundry-heavy delivery roles</li>\n<li>Domain experience in industries such as <strong>Energy, Finance, Public Sector, Healthcare, or Logistics</strong></li>\n</ul>\n<p><strong>Benefits</strong></p>\n<p><strong>About your team</strong></p>\n<p>Join our growing Data &amp; Analytics practice and make a difference. In this practice you will be utilizing the most innovative technological solutions in modern data ecosystem. In this role you’ll be able to see your own ideas transform into breakthrough results in the areas of Data &amp; Analytics strategy, Data Management &amp; Governance, Data Platforms &amp; engineering, Analytics &amp; Data Science.</p>\n<p><strong>About Infosys Consulting</strong></p>\n<p>Be part of a globally renowned management consulting firm on the front-line of industry disruption and at the cutting edge of technology. We work with market leading brands across sectors. Our culture is inclusive and entrepreneurial. 
Being a mid-size consultancy within the scale of Infosys gives us the global reach to partner with our clients throughout their transformation journey.</p>\n<p>Our core values, IC-LIFE, form a common code that helps us move forward. IC-LIFE stands for Inclusion, Equity and Diversity, Client, Leadership, Integrity, Fairness, and Excellence. To learn more about Infosys Consulting and our values, please visit our careers page.</p>\n<p>Within Europe, we are recognised as one of the UK’s top firms by the Financial Times and Forbes due to our client innovations, our cultural diversity and dedicated training and career paths. Infosys is on the Germany’s top employers list for 2023. Management Consulting Magazine named us on their list of Best Firms to Work for. Furthermore, Infosys has been recognised by the Top Employers Institute, a global certification company, for its exceptional standards in employee conditions across Europe for five years in a row.</p>\n<p>We offer industry-leading compensation and benefits, along with top training and development opportunities so that you can grow your career and achieve your personal ambitions. Curious to learn more? We’d love to hear from you.... 
Apply today!</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_2a56a653-c18","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Infosys Consulting - Europe","sameAs":"https://jobs.workable.com","logo":"https://logos.yubhub.co/view.com.png"},"x-apply-url":"https://jobs.workable.com/view/2A8U1ryerVijb4fFAc6i8u/hybrid-palantir-engineer-specialist---sr.-consultant---principal-in-london-at-infosys-consulting---europe","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Python","SQL","PySpark","Palantir Foundry","Pipeline Builder","Code Workbook","Data integration","Data transformation","Ontology modelling","Data lineage","Data architectures","Data lakes","Lakehouses","Data warehouses","APIs","Databases","Structured data","Semi-structured data","ETL/ELT pipelines","CI/CD concepts","Testing","Production deployments","Solution quality","Maintainability","Performance","Bachelor’s degree","Master’s degree","Computer Science","Engineering","Mathematics"],"x-skills-preferred":["Cloud platforms","Containerisation","Palantir FDE","Foundry-heavy delivery roles","Domain experience in industries such as Energy, Finance, Public Sector, Healthcare, or Logistics"],"datePosted":"2026-03-09T16:59:40.750Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"London"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, SQL, PySpark, Palantir Foundry, Pipeline Builder, Code Workbook, Data integration, Data transformation, Ontology modelling, Data lineage, Data architectures, Data lakes, Lakehouses, Data warehouses, APIs, Databases, Structured data, Semi-structured data, ETL/ELT pipelines, CI/CD concepts, Testing, Production deployments, Solution quality, Maintainability, Performance, 
Bachelor’s degree, Master’s degree, Computer Science, Engineering, Mathematics, Cloud platforms, Containerisation, Palantir FDE, Foundry-heavy delivery roles, Domain experience in industries such as Energy, Finance, Public Sector, Healthcare, or Logistics"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_631fb3fa-c57"},"title":"FBS Senior Data Engineer : Finance Domain","description":"<p>We&#39;re seeking a Senior Data Engineer with a strong P&amp;C insurance background to join our team. As a Senior Data Engineer, you will be responsible for analysing business data stories and translating them into technical requirements. You will design, build, test, and implement data products of various complexity with minimal guidance.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Design, build, test, and implement data products of various complexity</li>\n<li>Analyse business data stories and translate them into technical requirements</li>\n<li>Work with finance data, banks, or insurance sector</li>\n<li>Use agile methodology, software development, and data development</li>\n<li>Use Python/PySpark, Shell Scripting, and Power BI</li>\n</ul>\n<p><strong>Requirements</strong></p>\n<ul>\n<li>Advanced skills in DBT, Snowflake, and SQL</li>\n<li>Experience in the financial industry</li>\n<li>Intermediate skills in Python/PySpark, Shell Scripting, and Power BI</li>\n<li>Agile methodology, software development, and data development</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<ul>\n<li>Competitive compensation and benefits package</li>\n<li>Comprehensive benefits package</li>\n<li>Career development and training opportunities</li>\n<li>Flexible work arrangements (remote and/or office-based)</li>\n<li>Dynamic and inclusive work culture within a globally renowned group</li>\n<li>Private Health Insurance</li>\n<li>Pension Plan</li>\n<li>Paid Time Off</li>\n<li>Training &amp; 
Development</li>\n</ul>\n<p>Note: Benefits differ based on employee level.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_631fb3fa-c57","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Capgemini","sameAs":"https://jobs.workable.com","logo":"https://logos.yubhub.co/view.com.png"},"x-apply-url":"https://jobs.workable.com/view/ek6Tt5quFeduFHWtqok7zx/hybrid-fbs-senior-data-engineer-%3A-finance-domain-in-pune-at-capgemini","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["DBT","Snowflake","SQL","Python/PySpark","Shell Scripting","Power BI"],"x-skills-preferred":[],"datePosted":"2026-03-09T16:54:54.478Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Pune, Maharashtra, India"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Finance","skills":"DBT, Snowflake, SQL, Python/PySpark, Shell Scripting, Power BI"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_9d674bed-4cf"},"title":"FBS Senior Data Engineer : Finance Domain","description":"<p>We seek a Data Engineer with a strong P&amp;C insurance background to analyze business data stories and translate them into technical requirements. 
The ideal candidate can independently design, build, test, and implement data products of various complexity with minimal guidance.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Design, build, test, and implement data products of various complexity</li>\n<li>Analyze business data stories and translate them into technical requirements</li>\n<li>Work with finance data, banks, or insurance sector</li>\n<li>Collaborate with cross-functional teams to deliver data-driven solutions</li>\n</ul>\n<p><strong>Requirements</strong></p>\n<ul>\n<li>Must have skills:</li>\n<li>DBT - Advanced</li>\n<li>Snowflake - Intermediate</li>\n<li>SQL - Advanced</li>\n<li>Experience in the Financial Industry is a must</li>\n<li>Agile methodology, Software development, data development</li>\n<li>Python/ PySpark - Intermediate</li>\n<li>Shell Scripting - Intermediate</li>\n<li>Power BI - Intermediate</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<ul>\n<li>Competitive compensation and benefits package:</li>\n<li>Competitive salary and performance-based bonuses</li>\n<li>Comprehensive benefits package</li>\n<li>Career development and training opportunities</li>\n<li>Flexible work arrangements (remote and/or office-based)</li>\n<li>Dynamic and inclusive work culture within a globally renowned group</li>\n<li>Private Health Insurance</li>\n<li>Pension Plan</li>\n<li>Paid Time Off</li>\n<li>Training &amp; Development</li>\n</ul>\n<p><strong>About Capgemini</strong></p>\n<p>Capgemini is a global leader in partnering with companies to transform and manage their business by harnessing the power of technology. 
The company has a strong 55-year heritage and deep industry expertise.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_9d674bed-4cf","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Capgemini","sameAs":"https://jobs.workable.com","logo":"https://logos.yubhub.co/view.com.png"},"x-apply-url":"https://jobs.workable.com/view/u9sZtNc5Dn5iXcrSyEU92H/hybrid-fbs-senior-data-engineer-%3A-finance-domain-in-hyderabad-at-capgemini","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["DBT","Snowflake","SQL","Python","PySpark","Shell Scripting","Power BI"],"x-skills-preferred":[],"datePosted":"2026-03-09T16:51:51.894Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Hyderabad, Telangana, India"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Finance","skills":"DBT, Snowflake, SQL, Python, PySpark, Shell Scripting, Power BI"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_a4431994-e0e"},"title":"FBS Sr Data Engineer (Insurance Experience)","description":"<p>FBS – Farmer Business Services is part of Farmers operations with the purpose of building a global approach to identifying, recruiting, hiring, and retaining top talent. We believe that the foundation of every successful business lies in having the right people with the right skills. That is where we come in—helping Farmers build a winning team that delivers consistent and sustainable results.</p>\n<p>We seek a Data Engineer with a strong P&amp;C insurance background to analyse business data stories and translate them into technical requirements. 
The ideal candidate can independently design, build, test, and implement data products of various complexity with minimal guidance.</p>\n<p><strong>Responsibilities:</strong></p>\n<ul>\n<li>Acquire, curate, and publish data for analytical or operational uses</li>\n<li>Ensure data is in a ready-to-use form that creates a single version of the truth across all data consumers</li>\n<li>Translate business analytic requests/requirements into design, development, testing, deployment, and production maintenance tasks</li>\n<li>Work independently with various technologies from big data, relational and non-relational databases, cloud environments, different programming languages, and various reporting tools</li>\n</ul>\n<p><strong>Requirements:</strong></p>\n<ul>\n<li>4-6 years of experience as a Data Engineer with ETL using SQL</li>\n<li>At least 2 years Insurance Background – Ideally P&amp;C, flexible for insurance in other areas Health, Life if upskilling is possible</li>\n<li>Advanced SQL skills</li>\n<li>Advanced DBT/ Informatica skills</li>\n<li>Intermediate Snowflake skills</li>\n<li>Intermediate Power BI skills</li>\n<li>Intermediate Python/ PySpark skills</li>\n<li>Intermediate Shell Scripting skills</li>\n<li>Strong communication and problem-solving skills</li>\n</ul>\n<p><strong>Benefits:</strong></p>\n<ul>\n<li>Competitive salary and performance-based bonuses</li>\n<li>Comprehensive benefits package</li>\n<li>Flexible work arrangements (remote and/or office-based)</li>\n<li>Private Health Insurance</li>\n<li>Paid Time Off</li>\n<li>Training &amp; Development opportunities in partnership with renowned companies</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_a4431994-e0e","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Capgemini","sameAs":"https://jobs.workable.com","logo":"https://logos.yubhub.co/view.com.png"},"x-apply-url":"https://jobs.workable.com/view/i4AFDZ5mxG6cP52mJ3GHJk/remote-fbs-sr-data-engineer-(insurance-experience)-in-brazil-at-capgemini","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["SQL","DBT/ Informatica","Snowflake","Power BI","Python/ PySpark","Shell Scripting"],"x-skills-preferred":["Agile methodology","Software development","data development","Commercial / Business Insurance"],"datePosted":"2026-03-09T16:51:08.784Z","jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Finance","skills":"SQL, DBT/ Informatica, Snowflake, Power BI, Python/ PySpark, Shell Scripting, Agile methodology, Software development, data development, Commercial / Business Insurance"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_01be118d-100"},"title":"Palantir Engineer Specialist - Sr. Consultant - Principal","description":"<p><strong><strong>About Your Role</strong></strong></p>\n<p>As a Senior Consultant / Principal Consultant – Palantir Engineer, you will lead and deliver end-to-end, data-driven solutions using Palantir Foundry in complex client environments. 
You will operate at the intersection of engineering, data, and consulting, working closely with business and technical stakeholders to translate complex problems into scalable, production-ready solutions.</p>\n<p><strong><strong>Your role will include:</strong></strong></p>\n<ul>\n<li>Own the end-to-end delivery of Palantir Foundry–based solutions, from problem definition to production</li>\n<li>Design and implement data pipelines and transformations across diverse data sources</li>\n<li>Model data using Foundry Ontology concepts to support analytics and operational use cases</li>\n<li>Build scalable, reliable solutions using Python, SQL, and PySpark within Foundry</li>\n<li>Collaborate closely with business stakeholders to define requirements, success metrics, and roadmaps</li>\n<li>Support prototyping, productionisation, and scaling of data-driven applications</li>\n<li>Ensure solutions meet requirements for data quality, governance, security, and performance</li>\n<li>Act as a technical advisor within project teams and contribute to best practices</li>\n</ul>\n<p><strong><strong>Requirements</strong></strong></p>\n<ul>\n<li>Proven experience as a Senior Consultant or Principal Consultant in data, analytics, or platform engineering</li>\n<li>Strong experience delivering client-facing data solutions in complex environments</li>\n<li>Ability to take ownership and work independently in ambiguous problem spaces</li>\n</ul>\n<p><strong><strong>Core Data &amp; Analytics Technology Skills</strong></strong></p>\n<ul>\n<li>Strong programming skills in Python and SQL; PySpark experience required</li>\n<li>Hands-on experience with Palantir Foundry, including:</li>\n<li>Pipeline Builder / Code Workbook</li>\n<li>Data integration and transformation</li>\n<li>Ontology modelling and data lineage</li>\n<li>Solid understanding of data architectures, including data lakes, lakehouses, and data warehouses</li>\n<li>Experience working with APIs, databases, and structured / 
semi-structured data</li>\n</ul>\n<p><strong><strong>Engineering &amp; Platform Foundations</strong></strong></p>\n<ul>\n<li>Experience building scalable ETL/ELT pipelines</li>\n<li>Familiarity with CI/CD concepts, testing, and production deployments</li>\n<li>Strong focus on solution quality, maintainability, and performance</li>\n<li>Bachelor’s or Master’s degree in Computer Science, Engineering, Mathematics, or a related field, or equivalent practical experience</li>\n</ul>\n<p><strong><strong>Nice to have</strong></strong></p>\n<ul>\n<li>Experience with cloud platforms (AWS, Azure, GCP)</li>\n<li>Familiarity with containerisation (Docker, Kubernetes)</li>\n<li>Prior experience as a Palantir FDE or in Foundry-heavy delivery roles</li>\n<li>Domain experience in industries such as Energy, Finance, Public Sector, Healthcare, or Logistics</li>\n</ul>\n<p><strong><strong>Language &amp; Mobility</strong></strong></p>\n<ul>\n<li>Very good English skills</li>\n<li>Willingness to travel for project-related work</li>\n</ul>\n<p><strong><strong>Benefits</strong></strong></p>\n<p>Join our growing Data &amp; Analytics practice and make a difference. In this practice, you will be utilizing the most innovative technological solutions in the modern data ecosystem. In this role, you’ll be able to see your own ideas transform into breakthrough results in the areas of Data &amp; Analytics strategy, Data Management &amp; Governance, Data Platforms &amp; engineering, Analytics &amp; Data Science.</p>\n<p><strong><strong>About the listing company</strong></strong></p>\n<p>Infosys Consulting is a globally renowned management consulting firm that is on the front-line of industry disruption. 
We are a mid-size player with a supportive, entrepreneurial spirit that works with a market-leading brand in every sector, while our parent organization Infosys is a top-5 powerhouse IT brand that is outperforming the market and experiencing rapid growth.</p>\n<p>Our consulting business is annually recognized as one of the UK’s top firms by the Financial Times and Forbes due to our client innovations, our cultural diversity and dedicated training and career paths we offer to our consultants. We are committed to fostering an inclusive work culture that inspires everyone to deliver their best.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_01be118d-100","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Infosys Consulting - Europe","sameAs":"https://jobs.workable.com","logo":"https://logos.yubhub.co/view.com.png"},"x-apply-url":"https://jobs.workable.com/view/2u6mMfyRc8Yxg8qmvZBSMX/remote-palantir-engineer-specialist---sr.-consultant---principal-in-poland-at-infosys-consulting---europe","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Python","SQL","PySpark","Palantir Foundry","Pipeline Builder / Code Workbook","Data integration and transformation","Ontology modelling and data lineage","Data architectures","APIs","Databases","Structured / semi-structured data"],"x-skills-preferred":["Cloud platforms","Containerisation","Palantir FDE","Foundry-heavy delivery roles","Domain experience in industries such as Energy, Finance, Public Sector, Healthcare, or Logistics"],"datePosted":"2026-03-09T16:50:27.488Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Poland"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, SQL, PySpark, 
Palantir Foundry, Pipeline Builder / Code Workbook, Data integration and transformation, Ontology modelling and data lineage, Data architectures, APIs, Databases, Structured / semi-structured data, Cloud platforms, Containerisation, Palantir FDE, Foundry-heavy delivery roles, Domain experience in industries such as Energy, Finance, Public Sector, Healthcare, or Logistics"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_703876d0-bf6"},"title":"Senior Machine Learning Engineer: Ranking","description":"<p><strong>About Us</strong></p>\n<p>Constructor is a U.S. based company that develops a next-generation platform for search and discovery in ecommerce, built to optimize for metrics like revenue, conversion rate, and profit. Our search engine is entirely invented in-house utilizing transformers and generative LLMs, and we use its core and personalization capabilities to power everything from search itself to recommendations to shopping agents.</p>\n<p><strong>About the Team</strong></p>\n<p>The Ranking team, within the Machine Learning chapter, plays a central role in implementing algorithms that optimize our customers&#39; business KPIs like revenue and conversion rates. We focus on metrics over features, supplying our ranking algorithms with powerful capabilities that bring value to our customers.</p>\n<p><strong>Role Details</strong></p>\n<p><strong>Design and Develop ML-Based Ranking Solutions</strong></p>\n<p>As a Machine Learning Engineer on the Ranking team, your primary focus will be to enhance the quality of our ranking systems, ensuring that search, browse, and autocomplete experiences are highly relevant, personalized, and diverse. 
You will work on building state-of-the-art ranking algorithms that improve user experience and drive critical business metrics such as conversion, user engagement, and revenue growth.</p>\n<p><strong>Improve Ranking Quality</strong></p>\n<p>You will analyze ranking performance and identify gaps in search, browse, and autocomplete experiences, focusing on relevance, personalization, attractiveness, diversification, and other quality signals.</p>\n<p><strong>Innovate and Optimize Ranking Algorithms</strong></p>\n<p>You will proactively propose new machine learning models, algorithms, and features to advance the ranking pipeline, improve ranking quality, and meet evolving business needs.</p>\n<p><strong>Collaboration with Cross-Functional Teams</strong></p>\n<p>You will collaborate with technical and non-technical business partners to develop / update ranking functionalities (both within and outside the team)</p>\n<p><strong>Requirements</strong></p>\n<p><strong>Hard Skills</strong></p>\n<ul>\n<li>At least 4 years of experience with Python for machine learning and backend development</li>\n<li>At least 4 years of experience developing, deploying, and maintaining machine learning models with a strong focus on ranking systems for search, recommendations, or similar applications</li>\n<li>Experience in large-scale ML model training, evaluation, and optimization, with a focus on real-time inference and serving</li>\n<li>Experience with big data frameworks such as Spark for processing large datasets and integrating them into ML pipelines</li>\n<li>Proficiency in using tools like SQL, PySpark, Pandas, and other frameworks to extract, manipulate, and analyze data</li>\n<li>Experience with data pipeline orchestration tools like Airflow or Luigi to manage and automate workflows for ML training and signal delivery</li>\n<li>Experience working on ranking algorithms that optimize metrics such as relevance, conversion rates, personalization, user engagement, RPV is a 
plus</li>\n</ul>\n<p><strong>Soft Skills</strong></p>\n<ul>\n<li>Experience collaborating in cross-functional teams</li>\n<li>Experience leading projects to success</li>\n<li>Excellent English communication skills</li>\n<li>Enjoy helping others around you grow as developers and be successful</li>\n<li>Pick up new ideas and technologies quickly, love learning and talking to others about them</li>\n<li>Love to experiment and use data and customer feedback to drive decision making</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<ul>\n<li>Unlimited vacation time</li>\n<li>Fully remote team</li>\n<li>Work from home stipend</li>\n<li>Apple laptops provided for new employees</li>\n<li>Training and development budget for every employee, refreshed each year</li>\n<li>Maternity &amp; Paternity leave for qualified employees</li>\n<li>Work with smart people who will help you grow and make a meaningful impact</li>\n<li>Base salary: $80k–$120k USD, depending on knowledge, skills, experience, and interview results</li>\n<li>Stock options</li>\n<li>Regular team offsites to connect and collaborate</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_703876d0-bf6","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Constructor","sameAs":"https://apply.workable.com","logo":"https://logos.yubhub.co/j.com.png"},"x-apply-url":"https://apply.workable.com/j/C130DBB1DC","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$80k–$120k USD","x-skills-required":["Python","Machine learning","Backend development","Ranking systems","Search","Recommendations","Big data frameworks","Spark","SQL","PySpark","Pandas","Airflow","Luigi"],"x-skills-preferred":["Transformers","Generative LLMs","Personalization","User experience","Conversion","User engagement","Revenue 
growth"],"datePosted":"2026-03-09T10:59:16.198Z","jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, Machine learning, Backend development, Ranking systems, Search, Recommendations, Big data frameworks, Spark, SQL, PySpark, Pandas, Airflow, Luigi, Transformers, Generative LLMs, Personalization, User experience, Conversion, User engagement, Revenue growth","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":80000,"maxValue":120000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_06fb74fd-d12"},"title":"Senior MLE","description":"<p><strong>About the Role</strong></p>\n<p>We&#39;re looking for a Senior MLE to join our Machine Learning Recall team. As a Senior MLE, you will help us build and optimize ML/DL models to improve customer experience by providing the best results in terms of relevancy and marginality.</p>\n<p><strong>Role Overview</strong></p>\n<p>In the second part of 2025, we plan to focus our attention on three key areas:</p>\n<ul>\n<li>Recall: we don&#39;t want to lose good results</li>\n<li>Visual solutions: we would like to deliver end-to-end visual solutions for our customer, including (but not limited to) image search, shop the look, visual recommendations, etc</li>\n<li>Technical platform: we have many different technologies/models inside a team, and we would like to allow other teams to use them widely and integrate in their pipelines</li>\n</ul>\n<p><strong>Challenges You Will Tackle</strong></p>\n<ul>\n<li>Build and deploy robust ML systems for search (including text/image &amp; multimodal approaches, etc)</li>\n<li>Tune LLMs to improve our system in different aspects, not limited to what we already have</li>\n<li>Improve business KPIs by using new techniques/models and validating hypotheses</li>\n<li>Collaborate with 
other technical teams to exchange experiences to improve the overall Constructor.io system</li>\n<li>Be responsible for what you and your team do</li>\n</ul>\n<p><strong>Requirements</strong></p>\n<ul>\n<li>3+ years of professional experience in applied machine learning</li>\n<li>Excellent NLP knowledge (especially transformer-based approaches)</li>\n<li>Comprehensive knowledge of classical machine learning</li>\n<li>Extensive Python knowledge</li>\n<li>Experience with any DL framework (we’re using torch)</li>\n<li>Experience with any SQL dialect (we’re using SparkSQL, MySQL and a couple more dialects)</li>\n<li>You have delivered production ML systems</li>\n<li>Proficiency with big data stack for end-to-end ML product development (we’re using Pyspark for most of our pipelines)</li>\n<li>You are able to translate intuition into data-driven hypotheses that result in engineering solutions that bring significant business value</li>\n<li>Proactivity: you can&#39;t close your eyes to problems, but are ready to solve them</li>\n<li>You are friendly and willing to help your teammates &amp; others</li>\n</ul>\n<p><strong>Nice to have</strong></p>\n<ul>\n<li>Experience designing, conducting, and analyzing A/B tests</li>\n<li>Experience with Rust (or C/C++)</li>\n<li>Experience with a public cloud like AWS, Azure, or GCP</li>\n<li>Strong knowledge of data structures, algorithms and their trade-off</li>\n<li>Empathy</li>\n<li>Ability to explain difficult concepts</li>\n<li>You love to work on performance optimization, such as increasing result quality and improving code performance</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<ul>\n<li>Unlimited vacation time - we strongly encourage all of our employees to take at least 3 weeks per year</li>\n<li>Fully remote team - choose where you live</li>\n<li>Work from home stipend! 
We want you to have the resources you need to set up your home office</li>\n<li>Apple laptops provided for new employees</li>\n<li>Training and development budget for every employee, refreshed each year</li>\n<li>Maternity &amp; Paternity leave for qualified employees</li>\n<li>Work with smart people who will help you grow and make a meaningful impact</li>\n<li>Base salary: $80k–$120k USD, depending on knowledge, skills, experience, and interview results</li>\n<li>Stock options - offered in addition to the base salary</li>\n<li>Regular team offsites to connect and collaborate</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_06fb74fd-d12","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Constructor","sameAs":"https://apply.workable.com","logo":"https://logos.yubhub.co/j.com.png"},"x-apply-url":"https://apply.workable.com/j/AA636BFBB2","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$80k–$120k USD","x-skills-required":["NLP knowledge","Classical machine learning","Python knowledge","DL framework (torch)","SQL dialect (SparkSQL, MySQL)","Big data stack (Pyspark)","Data-driven hypotheses","Proactivity","Friendly and willing to help teammates"],"x-skills-preferred":["Experience designing, conducting, and analyzing A/B tests","Experience with Rust (or C/C++)","Experience with a public cloud like AWS, Azure, or GCP","Strong knowledge of data structures, algorithms and their trade-off","Empathy","Ability to explain difficult concepts","Performance optimization"],"datePosted":"2026-03-09T10:58:38.277Z","jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"NLP knowledge, Classical machine learning, Python knowledge, DL framework (torch), SQL dialect (SparkSQL, MySQL), Big data stack (Pyspark), 
Data-driven hypotheses, Proactivity, Friendly and willing to help teammates, Experience designing, conducting, and analyzing A/B tests, Experience with Rust (or C/C++), Experience with a public cloud like AWS, Azure, or GCP, Strong knowledge of data structures, algorithms and their trade-off, Empathy, Ability to explain difficult concepts, Performance optimization","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":80000,"maxValue":120000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_647aab1e-986"},"title":"BackEnd Engineer: Experiments","description":"<p><strong>About the Role</strong></p>\n<p>This is a backend role (Python services + data-heavy systems) where you&#39;ll build and maintain services for experiment assignment, logging, and report generation. You&#39;ll improve scalability, performance, and reliability of experiment reporting pipelines, add product-facing features to the UI, and take technical ownership of projects.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Build and maintain backend services for experiment assignment, logging, and report generation</li>\n<li>Improve scalability, performance, and reliability of experiment reporting pipelines</li>\n<li>Add product-facing features to the UI to help users launch experiments and interpret results</li>\n<li>Take technical ownership of projects: shape solutions, break down work, and drive execution with the team</li>\n<li>Participate in a weekly on-call rotation (investigating occasional issues and answering internal questions)</li>\n</ul>\n<p><strong>Requirements</strong></p>\n<ul>\n<li>Strong experience building and owning production backend services in Python</li>\n<li>Experience with monitoring, alerting, and debugging user-facing systems</li>\n<li>Strong SQL skills and confidence working with data-heavy systems (metrics, logging, 
analytics)</li>\n<li>Ability to ship maintainable software (tests, code review, incremental delivery)</li>\n</ul>\n<p><strong>Nice-to-have</strong></p>\n<ul>\n<li>Experience with experimentation systems, metrics platforms, or A/B testing workflows</li>\n<li>Experience with PySpark and/or Databricks</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<ul>\n<li>Unlimited vacation time</li>\n<li>Fully remote team</li>\n<li>Work from home stipend</li>\n<li>Apple laptops provided for new employees</li>\n<li>Training and development budget</li>\n<li>Maternity &amp; Paternity leave for qualified employees</li>\n<li>Base salary: $80k–$120k USD, depending on knowledge, skills, experience, and interview results</li>\n<li>Stock options</li>\n<li>Regular team offsites to connect and collaborate</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_647aab1e-986","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Constructor","sameAs":"https://apply.workable.com","logo":"https://logos.yubhub.co/j.com.png"},"x-apply-url":"https://apply.workable.com/j/9C931E93D0","x-work-arrangement":"remote","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":"$80k–$120k USD","x-skills-required":["Python","FastAPI","PostgreSQL","Plotly Dash","PySpark/Databricks","AWS","CloudWatch","Sentry"],"x-skills-preferred":["experimentation systems","metrics platforms","A/B testing workflows","PySpark","Databricks"],"datePosted":"2026-03-09T10:58:22.242Z","jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, FastAPI, PostgreSQL, Plotly Dash, PySpark/Databricks, AWS, CloudWatch, Sentry, experimentation systems, metrics platforms, A/B testing workflows, PySpark, 
Databricks","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":80000,"maxValue":120000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_64038c66-5fd"},"title":"Analytics Engineer: Offsite Search Optimisation","description":"<p><strong>About Us</strong></p>\n<p>Constructor is a search and discovery platform for ecommerce, built to optimize for metrics like revenue, conversion rate, and profit. Our search engine is entirely invented in-house using transformers and generative LLMs.</p>\n<p><strong>Role Details</strong></p>\n<p>As an Analytics Engineer on the Offsite Search Optimization team, you will improve the e-commerce experience for hundreds of millions of users across the world by making it faster and more personalized. The team&#39;s mission is to bridge the gap between onsite product discovery and external discovery platforms, ensuring that Constructor-powered websites can be found, understood, and correctly represented by both Google and Generative engines.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Build measurable foundations to track visibility, traffic sources, and performance across SEO and GEO.</li>\n<li>Build the architecture to run technical checkups on customer websites, assess their optimisation level, and provide clear recommendations on what needs to be fixed.</li>\n<li>Partner with Product teams to adapt enriched content for SEO/GEO and package it toward external discoverability needs.</li>\n<li>Define an SEO/GEO enablement layer with tools, playbooks, and frameworks to scale best practices across teams and customers.</li>\n</ul>\n<p><strong>Challenges You Will Tackle</strong></p>\n<ul>\n<li>Complete visibility of the product discovery from external sources.</li>\n<li>Accurate measurement of organic, branded, AI-driven traffic.</li>\n<li>Create a unified model of landing pages.</li>\n<li>A 
framework for future SEO experiments (canonical tests, template variations, structured data tests).</li>\n<li>Internal tools that other teams can plug into.</li>\n<li>Drive adoption of best practices across other teams</li>\n</ul>\n<p><strong>Requirements</strong></p>\n<ul>\n<li>Strong SQL and Python (requests, pandas, numpy, etc.).</li>\n<li>Experience building data integrations between products.</li>\n<li>Experience with APIs &amp; auth (OAuth2, GA4 and Google Search Analytics API).</li>\n<li>Understanding of ETL/ELT workflows on PySpark.</li>\n<li>Building logic of data extraction from non-structured data (referrers, canonical URLs, page types).</li>\n</ul>\n<p><strong>Nice to Have</strong></p>\n<ul>\n<li>Experience with BigQuery or similar.</li>\n<li>JS execution basics (how SSR/CSR affects crawlers).</li>\n<li>Knowledge of SEO fundamentals.</li>\n<li>Experience with BI systems (Looker/Metabase/Superset).</li>\n<li>Experience with logs or event handling systems</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<ul>\n<li>Work with smart and empathetic people who will help you grow and make a meaningful impact.</li>\n<li>Regular team offsite events to connect and collaborate.</li>\n<li>Fully remote team - choose where you live.</li>\n<li>Unlimited vacation time - we strongly encourage all of our employees to take at least 3 weeks per year.</li>\n<li>Work from home stipend! 
We want you to have the resources you need to set up your home office.</li>\n<li>Apple laptops provided for new employees.</li>\n<li>Training and development budget for every employee, refreshed each year.</li>\n<li>Maternity &amp; Paternity leave for qualified employees.</li>\n<li>Base salary: $80k–$120K USD, depending on knowledge, skills, experience, and interview results.</li>\n<li>Stock options - offered in addition to the base salary</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_64038c66-5fd","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Constructor","sameAs":"https://apply.workable.com","logo":"https://logos.yubhub.co/j.com.png"},"x-apply-url":"https://apply.workable.com/j/3D5CFD97C6","x-work-arrangement":"remote","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":"$80k–$120K USD","x-skills-required":["SQL","Python","requests","pandas","numpy","APIs & auth","OAuth2","GA4","Google Search Analytics API","ETL/ELT workflows on PySpark","data extraction from non-structured data"],"x-skills-preferred":["BigQuery","JS execution basics","SEO fundamentals","BI systems","logs or event handling systems"],"datePosted":"2026-03-09T10:57:54.786Z","jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"SQL, Python, requests, pandas, numpy, APIs & auth, OAuth2, GA4, Google Search Analytics API, ETL/ELT workflows on PySpark, data extraction from non-structured data, BigQuery, JS execution basics, SEO fundamentals, BI systems, logs or event handling 
systems","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":80000,"maxValue":120000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_c9975749-904"},"title":"Senior Applied Scientist","description":"<p>As a Senior Applied Scientist in the Multimedia Team, you will redefine how millions of users discover, consume, and create visual content. You will be at the heart of Bing Visual Search, Bing Image Creator, and our vast video indexing engine. Your mission is to build intelligent systems that understand the deep semantics of pixels and frames, enabling world-class image and video experiences that are fast, relevant, and inspiring.</p>\n<p>Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.</p>\n<p>Starting January 26, 2026, Microsoft AI (MAI) employees who live within a 50-mile commute of a designated Microsoft office in the U.S. or 25-mile commute of a non-U.S., country-specific location are expected to work from the office at least four days per week. 
This expectation is subject to local law and may vary by jurisdiction.</p>\n<p><strong>Responsibilities</strong></p>\n<p>Visual Intelligence Development: Build and deploy SOTA machine learning models for image classification, object detection, and video action recognition to power Bing’s multimedia features.</p>\n<p>Multimodal &amp; Generative AI: Lead the development of multimodal embeddings that align text and visual data, and leverage Generative AI (e.g., DALL-E, MAI models) to enhance content creation tools.</p>\n<p>Scale &amp; Optimization: Design robust feature-engineering pipelines to process billions of images and videos, ensuring low-latency inference in production services.</p>\n<p>Strategic Leadership: Embody Microsoft’s values by Creating Clarity in complex AI problems and Generating Energy across cross-functional teams of engineers and PMs.</p>\n<p>Responsible AI: Ensure all visual models adhere to strict Security, Privacy, and GDPR standards, specifically focusing on content moderation and bias detection in multimedia.</p>\n<p><strong>Qualifications</strong></p>\n<p>Required Qualifications: Bachelor’s Degree in Statistics, Econometrics, Computer Science, Electrical or Computer Engineering, or related field AND 5+ years related experience (e.g., statistics, predictive analytics, research) OR Master’s Degree in Statistics, Econometrics, Computer Science, Electrical or Computer Engineering, or related field AND 4+ years related experience (e.g., statistics, predictive analytics, research) OR Doctorate in Statistics, Econometrics, Computer Science, Electrical or Computer Engineering, or related field AND 2+ year(s) related experience (e.g., statistics, predictive analytics, research) OR equivalent experience.</p>\n<p>Mastery of Python and deep learning frameworks such as PyTorch or TensorFlow. Proven track record in Computer Vision (CV) or Multimedia Understanding, including work with large-scale visual datasets. 
Experience building and deploying live production systems at scale.</p>\n<p>Preferred Qualifications: PhD focused on Computer Vision, Video Analytics, or Multimodal Learning. Experience with big data tools like Spark/PySpark and Azure Machine Learning. Publications in top-tier venues such as CVPR, ICCV, or ACM Multimedia.</p>\n<p>This position will be open for a minimum of 5 days, with applications accepted on an ongoing basis until the position is filled. Microsoft is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to age, ancestry, citizenship, color, family or medical care leave, gender identity or expression, genetic information, immigration status, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran or military status, race, ethnicity, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable local laws, regulations and ordinances.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_c9975749-904","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Microsoft","sameAs":"https://microsoft.ai","logo":"https://logos.yubhub.co/microsoft.ai.png"},"x-apply-url":"https://microsoft.ai/job/senior-applied-scientist-14/","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Python","PyTorch","TensorFlow","Computer Vision","Multimedia Understanding","Large-scale visual datasets","Live production systems"],"x-skills-preferred":["PhD in Computer Vision","Video Analytics","Multimodal Learning","Spark/PySpark","Azure Machine Learning","CVPR","ICCV","ACM 
Multimedia"],"datePosted":"2026-03-08T22:14:34.671Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Noida"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, PyTorch, TensorFlow, Computer Vision, Multimedia Understanding, Large-scale visual datasets, Live production systems, PhD in Computer Vision, Video Analytics, Multimodal Learning, Spark/PySpark, Azure Machine Learning, CVPR, ICCV, ACM Multimedia"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_1b059610-8db"},"title":"Software Engineer II","description":"<p><strong>Summary</strong></p>\n<p>Microsoft AI are looking for a talented Software Engineer II at their Redmond office. This role sits at the heart of software development, turning code into maintainable, extensible software that is resilient to change. You&#39;ll work directly with leadership to shape the company&#39;s direction in the software development space.</p>\n<p><strong>About the Role</strong></p>\n<p>The Software Engineer II will contribute to the design and architecture of software solutions, create design documents, and ensure alignment with security, privacy, and compliance requirements. They will implement maintainable, extensible code and participate in reviews that uphold Microsoft engineering standards. 
The role will also involve developing and refining test plans, integrating automation, and ensuring robust test coverage for backend services.</p>\n<p><strong>Accountabilities</strong></p>\n<ul>\n<li>Understand User Requirements – Collaborate with product managers and technical leads to clarify requirements and incorporate continuous feedback loops.</li>\n<li>Design and Architecture – Contribute to solution architecture, create design documents, and ensure alignment with security, privacy, and compliance requirements.</li>\n<li>Coding and Code Quality – Implement maintainable, extensible code and participate in reviews that uphold Microsoft engineering standards.</li>\n<li>Testing and Automation – Develop and refine test plans, integrate automation, and ensure robust test coverage for backend services.</li>\n</ul>\n<p><strong>The Candidate we&#39;re looking for</strong></p>\n<p><strong>Experience:</strong></p>\n<ul>\n<li>2+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, or Python.</li>\n</ul>\n<p><strong>Technical skills:</strong></p>\n<ul>\n<li>Experience in AI/ML frameworks such as PyTorch or TensorFlow and practical experience applying Data Science techniques.</li>\n<li>Experience in big data systems such as Spark/PySpark or Stream Processing Systems.</li>\n</ul>\n<p><strong>Personal attributes:</strong></p>\n<ul>\n<li>Model a growth mindset by learning from others and sharing your learnings with others.</li>\n<li>Embody our Culture and Values.</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<ul>\n<li>Competitive salary and benefits package.</li>\n<li>Opportunities for professional growth and development.</li>\n<li>Collaborative and dynamic work environment.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_1b059610-8db","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Microsoft AI","sameAs":"https://microsoft.ai","logo":"https://logos.yubhub.co/microsoft.ai.png"},"x-apply-url":"https://microsoft.ai/job/software-engineer-ii/","x-work-arrangement":"onsite","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["C","C++","C#","Java","Python","AI/ML","Data Science","Big Data"],"x-skills-preferred":["PyTorch","TensorFlow","Spark/PySpark","Stream Processing Systems"],"datePosted":"2026-03-06T07:26:26.494Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Redmond"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"C, C++, C#, Java, Python, AI/ML, Data Science, Big Data, PyTorch, TensorFlow, Spark/PySpark, Stream Processing Systems"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_e330a898-308"},"title":"Data Engineer","description":"<p><strong>What you&#39;ll do</strong></p>\n<p>At Porsche Engineering Romania, we drive innovation in mobility systems through advanced data solutions. 
We are looking for a Data Engineer to design and optimize data pipelines, integrate IoT and telemetry data, and ensure compliance with performance KPIs.</p>\n<ul>\n<li>You will design and implement ETL/ELT processes for mobility data streams using AWS services.</li>\n<li>You will integrate data from multiple sources (IoT, telemetry, infrastructure systems).</li>\n<li>You will implement data models aligned with KPI monitoring requirements.</li>\n<li>You will ensure data accuracy, consistency, and compliance with security standards.</li>\n<li>You will implement audit and logging mechanisms for sensitive data.</li>\n<li>You will document data flows, architecture, and operational procedures.</li>\n<li>You will collaborate with international project teams.</li>\n</ul>\n<p><strong>What you need</strong></p>\n<ul>\n<li>Bachelor’s or Master’s degree in Information Technology or an equivalent education.</li>\n<li>You have 3+ years of proven experience in data engineering projects.</li>\n<li>You have strong skills in Python, SQL, and PySpark.</li>\n<li>You have experience with data modeling and KPI reporting using tools like Power BI, Tableau, or Qlik.</li>\n<li>You have hands-on knowledge of AWS services (S3, Glue, Lambda, Flink, Kinesis, CloudWatch, Step Functions, Athena, ECS).</li>\n<li>You are familiar with monitoring frameworks (OpenTelemetry, New Relic).</li>\n<li>You have a good understanding of data security and compliance for sensitive information.</li>\n<li>You have knowledge of DevOps practices for data solutions (Terraform, CI/CD, monitoring).</li>\n<li>Experience with SAP HANA, Java, and IoT in the automotive domain (e.g., ECU data) is considered a plus.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_e330a898-308","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Porsche Engineering Services GmbH","sameAs":"https://jobs.porsche.com","logo":"https://logos.yubhub.co/jobs.porsche.com.png"},"x-apply-url":"https://jobs.porsche.com/index.php?ac=jobad&id=18980","x-work-arrangement":"onsite","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Python","SQL","PySpark","AWS services","data modeling","KPI reporting","data security","DevOps practices"],"x-skills-preferred":["SAP HANA","Java","IoT in the automotive domain"],"datePosted":"2025-12-08T16:38:07.363Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Timisoara"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, SQL, PySpark, AWS services, data modeling, KPI reporting, data security, DevOps practices, SAP HANA, Java, IoT in the automotive domain"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_a0ca0eaa-e37"},"title":"Data Engineer","description":"<p><strong>What you&#39;ll do</strong></p>\n<p>At Porsche Engineering Romania, we drive innovation in mobility systems through advanced data solutions. 
We are looking for a Data Engineer to design and optimize data pipelines, integrate IoT and telemetry data, and ensure compliance with performance KPIs.</p>\n<ul>\n<li>You will design and implement ETL/ELT processes for mobility data streams using AWS services.</li>\n<li>You will integrate data from multiple sources (IoT, telemetry, infrastructure systems).</li>\n<li>You will implement data models aligned with KPI monitoring requirements.</li>\n<li>You will ensure data accuracy, consistency, and compliance with security standards.</li>\n<li>You will implement audit and logging mechanisms for sensitive data.</li>\n<li>You will document data flows, architecture, and operational procedures.</li>\n<li>You will collaborate with international project teams.</li>\n</ul>\n<p><strong>What you need</strong></p>\n<ul>\n<li>Bachelor’s or Master’s degree in Information Technology or an equivalent education.</li>\n<li>You have 3+ years of proven experience in data engineering projects.</li>\n<li>You have strong skills in Python, SQL, and PySpark.</li>\n<li>You have experience with data modeling and KPI reporting using tools like Power BI, Tableau, or Qlik.</li>\n<li>You have hands-on knowledge of AWS services (S3, Glue, Lambda, Flink, Kinesis, CloudWatch, Step Functions, Athena, ECS).</li>\n<li>You are familiar with monitoring frameworks (OpenTelemetry, New Relic).</li>\n<li>You have a good understanding of data security and compliance for sensitive information.</li>\n<li>You have knowledge of DevOps practices for data solutions (Terraform, CI/CD, monitoring).</li>\n<li>Experience with SAP HANA, Java, and IoT in the automotive domain (e.g., ECU data) is considered a plus.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_a0ca0eaa-e37","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Porsche Engineering Services GmbH","sameAs":"https://jobs.porsche.com","logo":"https://logos.yubhub.co/jobs.porsche.com.png"},"x-apply-url":"https://jobs.porsche.com/index.php?ac=jobad&id=18979","x-work-arrangement":"onsite","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Python","SQL","PySpark","data modeling","KPI reporting","AWS services","monitoring frameworks","data security","DevOps practices"],"x-skills-preferred":["SAP HANA","Java","IoT"],"datePosted":"2025-12-08T16:37:58.711Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Cluj"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, SQL, PySpark, data modeling, KPI reporting, AWS services, monitoring frameworks, data security, DevOps practices, SAP HANA, Java, IoT"}]}