<?xml version="1.0" encoding="UTF-8"?>
<source>
  <jobs>
    <job>
      <externalid>bc790e73-e99</externalid>
      <Title>Member of the Compute Tools team</Title>
      <Description><![CDATA[<p>As a member of the Compute Tools team, you&#39;ll design and build the systems that enable Palantirians to succeed. You&#39;ll be the expert and owner of our internal platforms, helping to leverage a combination of Palantir&#39;s own products and open source tools to provide stable and flexible services for a constantly evolving set of use cases. We&#39;re responsible for making architectural decisions that enable sustainable operations in light of that evolving usage.</p>
<p><strong>Core Responsibilities</strong></p>
<ul>
<li>Apply modern engineering practices to improve the maintainability, reliability, and utility of Palantir&#39;s internal compute infrastructure.</li>
<li>Drive build-vs-buy decisions and partner with vendors to integrate external technologies into our platform.</li>
<li>Collaborate with other teams to understand emerging needs and deliver solutions that help users succeed.</li>
<li>Participate in the on-call rotation for high-severity incidents affecting critical systems.</li>
<li>Research and evaluate new technologies to identify where our infrastructure can improve.</li>
</ul>
<p><strong>What We Value</strong></p>
<ul>
<li>Systems programming experience with strong proficiency in Go, Python, or Rust.</li>
<li>Deep familiarity with containers (Docker) and orchestration (Kubernetes).</li>
<li>Experience working with a cloud provider (AWS/Azure/GCE), or sysadmin/SRE experience in data centers.</li>
<li>Up to date with modern industry practices and open-source advancements.</li>
<li>Solid understanding of distributed systems, APIs, cloud platforms, and on-prem infrastructure.</li>
<li>Hands-on experience with CI/CD pipelines, DevOps practices, and system reliability principles.</li>
<li>Willingness and eligibility to obtain a U.S. security clearance.</li>
</ul>
<p><strong>What We Require</strong></p>
<ul>
<li>3+ years of professional software development experience on core infrastructure with emphasis on operational excellence.</li>
<li>2+ years of experience contributing to system design or architecture (design patterns, reliability, and scaling) of new and existing systems.</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$135,000 - $200,000/year</Salaryrange>
      <Skills>Go, Python, Rust, Docker, Kubernetes, AWS, Azure, GCE, CI/CD pipelines, DevOps practices, system reliability principles</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Palantir</Employername>
      <Employerlogo>https://logos.yubhub.co/palantir.com.png</Employerlogo>
      <Employerdescription>Palantir builds software for data-driven decisions and operations, empowering partners to develop lifesaving drugs, forecast supply chain disruptions, and more.</Employerdescription>
      <Employerwebsite>https://www.palantir.com/</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>135000</Compensationmin>
      <Compensationmax>200000</Compensationmax>
      <Applyto>https://jobs.lever.co/palantir/51ea4a3b-7764-4c87-96e4-310e19c856d5?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Mountain View</Location>
      <Country>United States</Country>
      <Postedate>2026-04-25</Postedate>
    </job>
    <job>
      <externalid>e42b2669-9fe</externalid>
      <Title>Senior Engineering Manager - Factory Systems</Title>
      <Description><![CDATA[<p>We are seeking a highly skilled and driven Engineering Manager to lead the development of Connected Factory, the software and infrastructure foundation that powers our next-generation manufacturing ecosystem.</p>
<p>This role is responsible for architecting, building, and scaling the digital backbone that connects people, robots, and systems across our smart factories. This includes building out the team&#39;s digital asset management toolset, the IoT namespace and devices, edge nodes for automated execution, and the cloud processing of this data.</p>
<p>The ideal candidate combines deep technical expertise in infrastructure, automation, and data systems with the ability to lead high-performing software and infrastructure teams. This person will play a critical role in enabling real-time data flow, automation, and coordination across all layers of the factory floor.</p>
<p><strong>Technical Leadership &amp; Strategy</strong></p>
<ul>
<li>Lead and mentor a multidisciplinary team of software and infrastructure engineers building core Connected Factory services.</li>
<li>Define system architectures that support smart manufacturing at scale, including edge computing, data pipelines, and API integrations between machines and enterprise systems.</li>
<li>Partner with Automation, Manufacturing, and IT teams to ensure seamless integration between software, robotics, and operational technologies.</li>
</ul>
<p><strong>Infrastructure &amp; Systems Development</strong></p>
<ul>
<li>Design and deploy compute infrastructure to support real-time data exchange across thousands of devices and sensors on the factory floor.</li>
<li>Implement high-reliability networks, message brokers, and APIs that facilitate safe and secure communication between machines, humans, and cloud services.</li>
<li>Develop tools and services that enable simulation, orchestration, and monitoring of production environments.</li>
</ul>
<p><strong>Smart Factory &amp; Industry 4.0 Integration</strong></p>
<ul>
<li>Ensure systems align with Industry 4.0 and IoT interoperability standards (e.g., OPC UA, MQTT).</li>
<li>Oversee deployment of edge nodes, data collection systems, and digital twins that optimize throughput and predictive maintenance.</li>
<li>Collaborate with automation and robotics teams to enable human-machine collaboration and efficient factory operations.</li>
</ul>
<p><strong>Operational Excellence &amp; Collaboration</strong></p>
<ul>
<li>Work closely with Manufacturing and Supply Chain to translate operational needs into scalable software solutions.</li>
<li>Drive best practices in code quality, reliability, and observability across the FactoryOS stack.</li>
<li>Champion a culture of ownership, continuous improvement, and technical excellence.</li>
</ul>
<p><strong>Required Qualifications</strong></p>
<ul>
<li>8+ years of experience in software or DevOps engineering, with at least 3 years in a technical leadership or management role.</li>
<li>Proven experience building or supporting industrial or manufacturing systems, with strong knowledge of automation, controls, and factory networking.</li>
<li>Expertise in data infrastructure, API design, IoT protocols, and edge computing architectures.</li>
<li>Experience deploying and managing systems that bridge IT and OT domains, ensuring reliability, security, and scalability.</li>
<li>Familiarity with robotic systems, industrial automation, and real-time data streaming technologies.</li>
<li>Familiarity with data collection, processing, and reporting for factory infrastructure.</li>
<li>Strong understanding of distributed systems, container orchestration (e.g., Kubernetes), and DevOps practices.</li>
</ul>
<p><strong>Salary</strong></p>
<p>The salary range for this role is $220,000-$292,000 USD.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$220,000-$292,000 USD</Salaryrange>
      <Skills>software engineering, dev ops engineering, industrial automation, factory networking, data infrastructure, API design, IoT protocols, edge computing architectures, distributed systems, container orchestration, DevOps practices</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anduril Industries</Employername>
      <Employerlogo>https://logos.yubhub.co/anduril.com.png</Employerlogo>
      <Employerdescription>Anduril Industries is a defence technology company that transforms U.S. and allied military capabilities with advanced technology.</Employerdescription>
      <Employerwebsite>https://www.anduril.com/</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>220000</Compensationmin>
      <Compensationmax>292000</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/andurilindustries/jobs/4972007007?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Costa Mesa, California, United States</Location>
      <Country>United States</Country>
      <Postedate>2026-04-25</Postedate>
    </job>
    <job>
      <externalid>6c19998a-517</externalid>
      <Title>Digital Product Architect</Title>
      <Description><![CDATA[<p>We are seeking a highly qualified and experienced Solutions Architect to join our dynamic Engineering Team. As a key member of our team, you will play a crucial role in defining the technical direction of our products, focusing on modernisation, integration, and scalable solutions. You will work closely with Product Managers, engineering teams, and SREs to deliver robust and future-proof architectures that directly support our business objectives.</p>
<p>Responsibilities:</p>
<ul>
<li>Architect and lead system migrations: Design and oversee the strategic migration of existing legacy systems to modern platforms, ensuring minimal disruption and maximum efficiency.</li>
<li>Design and implement modern integrations: Architect and implement continuous integrations between new and legacy systems, focusing on modernisation, scalability, and proactively mitigating technical debt.</li>
<li>Architect resilient distributed systems: Design and build resilient, efficient, and scalable distributed systems to meet product requirements.</li>
<li>Develop integration hubs and gateways: Lead the creation and management of integration hubs and API gateways to facilitate robust and secure communication across the entire system.</li>
<li>Provide technical leadership and guidance: Offer specialised technical guidance to engineering teams, delegating tasks effectively and ensuring alignment with overall business objectives.</li>
<li>Collaborate on technical roadmaps: Work closely with Product Managers to define and refine the technical roadmap, translating product vision into actionable architectural plans.</li>
<li>Document application architectures: Create comprehensive and clear documentation for application architectures, design patterns, and technical decisions.</li>
<li>Implement monitoring and alerting strategies: Work with SREs and Tech Leads to implement robust monitoring, logging, and alerting strategies for all product components.</li>
</ul>
<p>Qualifications:</p>
<ul>
<li>Advanced English language proficiency</li>
<li>Bachelor&#39;s degree in Computer Science or related field</li>
<li>Experience with microservices architecture and event-driven systems</li>
<li>Familiarity with DevOps practices and CI/CD pipelines</li>
<li>Certification in relevant cloud platforms (GCP/AWS, etc.)</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>microservices architecture, event-driven systems, DevOps practices, CI/CD pipelines, cloud platforms (GCP/AWS), English language proficiency</Skills>
      <Category>Engineering</Category>
      <Industry>Automotive</Industry>
      <Employername>Ford</Employername>
      <Employerlogo>https://logos.yubhub.co/ford.com.png</Employerlogo>
      <Employerdescription>Ford is an American multinational automaker that designs, manufactures, markets, and services a full line of passenger and commercial vehicles, including cars, trucks, vans, and SUVs.</Employerdescription>
      <Employerwebsite>https://www.ford.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://efds.fa.em5.oraclecloud.com/hcmUI/CandidateExperience/en/sites/CX_1/job/59869?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Sao Paulo</Location>
      <Country>Brazil</Country>
      <Postedate>2026-04-25</Postedate>
    </job>
    <job>
      <externalid>3152771e-29b</externalid>
      <Title>Senior Software Developer- Lead Developer(Core Banking)</Title>
      <Description><![CDATA[<p>We are seeking a Senior Software Developer to lead the development of our core banking system, which will be built on a cloud-native foundation using Google Cloud Platform (GCP). As a key member of our team, you will design, develop, and deliver mission-critical banking services across our ecosystem.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Designing and developing core banking capabilities, including accounts, transactions, ledgers, interest calculations, and operational workflows.</li>
<li>Leading the design and implementation of services and integrations across Fiserv DNA, Create Digital, Nautilus, and related banking platforms.</li>
<li>Designing and implementing robust integration layers across core banking, Fiserv platforms, digital channels, and enterprise systems using APIs, events, and file-based patterns.</li>
<li>Building highly available, secure, and scalable services using Google Cloud Platform, including GKE/Cloud Run, Pub/Sub, Cloud SQL/PostgreSQL, Secret Manager, and Cloud Logging/Monitoring.</li>
<li>Defining and implementing REST API standards, including idempotency, versioning, and performance considerations across the banking ecosystem.</li>
<li>Developing modular services using Java and Spring Boot, leveraging domain-driven design and well-bounded contexts.</li>
<li>Building secure-by-default services for a regulated financial environment, including PII protection, encryption, audit trails, and least-privilege IAM.</li>
<li>Contributing through hands-on coding, design reviews, and mentoring. Establishing best practices for TDD, CI/CD pipelines, and automated quality gates.</li>
<li>Defining SLIs/SLOs and implementing logging, monitoring, and distributed tracing. Leading root-cause analysis and driving reliability improvements.</li>
<li>Translating product requirements into scalable technical designs and iterative delivery milestones while managing technical debt.</li>
<li>Working closely with product, architecture, security, and vendor partners, including Fiserv and other strategic platform providers, to align solutions with platform capabilities and business outcomes.</li>
</ul>
<p>Requirements:</p>
<ul>
<li>Bachelor&#39;s degree in Computer Science, Software Engineering, or related field. Master&#39;s degree preferred.</li>
<li>8+ years of professional software engineering experience.</li>
<li>3+ years in a technical lead or senior developer role delivering large-scale, mission-critical systems.</li>
<li>Proven experience building or modernizing core banking systems or similar financial platforms such as ledgers, payments, lending, or deposits, where auditability and transactional correctness are critical.</li>
<li>Strong hands-on expertise in Java / J2EE and Spring Boot, including Spring Security, Spring Data, and API design.</li>
<li>Strong experience building integration layers and APIs across enterprise systems.</li>
<li>Experience designing and operating cloud-native applications on GCP or AWS.</li>
<li>Experience working with core banking platforms and integration patterns. Experience with Fiserv DNA, Create Digital, Nautilus, or similar banking platforms is strongly preferred.</li>
<li>Strong understanding of asynchronous processing and high-volume transaction systems.</li>
<li>Experience with relational databases such as PostgreSQL/SQL, data modeling, and transactional integrity.</li>
<li>Experience building secure systems that handle PII and financial data, including encryption and secure SDLC practices.</li>
</ul>
<p>Preferred qualifications:</p>
<ul>
<li>Deep experience with Fiserv technologies, including DNA, Create Digital, Nautilus, or adjacent platform components.</li>
<li>Experience with event streaming platforms such as Kafka or GCP Pub/Sub.</li>
<li>Experience with file-based and batch integrations such as SFTP and enterprise file gateways like GECHub.</li>
<li>Experience with identity and security patterns such as OAuth2 and SAML.</li>
<li>Familiarity with observability tools for logging, tracing, and monitoring.</li>
<li>Experience with CI/CD pipelines and DevOps practices.</li>
<li>Experience working in regulated financial environments.</li>
</ul>
<p>Benefits:</p>
<ul>
<li>Immediate medical, dental, vision and prescription drug coverage</li>
<li>Flexible family care days, paid parental leave, new parent ramp-up programs, subsidized back-up child care and more</li>
<li>Family building benefits including adoption and surrogacy expense reimbursement, fertility treatments, and more</li>
<li>Vehicle discount program for employees and family members and management leases</li>
<li>Tuition assistance</li>
<li>Established and active employee resource groups</li>
<li>Paid time off for individual and team community service</li>
<li>A generous schedule of paid holidays, including the week between Christmas and New Year’s Day</li>
<li>Paid time off and the option to purchase additional vacation time.</li>
</ul>
<p>Salary: This position is a salary grade 8 and ranges from $113,580 to $190,500.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$113,580-190,500</Salaryrange>
      <Skills>Java, Spring Boot, API design, Cloud-native applications, Google Cloud Platform, Fiserv DNA, Create Digital, Nautilus, PostgreSQL, SQL, Data modeling, Transactional integrity, Secure systems, PII protection, Encryption, Audit trails, Least-privilege IAM, Fiserv technologies, Event streaming platforms, File-based and batch integrations, Identity and security patterns, Observability tools, CI/CD pipelines, DevOps practices, Regulated financial environments</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>Ford Motor Credit Company</Employername>
      <Employerlogo>https://logos.yubhub.co/fordcredit.com.png</Employerlogo>
      <Employerdescription>Ford Motor Credit Company is a leading provider of automotive financing and leasing services, serving customers across over 100 countries worldwide.</Employerdescription>
      <Employerwebsite>https://www.fordcredit.com/</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>113580</Compensationmin>
      <Compensationmax>190500</Compensationmax>
      <Applyto>https://efds.fa.em5.oraclecloud.com/hcmUI/CandidateExperience/en/sites/CX_1/job/62545?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>United States</Location>
      <Country>United States</Country>
      <Postedate>2026-04-25</Postedate>
    </job>
    <job>
      <externalid>0ec389e5-2bd</externalid>
      <Title>Staff AI Engineer</Title>
      <Description><![CDATA[<p>We&#39;re looking for engineers who think holistically, automate relentlessly, and are fluent in the fast-moving world of AI tooling and infrastructure,but grounded in focused engineering principles.</p>
<p>Our AI Acceleration organization is building high-impact AI-powered applications that deliver real business value at speed. As a Staff AI Engineer, you&#39;ll play a critical role in designing, building, and deploying scalable AI-powered applications through proven software engineering practices combined with pragmatic use of modern data science and AI capabilities.</p>
<p>This is a role for top-tier engineers who are excited about applying AI in practical and scalable ways. We&#39;re looking for strong technical leaders who thrive at the intersection of disciplined software development and modern AI application.</p>
<p>You should be comfortable working across the full lifecycle of a product, from ideation, architecture, and data modelling to deployment, automation, and operations, while navigating ambiguity and driving toward execution. Strong systems thinking, an ownership mindset, and the ability to ship value fast are crucial.</p>
<p>You will work closely with engineers, data scientists, product managers, and business stakeholders to define problems, shape solutions, and ensure models perform reliably in the real world.</p>
<p>If you&#39;re passionate about building AI solutions that go beyond prototypes, solutions engineered for scale, reliability, and real-world value, AI Acceleration is the team for you.</p>
<p><strong>Job Responsibilities</strong></p>
<ul>
<li>Design, develop, and maintain production-grade AI applications and services using modern software engineering practices (CI/CD, testing, observability, cloud-native design).</li>
<li>Define and implement foundational platforms (e.g., conversational bots, AI-powered search, unstructured data processing, GenBI) that are reusable and scalable across the enterprise.</li>
<li>Lead architectural decisions, bringing standard processes to the software development lifecycle and to explainable, responsible AI.</li>
<li>Lead multi-functional team initiatives, embedded projects with business stakeholders, to rapidly build and deploy AI solutions that tackle high-priority problems.</li>
<li>Evaluate and integrate existing AI tools, frameworks, and APIs (e.g., LLMs, vector DBs, retrieval-augmented generation) into robust applications.</li>
<li>Champion automation in workflows, from data ingestion and preprocessing to model integration and deployment. Define their success criteria, metrics, and standard operating procedures.</li>
<li>Partner with data scientists, product managers, and other engineers to ensure end-to-end delivery and reliability of AI products.</li>
<li>Stay current with emerging AI technologies, but prioritize practical application and delivery over experimental research.</li>
<li>Contribute to the internal knowledge base, tooling libraries, and documentation to scale engineering practices across the organization.</li>
<li>Mentor other engineers and data scientists and provide technical leadership across projects, helping set the standard for rigor and impact.</li>
</ul>
<p><strong>Job Qualifications</strong></p>
<ul>
<li>Required:</li>
<li>7+ years of professional software engineering experience; ability to independently design and ship complex systems in production.</li>
<li>Strong programming skills in Python (preferred), Java, or similar languages, with experience in developing microservices, APIs, and backend systems.</li>
<li>Solid understanding of software architecture, cloud infrastructure (AWS, Azure, or GCP), and modern DevOps practices.</li>
<li>Experience integrating machine learning models into production systems (e.g., LLMs via APIs, fine-tuning, RAG patterns, embeddings, agents and crew of agents etc.).</li>
<li>Experience with large language models (LLMs), vector-based search, retrieval-augmented generation (RAG), or unstructured data processing.</li>
<li>Ability to move quickly while maintaining code quality, test coverage, and operational excellence.</li>
<li>Strong problem-solving skills and a bias for action, with the ability to navigate ambiguity and lead through complexity.</li>
<li>Strong experience with technical mentorship and cross-team influence.</li>
<li>Ability to translate complex technical ideas into clear business insights and communicate effectively with cross-functional partners.</li>
<li>Preferred:</li>
<li>Familiarity with AI/ML tools such as LangChain, Haystack, Hugging Face, Weaviate, or similar ecosystems.</li>
<li>Experience using GenAI frameworks such as LlamaIndex, Crew AI, AutoGen, or similar agentic/LLM orchestration toolkits.</li>
<li>Experience building reusable modeling components or contributing to internal ML platforms.</li>
<li>Background in working with embedded teams or in forward-deployed environments where rapid iteration and close business collaboration are key.</li>
<li>Proficiency in Python and common ML/data science libraries (e.g., scikit-learn, pandas, NumPy, PyTorch, TensorFlow).</li>
<li>Solid knowledge of machine learning fundamentals, including supervised and unsupervised learning, model evaluation, and statistical inference.</li>
<li>Exposure to working with unstructured data (documents, conversations, images) and transforming it into usable structured formats.</li>
<li>Experience building chatbots, search systems, or generative AI interfaces.</li>
<li>Background in working within platform engineering or internal developer tools teams.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, Java, Software architecture, Cloud infrastructure, DevOps practices, Machine learning, Large language models, Vector-based search, Retrieval-augmented generation, Unstructured data processing, LangChain, Haystack, Hugging Face, Weaviate, LlamaIndex, Crew AI, AutoGen, scikit-learn, pandas, NumPy, PyTorch, TensorFlow</Skills>
      <Category>Engineering</Category>
      <Industry>Energy</Industry>
      <Employername>bp</Employername>
      <Employerlogo>https://logos.yubhub.co/careers.bp.com.png</Employerlogo>
      <Employerdescription>bp is a multinational oil and gas company that delivers energy to the world, today and tomorrow.</Employerdescription>
      <Employerwebsite>https://careers.bp.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://careers.bp.com/job-description/RQ109869?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>India, Pune</Location>
      <Country>India</Country>
      <Postedate>2026-04-25</Postedate>
    </job>
    <job>
      <externalid>94fdb80f-cee</externalid>
      <Title>Cloud Data Engineer</Title>
      <Description><![CDATA[<p>Part of The Brandtech Group, fifty-five is a data consultancy helping brands collect, analyse and activate their data across paid, earned and owned channels to increase their marketing ROI and improve customer experience.</p>
<p>As part of the company&#39;s continued expansion into cloud services in APAC, we are hiring a Cloud Data Engineer to join our Taipei team.</p>
<p>This person will work closely with our clients in Taiwan and the region, collaborating with both our local and global engineering teams.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Design and implement data architectures and pipelines for cloud and digital analytics projects on cloud platforms</li>
<li>Deliver hands-on technical services including cloud migration, data transformation, data warehousing, visualization, and advanced analytics</li>
<li>Set up CI/CD pipelines and deployment workflows to ensure proper integration of cloud infrastructure and data pipelines</li>
<li>Streamline and automate processes to optimize performance and cost-efficiency for digital analytics platforms</li>
<li>Support pre-sales activities with local consultants (e.g. demo development, RFP contribution, technical solutioning)</li>
<li>Collaborate with Global Engineering team to develop and deliver POCs for cloud and data-related use cases</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>University degree in Computer Science, Information Systems, or related disciplines</li>
<li>Minimum 1 year of experience with cloud data platforms (GCP preferred; AWS or Azure also welcome)</li>
<li>Familiar with data engineering concepts and tools (e.g. BigQuery, Dataflow, Pub/Sub, Airflow, etc.)</li>
<li>Proficient in one or more programming languages (e.g. Python, Java)</li>
<li>Knowledge of API design, microservices, and DevOps practices (CI/CD, version control, containerization)</li>
<li>Good understanding of data analytics, data warehousing, and visualization (e.g. Looker, Data Studio, Tableau)</li>
<li>Experience with website or mobile app tracking implementation is a plus</li>
<li>Professional cloud certification (GCP, AWS, or Azure) is a plus</li>
<li>Able to communicate technical concepts clearly to non-technical stakeholders</li>
<li>Strong problem-solving skills, self-driven, and collaborative</li>
<li>Fluent in English and Mandarin Chinese</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Exposure to cloud automation, marketing platforms, and media data analytics projects</li>
<li>Opportunity to work with our global consulting and engineering teams to engage our clients from diverse industries around the world</li>
<li>20 days Annual Leave</li>
<li>Hybrid working (maximum 2 days a week work-from-home policy)</li>
<li>Regular team activities including TGIF, team lunch and Off-site!</li>
<li>A multicultural environment with employees from over 20 countries</li>
<li>Values centered on excellence, caring and sharing</li>
<li>Continuous (and certified) training on the digital ecosystem and technologies (initial training for all new employees, followed by ongoing training sessions, etc.)</li>
<li>Particular importance given to work-life balance, well-being, and the right to disconnect</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>cloud data platforms, BigQuery, Dataflow, Pub/Sub, Airflow, Python, Java, API design, microservices, DevOps practices, data analytics, data warehousing, visualization, website or mobile app tracking implementation, professional cloud certification</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>fifty-five</Employername>
      <Employerlogo>https://logos.yubhub.co/fifty-five.com.png</Employerlogo>
      <Employerdescription>fifty-five is a data consultancy helping brands collect, analyse and activate their data across paid, earned and owned channels to increase their marketing ROI and improve customer experience. It has over 300 employees globally.</Employerdescription>
      <Employerwebsite>https://www.fifty-five.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.workable.com/view/txdbDrb5JndzD9ytk688xh/hybrid-cloud-data-engineer---taiwan-in-taipei-at-fifty-five?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Taipei</Location>
      <Country>Taiwan</Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>8bdccb70-cb1</externalid>
      <Title>Data Engineer</Title>
      <Description><![CDATA[<p>Join us at Electronic Arts, where you will help inspire the world to play by building data solutions that empower game creators and reach millions globally. Reporting to the Director of Engineering, you will work with our Game Developer Experience team to design, develop, and maintain modern data pipelines on our Azure platform. You will collaborate with us to transform raw data into actionable insights, supporting analytics and real-time reporting for our iconic franchises. Our inclusive culture values your unique perspective and encourages continuous learning and growth.</p>
<p>Responsibilities:</p>
<ul>
<li>Design and maintain data pipelines using Azure Data Factory, Data Lake, and Power BI for our game development teams.</li>
<li>Collaborate with us and data consumers to model, ingest, and improve data from multiple sources, ensuring reliable and accessible datasets.</li>
<li>Guide partners in best practices for data reliability, performance, and impactful reporting using Power BI.</li>
<li>Participate in a production support rotation every 3-5 weeks, helping us ensure smooth operations and continuous improvement.</li>
</ul>
<p>Required Qualifications:</p>
<ul>
<li>2+ years of experience with Azure Data technologies, including Data Factory, Data Lake, Data Explorer, or Power BI.</li>
<li>2+ years of Azure infrastructure management experience.</li>
<li>2+ years of database development, queries, and ETL processes.</li>
<li>2+ years of experience with DevOps practices such as Terraform, CI/CD systems, or observability platforms.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Azure Data Factory, Azure Data Lake, Power BI, Azure infrastructure management, database development, ETL processes, DevOps practices, Terraform, CI/CD systems, observability platforms</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Electronic Arts</Employername>
      <Employerlogo>https://logos.yubhub.co/jobs.ea.com.png</Employerlogo>
      <Employerdescription>Electronic Arts is a multinational video game developer and publisher headquartered in Redwood City, California. It has a diverse portfolio of games and experiences.</Employerdescription>
      <Employerwebsite>https://jobs.ea.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ea.com/en_US/careers/JobDetail/Data-Engineer/213642?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Orlando</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>21f5f6c3-734</externalid>
      <Title>Data Engineer</Title>
<Description><![CDATA[<p>About the Role</p>
<p>We are at a pivotal scaling point where our data ambitions have outpaced our current setup, and we need a Data Engineer to architect the professional-grade foundations of our platform.</p>
<p>This role exists to bridge the gap between &quot;getting data&quot; and &quot;engineering data,&quot; moving us from manual syncs to a fully automated ecosystem. By building custom pipelines and implementing a robust orchestration layer, you will directly enable our Operations teams and leadership to transition from basic reporting to sophisticated, AI-ready data products.</p>
<p>Your primary focus will be on Infrastructure-as-Code, orchestration, and building a resilient &quot;plumbing&quot; system that serves as the backbone for our entire Product and GTM strategy.</p>
<p>Your 12-Month Journey</p>
<p>During the first 3 months: you will learn about our existing stack (GCP, BigQuery, Airbyte, dbt) and understand the current pain points in our data flow. You will identify and execute &quot;low-hanging fruit&quot; improvements to our product usage analytics, providing immediate value to the Product and GTM teams. You’ll begin designing the blueprint for our custom data pipelines and the migration strategy for moving our infrastructure into Terraform.</p>
<p>Within 6 months: You will have deployed our new orchestration layer (e.g., Airflow or Dagster) and successfully transitioned our first set of custom pipelines to production. Collaborating with the Analytics Engineer, you will enable a unified view of our customer journey by successfully merging product usage data with CRM and billing data. At this point, a significant portion of our data infrastructure will be defined as code, reducing manual overhead and increasing deployment reliability.</p>
<p>After 1 year: you will take full strategic ownership of the data platform and its long-term architecture. You will act as the go-to technical expert for the leadership team, advising on the scalability of new data-driven features. You will lay the groundwork for AI and Machine Learning initiatives by ensuring our data warehouse has the right quality controls, governance, and low-latency access patterns in place.</p>
<p>What You’ll Be Doing</p>
<p>Architect Scalable Infrastructure-as-Code: Take our existing foundations to the next level by migrating all GCP and BigQuery resources into Terraform. You will establish automated CI/CD patterns to ensure our entire data environment is reproducible, version-controlled, and enterprise-ready.</p>
<p>Deploy State-of-the-Art Pipelines: Design, deploy, and operate high-quality production ELT pipelines. You will implement a modern orchestration layer (e.g., Airflow or Dagster) to build custom Python-based integrations while maintaining and optimizing our existing syncs.</p>
<p>Champion Data Quality &amp; Performance: Act as the guardian of our data platform. You will implement rigorous testing and monitoring protocols to ensure data is accurate and timely. You will proactively identify BigQuery bottlenecks, optimizing query performance and resource utilization.</p>
<p>Technical Roadmap &amp; Ownership: scope and architect end-to-end data flows from production source to warehouse. Manage your own technical backlog, prioritizing infrastructure stability over technical debt. You will ensure platform security and SOC2 compliance through PII masking, data contracts, and robust access controls.</p>
<p>Collaboration: You will work in a tight loop with the Analytics Engineer to turn raw data into actionable products. You will partner daily with DataOps and RevOps to understand business requirements, with occasional strategic syncs with DevOps and R&amp;D to align on production schema changes and global infrastructure standards.</p>
<p>What You Bring</p>
<ul>
<li>Solid experience in Data Engineering, with a track record of building and evolving data ingestion infrastructure in cloud environments.</li>
<li>The Modern Data Stack: familiarity with dbt and Airbyte/Fivetran, and an understanding of how these tools fit into a broader ecosystem.</li>
<li>Expertise in BigQuery (partitioning, clustering, IAM) and the broader GCP ecosystem; Infrastructure-as-Code (Terraform).</li>
<li>Hands-on experience with Airflow, Dagster, or similar orchestration tools. You know how to design DAGs that are resilient and easy to debug.</li>
<li>DevOps practices in the data context: familiarity with CI/CD best practices as they apply to data (data testing, automated deployments).</li>
<li>Programming: expert-level Python and advanced SQL. You are comfortable writing clean, testable, and modular code.</li>
<li>Comfortable in a fast-paced environment.</li>
<li>Project management skills: capable of managing stakeholders, explaining complicated technical trade-offs to non-technical users, and handling your own project scoping and backlog management.</li>
<li>Fluency in English, both written and spoken, at a minimum C1 level.</li>
</ul>
<p>What We Offer</p>
<ul>
<li>Flexibility to work from home in the Netherlands and from our beautiful canal-side office in Amsterdam</li>
<li>A chance to be part of and shape one of the most ambitious scale-ups in Europe</li>
<li>Work in a diverse and multicultural team</li>
<li>€1,500 annual training budget plus internal training</li>
<li>Pension plan, travel reimbursement, and wellness perks</li>
<li>28 paid holiday days + 2 additional days to relax in 2026</li>
<li>Work from anywhere for 4 weeks/year</li>
<li>An inclusive and international work environment with a whole lot of fun thrown in!</li>
<li>Apple MacBook and tools</li>
<li>€200 Home Office budget</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>EUR 70000–90000 / year</Salaryrange>
      <Skills>Data Engineering, Cloud environments, dbt, Airbyte/Fivetran, BigQuery, GCP ecosystem, Infrastructure-as-Code, Terraform, Airflow, Dagster, Python, SQL, CI/CD best practices, DevOps practices</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Tellent</Employername>
      <Employerlogo>https://logos.yubhub.co/careers.tellent.com.png</Employerlogo>
      <Employerdescription>Tellent is a Talent Management Suite designed to empower HR &amp; People teams across the entire employee journey, with 250+ team members globally, 7,000+ customers in 100+ countries.</Employerdescription>
      <Employerwebsite>https://careers.tellent.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://careers.tellent.com/o/data-engineer?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Amsterdam</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>8317ba42-502</externalid>
      <Title>Senior Technical Solutions Engineer (Platform)</Title>
<Description><![CDATA[<p>We are seeking a highly skilled Frontline Senior Technical Solutions Engineer with 7+ years of experience to join our Platform Support team.</p>
<p>This role is pivotal in delivering exceptional support for our Databricks Data Intelligence platform, addressing complex technical challenges, and ensuring the seamless operation of our data solutions.</p>
<p>As a frontline engineer, you will be the primary point of contact for critical issues, working closely with both internal teams and customers to resolve high-impact problems and drive platform improvements.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Frontline Support: Serve as the primary technical point of contact for escalated issues related to the Databricks Data Intelligence platform. Provide expert-level troubleshooting, diagnostics, and resolution for complex problems affecting system performance and reliability.</li>
<li>Customer Interaction: Engage with customers directly to understand their technical issues and requirements. Provide timely, clear, and actionable solutions to ensure high levels of customer satisfaction.</li>
<li>Incident Management: Lead the resolution of high-priority incidents, coordinating with various teams to address and mitigate issues swiftly. Conduct thorough root cause analyses and develop preventive measures to avoid recurrence.</li>
<li>Collaboration: Work closely with engineering, product management, and DevOps teams to share insights, identify recurring issues, and drive improvements to the Databricks Data Intelligence platform.</li>
<li>Documentation and Knowledge Sharing: Create and maintain detailed documentation on support procedures, known issues, and solutions. Contribute to internal knowledge bases and create training materials to assist other support engineers.</li>
<li>Performance Monitoring: Monitor and analyze platform performance metrics to identify potential issues before they impact customers. Implement optimizations and enhancements to improve platform stability and efficiency.</li>
<li>Platform Upgrades: Manage and oversee the deployment of Databricks Data Intelligence platform upgrades and patches, ensuring minimal disruption to services and maintaining system integrity.</li>
<li>Innovation and Improvement: Stay abreast of industry trends and advancements in Databricks technology. Propose and drive initiatives to enhance platform capabilities and support processes.</li>
<li>Customer Feedback: Collect and analyze customer feedback to drive continuous improvement in support processes and platform features.</li>
</ul>
<p>Qualifications:</p>
<ul>
<li>Experience: 7+ years of hands-on experience in a technical support or engineering role related to the Databricks Data Intelligence platform, cloud data platforms, or big data technologies.</li>
<li>Technical Skills: A deep understanding of Databricks architecture and Apache Spark, along with experience in cloud platforms like AWS, Azure, or GCP, is essential. Strong capabilities in designing and managing data pipelines and distributed computing systems are required. Proficiency in Unix/Linux administration, familiarity with DevOps practices, and skills in log analysis and monitoring tools are also crucial for effective troubleshooting and system optimization.</li>
<li>Problem-Solving: Demonstrated ability to diagnose and resolve complex technical issues with a strong analytical and methodical approach.</li>
<li>Communication: Exceptional verbal and written communication skills, with the ability to effectively convey technical information to both technical and non-technical stakeholders.</li>
<li>Customer Focus: Proven experience in managing high-impact customer interactions and ensuring a positive customer experience.</li>
<li>Collaboration: Ability to work effectively in a team environment, collaborating with engineering, product, and customer-facing teams.</li>
<li>Education: Bachelor’s degree in Computer Science, Engineering, or a related field. An advanced degree or relevant certifications are highly desirable.</li>
</ul>
<p>Preferred Skills:</p>
<ul>
<li>Experience with additional big data tools and technologies such as Hadoop, Kafka, or NoSQL databases.</li>
<li>Familiarity with automation tools and CI/CD pipelines.</li>
<li>Understanding of data governance and compliance requirements.</li>
</ul>
<p>Why Join Us?</p>
<ul>
<li>Innovative Environment: Work with cutting-edge technology in a fast-paced, innovative company.</li>
<li>Career Growth: Opportunities for professional development and career advancement.</li>
<li>Team Culture: Collaborate with a talented and motivated team dedicated to excellence and continuous improvement.</li>
</ul>
<p><strong>Please note:</strong> This role involves working in the EMEA timezone.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Databricks architecture, Apache Spark, AWS, Azure, GCP, Unix/Linux administration, DevOps practices, log analysis and monitoring tools, Hadoop, Kafka, NoSQL databases, automation tools, CI/CD pipelines, data governance and compliance requirements</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data, analytics, and AI.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8041698002?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Bengaluru, India</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>c6b8d0e9-04e</externalid>
      <Title>Salesforce Manager, CRM Systems</Title>
      <Description><![CDATA[<p>As a Salesforce Engineering Manager at GitLab, you will lead the architectural vision and technical roadmap for our Salesforce platform and integrated go-to-market applications. You&#39;ll manage and mentor a team of Salesforce engineers while partnering closely with stakeholders across Sales, Marketing, Customer Experience, and Operations to translate business needs into a prioritized, high-impact engineering backlog.</p>
<p>A key part of this role is balancing long-term platform health with near-term business needs, while driving operational excellence through strong sprint management, clear delivery expectations, and continuous improvement. You&#39;ll also champion the integration of AI-native solutions across our operations and go-to-market systems and within team workflows, helping GitLab scale efficiently.</p>
<p>This role includes leading large, complex programs that drive business transformation, ensuring our platform remains scalable, secure, and compliant as we grow. Some examples of our projects:</p>
<ul>
<li>Building and evolving a scalable Salesforce architecture across integrated go-to-market applications</li>
<li>Advancing Salesforce DevOps practices (source control, continuous integration, and release management) and platform governance</li>
<li>Designing and delivering advanced Salesforce solutions and integrations with other critical business systems</li>
<li>Introducing AI-native capabilities and automation to improve system workflows and team productivity</li>
</ul>
<p>Responsibilities:</p>
<ul>
<li>Lead and mentor a team of Salesforce engineers, supporting career growth through coaching, feedback, and hands-on guidance.</li>
<li>Drive the architectural vision and technical roadmap for GitLab&#39;s Salesforce platform and integrated go-to-market applications, with a focus on scalability, performance, security, and compliance.</li>
<li>Champion the integration of AI-native solutions within operations and go-to-market systems and within engineering workflows to improve efficiency and unlock new capabilities.</li>
<li>Partner with cross-functional stakeholders (Sales, Marketing, Customer Experience, and Operations) to translate business needs into a prioritized engineering backlog and delivery plan.</li>
<li>Provide technical leadership on complex challenges by contributing to solution design, reviewing code, and guiding implementation across the Salesforce ecosystem.</li>
<li>Own operational excellence for the team, including sprint planning, capacity management, removing blockers, and ensuring high-velocity, high-quality delivery.</li>
<li>Establish and enforce engineering best practices, including source control, continuous integration and continuous deployment, release management, code quality, and platform governance.</li>
<li>Lead large-scale programs and integrations across Salesforce and other key business systems, introducing automation and process improvements to help GitLab scale.</li>
</ul>
<p>Requirements:</p>
<ul>
<li>7+ years of progressive experience in Salesforce development and architecture, building scalable solutions that support go-to-market systems.</li>
<li>2+ years of experience managing or leading technical teams, with a track record of coaching, giving actionable feedback, and growing team members.</li>
<li>Strong proficiency with Salesforce technologies including Apex, Lightning Web Components, Visualforce, and SOQL, and the ability to guide design and code review decisions.</li>
<li>Strong command of Salesforce DevOps practices, including Git-based source control, continuous integration and continuous delivery (CI/CD), and reliable release management.</li>
<li>Experience designing and overseeing integrations between Salesforce and other business systems, including using integration platform as a service (iPaaS) tools and automation solutions.</li>
<li>Ability to translate stakeholder needs into a prioritized engineering backlog, balancing long-term platform health with near-term business outcomes.</li>
<li>Excellent communication and relationship-building skills, with the ability to explain technical concepts clearly to non-technical partners across Sales, Marketing, Customer Experience, and Operations.</li>
<li>Comfort working in a remote, asynchronous environment, with a passion for using AI-native solutions to improve team productivity and the systems you build.</li>
</ul>
<p>About the team: The Salesforce Engineering Manager is part of the Enterprise Applications team, which is responsible for GitLab&#39;s critical business applications, including Salesforce, ServiceNow, Zuora, NetSuite, and more. This team helps GitLab scale by delivering new capabilities while maintaining a reliable, secure, and compliant production environment.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Salesforce, Apex, Lightning Web Components, Visualforce, SOQL, Git-based source control, Continuous integration and continuous delivery (CI/CD), Release management, Integration platform as a service (iPaaS) tools, Automation solutions, AI-native solutions, DevOps practices, Cloud computing, Containerization, Microservices architecture</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>GitLab</Employername>
      <Employerlogo>https://logos.yubhub.co/about.gitlab.com.png</Employerlogo>
      <Employerdescription>GitLab is a company that provides an intelligent orchestration platform for DevSecOps. It has over 50 million registered users and is trusted by more than 50% of the Fortune 100.</Employerdescription>
      <Employerwebsite>https://about.gitlab.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/gitlab/jobs/8184975002?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Remote, Bangalore</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>e772a5e2-9a4</externalid>
      <Title>Lead Software Engineer, API/SDK</Title>
      <Description><![CDATA[<p>We are looking for a Senior Software Engineer to join our rapidly growing team in Seattle, WA. In this role, you will work on our developer portal and generated SDKs to enable our partners to write complex technical integrations for the Lattice platform.</p>
<p>This position requires deep technical expertise in API design, cloud architecture, and hands-on development experience. If you thrive on solving complex technical challenges, enjoy creating great developer ecosystems, and are passionate about creating mission-critical solutions at scale, then this role is for you.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Work on our developer portal to enhance partner engagement and streamline the integration process</li>
<li>Develop infrastructure to simplify the exposure of APIs and SDKs for external developers</li>
<li>Build and maintain sample applications, SDKs, and technical frameworks that enable partners to implement sophisticated solutions</li>
<li>Provide technical leadership during partner onboarding, guiding their engineering teams through complex integration scenarios</li>
<li>Create proof-of-concept applications and reference architectures that demonstrate advanced Lattice capabilities and integration patterns</li>
<li>Collaborate with engineering teams to influence the platform roadmap based on real-world implementation challenges</li>
<li>Conduct technical reviews of partner architectures and provide recommendations for optimization and scalability</li>
<li>Troubleshoot complex integration issues and provide hands-on technical support for mission-critical deployments</li>
<li>Evangelize best practices for building resilient, secure, and performant applications on the Lattice platform</li>
</ul>
<p>Requirements:</p>
<ul>
<li>5+ years of experience as a Senior Software Engineer with customer-facing responsibilities</li>
<li>Strong programming experience in multiple languages (Python, Java, Go, C++, or similar) with demonstrated ability to build production-grade applications</li>
<li>Deep expertise in distributed systems architecture, including microservices, event-driven architectures, and API gateway patterns</li>
<li>Experience with CI/CD pipelines, infrastructure as code, and DevOps practices</li>
<li>Hands-on experience with cloud platforms (AWS, Azure, GCP) and containerization technologies (Docker, Kubernetes)</li>
<li>Proven track record of designing and implementing complex system integrations in enterprise environments</li>
<li>Experience with API technologies including REST, gRPC, GraphQL, and real-time communication protocols (WebSockets, message queues)</li>
<li>Strong understanding of security patterns, authentication/authorization frameworks, and data protection in distributed systems</li>
<li>Excellent technical communication skills with the ability to present complex architectural concepts to both technical and non-technical stakeholders</li>
<li>Must be a U.S. Person due to required access to U.S. export-controlled information or facilities</li>
</ul>
<p>Preferred Qualifications:</p>
<ul>
<li>Experience architecting solutions for defence, aerospace, or other mission-critical industries</li>
<li>Background in edge computing, IoT architectures, or real-time data processing systems</li>
<li>Knowledge of air-gapped environments, offline-first architectures, and high-availability system design</li>
<li>Open source contributions to architectural frameworks or developer tools</li>
<li>Experience mentoring engineering teams and leading technical design reviews</li>
<li>Advanced degree in Computer Science, Engineering, or related technical field</li>
</ul>
<p>Salary Range: $191,000-$253,000 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$191,000-$253,000 USD</Salaryrange>
      <Skills>API design, cloud architecture, hands-on development experience, distributed systems architecture, CI/CD pipelines, infrastructure as code, DevOps practices, cloud platforms, containerization technologies, complex system integrations, API technologies, security patterns, authentication/authorization frameworks, data protection, edge computing, IoT architectures, real-time data processing systems, air-gapped environments, offline-first architectures, high-availability system design, open source contributions, mentoring engineering teams, leading technical design reviews</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anduril Industries</Employername>
      <Employerlogo>https://logos.yubhub.co/anduril.com.png</Employerlogo>
      <Employerdescription>Anduril Industries is a defence technology company that designs, builds and sells military systems using advanced technology.</Employerdescription>
      <Employerwebsite>https://www.anduril.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/andurilindustries/jobs/4754841007?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Seattle, Washington, United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>e917812d-4c1</externalid>
      <Title>Senior Solutions Engineer, Okta</Title>
<Description><![CDATA[<p><strong>Secure Every Identity, from AI to Human</strong></p>
<p>Identity is the key to unlocking the potential of AI. Okta secures AI by building the trusted, neutral infrastructure that enables organisations to safely embrace this new era.</p>
<p>We are looking for builders and owners who operate with speed and urgency and execute with excellence. This is an opportunity to do career-defining work. We&#39;re all in on this mission. If you are too, let&#39;s talk.</p>
<p><strong>The Solutions Engineer Team</strong></p>
<p>We believe Solutions Engineers at Okta are involved in all stages of the customer&#39;s digital transformation. Solutions Engineers are experienced in using presentations, email, phone, and social media to connect with customers virtually and in person. We are looking for great teammates who can build and deliver sales presentations and customized product demonstrations to help educate Okta&#39;s customers (everyone from developers to product managers to C-level executives) on best practices during their cloud security technology journey.</p>
<p><strong>The Solutions Engineer Opportunity</strong></p>
<p>Reporting to the Senior Manager of Solutions Engineers Enterprise West, the Solutions Engineer is a functional business consultant, with a passion for technology and the advanced ability to develop, position and demonstrate product-specific solutions during sales cycles, while achieving quarterly and annual sales goals for an assigned territory.</p>
<p><strong>What you&#39;ll be doing</strong></p>
<ul>
<li>Work alongside the sales team as the technical and domain expert of a customer-facing sales team to help Customers understand the value of Okta&#39;s solutions</li>
<li>Understand customer challenges and business issues and provide product demonstrations to align our solutions with customer needs</li>
<li>Answer product feature and technical questions from customers, partners and Okta colleagues</li>
<li>Plan and deliver Proof Of Concepts for customers who have more complex use cases, collaborating with other Okta engineering teams as needed</li>
<li>Provide feedback to product management about product enhancements that can address customer needs and provide additional value</li>
<li>Share and learn best practices and re-usable assets with other Solutions Engineers to enhance the quality and efficiency of the team</li>
<li>Stay current on competitive analysis and market differentiation</li>
<li>Support marketing events including executive briefings, conferences, user groups, and trade shows</li>
<li>At times, be asked to learn and lead your team as a subject matter expert (SME) with various product solutions and/or competitive knowledge</li>
</ul>
<p><strong>What you&#39;ll bring to the role</strong></p>
<ul>
<li>8+ years pre-sales engineering experience and solution selling</li>
<li>A passion to serve the Customer, demonstrated in a customer-facing role, such as presales or professional services, but ideally in a pre-sales capacity</li>
<li>An ability to quickly communicate complex ideas around a technical topic, ideally on the fly, using a whiteboard</li>
<li>Deployed apps in cloud platforms: AWS, Azure, GCP, Vercel, etc.</li>
<li>Experience with at least one standard network security protocol (e.g., OAuth, OAuth2, SAML, LDAP)</li>
<li>Experience working with REST APIs and SDKs</li>
<li>Hands-on experience in one or more of the following areas: web development (JavaScript, HTML, frontend frameworks), backend development (Java, C#, Node.js, Python), or scripting (Bash, PowerShell, Perl)</li>
<li>An understanding of core security concerns within a typical application (password hashing, SSL/TLS, encryption at rest, XSS, XSRF)</li>
<li>An exceptional communicator and confident presenter, adept at simplifying complex technical concepts for diverse audiences, whether one-on-one or in large presentations to highly skilled experts</li>
<li>Territory management skills, including pipeline building and working with Sales counterparts to guide execution excellence</li>
<li>Diagramming experience for user journey flows, complex architecture diagrams, etc.</li>
<li>Typically 25% travel</li>
<li>Bachelor&#39;s degree in Engineering, Computer Science, MIS or a comparable field is preferred</li>
</ul>
<p><strong>You ideally have</strong></p>
<ul>
<li>Proven experience designing, implementing, and managing Privileged Access Management (PAM) solutions in enterprise environments</li>
<li>Deep understanding of the challenges and best practices associated with securing privileged accounts, including just-in-time (JIT) access, session recording, credential vaulting, and secrets management</li>
<li>Practical experience managing Windows Server, Active Directory, LDAP, and Federation services</li>
<li>Proficiency in UNIX/Linux, database security, network security, scripting, and DevOps practices preferred</li>
<li>Relevant experience with enterprise applications, security management, systems management, identity management, and/or policy management solutions, particularly in the areas of Identity and Privileged Access</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$215,000-$323,000 USD</Salaryrange>
      <Skills>pre-sales engineering, solution selling, cloud platforms, network security protocols, REST APIs, SDKs, web development, backend development, scripting, security concerns, communication, presentation, territory management, diagramming, Privileged Access Management, PAM, Windows Server, Active Directory, LDAP, Federation services, UNIX/Linux, database security, network security, DevOps practices</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Okta</Employername>
      <Employerlogo>https://logos.yubhub.co/okta.com.png</Employerlogo>
      <Employerdescription>Okta provides identity and access management solutions for organizations.</Employerdescription>
      <Employerwebsite>https://www.okta.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/okta/jobs/7553244?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Bellevue, Washington; Oregon; San Francisco, California</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>b34dfe7b-d84</externalid>
      <Title>Senior Software Engineer - Backend</Title>
      <Description><![CDATA[<p>We are seeking a Senior Software Engineer - Backend to join our team in Vancouver. As a Senior Software Engineer, you will be responsible for designing, developing, and maintaining large-scale distributed systems. You will work on a variety of projects, including Log Analytics, AI/BI, Unity Catalog Business Semantics, and Databricks Apps.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Design and develop large-scale distributed systems using Java, Scala, or C++</li>
<li>Develop and maintain high-quality code that meets the requirements of the project</li>
<li>Collaborate with cross-functional teams to identify and prioritize project requirements</li>
<li>Troubleshoot and resolve complex technical issues</li>
<li>Stay up-to-date with industry trends and emerging technologies</li>
</ul>
<p>Requirements:</p>
<ul>
<li>Bachelor&#39;s degree in Computer Science or related field</li>
<li>5+ years of experience in software development</li>
<li>Strong foundation in algorithms and data structures</li>
<li>Experience with cloud technologies, such as AWS, Azure, or GCP</li>
<li>Experience with security and systems that handle sensitive data</li>
<li>Good knowledge of SQL</li>
</ul>
<p>Preferred Qualifications:</p>
<ul>
<li>Master&#39;s degree in Computer Science or related field</li>
<li>Experience with big data technologies, such as Hadoop or Spark</li>
<li>Experience with containerization, such as Docker</li>
<li>Experience with DevOps practices, such as continuous integration and delivery</li>
</ul>
<p>Pay Range Transparency: The pay range for this role is $146,200-$201,100 CAD per year, depending on experience and qualifications.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$146,200-$201,100 CAD</Salaryrange>
      <Skills>Java, Scala, C++, Cloud technologies, Security, SQL, Big data technologies, Containerization, DevOps practices</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data, analytics, and AI. It has over 10,000 customers worldwide.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8093295002?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Vancouver, Canada</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>80b94e35-0f3</externalid>
      <Title>Staff Technical Solutions Engineer (Platform)</Title>
      <Description><![CDATA[<p>We are seeking a highly skilled Frontline Staff Technical Solutions Engineer with over 12+ years of experience to join our Platform Support team. This role is pivotal in delivering exceptional support for our Databricks Data Intelligence platform, addressing complex technical challenges, and ensuring the seamless operation of our data solutions.</p>
<p>As a frontline engineer, you will be the primary point of contact for critical issues, working closely with both internal teams and customers to resolve high-impact problems and drive platform improvements.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Frontline Support: Serve as the primary technical point of contact for escalated issues related to the Databricks Data Intelligence platform. Provide expert-level troubleshooting, diagnostics, and resolution for complex problems affecting system performance and reliability.</li>
<li>Customer Interaction: Engage with customers directly to understand their technical issues and requirements. Provide timely, clear, and actionable solutions to ensure high levels of customer satisfaction.</li>
<li>Incident Management: Lead the resolution of high-priority incidents, coordinating with various teams to address and mitigate issues swiftly. Conduct thorough root cause analyses and develop preventive measures to avoid recurrence.</li>
<li>Collaboration: Work closely with engineering, product management, and DevOps teams to share insights, identify recurring issues, and drive improvements to the Databricks Data Intelligence platform.</li>
<li>Documentation and Knowledge Sharing: Create and maintain detailed documentation on support procedures, known issues, and solutions. Contribute to internal knowledge bases and create training materials to assist other support engineers.</li>
<li>Performance Monitoring: Monitor and analyze platform performance metrics to identify potential issues before they impact customers. Implement optimizations and enhancements to improve platform stability and efficiency.</li>
<li>Platform Upgrades: Manage and oversee the deployment of Databricks Data Intelligence platform upgrades and patches, ensuring minimal disruption to services and maintaining system integrity.</li>
<li>Innovation and Improvement: Stay abreast of industry trends and advancements in Databricks technology. Propose and drive initiatives to enhance platform capabilities and support processes.</li>
<li>Customer Feedback: Collect and analyze customer feedback to drive continuous improvement in support processes and platform features.</li>
</ul>
<p>Qualifications:</p>
<ul>
<li>Experience: Minimum of 12 years of hands-on experience in a technical support or engineering role related to Databricks Data Intelligence platform, cloud data platforms, or big data technologies.</li>
<li>Technical Skills: A deep understanding of Databricks architecture and Apache Spark, along with experience in cloud platforms such as AWS, Azure, or GCP, is essential. Strong capabilities in designing and managing data pipelines and distributed computing are required. Proficiency in Unix/Linux administration, familiarity with DevOps practices, and skills in log analysis and monitoring tools are also crucial for effective troubleshooting and system optimisation.</li>
<li>Problem-Solving: Demonstrated ability to diagnose and resolve complex technical issues with a strong analytical and methodical approach.</li>
<li>Communication: Exceptional verbal and written communication skills, with the ability to effectively convey technical information to both technical and non-technical stakeholders.</li>
<li>Customer Focus: Proven experience in managing high-impact customer interactions and ensuring a positive customer experience.</li>
<li>Collaboration: Ability to work effectively in a team environment, collaborating with engineering, product, and customer-facing teams.</li>
<li>Education: Bachelor’s degree in Computer Science, Engineering, or a related field. Advanced degree or relevant certifications are highly desirable.</li>
</ul>
<p>Preferred Skills:</p>
<ul>
<li>Experience with additional big data tools and technologies such as Hadoop, Kafka, or NoSQL databases.</li>
<li>Familiarity with automation tools and CI/CD pipelines.</li>
<li>Understanding of data governance and compliance requirements.</li>
</ul>
<p>Why Join Us?</p>
<ul>
<li>Innovative Environment: Work with cutting-edge technology in a fast-paced, innovative company.</li>
<li>Career Growth: Opportunities for professional development and career advancement.</li>
<li>Team Culture: Collaborate with a talented and motivated team dedicated to excellence and continuous improvement.</li>
</ul>
<p>About Databricks</p>
<p>Databricks is the data and AI company. More than 10,000 organizations worldwide, including Comcast, Condé Nast, Grammarly, and over 50% of the Fortune 500, rely on the Databricks Data Intelligence Platform to unify and democratize data, analytics and AI. Databricks is headquartered in San Francisco, with offices around the globe, and was founded by the original creators of the lakehouse, Apache Spark, Delta Lake and MLflow. To learn more, follow Databricks on Twitter, LinkedIn and Facebook.</p>
<p>Benefits</p>
<p>At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of our employees. For specific details on the benefits offered in your region click here.</p>
<p>Our Commitment to Diversity and Inclusion</p>
<p>At Databricks, we are committed to fostering a diverse and inclusive culture where everyone can excel. We take great care to ensure that our hiring practices are inclusive and meet equal employment opportunity standards. Individuals looking for employment at Databricks are considered without regard to age, color, disability, ethnicity, family or marital status, gender identity or expression, language, national origin, physical and mental ability, political affiliation, race, religion, sexual orientation, socio-economic status, veteran status, and other protected characteristics.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Databricks architecture, Apache Spark, AWS, Azure, GCP, Unix/Linux administration, DevOps practices, log analysis and monitoring tools, Hadoop, Kafka, NoSQL databases, automation tools, CI/CD pipelines, data governance and compliance requirements</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data, analytics, and AI.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/7845334002?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Bengaluru, India</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>cfd72a5a-23a</externalid>
      <Title>Senior Forward Deployed Engineer</Title>
      <Description><![CDATA[<p>As a Senior Forward Deployed Engineer, you will sit at the intersection of engineering and customer delivery. You will work directly with state governments and other public sector partners to design, build, and deploy solutions that solve real-world identity challenges.</p>
<p>This role combines hands-on software development with consulting and customer success. You will represent SpruceID on the ground with our partners, ensuring our technology is deployed effectively and with lasting impact.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Lead the design and development of products and solutions for state governments and enterprise customers</li>
<li>Work side-by-side with customer delivery leads, engineers, and UX designers to ensure successful project delivery and deployments</li>
<li>Translate customer requirements into technical architectures and working implementations</li>
<li>Act as a trusted technical advisor to public sector partners, guiding them through standards adoption and best practices</li>
<li>Build backend software and full-stack web and mobile applications that meet public sector security, privacy, and accessibility standards</li>
<li>Contribute to new and existing Rust codebases that run on backend services, mobile devices, and in the browser</li>
<li>Manage customer deployments and support operations</li>
</ul>
<p><strong>Qualifications</strong></p>
<ul>
<li>5+ years of experience shipping backend software in statically typed languages such as C#, Rust, Go, or Java</li>
<li>Experience shipping modern web frontends that meet accessibility and security standards</li>
<li>Proven ability to lead cross-functional engineering efforts and deliver production systems</li>
<li>Hands-on experience with cloud infrastructure (AWS, GCP, or Azure) and devops practices</li>
<li>Excellent communication skills and experience working directly with customers, ideally in a consulting or delivery role</li>
<li>Based in the US and excited to engage directly with state government partners</li>
</ul>
<p><strong>Bonus Qualifications</strong></p>
<ul>
<li>Strong foundation in digital identity, cryptography, data privacy, or blockchain</li>
<li>Prior experience working with public sector software projects</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>backend software development, statically typed languages, cloud infrastructure, devops practices, Rust codebases, full-stack web and mobile applications, public sector security and privacy standards, digital identity, cryptography, data privacy, blockchain</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>SpruceID</Employername>
      <Employerlogo>https://logos.yubhub.co/spruceid.com.png</Employerlogo>
      <Employerdescription>SpruceID builds privacy-preserving, standards-based digital identity and credentialing solutions.</Employerdescription>
      <Employerwebsite>https://spruceid.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/sprucesystems/2f0a0482-c531-4466-a2a3-f795b83b0626?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>US</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>c3536285-729</externalid>
      <Title>Senior Full-Stack Engineer</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Senior Full-Stack Engineer to join our forward-deployed engineering team. You&#39;ll work directly with state governments and public sector partners, and enterprise clients to design, build, and deploy impactful identity solutions.</p>
<p>This role blends hands-on software development, technical consulting, and customer success: ideal for someone who thrives at the intersection of technology and mission-driven impact.</p>
<p>Responsibilities:</p>
<ul>
<li>Design, build, and deploy full-stack solutions for state governments and public sector partners.</li>
<li>Collaborate with customer delivery leads, engineers, and UX designers to ensure successful deployments.</li>
<li>Translate customer requirements into technical architectures and production-ready systems.</li>
<li>Serve as a trusted technical advisor for partners adopting open identity standards and privacy best practices.</li>
<li>Build backend services and full-stack web or mobile apps that meet public sector security, privacy, and accessibility standards.</li>
<li>Contribute to Rust codebases that run across backend, mobile, and browser environments.</li>
<li>Manage customer deployments and provide post-launch technical support.</li>
</ul>
<p>Qualifications:</p>
<ul>
<li>2+ years of experience building backend systems in statically typed languages (Rust, Go, C#, or Java).</li>
<li>Strong background in modern web frontends (React, TypeScript, or similar) with an eye for accessibility and security.</li>
<li>Proven ability to lead cross-functional engineering efforts and deliver production-grade systems.</li>
<li>Strong appreciation for open-source software, standards-based design, and community-driven development.</li>
<li>Hands-on experience with cloud infrastructure (AWS, GCP, or Azure) and DevOps practices.</li>
<li>Excellent communication skills and comfort working directly with customers or stakeholders.</li>
<li>Based in the U.S., excited to collaborate with state government partners.</li>
</ul>
<p>Nice to Have:</p>
<ul>
<li>Experience with digital identity, cryptography, data privacy, or blockchain technologies (e.g., Verifiable Credentials, Decentralized Identifiers, OAuth, OpenID Connect).</li>
<li>Familiarity with PostgreSQL, GraphQL, or RESTful API design and development.</li>
<li>Understanding of CI/CD pipelines, infrastructure as code, and automation using Terraform, or similar tools.</li>
<li>Exposure to mobile app development (React Native, Flutter, or similar frameworks).</li>
<li>Experience in security engineering, access control, federated identity, or PKI systems.</li>
<li>Prior work in public sector, government technology, or other high-compliance environments.</li>
<li>Interest in usability, accessibility (WCAG, Section 508), and inclusive product design.</li>
<li>Contributions to open-source projects or participation in digital identity standards bodies (W3C, DIF, IETF) is a plus.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Rust, Go, C#, Java, React, TypeScript, Cloud infrastructure, DevOps practices, PostgreSQL, GraphQL, RESTful API design, CI/CD pipelines, Infrastructure as code, Automation, Terraform, Mobile app development, Security engineering, Access control, Federated identity, PKI systems, Digital identity, Cryptography, Data privacy, Blockchain technologies, Verifiable Credentials, Decentralized Identifiers, OAuth, OpenID Connect, Usability, Accessibility, Inclusive product design</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>SpruceID</Employername>
      <Employerlogo>https://logos.yubhub.co/spruceid.com.png</Employerlogo>
      <Employerdescription>SpruceID builds privacy-preserving, standards-based digital identity and credentialing solutions for governments and enterprises.</Employerdescription>
      <Employerwebsite>https://spruceid.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/sprucesystems/b6ed1d39-d3e4-454f-8d8c-a5a65d64651f?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>United States</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>e5ecff17-84f</externalid>
      <Title>Senior Forward Deployed Engineer (AI Agent) - UK</Title>
      <Description><![CDATA[<p>Join us on this thrilling journey to revolutionise the workforce with AI.</p>
<p>The AI Agent team at Cresta is on a mission to create state-of-the-art AI Agents that solve practical problems for our customers. We are focused on leveraging the latest technologies in Large Language Models (LLMs) and AI Agent systems, while ensuring that the solutions we develop are cost-effective, secure, and reliable.</p>
<p>As an AI Agent Engineer, you&#39;ll be at the forefront of deploying AI agents that address real-world challenges. In this role, you will work closely with customers as well as our software and machine learning engineers, ensuring high-impact AI Agent deployments and contributing to the continuous improvement of our core AI platform. You’ll develop intelligent AI agents, integrate them seamlessly with external systems and offer hands-on technical expertise to ensure successful deployments.</p>
<p>Responsibilities:</p>
<ul>
<li>Develop, configure, deploy, and optimise AI agents using Cresta’s AI platform and tools.</li>
<li>Build AI agent integrations with external systems (APIs, databases, CRMs, etc.) to ensure seamless workflow integration.</li>
<li>Optimise AI agent performance (e.g. fine-tune prompts and configurations) and troubleshoot issues in complex enterprise environments.</li>
<li>Collaborate with customers and internal stakeholders to gather technical requirements and translate business needs into AI Agent solutions.</li>
<li>Conduct interactive demos and present compelling proof-of-concepts to prospective customers, proactively gather feedback, and iteratively refine solutions to meet objectives.</li>
<li>Define project milestones, create implementation plans, and coordinate execution with internal teams to ensure on-time delivery. Provide a tight feedback loop to our product and engineering teams, identifying gaps, building custom tooling, and influencing the roadmap through real-world deployment learnings.</li>
<li>Collaborate with PMs to define agent goals, iterate rapidly based on customer feedback, and shape product capabilities that maximise customer ROI.</li>
<li>Serve as a trusted technical advisor for the customer, guiding best practices for AI agent adoption and usage. Provide technical guidance on AI agent best practices, including architecture design, security considerations, and scalability planning.</li>
</ul>
<p>Requirements:</p>
<ul>
<li>Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.</li>
<li>3+ years of full-time experience in software development, consulting, AI/ML engineering, or system integration, or as a Forward Deployed Engineer (FDE).</li>
<li>Proficiency in Python and Golang, with the ability to write clean, efficient code.</li>
<li>Familiarity with AI/ML concepts. Hands-on experience with large language models (LLMs), and prompt engineering techniques are strongly preferred.</li>
<li>Strong understanding of general AI agent frameworks, function calling, and retrieval-augmented generation (RAG). Hands-on experience of building such a system is strongly preferred.</li>
<li>Experience with cloud platforms (AWS, GCP, or Azure) and DevOps practices (CI/CD, containerisation, monitoring).</li>
<li>Hands-on experience with integrating systems via APIs, webhooks, and data pipelines.</li>
<li>Excellent communication and project management skills.</li>
<li>Ability to use data-driven decision-making, including A/B testing and performance monitoring, to refine solutions.</li>
<li>You thrive in cross-functional environments, working hand-in-hand with PMs and engineers to turn real customer problems into scalable AI solutions.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, Golang, AI/ML, Large Language Models (LLMs), Prompt engineering, General AI agent frameworks, Function calling, Retrieval-augmented generation (RAG), Cloud platforms, DevOps practices, APIs, Webhooks, Data pipelines</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Cresta</Employername>
      <Employerlogo>https://logos.yubhub.co/cresta.ai.png</Employerlogo>
      <Employerdescription>Cresta is a technology company that specialises in artificial intelligence (AI) and machine learning (ML) solutions for contact centres.</Employerdescription>
      <Employerwebsite>https://www.cresta.ai/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/cresta/jobs/5097513008?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>United Kingdom (Remote)</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>39c59172-1f6</externalid>
      <Title>Senior Forward Deployed Engineer (AI Agent) - Germany</Title>
      <Description><![CDATA[<p>Join us on this thrilling journey to revolutionize the workforce with AI.
The AI Agent team at Cresta is on a mission to create state-of-the-art AI Agents that solve practical problems for our customers. We are focused on leveraging the latest technologies in Large Language Models (LLMs) and AI Agent systems, while ensuring that the solutions we develop are cost-effective, secure, and reliable.</p>
<p>As an AI Agent Engineer, you&#39;ll be at the forefront of deploying AI agents that address real-world challenges. In this role, you will work closely with customers as well as our software and machine learning engineers, ensuring high-impact AI Agent deployments and contributing to the continuous improvement of our core AI platform. You’ll develop intelligent AI agents, integrate them seamlessly with external systems and offer hands-on technical expertise to ensure successful deployments.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Develop, configure, deploy, and optimize AI agents using Cresta’s AI platform and tools.</li>
<li>Build AI agent integrations with external systems (APIs, databases, CRMs, etc.) to ensure seamless workflow integration.</li>
<li>Optimize AI agent performance (e.g. fine-tune prompts and configurations) and troubleshoot issues in complex enterprise environments.</li>
<li>Collaborate with customers and internal stakeholders to gather technical requirements and translate business needs into AI Agent solutions.</li>
<li>Conduct interactive demos and present compelling proof-of-concepts to prospective customers, proactively gather feedback, and iteratively refine solutions to meet objectives.</li>
<li>Define project milestones, create implementation plans, and coordinate execution with internal teams to ensure on-time delivery. Provide a tight feedback loop to our product and engineering teams, identifying gaps, building custom tooling, and influencing the roadmap through real-world deployment learnings.</li>
<li>Collaborate with PMs to define agent goals, iterate rapidly based on customer feedback, and shape product capabilities that maximize customer ROI.</li>
<li>Serve as a trusted technical advisor for the customer, guiding best practices for AI agent adoption and usage. Provide technical guidance on AI agent best practices, including architecture design, security considerations, and scalability planning.</li>
</ul>
<p>Requirements:</p>
<ul>
<li>Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.</li>
<li>3+ years of full-time experience in software development, consulting, AI/ML engineering, or system integration, or as a Forward Deployed Engineer (FDE)</li>
<li>Proficiency in Python and Golang, with the ability to write clean, efficient code.</li>
<li>Familiarity with AI/ML concepts. Hands-on experience with large language models (LLMs), and prompt engineering techniques are strongly preferred.</li>
<li>Strong understanding of general AI agent frameworks, function calling, and retrieval-augmented generation (RAG). Hands-on experience of building such a system is strongly preferred.</li>
<li>Experience with cloud platforms (AWS, GCP, or Azure) and DevOps practices (CI/CD, containerization, monitoring).</li>
<li>Hands-on experience with integrating systems via APIs, webhooks, and data pipelines.</li>
<li>Excellent communication and project management skills.</li>
<li>Ability to use data-driven decision-making, including A/B testing and performance monitoring, to refine solutions.</li>
<li>You thrive in cross-functional environments, working hand-in-hand with PMs and engineers to turn real customer problems into scalable AI solutions.</li>
</ul>
<p>Compensation for this position includes a base salary, equity, and a variety of benefits. Actual base salaries will be based on candidate-specific factors, including experience, skillset, and location, and local minimum pay requirements as applicable.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, Golang, Large Language Models (LLMs), AI/ML concepts, AI agent frameworks, function calling, retrieval-augmented generation (RAG), cloud platforms, DevOps practices, APIs, webhooks, data pipelines</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Cresta</Employername>
      <Employerlogo>https://logos.yubhub.co/cresta.ai.png</Employerlogo>
      <Employerdescription>Cresta develops a platform that combines AI and human intelligence to help contact centers discover customer insights and behavioral best practices.</Employerdescription>
      <Employerwebsite>https://www.cresta.ai/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/cresta/jobs/5137369008?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Berlin, Germany (Hybrid)</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>62f166c0-970</externalid>
      <Title>Senior Forward Deployed Engineer (AI Agent)</Title>
      <Description><![CDATA[<p>Join us on this thrilling journey to revolutionize the workforce with AI.</p>
<p>At Cresta, the AI Agent team is on a mission to create state-of-the-art AI Agents that solve practical problems for our customers. We are focused on leveraging the latest technologies in Large Language Models (LLMs) and AI Agent systems, while ensuring that the solutions we develop are cost-effective, secure, and reliable.</p>
<p>As an AI Agent Engineer, you&#39;ll be at the forefront of deploying AI agents that address real-world challenges. In this role, you will work closely with customers as well as our software and machine learning engineers, ensuring high-impact AI Agent deployments and contributing to the continuous improvement of our core AI platform. You’ll develop intelligent AI agents, integrate them seamlessly with external systems, and offer hands-on technical expertise to ensure successful deployments.</p>
<p>This position requires strong engineering skills, adaptability, and customer engagement. If you are self-driven, analytical, and eager to leverage AI in practical applications, this role is for you.</p>
<p>Our team is looking for someone with a strong background in software development, AI/ML engineering, or forward deployed engineering. You should have experience with cloud platforms (AWS, GCP, or Azure) and DevOps practices (CI/CD, containerization, monitoring). Additionally, you should have hands-on experience with integrating systems via APIs, webhooks, and data pipelines.</p>
<p>In this role, you will:</p>
<ul>
<li>Develop, configure, deploy, and optimize AI agents using Cresta’s AI platform and tools.</li>
<li>Build AI agent integrations with external systems (APIs, databases, CRMs, etc.) to ensure seamless workflow integration.</li>
<li>Optimize AI agent performance (e.g. fine-tune prompts and configurations) and troubleshoot issues in complex enterprise environments.</li>
<li>Collaborate with customers and internal stakeholders to gather technical requirements and translate business needs into AI Agent solutions.</li>
<li>Conduct interactive demos and present compelling proof-of-concepts to prospective customers, proactively gather feedback, and iteratively refine solutions to meet objectives.</li>
<li>Define project milestones, create implementation plans, and coordinate execution with internal teams to ensure on-time delivery. Provide a tight feedback loop to our product and engineering teams, identifying gaps, building custom tooling, and influencing the roadmap through real-world deployment learnings.</li>
<li>Collaborate with PMs to define agent goals, iterate rapidly based on customer feedback, and shape product capabilities that maximize customer ROI.</li>
<li>Serve as a trusted technical advisor for the customer, guiding best practices for AI agent adoption and usage. Provide technical guidance on AI agent best practices, including architecture design, security considerations, and scalability planning.</li>
</ul>
<p>We offer a comprehensive and people-first benefits package to support you at work and in life:</p>
<ul>
<li>A variety of medical benefits designed to fit your stage of life</li>
<li>Flexible vacation time to promote a healthy work-life blend</li>
<li>Paid parental leave to support you and your family</li>
</ul>
<p>Cresta’s approach to compensation is simple: recognize impact, reward excellence, and invest in our people. We offer competitive, location-based pay that reflects the market and what each individual brings to the table.</p>
<p>The posted base salary range represents what we expect to pay for this role in a given location. Final offers are shaped by factors like experience, skills, education, and geography. In addition to base pay, total compensation includes equity and a comprehensive benefits package for you and your family.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, Golang, AI/ML, Large Language Models, AI Agent systems, Cloud platforms, DevOps practices, APIs, webhooks, data pipelines</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Cresta</Employername>
      <Employerlogo>https://logos.yubhub.co/cresta.ai.png</Employerlogo>
      <Employerdescription>Cresta is a technology company that specializes in AI-powered contact center solutions.</Employerdescription>
      <Employerwebsite>https://www.cresta.ai/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/cresta/jobs/5107283008?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Australia (Remote)</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>16154a5a-9a0</externalid>
      <Title>Product Engineer</Title>
      <Description><![CDATA[<p><strong>About the Role</strong></p>
<p>As a Product Engineer on the Growth team at Cursor, you&#39;ll design and implement systems and features related to Onboarding, Monetization, and the core product experience.</p>
<p><strong>You may be a fit if</strong></p>
<p>You have an entrepreneurial spirit and love creating outsized business impact. You want to be at the frontier of AI transformation with the best companies in the world. You&#39;re passionate about building great products that blend excellent engineering with a taste for models and design. You have a propensity for creative ideas and a knack for making powerful tools without compromising their ease of use.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Design and implement systems and features related to Onboarding, Monetization, and the core product experience.</li>
<li>Work closely with the Growth team to develop and maintain high-quality products.</li>
<li>Collaborate with cross-functional teams to ensure seamless product delivery.</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Competitive salary: £80,000 - £100,000 per annum.</li>
<li>Opportunity to work with a leading technology organisation.</li>
<li>Collaborative and dynamic work environment.</li>
<li>Flexible working hours and remote work options.</li>
<li>Access to cutting-edge technology and tools.</li>
<li>Professional development and growth opportunities.</li>
</ul>
<p><strong>Skills</strong></p>
<ul>
<li>Strong understanding of software development principles and practices.</li>
<li>Experience with cloud-based technologies, such as AWS or Google Cloud.</li>
<li>Proficiency in programming languages, such as Python or Java.</li>
<li>Knowledge of data structures and algorithms.</li>
<li>Experience with agile development methodologies.</li>
<li>Strong communication and collaboration skills.</li>
</ul>
<p><strong>Preferred Skills</strong></p>
<ul>
<li>Experience with machine learning and AI technologies.</li>
<li>Knowledge of DevOps practices and tools.</li>
<li>Experience with containerisation and orchestration.</li>
<li>Familiarity with cloud-based databases and data storage solutions.</li>
<li>Experience with CI/CD pipelines and automation tools.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>£80,000 - £100,000 per annum</Salaryrange>
      <Skills>software development, cloud-based technologies, programming languages, data structures, agile development methodologies, machine learning, AI technologies, DevOps practices, containerisation, cloud-based databases</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Cursor</Employername>
      <Employerlogo>https://logos.yubhub.co/cursor.com.png</Employerlogo>
      <Employerdescription>Cursor is a technology organisation that specialises in AI transformation, working with leading businesses to implement AI solutions.</Employerdescription>
      <Employerwebsite>https://cursor.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://cursor.com/careers/software-engineer-growth?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location></Location>
      <Country></Country>
      <Postedate>2026-03-08</Postedate>
    </job>
    <job>
      <externalid>82a0bb5c-fd2</externalid>
      <Title>Software Engineer, Identity Infrastructure Engineering</Title>
      <Description><![CDATA[<p><strong>Software Engineer, Identity Infrastructure Engineering</strong></p>
<p><strong>Location</strong></p>
<p>San Francisco; New York City; Remote - US; Seattle</p>
<p><strong>Employment Type</strong></p>
<p>Full time</p>
<p><strong>Location Type</strong></p>
<p>Hybrid</p>
<p><strong>Department</strong></p>
<p>IT</p>
<p><strong>Compensation</strong></p>
<ul>
<li>San Francisco, Seattle or New York City $230K – $385K • Offers Equity</li>
<li>Zone A $207K – $346.5K • Offers Equity</li>
<li>Zone B $184K – $308K • Offers Equity</li>
</ul>
<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>
<p><strong>Benefits</strong></p>
<ul>
<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>
<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>
<li>401(k) retirement plan with employer match</li>
<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>
<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>
<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>
<li>Mental health and wellness support</li>
<li>Employer-paid basic life and disability coverage</li>
<li>Annual learning and development stipend to fuel your professional growth</li>
<li>Daily meals in our offices, and meal delivery credits as eligible</li>
<li>Relocation support for eligible employees</li>
<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>
</ul>
<p><strong>About the Team</strong></p>
<p>Security is at the foundation of OpenAI’s mission to ensure that artificial general intelligence benefits all of humanity. The Identity Infrastructure Engineering team sits at the core of this effort, designing and building the identity and access management solutions that protect our model weights, customer data, and critical systems across multiple cloud environments. We partner with teams across OpenAI—Applied Engineering, Research, IT, and Security—to provide a secure and scalable platform for permissioning, orchestration, and innovative AI research.</p>
<p><strong>About the Role</strong></p>
<p>As a Software Engineer on the Identity Infrastructure Engineering team, you’ll be instrumental in creating, deploying, and operating foundational security tools and infrastructure. You will work with a broad range of technologies to support multi-cloud deployments, ensuring that researchers and engineers can safely build, test, and scale transformative AI systems. The role requires a balance of strong technical depth, cross-functional collaboration, and a passion for embedding secure-by-default principles into every layer of our stack.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Build new features for our IAM platform that seamlessly integrate with evolving cloud services, enabling teams to work efficiently while adhering to security best practices.</li>
<li>Drive security innovation by designing tools, processes, and architectures that protect data at scale and reinforce a secure development culture across the organization.</li>
<li>Collaborate cross-functionally with researchers, engineers, and compliance teams to address security requirements for multi-cloud deployments, large-scale model training, and emerging AI use cases.</li>
<li>Implement and refine access policies that strike the right balance between enabling rapid experimentation and protecting high-value assets, including model weights and customer data.</li>
<li>Troubleshoot complex identity or access issues across distributed systems, ensuring minimal downtime and a safe environment for AI research and product teams.</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>A background in building secure systems—from core IAM services to orchestration layers that manage credentials, roles, or policies at scale.</li>
<li>Proficiency in programming languages such as Python, Go, or similar, with a track record of writing high-quality, maintainable code.</li>
<li>Experience with modern cloud infrastructure (AWS, Azure, GCP) and familiarity with industry-standard security protocols (OAuth, SAML, OpenID Connect) and authentication/authorization patterns.</li>
<li>A security-focused mindset, with knowledge of threat modeling, risk assessment, and the ability to embed security features throughout the software development lifecycle.</li>
</ul>
<p><strong>Nice to Have</strong></p>
<ul>
<li>Experience with containerization (Docker, Kubernetes) and orchestration tools (e.g., Terraform, Ansible).</li>
<li>Familiarity with CI/CD pipelines and automated testing frameworks.</li>
<li>Knowledge of machine learning and AI concepts, including model training, deployment, and security.</li>
<li>Experience with cloud security services (e.g., AWS IAM, Azure Active Directory).</li>
<li>Familiarity with DevOps practices and tools (e.g., Jenkins, GitLab).</li>
</ul>
<p><strong>What You’ll Get</strong></p>
<ul>
<li>Competitive salary and equity package</li>
<li>Comprehensive benefits package, including medical, dental, and vision insurance</li>
<li>401(k) retirement plan with employer match</li>
<li>Paid parental leave and medical/caregiver leave</li>
<li>Flexible PTO and paid holidays</li>
<li>Professional development opportunities</li>
<li>Collaborative and dynamic work environment</li>
</ul>
<p><strong>How to Apply</strong></p>
<p>If you’re passionate about building secure systems and contributing to the development of cutting-edge AI technology, we encourage you to apply for this exciting opportunity. Please submit your resume, cover letter, and any relevant work samples or projects you’d like to share. We can’t wait to hear from you!</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$230K – $385K</Salaryrange>
      <Skills>Python, Go, AWS, Azure, GCP, OAuth, SAML, OpenID Connect, containerization, Docker, Kubernetes, orchestration tools, Terraform, Ansible, CI/CD pipelines, automated testing frameworks, machine learning, AI concepts, model training, deployment, security, cloud security services, AWS IAM, Azure Active Directory, DevOps practices, Jenkins, GitLab</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>OpenAI</Employername>
      <Employerlogo>https://logos.yubhub.co/openai.com.png</Employerlogo>
      <Employerdescription>OpenAI is a technology company that focuses on developing artificial intelligence (AI) systems. It was founded in 2015 and is headquartered in San Francisco, California.</Employerdescription>
      <Employerwebsite>https://jobs.ashbyhq.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/openai/551b0d0d-46c2-42fb-bb05-46e2fba8d4db?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>San Francisco; New York City; Remote - US; Seattle</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>fb4acb2b-bab</externalid>
      <Title>Security Reliability Engineering, Lead</Title>
      <Description><![CDATA[<p><strong>Security Reliability Engineering, Lead</strong></p>
<p><strong>Location</strong></p>
<p>San Francisco</p>
<p><strong>Employment Type</strong></p>
<p>Full time</p>
<p><strong>Department</strong></p>
<p>Security</p>
<p><strong>Compensation</strong></p>
<ul>
<li>$293K – $385K</li>
</ul>
<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>
<ul>
<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>
<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>
<li>401(k) retirement plan with employer match</li>
<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>
<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>
<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>
<li>Mental health and wellness support</li>
<li>Employer-paid basic life and disability coverage</li>
<li>Annual learning and development stipend to fuel your professional growth</li>
<li>Daily meals in our offices, and meal delivery credits as eligible</li>
<li>Relocation support for eligible employees</li>
<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>
</ul>
<p>More details about our benefits are available to candidates during the hiring process.</p>
<p>This role is at-will and OpenAI reserves the right to modify base pay and other compensation components at any time based on individual performance, team or company results, or market conditions.</p>
<p><strong>About the Team</strong></p>
<p>The Infrastructure Engineering function sits within IT and is responsible for reliably building, deploying, and operating the on-prem and hybrid environments that power internal services and critical R&amp;D environments.</p>
<p>This is a new, bootstrap team focused on applying strong Site Reliability Engineering discipline to environments where uptime, safety, recoverability, and security are non-negotiable. The team replaces bespoke, one-off infrastructure with standardized infrastructure-as-code building blocks that compound reliability and operational leverage as OpenAI scales.</p>
<p><strong>About the Role</strong></p>
<p>We are looking for a Security Reliability Engineering Lead to design, build, and operate reliable, secure, and scalable infrastructure that underpins identity, access, endpoint, and shared platform services across the company.</p>
<p>In this role, you will own infrastructure and identity systems end to end, from foundational design and provisioning through policy enforcement, upgrades, recovery, and day-two operations. You will establish durable, production-grade platforms that remove operational friction, enforce security by default, and enable teams to move faster with confidence.</p>
<p>This role is well suited for a senior engineer who thrives in ambiguity, enjoys owning complex systems end to end, and raises the reliability and security bar by replacing fragile implementations with standardized, repeatable infrastructure.</p>
<p>This role is based in our San Francisco HQ and requires in-office presence.</p>
<p><strong>In this role, you will:</strong></p>
<p><strong>Set direction and establish strong foundations</strong></p>
<ul>
<li>Define and evolve infrastructure patterns for on-prem and hybrid environments, including self-hosted platforms, vendor-supported systems, and lab environments.</li>
<li>Establish standardized, production-grade deployment and operational models that replace bespoke implementations.</li>
<li>Partner with IT, Security, Identity, and Network teams to ensure infrastructure meets reliability, security, and access requirements by design.</li>
<li>Design and mature the production architecture for IAM-adjacent platforms such as Microsoft Entra using SRE principles.</li>
<li>Establish common management rules and shared resources within Azure subscriptions to ensure consistent, policy-aligned operations.</li>
</ul>
<p><strong>Build, operate, and scale reliably</strong></p>
<ul>
<li>Own the full lifecycle of infrastructure systems, including deployment, upgrades, patching, recovery, and ongoing operations.</li>
<li>Operate and harden shared infrastructure provisioned through Infra Terraform, ensuring repeatability, auditability, and safe change management.</li>
<li>Design and implement infrastructure as code and configuration management to support shared services, identity-adjacent systems, and endpoint platforms using tools like Chef, Ansible, and Terraform.</li>
<li>Build and operate monitoring, alerting, and incident response mechanisms to meet high availability and recoverability targets.</li>
<li>Lead incident response and postmortems across infrastructure, identity-adjacent platforms, and fleet systems, driving durable fixes and shared learning.</li>
<li>Build and operate containerized and platform services, including Kubernetes and Docker-based workloads, using DevOps practices that emphasize reliability, repeatability, and safe change management.</li>
<li>Use Git-based workflows as the source of truth for infrastructure and policy changes, enabling review, auditability, and safe, reversible automation.</li>
</ul>
<p><strong>Automate for leverage and safety</strong></p>
<ul>
<li>Identify high-leverage automation opportunities that eliminate manual toil and reduce operational risk across infrastructure and access-related systems.</li>
<li>Implement guardrails, safety mechanisms, and progressive rollout patterns for infrastructure and policy enforcement changes.</li>
<li>Ensure automation is safe, observable, and resilient under failure conditions, particularly for shared services and high-blast-radius systems.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$293K – $385K</Salaryrange>
      <Skills>Security Reliability Engineering, Infrastructure as Code, Cloud Computing, Containerization, DevOps, Git, Terraform, Ansible, Chef, Kubernetes, Docker, Microsoft Entra, Azure, Identity and Access Management, Endpoint Security, Platform Services, Site Reliability Engineering, Cloud Security, Container Orchestration, Infrastructure Automation, Monitoring and Alerting, Incident Response, Postmortem Analysis, DevOps Practices, Cloud-Native Applications, Microservices Architecture</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>OpenAI</Employername>
      <Employerlogo>https://logos.yubhub.co/openai.com.png</Employerlogo>
      <Employerdescription>OpenAI is a technology company that specializes in artificial intelligence. It was founded in 2015 and is headquartered in San Francisco.</Employerdescription>
      <Employerwebsite>https://jobs.ashbyhq.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/openai/645ccd65-eb60-4eb7-b094-b01c2269638c?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>deca2d46-8fe</externalid>
      <Title>Software Engineer, Full Stack, Revenue Platform</Title>
      <Description><![CDATA[<p><strong>Software Engineer, Full Stack, Revenue Platform</strong></p>
<p><strong>Location</strong></p>
<p>San Francisco</p>
<p><strong>Employment Type</strong></p>
<p>Full time</p>
<p><strong>Location Type</strong></p>
<p>Hybrid</p>
<p><strong>Department</strong></p>
<p>Applied AI</p>
<p><strong>Compensation</strong></p>
<ul>
<li>$230K – $385K • Offers Equity</li>
</ul>
<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>
<p><strong>Benefits</strong></p>
<ul>
<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>
<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>
<li>401(k) retirement plan with employer match</li>
<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>
<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>
<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>
<li>Mental health and wellness support</li>
<li>Employer-paid basic life and disability coverage</li>
<li>Annual learning and development stipend to fuel your professional growth</li>
<li>Daily meals in our offices, and meal delivery credits as eligible</li>
<li>Relocation support for eligible employees</li>
<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>
</ul>
<p><strong>About the Team</strong></p>
<p>Revenue Platform sits at the intersection of customer experience, financial precision, and enterprise-grade reliability. We build both end-user experiences and the underlying platform capabilities that power invoicing, billing, payments, and revenue recognition across OpenAI. Our work spans high-leverage customer surfaces and deep, reusable platform primitives used by multiple teams. These are foundational systems that will support OpenAI’s growth for years to come, and we’re looking for engineers who care deeply about craftsmanship, correctness, and building platforms and experiences that scale gracefully and are a joy to build on.</p>
<p><strong>About the Role</strong></p>
<p>As a Full Stack Engineer on the Revenue Platform team, you will design, build, and operate platform services and user-facing interfaces that form the backbone of OpenAI’s commercial engine. You’ll collaborate with product, design, finance, and engineering partners to deliver intuitive customer experiences while also creating shared foundations that other teams can safely and efficiently build on. Your work will directly shape the reliability, scalability, and trustworthiness of OpenAI’s most critical financial workflows.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Build and evolve shared full-stack platform components including APIs, data models, services, and UI primitives that power billing, subscriptions, usage-based pricing, and enterprise entitlements across OpenAI.</li>
<li>Design scalable, reusable revenue workflows and abstractions that other product teams can compose to launch new offerings without reinventing core billing logic.</li>
<li>Partner closely with product, frontend, and backend engineers to deliver end-to-end revenue capabilities, ensuring platform components are intuitive to adopt and safe to extend.</li>
<li>Develop internal platforms and tools used by Finance, Accounting, Sales, Support, and Go-To-Market teams to manage, audit, and reason about revenue data efficiently.</li>
<li>Build automation and AI-powered capabilities within the Revenue Platform to reduce manual work, surface insights, and improve operational decision-making.</li>
<li>Help define the architecture, standards, and contracts for a shared revenue platform, balancing flexibility for product teams with correctness, reliability, and compliance.</li>
<li>Collaborate cross-functionally to translate ambiguous commercial, financial, and operational requirements into durable platform primitives that scale with OpenAI’s products and customer base.</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>5+ years of experience building full-stack web applications with strong fundamentals across frontend, backend, and API design.</li>
<li>Proficiency with modern frontend frameworks (e.g., React, TypeScript) and backend technologies (Python preferred; Node, Go, or similar also welcome).</li>
<li>Experience designing and implementing scalable, reusable platform components and revenue workflows.</li>
<li>Strong understanding of financial and commercial concepts, including billing, subscriptions, and revenue recognition.</li>
<li>Excellent communication and collaboration skills, with the ability to work effectively with cross-functional teams.</li>
<li>Strong problem-solving skills, with the ability to analyze complex technical and business problems and develop effective solutions.</li>
</ul>
<p><strong>Nice to Have</strong></p>
<ul>
<li>Experience with cloud-based platforms (e.g., AWS, GCP) and containerization (e.g., Docker).</li>
<li>Familiarity with DevOps practices and tools (e.g., CI/CD pipelines, monitoring, logging).</li>
<li>Experience with data modeling and database design.</li>
<li>Knowledge of machine learning and AI concepts, including natural language processing and computer vision.</li>
</ul>
<p><strong>What We Offer</strong></p>
<ul>
<li>Competitive salary and benefits package.</li>
<li>Opportunity to work with a talented and diverse team of engineers and product managers.</li>
<li>Collaborative and dynamic work environment.</li>
<li>Professional development opportunities, including training and mentorship.</li>
<li>Flexible work arrangements, including remote work options.</li>
<li>Access to cutting-edge technology and tools.</li>
<li>Recognition and rewards for outstanding performance.</li>
</ul>
<p><strong>How to Apply</strong></p>
<p>If you are a motivated and talented engineer who is passionate about building scalable and reliable platform components, we encourage you to apply for this role. Please submit your resume and a cover letter that outlines your experience and qualifications for the position. We look forward to hearing from you!</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$230K – $385K</Salaryrange>
      <Skills>full-stack web applications, frontend frameworks, backend technologies, API design, scalable platform components, revenue workflows, financial and commercial concepts, billing, subscriptions, revenue recognition, cloud-based platforms, containerization, DevOps practices, data modeling, database design, machine learning, AI concepts</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>OpenAI</Employername>
      <Employerlogo>https://logos.yubhub.co/openai.com.png</Employerlogo>
      <Employerdescription>OpenAI is a technology company that aims to ensure that artificial general intelligence benefits all of humanity. It was founded in 2015 and has since grown to become a leading player in the field of artificial intelligence.</Employerdescription>
      <Employerwebsite>https://jobs.ashbyhq.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/openai/8427b270-8440-400c-bc18-ff24c4f0f987?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>ec5cf5d1-bbe</externalid>
<Title>Unsolicited Application (m/f/d) Mainz-Kastel</Title>
      <Description><![CDATA[<p><strong>What you&#39;ll do</strong></p>
<p>You&#39;ll be part of a greener, safer, and better world of mobility. As a member of our team, you&#39;ll contribute to the development of innovative solutions for the automotive industry and other sectors.</p>
<p><strong>What you need</strong></p>
<p>To succeed in this role, you&#39;ll need to be a team player with a passion for mobility technology. You&#39;ll have a strong background in engineering, software development, or a related field, and be able to communicate effectively with colleagues and customers.</p>
<p><strong>Why this matters</strong></p>
<p>At AVL, we&#39;re committed to creating a better world of mobility. We believe that by working together, we can make a positive impact on the environment, society, and the economy. If you share our vision and values, we&#39;d love to hear from you.</p>
<p><strong>Job Description</strong></p>
<p>As a member of our team, you&#39;ll be responsible for developing and implementing innovative solutions for the automotive industry and other sectors. You&#39;ll work closely with colleagues and customers to understand their needs and develop solutions that meet their requirements.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Develop and implement innovative solutions for the automotive industry and other sectors</li>
<li>Collaborate with colleagues and customers to understand their needs and develop solutions that meet their requirements</li>
<li>Communicate effectively with colleagues and customers to ensure successful project delivery</li>
<li>Stay up-to-date with the latest developments in mobility technology and apply this knowledge to improve our solutions</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>Bachelor&#39;s or Master&#39;s degree in engineering, software development, or a related field</li>
<li>Strong background in mobility technology, software development, or a related field</li>
<li>Excellent communication and teamwork skills</li>
<li>Ability to work in a fast-paced environment and meet deadlines</li>
</ul>
<p><strong>Nice to have</strong></p>
<ul>
<li>Experience with agile development methodologies</li>
<li>Knowledge of cloud-based development tools and platforms</li>
<li>Familiarity with DevOps practices and tools</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Competitive salary and benefits package</li>
<li>Opportunities for professional growth and development</li>
<li>Collaborative and dynamic work environment</li>
<li>Flexible working hours and remote work options</li>
</ul>
<p><strong>How to Apply</strong></p>
<p>If you&#39;re interested in this opportunity, please submit your application, including your resume and a cover letter, to [insert contact information]. We look forward to hearing from you!</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>mobility technology, software development, agile development methodologies, cloud-based development tools, DevOps practices</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>AVL Deutschland GmbH</Employername>
      <Employerlogo>https://logos.yubhub.co/jobs.avl.com.png</Employerlogo>
      <Employerdescription>AVL Deutschland GmbH is a leading mobility technology company for development, simulation, and testing in the automotive industry and other sectors. They provide concepts, solutions, and methods in areas such as vehicle development and integration, e-mobility, driver assistance systems, and autonomous driving (ADAS/AD) and software for a greener, safer, and better world of mobility.</Employerdescription>
      <Employerwebsite>https://jobs.avl.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.avl.com/job/Mainz-Kastel-Initiativbewerbung-%28mwd%29-Mainz-Kastel/744434301/?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Mainz-Kastel</Location>
      <Country></Country>
      <Postedate>2025-12-19</Postedate>
    </job>
    <job>
      <externalid>e330a898-308</externalid>
      <Title>Data Engineer</Title>
      <Description><![CDATA[<p><strong>What you&#39;ll do</strong></p>
<p>At Porsche Engineering Romania, we drive innovation in mobility systems through advanced data solutions. We are looking for a Data Engineer to design and optimize data pipelines, integrate IoT and telemetry data, and ensure compliance with performance KPIs.</p>
<ul>
<li>Design and implement ETL/ELT processes for mobility data streams using AWS services.</li>
<li>Integrate data from multiple sources (IoT, telemetry, infrastructure systems).</li>
<li>Implement data models aligned with KPI monitoring requirements.</li>
<li>Ensure data accuracy, consistency, and compliance with security standards.</li>
<li>Implement audit and logging mechanisms for sensitive data.</li>
<li>Document data flows, architecture, and operational procedures.</li>
<li>Collaborate with international project teams.</li>
</ul>
<p><strong>What you need</strong></p>
<ul>
<li>Bachelor’s or Master’s degree in Information Technology or an equivalent education.</li>
<li>3+ years of proven experience in data engineering projects.</li>
<li>Strong skills in Python, SQL, and PySpark.</li>
<li>Experience with data modeling and KPI reporting using tools such as Power BI, Tableau, or Qlik.</li>
<li>Hands-on knowledge of AWS services (S3, Glue, Lambda, Flink, Kinesis, CloudWatch, Step Functions, Athena, ECS).</li>
<li>Familiarity with monitoring frameworks (OpenTelemetry, New Relic).</li>
<li>Good understanding of data security and compliance for sensitive information.</li>
<li>Knowledge of DevOps practices for data solutions (Terraform, CI/CD, monitoring).</li>
<li>Experience with SAP HANA, Java, and IoT in the automotive domain (e.g., ECU data) is a plus.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, SQL, PySpark, AWS services, data modeling, KPI reporting, data security, DevOps practices, SAP HANA, Java, IoT in the automotive domain</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Porsche Engineering Services GmbH</Employername>
      <Employerlogo>https://logos.yubhub.co/jobs.porsche.com.png</Employerlogo>
      <Employerdescription>Porsche Engineering Romania specializes in complex technical solutions at its two locations in Cluj-Napoca and Timisoara, including the development of intelligent and connected electric vehicles, electronics, and design.</Employerdescription>
      <Employerwebsite>https://jobs.porsche.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.porsche.com/index.php?ac=jobad&amp;id=18980&amp;utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Timisoara</Location>
      <Country></Country>
      <Postedate>2025-12-08</Postedate>
    </job>
    <job>
      <externalid>a0ca0eaa-e37</externalid>
      <Title>Data Engineer</Title>
      <Description><![CDATA[<p><strong>What you&#39;ll do</strong></p>
<p>At Porsche Engineering Romania, we drive innovation in mobility systems through advanced data solutions. We are looking for a Data Engineer to design and optimize data pipelines, integrate IoT and telemetry data, and ensure compliance with performance KPIs.</p>
<ul>
<li>Design and implement ETL/ELT processes for mobility data streams using AWS services.</li>
<li>Integrate data from multiple sources (IoT, telemetry, infrastructure systems).</li>
<li>Implement data models aligned with KPI monitoring requirements.</li>
<li>Ensure data accuracy, consistency, and compliance with security standards.</li>
<li>Implement audit and logging mechanisms for sensitive data.</li>
<li>Document data flows, architecture, and operational procedures.</li>
<li>Collaborate with international project teams.</li>
</ul>
<p><strong>What you need</strong></p>
<ul>
<li>Bachelor’s or Master’s degree in Information Technology or an equivalent education.</li>
<li>3+ years of proven experience in data engineering projects.</li>
<li>Strong skills in Python, SQL, and PySpark.</li>
<li>Experience with data modeling and KPI reporting using tools such as Power BI, Tableau, or Qlik.</li>
<li>Hands-on knowledge of AWS services (S3, Glue, Lambda, Flink, Kinesis, CloudWatch, Step Functions, Athena, ECS).</li>
<li>Familiarity with monitoring frameworks (OpenTelemetry, New Relic).</li>
<li>Good understanding of data security and compliance for sensitive information.</li>
<li>Knowledge of DevOps practices for data solutions (Terraform, CI/CD, monitoring).</li>
<li>Experience with SAP HANA, Java, and IoT in the automotive domain (e.g., ECU data) is a plus.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, SQL, PySpark, data modeling, KPI reporting, AWS services, monitoring frameworks, data security, DevOps practices, SAP HANA, Java, IoT</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Porsche Engineering Services GmbH</Employername>
      <Employerlogo>https://logos.yubhub.co/jobs.porsche.com.png</Employerlogo>
      <Employerdescription>Porsche Engineering Romania specializes in complex technical solutions at its two locations in Cluj-Napoca and Timisoara, including the development of intelligent and connected electric vehicles, electronics, and design.</Employerdescription>
      <Employerwebsite>https://jobs.porsche.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.porsche.com/index.php?ac=jobad&amp;id=18979&amp;utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Cluj</Location>
      <Country></Country>
      <Postedate>2025-12-08</Postedate>
    </job>
  </jobs>
</source>