<?xml version="1.0" encoding="UTF-8"?>
<source>
  <jobs>
    <job>
      <externalid>db261609-388</externalid>
      <Title>Principal Data &amp; Ontology Architect - AI Enablement</Title>
      <Description><![CDATA[<p>We are looking for a Principal Data &amp; Ontology Architect to drive the implementation and adoption of data and ontology enablement practices and standards within Control Tower Operations, enabling scalable, governed, and business-aligned AI initiatives.</p>
<p>The successful candidate will serve as the primary bridge between Business Units, Global IT, and Control Tower Operations, ensuring shared understanding of data practices, workflows, and requirements. They will apply established standards for semantic modeling, domain alignment, concept reuse, and ontology lifecycle management.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Supporting the implementation and ongoing maintenance of ontology enablement practices and operating model strategy to support AI, analytics, and digital initiatives across multiple Business Units</li>
<li>Applying established standards for semantic modeling, domain alignment, concept reuse, and ontology lifecycle management</li>
<li>Serving as the enterprise subject-matter authority for ontology-related topics, providing recommendations and guidance to governance and leadership forums</li>
<li>Collaborating with Global IT and enterprise data architecture to ensure ontology practices align with enterprise data platforms and Control Tower operational processes</li>
</ul>
<ul>
<li>Partnering with Business Units to understand domain concepts, terminology, operational data, and AI use cases, translating them into ontology-aligned data structures</li>
<li>Guiding Business Units in contributing domain models, metadata, and data assets into the enterprise ontology using defined governance and intake processes</li>
<li>Enabling repeatable onboarding of Business Unit data into AI initiatives, reducing reliance on ad-hoc IT engagement and minimizing duplicated effort</li>
</ul>
<ul>
<li>Serving as a liaison between Business Units and Global IT for AI data and ontology-related matters</li>
<li>Engaging with Global IT teams to understand enterprise data platforms, workflows, standards, and operational constraints</li>
<li>Translating Global IT practices, requirements, and workflows into clear, actionable guidance for Business Unit data stewards</li>
</ul>
<ul>
<li>Educating, guiding, and supporting Business Unit data stewards on their roles in data governance, ontology contribution, and AI data enablement</li>
<li>Supporting the development and documentation of workflows, expectations, and operating models for how BU data stewards engage with the Control Tower and Global IT</li>
</ul>
<ul>
<li>Ensuring Business Unit Data Stewards understand how to prepare, govern, and submit data assets for ontology integration and AI use</li>
<li>Promoting consistent adoption of governance, quality, and semantic standards across Business Units</li>
</ul>
<ul>
<li>Supporting integration of data and ontology enablement into Control Tower workflows</li>
<li>Providing operational insight into data readiness, semantic risks, and governance gaps to inform Control Tower decision-making</li>
<li>Identifying systemic issues and contributing recommendations to drive continuous improvement of data enablement processes</li>
</ul>
<ul>
<li>Ensuring semantic integrity, data quality, lineage, and consistency are maintained as data assets flow into AI solutions</li>
<li>Identifying systemic issues and recommending continuous improvement opportunities to Control Tower Operations leadership</li>
<li>Influencing corrective actions, tooling investments, or governance updates to mitigate long-term risk</li>
</ul>
<p>This role requires a minimum of 10 years of relevant work experience in data architecture, data governance, ontology development, semantic modeling, or related disciplines, supporting cross-functional initiatives spanning multiple business units and IT organizations.</p>
<p>The ideal candidate will have in-depth expertise in ontology design, semantic modeling, and domain-driven data architecture, as well as experience contributing to the development and implementation of data and ontology strategies. They will also have demonstrated experience serving as a bridge between business stakeholders and IT organizations, with a strong ability to translate technical platforms, workflows, and constraints into business-understandable guidance.</p>
<p>A Bachelor&#39;s level degree or diploma in Computer Science, Data Science/Engineering, Applied Mathematics/Statistics, Electronics/Electrical, Information Technology/Information Sciences, or a related field of study is required. A Master&#39;s or Ph.D. degree is preferred.</p>
<p>The successful candidate will be comfortable operating in ambiguous, evolving environments with enterprise-level impact, and will have a systems-thinking mindset with understanding of AI, analytics, and enterprise data platforms.</p>
<p>Highly desirable skills include proficiency in OWL (Web Ontology Language) and the RDF/RDFS graph-based data model, storage in graph databases such as Neo4j or Amazon Neptune, and querying RDF-based ontologies with SPARQL.</p>
<p>This is an onsite job based at our ADC, Raymond, OH office. One telecommuting workday per week may be possible with prior departmental approval.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$120,400.00 - $150,500.00</Salaryrange>
      <Skills>ontology design, semantic modeling, domain-driven data architecture, data governance, AI data enablement, data quality, lineage, consistency, OWL (Web Ontology Language), RDF/RDFS – graph-based data model, graph databases, Neo4j, Amazon Neptune, SPARQL, ontology development, data architecture, data science, electronics, electrical, information technology, information sciences</Skills>
      <Category>Engineering</Category>
      <Industry>Automotive</Industry>
      <Employername>Honda</Employername>
      <Employerlogo>https://logos.yubhub.co/careers.honda.com.png</Employerlogo>
      <Employerdescription>Honda is a multinational Japanese conglomerate that produces automobiles, motorcycles, and power equipment. It is one of the largest automobile manufacturers in the world.</Employerdescription>
      <Employerwebsite>https://careers.honda.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://careers.honda.com/us/en/job/10812/Principal-Data-Ontology-Architect-AI-Enablement</Applyto>
      <Location>Raymond</Location>
      <Country></Country>
      <Postedate>2026-04-22</Postedate>
    </job>
    <job>
      <externalid>54f58a4d-707</externalid>
      <Title>Senior Data Scientist</Title>
      <Description><![CDATA[<p>As a Senior Data Scientist at Formation Bio, you will be at the forefront of revolutionizing drug development through AI and advanced analytics. In this role, you&#39;ll lead crucial initiatives that directly impact our drug development portfolio, from developing sophisticated models for patient selection to creating AI-powered solutions for clinical trial optimization.</p>
<p>Responsibilities:</p>
<ul>
<li>Lead and execute complex data science projects that directly advance our drug development portfolio</li>
<li>Develop and implement sophisticated models for therapeutic hypothesis evaluation, including patient stratification and biomarker identification</li>
<li>Design and create AI models for modernizing clinical trial evaluations, including surrogate endpoints</li>
<li>Aid in the development and training of AI agents to automate and optimize biomedical workflows</li>
<li>Collaborate cross-functionally with clinical, technical, and research teams</li>
<li>Present complex analytical findings to senior stakeholders, including executive leadership</li>
</ul>
<p>About You:</p>
<p>Required Qualifications:</p>
<ul>
<li>PhD in computational sciences or life sciences</li>
<li>3+ years of post-academic experience in life sciences (biotech, pharma, consulting)</li>
<li>Strong programming skills, particularly in Python</li>
<li>Extensive experience in multi-modal bioinformatics analysis</li>
</ul>
<p>Preferred Qualifications:</p>
<ul>
<li>Proven expertise in cloud computing environments, including proficiency with tabular and/or graph databases</li>
<li>Strong background in machine learning and deep learning, particularly in biological applications</li>
<li>Experience with large language models (LLM)</li>
<li>Demonstrated ability to collaborate effectively with engineering teams on production systems</li>
<li>Strong communication skills with proven ability to present complex technical findings to senior stakeholders</li>
</ul>
<p>Total Compensation Range: $170,000 - $215,000</p>
<p>Where We Hire:</p>
<p>Formation Bio is prioritizing hiring in key hubs, primarily the New York City and Boston metro areas, with a hybrid model requiring 3 days per week in office. Applicants from the Research Triangle (NC) and San Francisco Bay Area may also be considered.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$170,000 - $215,000</Salaryrange>
      <Skills>PhD in computational sciences or life sciences, 3+ years of post-academic experience in life sciences (biotech, pharma, consulting), Strong programming skills, particularly in Python, Extensive experience in multi-modal bioinformatics analysis, Proven expertise in cloud computing environments, including proficiency with tabular and/or graph databases, Strong background in machine learning and deep learning, particularly in biological applications, Experience with large language models (LLM), Demonstrated ability to collaborate effectively with engineering teams on production systems, Strong communication skills with proven ability to present complex technical findings to senior stakeholders</Skills>
      <Category>Engineering</Category>
      <Industry>Healthcare</Industry>
      <Employername>Formation Bio</Employername>
      <Employerlogo>https://logos.yubhub.co/formation.bio.png</Employerlogo>
      <Employerdescription>A tech and AI driven pharma company focused on accelerating drug development and clinical trials.</Employerdescription>
      <Employerwebsite>https://www.formation.bio/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/formationbio/jobs/6623947</Applyto>
      <Location>New York, NY; Boston, MA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>9537437b-e23</externalid>
      <Title>Staff Backend Engineer, Knowledge Graph (Rust)</Title>
      <Description><![CDATA[<p>As a Staff Backend Engineer on the GitLab Knowledge Graph team, you&#39;ll help design, scale, and operate a high-impact graph data service that underpins agents, analytics, and architecture-level features across GitLab.com, Dedicated, and Self-Managed deployments.</p>
<p>You&#39;ll partner with a small, senior Rust-first team to ship reliable graph capabilities and make them easy for other teams and agents to use. The Knowledge Graph service is a distributed SDLC indexing system. It builds a property graph from GitLab SDLC (software development lifecycle) and code data using ClickHouse, NATS JetStream, and the Data Insights Platform. It also exposes secure graph queries and MCP tools for AI agents and product features.</p>
<p>In this role, you&#39;ll own core parts of the system end to end: shaping the architecture, hardening multi-tenant behavior and performance, and making it straightforward for other teams and agents to consume graph capabilities. In your first year, you&#39;ll take clear ownership of major areas of the service (for example, the graph query engine, SDLC indexing, or multi-tenant authorization), reduce single points of failure through better runbooks and shared context, and raise the bar on how we design, build, and operate analytical services across the stack.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Leading the design and evolution of core Knowledge Graph services in a production Rust codebase, including the graph query engine, SDLC and code indexing pipelines, and API/MCP surfaces that other GitLab teams and AI agents rely on.</li>
<li>Owning complex, cross-cutting initiatives that span GitLab Rails, the Data Insights Platform (Siphon, NATS, ClickHouse), and GitLab Duo Agent Platform, from technical direction and design docs through implementation, rollout, and iteration.</li>
<li>Driving system design decisions that improve reliability, scalability, and maintainability for analytical (OLAP-style) graph workloads, including multi-hop traversals, aggregations, and multi-tenant isolation, and documenting trade-offs so the broader team can move quickly and stay aligned.</li>
<li>Defining and improving operational maturity for the service, including service level objectives (SLOs), observability, runbooks, incident response, capacity planning, and production readiness (PREP) for GitLab.com, Dedicated, and Self-Managed deployments.</li>
<li>Collaborating asynchronously with product, data, infrastructure, security, and AI teams to sequence work, unblock platform-level dependencies, and land features in a way that is safe for customers and sustainable for the team.</li>
<li>Applying AI-assisted development workflows responsibly (for example, using MCP-aware tools, Knowledge Graph-backed agents, and internal Duo tooling) and helping establish practical norms for how the team uses AI while maintaining strong engineering judgment.</li>
<li>Mentoring and supporting other engineers through pairing, technical design reviews, and knowledge-sharing, reinforcing shared ownership of the system and its operational sustainability.</li>
<li>Contributing across the stack when needed, including occasional Ruby (Rails integration and authorization paths) or frontend work (for example, the Software Architecture Map UI) to close gaps and keep delivery moving.</li>
</ul>
<p>This role requires significant experience building and operating production backend systems, with a track record of owning reliability, maintainability, and on-call readiness for services that support other product teams or platforms. Strong engineering skills in Rust or clear evidence you can ramp quickly and deliver in a Rust-first, performance-sensitive backend codebase are essential. Additionally, strong system design skills, including making and explaining clear architectural decisions, documenting constraints, and aligning trade-offs with product and platform needs, are necessary.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Rust, ClickHouse, NATS JetStream, Data Insights Platform, graph data modeling, query patterns, property graphs, Cypher/GQL, n-hop traversals, aggregations, multi-tenant isolation, service level objectives, observability, runbooks, incident response, capacity planning, production readiness, AI-assisted development workflows, MCP-aware tools, Knowledge Graph-backed agents, internal Duo tooling</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>GitLab</Employername>
      <Employerlogo>https://logos.yubhub.co/about.gitlab.com.png</Employerlogo>
      <Employerdescription>GitLab is an intelligent orchestration platform for DevSecOps, trusted by over 50 million registered users and more than 50% of the Fortune 100.</Employerdescription>
      <Employerwebsite>https://about.gitlab.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/gitlab/jobs/8481945002</Applyto>
      <Location>Remote, India</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>66cf66eb-76e</externalid>
      <Title>Senior Machine Learning Systems Engineer</Title>
      <Description><![CDATA[<p>As a Senior Machine Learning Systems Engineer at Reddit, you will lead the development of a platform for large-scale ML models. Your primary responsibilities will include designing end-to-end model lifecycle patterns (MLOps) to boost velocity of development for ML engineers, zero-to-one development and support of a graph ML codebase and platform, collaborating with ML engineers on performance tuning, optimizing batch data processing, and architecting pipelines to build and maintain massive graph data structures.</p>
<p>To be successful in this role, you will need 5+ years of experience in ML infrastructure, including model training and model deployments, hands-on experience with ML optimization, deep experience with cloud-based technologies, and proficiency with common ML programming languages and frameworks. You should also have strong organizational and communication skills, experience working with graph databases and graph neural networks, and a deep understanding of the machine learning development lifecycle.</p>
<p>In addition to base salary, this job is eligible to receive equity in the form of restricted stock units, and depending on the position offered, it may also be eligible to receive a commission. Reddit offers a wide range of benefits to U.S.-based employees, including medical, dental, and vision insurance, 401(k) program with employer match, generous time off for vacation, and parental leave.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$216,700-$303,400 USD</Salaryrange>
      <Skills>ML infrastructure, model training, model deployments, ML optimization, cloud-based technologies, graph databases, graph neural networks, common programming languages, frameworks of ML</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Reddit Inc.</Employername>
      <Employerlogo>https://logos.yubhub.co/redditinc.com.png</Employerlogo>
      <Employerdescription>Reddit is a community-driven platform with over 121 million daily active unique visitors and 100,000+ active communities.</Employerdescription>
      <Employerwebsite>https://www.redditinc.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/reddit/jobs/7731772</Applyto>
      <Location>Remote - United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>b7b8d06f-881</externalid>
      <Title>Backend Engineer, Knowledge Graph (Rust)</Title>
      <Description><![CDATA[<p>As an Intermediate Backend Engineer on the GitLab Knowledge Graph team, you&#39;ll help build and operate a graph data service that supports GitLab Duo agents, analytics, and architecture-level features across GitLab.com, Dedicated, and Self-Managed deployments.</p>
<p>You&#39;ll join a small, Rust-first team that values clear ownership, thoughtful system design, and rigorous thinking about data and reliability. The Knowledge Graph service is a Rust backend that builds a property graph from GitLab’s software development lifecycle (SDLC) and code data. It uses ClickHouse, NATS JetStream, and the Data Insights Platform. It exposes secure graph queries and MCP tools used by AI agents and product features.</p>
<p>In this role, you’ll deliver features and improvements in well-scoped areas, learn the broader architecture, and contribute to reliability, observability, and operational readiness. In your first year, you’ll take clear ownership of specific components or features (for example, parts of the SDLC indexing pipeline or query paths). You’ll help reduce single points of failure with better tests and runbooks, and you’ll help the team ship analytical services that are easier to maintain and evolve over time.</p>
<p>Responsibilities:</p>
<ul>
<li>Implement and iterate on backend features in the Rust-based Knowledge Graph service, including changes to the query engine, SDLC and code indexing flows, and API endpoints (including MCP endpoints) under guidance from senior and staff engineers.</li>
<li>Help maintain integrations between Knowledge Graph and the rest of the GitLab platform, working in areas that touch GitLab Rails, the Data Insights Platform (Siphon, NATS, ClickHouse), and GitLab Duo Agent Platform.</li>
<li>Contribute to system design discussions by proposing options, raising questions, and documenting decisions, with a focus on reliability, scalability, and maintainability for analytical graph workloads.</li>
<li>Improve the operational maturity of the service by adding or enhancing metrics, logging, runbooks, alerts, and small readiness tasks, and by participating in on-call rotation as appropriate for your level and experience.</li>
<li>Collaborate asynchronously with product, data, infrastructure, security, and AI counterparts to clarify requirements, align on scope, and ship features safely for customers and sustainably for the team.</li>
<li>Use AI-assisted development workflows responsibly (for example, using Knowledge Graph-backed agents and internal Duo tooling), and share what works with the team while keeping a strong focus on code quality and correctness.</li>
<li>Participate in code reviews, knowledge-sharing sessions, and pairing to both learn from others and help maintain consistent standards across the codebase.</li>
<li>Contribute across the stack when needed, including occasional Ruby work for Rails integration and authorization paths, or small frontend changes related to Knowledge Graph features (for example, Software Architecture Map UI plumbing).</li>
</ul>
<p>Requirements:</p>
<ul>
<li>Professional experience building and maintaining backend systems in production, with an understanding of reliability, maintainability, and how to support services over time (incident response, follow-ups, etc.).</li>
<li>Proficiency in at least one modern backend language and strong interest in Rust, with either prior Rust experience or clear evidence you can ramp quickly and deliver in a Rust-first, performance-sensitive codebase.</li>
<li>Some exposure to distributed data or analytics systems (for example, OLAP databases, Kafka- or NATS-style messaging, or change data capture (CDC) pipelines), or strong motivation to develop those skills in this role.</li>
<li>Interest in graph data modeling and query patterns (property graphs, multi-step (n-hop) traversals, aggregations), and willingness to learn the tools and concepts used in Knowledge Graph over time.</li>
<li>Practical experience (or strong interest) using AI tools in day-to-day development, along with a thoughtful approach to validating outputs and integrating AI into your workflow.</li>
<li>A language-agnostic mindset and evidence that you can pick up new languages and frameworks as needed (for example, Ruby, Go, or TypeScript/Vue where the work touches adjacent systems).</li>
<li>Solid fundamentals in system design for your level, including the ability to reason about trade-offs, ask good questions, and align your implementation work with documented architectural decisions.</li>
<li>Comfort working in a low-process, high-ownership environment where you take responsibility for your work, communicate progress clearly, and help refine problem statements with your teammates.</li>
<li>Strong written communication and comfort collaborating asynchronously across time zones in an all-remote team.</li>
</ul>
<p>About the team:</p>
<p>We sit within the Data Engineering organization. We&#39;re a small group of senior engineers and we work closely with partners across AI (Duo Agent Platform), analytics, infrastructure and delivery, and security because our work spans many parts of the platform. We collaborate asynchronously and optimize for strong ownership rather than a feature factory model. We each build a meaningful understanding of the system and help evolve it over time. A key challenge for us right now is scaling sustainably. That includes hardening multi-tenant behavior, maturing observability and readiness, and keeping the system healthy and maintainable as usage grows and team members take time off. At the same time, we&#39;re bringing Knowledge Graph to general availability (GA).</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$98,000-$210,000 USD</Salaryrange>
      <Skills>Rust, backend systems, reliability, maintainability, distributed data, analytics systems, graph data modeling, query patterns, AI tools, system design, low-process, high-ownership</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>GitLab</Employername>
      <Employerlogo>https://logos.yubhub.co/about.gitlab.com.png</Employerlogo>
      <Employerdescription>GitLab is an intelligent orchestration platform for DevSecOps, used by over 50 million registered users and more than 50% of the Fortune 100.</Employerdescription>
      <Employerwebsite>https://about.gitlab.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/gitlab/jobs/8437754002</Applyto>
      <Location>Remote, Canada; Remote, US</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>0e39aebe-3ad</externalid>
      <Title>Network Engineer - ML Infrastructure (High-Speed Interconnects)</Title>
      <Description><![CDATA[<p>We are seeking exceptional ML Infrastructure Engineers with deep expertise in high-speed interconnect technologies to design, build, and optimise the network fabric that powers large-scale AI training and inference clusters.</p>
<p>This strategic role will drive innovation in high-bandwidth, low-latency, power-efficient interconnects critical for AI/ML clusters based on advanced computing platforms. You will have the opportunity to work on all modalities of interconnects connecting GPUs and switches both inside and between data centres, including our primary front and backend networks that train Grok and that customers use for inference.</p>
<p>Responsibilities:</p>
<ul>
<li>Design, validate, and productise high-speed copper and optical connectivity solutions for AI clusters (100k+ GPU scale).</li>
<li>Own vendor due diligence and onboarding for new 1.6T products, including AEC and pluggable optical transceivers (DR4/8, FR4), with rigorous bring-up &amp; characterisation.</li>
<li>Investigate the opportunity for LPO and LRO in our network.</li>
<li>Evaluate early co-packaged and near-packaged engines for switches and GPUs.</li>
<li>Pathfinding for new interconnect modalities including VCSEL, microLED, THz radio-based solutions to improve network economics and reliability.</li>
<li>Work closely with vendors (transceiver, cable, SerDes, DSP, silicon photonics foundries) to influence roadmaps and ensure timely delivery of next-gen solutions.</li>
<li>Collaborate with ML training teams to translate workload communication patterns into concrete interconnect topology and optical reconfigurability requirements.</li>
<li>Perform system-level simulation of end-to-end fabric performance.</li>
<li>Drive failure analysis, root cause, and corrective actions for interconnect-related issues in production clusters through fleet-level metrics gathering and analysis.</li>
<li>Contribute to internal tooling and automation for interconnect health monitoring, telemetry, diagnostics, remediation and automated qualification pipelines.</li>
</ul>
<p>Basic Qualifications:</p>
<ul>
<li>8+ years of hands-on experience in designing, deploying and operating high-speed copper and optical interconnects, preferably in a module design role or in a hyperscale datacentre environment.</li>
<li>Master&#39;s or PhD degree in Electrical Engineering, Photonics or Physics.</li>
<li>Deep knowledge of PAM4 SerDes performance, equalisation, jitter, crosstalk.</li>
<li>Solid operational understanding of FEC, Retimers, TIAs and Drivers.</li>
<li>Deep knowledge of optical link budget analysis and performance metrics including TDECQ, OMA, Tcode, stressed receiver sensitivity and associated diagnostics.</li>
<li>Expertise in transceiver components including CW lasers, SiPh PICs, EML, DSP, passive subassemblies, their failure modes and characterisation.</li>
<li>Knowledge of thermal, mechanical, power, signal integrity constraints in dense hardware.</li>
<li>Knowledge of SiPh design process, yield improvement and reliability testing.</li>
<li>Familiarity with CPO technologies and challenges/risk areas.</li>
<li>Familiarity with subcomponent supply chains and global manufacturers, ODMs and CMs.</li>
<li>Strong problem-solving skills and ability to thrive in a fast-paced, ambiguous setting.</li>
</ul>
<p>Compensation and Benefits:</p>
<p>$180,000 - $440,000 USD</p>
<p>Base salary is just one part of our total rewards package at X, which also includes equity, comprehensive medical, vision, and dental coverage, access to a 401(k) retirement plan, short &amp; long-term disability insurance, life insurance, and various other discounts and perks.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$180,000 - $440,000 USD</Salaryrange>
      <Skills>high-speed copper and optical interconnects, PAM4 SerDes performance, equalisation, jitter, crosstalk, FEC, Retimers, TIAs, Drivers, optical link budget analysis, performance metrics, TDECQ, OMA, Tcode, stressed receiver sensitivity, associated diagnostics, CW lasers, SiPh PICs, EML, DSP, passive subassemblies, thermal, mechanical, power, signal integrity constraints, SiPh design process, yield improvement, reliability testing, CPO technologies, subcomponent supply chains, global manufacturers, ODMs, CMs</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>xAI</Employername>
      <Employerlogo>https://logos.yubhub.co/x.ai.png</Employerlogo>
      <Employerdescription>xAI creates AI systems to understand the universe and aid humanity in its pursuit of knowledge. The company operates with a flat organisational structure.</Employerdescription>
      <Employerwebsite>https://www.x.ai/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/xai/jobs/5043570007</Applyto>
      <Location>Palo Alto, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>77ff2013-8f9</externalid>
      <Title>Senior Product Manager, Context Engineering</Title>
      <Description><![CDATA[<p>ZoomInfo is where careers accelerate. We move fast, think boldly, and empower you to do the best work of your life. As a Senior Product Manager, Context Engineering, you&#39;ll be surrounded by teammates who care deeply, challenge each other, and celebrate wins.</p>
<p>With tools that amplify your impact and a culture that backs your ambition, you won&#39;t just contribute. You&#39;ll make things happen, fast.</p>
<p><strong>The Opportunity:</strong></p>
<p>ZoomInfo built the industry&#39;s most sophisticated GTM data acquisition infrastructure. Now we&#39;re applying that same rigor to context engineering, the emerging discipline that determines whether AI systems deliver transformative value or incremental improvement.</p>
<p>This role architects the context layer powering our AI intelligence across Copilot, GTM Studio, and MarketingOS. You&#39;ll transform how ZoomInfo&#39;s agentic workflows access, compress, and deliver precisely the right information at exactly the right moment.</p>
<p>The impact is organization-wide: every AI interaction, every intelligent recommendation, every autonomous agent action depends on the context infrastructure you’ll build.</p>
<p>We&#39;ve transitioned to AI-first product thinking company-wide. The context pipelines exist but remain nascent, creating a rare opportunity to define architectural patterns and platform standards that compound value across multiple product teams in the years to come.</p>
<p><strong>What You&#39;ll Do:</strong></p>
<p><strong>Architect Context Acquisition Pipelines</strong></p>
<p>Design and optimize how ZoomInfo retrieves, transforms, and delivers context from our semantic data layer, memory systems, and data producers. You&#39;ll balance retrieval quality against latency and cost constraints, implementing hybrid search strategies, intelligent caching, and context compression techniques that maintain information density while respecting token budgets.</p>
<p><strong>Own the Context Layer Platform</strong></p>
<p>Build infrastructure serving multiple product teams (Copilot, GTM Studio, MarketingOS) as internal customers. Establish API contracts, developer experience standards, and integration patterns that accelerate feature velocity.</p>
<p>Maintain the delicate balance between providing flexible building blocks and opinionated solutions that encode best practices.</p>
<p><strong>Drive Quality Through Measurement</strong></p>
<p>Implement evaluation frameworks using RAGAS metrics and custom benchmarks. Monitor retrieval precision, context relevance, hallucination rates, and system performance in production.</p>
<p>Translate quality signals into architectural improvements, working closely with ML engineers to iterate on embedding models, reranking strategies, and retrieval algorithms.</p>
<p><strong>Navigate Emerging Research</strong></p>
<p>Context engineering evolves weekly. You&#39;ll continuously evaluate innovations (GraphRAG for multi-hop reasoning, test-time compute scaling, multimodal retrieval, compression techniques), determining which advances warrant production investment versus which remain academic curiosities.</p>
<p>Bring external best practices to ZoomInfo while contributing learnings back to the broader community.</p>
<p><strong>Orchestrate Cross-Functional Execution</strong></p>
<p>Translate between three distinct worlds: ML engineers optimizing retrieval algorithms, platform engineers building scalable infrastructure, and product teams shipping customer features.</p>
<p>Establish communication cadences, prioritization frameworks, and decision-making processes that balance urgent requests against strategic platform development.</p>
<p><strong>What You’ll Bring:</strong></p>
<ul>
<li>4-6 years of product management experience with 2+ years in ML/AI infrastructure</li>
<li>Direct experience with production RAG systems, vector databases, semantic search, or context management</li>
<li>Experience with graph databases (e.g. Neo4j)</li>
<li>Track record of building platform products serving multiple internal or external customers</li>
<li>Familiarity with context compression, embedding models, and retrieval evaluation frameworks</li>
<li>History of defining product vision in nascent technical domains where best practices are still emerging</li>
</ul>
<p><strong>Who You Are:</strong></p>
<p><strong>Technical Foundation</strong></p>
<p>Expert-level understanding of RAG system architecture: you can discuss embedding dimensionality trade-offs, vector database indexing strategies, and reranking approaches with depth.</p>
<p>You&#39;ve built or significantly contributed to production retrieval systems, not just managed them at arm&#39;s length.</p>
<p>Python and SQL proficiency enables you to review code, analyze retrieval issues, and prototype solutions for concept validation.</p>
<p><strong>Platform Product Mindset</strong></p>
<p>Experience building infrastructure products where internal engineering teams are your customers.</p>
<p>You measure success through downstream product velocity improvements and developer satisfaction scores, not just uptime metrics.</p>
<p>You understand platform economics: how each additional team using your infrastructure increases its value through shared learnings and amortized costs.</p>
<p><strong>Intellectual Velocity</strong></p>
<p>You read recent research papers from arXiv, ACL, NeurIPS.</p>
<p>You prototype emerging techniques to understand their practical constraints.</p>
<p>You maintain strong opinions weakly held, updating your architectural assumptions as evidence accumulates.</p>
<p>The discipline moves too fast for static expertise; continuous learning is non-negotiable.</p>
<p><strong>Strategic Communication</strong></p>
<p>You translate between technical depth and business impact fluently.</p>
<p>You can explain to executives why implementing GraphRAG takes 6 months but unlocks $10M in product capabilities.</p>
<p>You can communicate to engineers why business constraints require shipping &#39;good enough&#39; in 3 weeks rather than &#39;optimal&#39; in 3 months.</p>
<p>You influence without formal authority through data, clear reasoning, and earned credibility.</p>
<p><strong>The Environment:</strong></p>
<p><strong>Reporting &amp; Collaboration</strong></p>
<p>Report to the Senior Product Director for Context Engineering, Semantic Data Layer, and Agentic Memory within ZoomInfo&#39;s Intelligence team.</p>
<p>Work alongside PMs responsible for signals and ML scoring/recommendation models.</p>
<p>Together, you ensure our agentic workflows fill context windows with high-quality, information-dense content exactly when needed.</p>
<p><strong>Pace &amp; Problems</strong></p>
<p>Fast-moving engineering team that understands the space.</p>
<p>Company-wide AI adoption push creates both urgency and opportunity.</p>
<p>Expect interesting problems: How do we maintain sub-200ms retrieval latency at scale?</p>
<p>When does GraphRAG justify its indexing cost?</p>
<p>How do we balance context freshness with cache efficiency?</p>
<p>You&#39;ll shape answers that become architectural patterns across the organization.</p>
<p><strong>Impact</strong></p>
<p>Define a nascent discipline at a company that&#39;s already AI-first in product thinking and organizational structure.</p>
<p>Your architectural decisions compound: every improvement to context quality multiplies across Copilot, GTM Studio, MarketingOS, and future products we haven&#39;t imagined yet.</p>
<p>This is infrastructure work with direct line-of-sight to customer value.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$89,200-$133,800 USD</Salaryrange>
      <Skills>Product Management, ML/AI Infrastructure, RAG Systems, Vector Databases, Semantic Search, Context Management, Graph Databases, Context Compression, Embedding Models, Retrieval Evaluation Frameworks</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>ZoomInfo</Employername>
      <Employerlogo>https://logos.yubhub.co/zoominfo.com.png</Employerlogo>
      <Employerdescription>ZoomInfo is a go-to-market intelligence platform that provides AI-ready insights, trusted data, and advanced automation to over 35,000 companies worldwide.</Employerdescription>
      <Employerwebsite>https://www.zoominfo.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/zoominfo/jobs/8206116002</Applyto>
      <Location>Waltham, Massachusetts, United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>e7491b84-e4f</externalid>
      <Title>Backend Engineer, Knowledge Graph (Rust)</Title>
      <Description><![CDATA[<p>As an Intermediate Backend Engineer on the GitLab Knowledge Graph team, you&#39;ll help build and operate a graph data service that supports GitLab Duo agents, analytics, and architecture-level features across GitLab.com, Dedicated, and Self-Managed deployments.</p>
<p>You&#39;ll join a small, Rust-first team that values clear ownership, thoughtful system design, and rigorous thinking about data and reliability. The Knowledge Graph service is a Rust backend that builds a property graph from GitLab&#39;s software development lifecycle (SDLC) and code data. It uses ClickHouse, NATS JetStream, and the Data Insights Platform. It exposes secure graph queries and MCP tools used by AI agents and product features.</p>
<p>In this role, you&#39;ll deliver features and improvements in well-scoped areas, learn the broader architecture, and contribute to reliability, observability, and operational readiness. In your first year, you&#39;ll take clear ownership of specific components or features (for example, parts of the SDLC indexing pipeline or query paths). You&#39;ll help reduce single points of failure with better tests and runbooks, and you&#39;ll help the team ship analytical services that are easier to maintain and evolve over time.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Implementing and iterating on backend features in the Rust-based Knowledge Graph service, including changes to the query engine, SDLC and code indexing flows, and API endpoints (including MCP endpoints), under guidance from senior and staff engineers.</li>
<li>Helping maintain integrations between Knowledge Graph and the rest of the GitLab platform, working in areas that touch GitLab Rails, the Data Insights Platform (Siphon, NATS, ClickHouse), and the GitLab Duo Agent Platform.</li>
<li>Contributing to system design discussions by proposing options, raising questions, and documenting decisions, with a focus on reliability, scalability, and maintainability for analytical graph workloads.</li>
<li>Improving the operational maturity of the service by adding or enhancing metrics, logging, runbooks, alerts, and small readiness tasks, and by participating in the on-call rotation as appropriate for your level and experience.</li>
<li>Collaborating asynchronously with product, data, infrastructure, security, and AI counterparts to clarify requirements, align on scope, and ship features safely for customers and sustainably for the team.</li>
<li>Using AI-assisted development workflows responsibly (for example, Knowledge Graph-backed agents and internal Duo tooling), and sharing what works with the team while keeping a strong focus on code quality and correctness.</li>
<li>Participating in code reviews, knowledge-sharing sessions, and pairing to both learn from others and help maintain consistent standards across the codebase.</li>
<li>Contributing across the stack when needed, including occasional Ruby work for Rails integration and authorization paths, or small frontend changes related to Knowledge Graph features (for example, Software Architecture Map UI plumbing).</li>
</ul>
<p>What you&#39;ll bring:</p>
<ul>
<li>Professional experience building and maintaining backend systems in production, with an understanding of reliability, maintainability, and how to support services over time (incident response, follow-ups, etc.).</li>
<li>Proficiency in at least one modern backend language and strong interest in Rust, with either prior Rust experience or clear evidence you can ramp quickly and deliver in a Rust-first, performance-sensitive codebase.</li>
<li>Some exposure to distributed data or analytics systems (for example, OLAP databases, Kafka- or NATS-style messaging, or change data capture (CDC) pipelines), or strong motivation to develop those skills in this role.</li>
<li>Interest in graph data modeling and query patterns (property graphs, multi-step (n-hop) traversals, aggregations), and willingness to learn the tools and concepts used in Knowledge Graph over time.</li>
<li>Practical experience (or strong interest) using AI tools in day-to-day development, along with a thoughtful approach to validating outputs and integrating AI into your workflow.</li>
<li>A language-agnostic mindset and evidence that you can pick up new languages and frameworks as needed (for example, Ruby, Go, or TypeScript/Vue where the work touches adjacent systems).</li>
<li>Solid fundamentals in system design for your level, including the ability to reason about trade-offs, ask good questions, and align your implementation work with documented architectural decisions.</li>
<li>Comfort working in a low-process, high-ownership environment where you take responsibility for your work, communicate progress clearly, and help refine problem statements with your teammates.</li>
<li>Strong written communication and comfort collaborating asynchronously across time zones in an all-remote team.</li>
</ul>
<p>About the team:</p>
<p>We sit within the Data Engineering organization. We&#39;re a small group of senior engineers and we work closely with partners across AI (Duo Agent Platform), analytics, infrastructure and delivery, and security because our work spans many parts of the platform. We collaborate asynchronously and optimize for strong ownership rather than a feature factory model. We each build a meaningful understanding of the system and help evolve it over time. A key challenge for us right now is scaling sustainably. That includes hardening multi-tenant behavior, maturing observability and readiness, and keeping the system healthy and maintainable as usage grows and team members take time off. At the same time, we&#39;re bringing Knowledge Graph to general availability (GA).</p>
<p>How GitLab Supports Full-Time Employees:</p>
<p>Benefits to support your health, finances, and well-being Flexible Paid Time Off Team Member Resource Groups Equity Compensation &amp; Employee Stock Purchase Plan Growth and Development Fund Parental leave Home office support</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>intermediate</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Rust, backend systems, distributed data, analytics systems, graph data modeling, query patterns, AI tools, system design, low-process, high-ownership environment</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>GitLab</Employername>
      <Employerlogo>https://logos.yubhub.co/about.gitlab.com.png</Employerlogo>
      <Employerdescription>GitLab is a software development platform that provides a suite of tools for version control, collaboration, and project management. It has over 50 million registered users and is trusted by more than 50% of the Fortune 100.</Employerdescription>
      <Employerwebsite>https://about.gitlab.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/gitlab/jobs/8481958002</Applyto>
      <Location>Remote, India</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>3f5ece56-eaa</externalid>
      <Title>Senior Machine Learning Engineer, AI Platform - PhD Early Career</Title>
      <Description><![CDATA[<p><strong>[2026] Senior Machine Learning Engineer, AI Platform - PhD Early Career</strong></p>
<p>San Mateo, CA, United States</p>
<p>Every day, tens of millions of people come to Roblox to explore, create, play, learn, and connect with friends in 3D immersive digital experiences, all created by our global community of developers and creators.</p>
<p>At Roblox, we’re building the tools and platform that empower our community to bring any experience that they can imagine to life. Our vision is to reimagine the way people come together, from anywhere in the world, and on any device.</p>
<p>A career at Roblox means you’ll be working to shape the future of human interaction, solving unique technical challenges at scale, and helping to create safer, more civil shared experiences for everyone.</p>
<p><strong>You Will</strong></p>
<p>As a Senior Machine Learning Engineer on the AI Platform team, you will be a key contributor to building the cutting-edge systems that power AI at Roblox. You will focus on one of three high-impact tracks:</p>
<p><strong>Track 1: AI Platform Projects</strong></p>
<ul>
<li>Pioneer next-generation AI tooling to enhance the efficiency, cost, and usability of ML@Roblox.</li>
<li>Build and maintain core platform components: Serving Layer, Model Registry, Pipeline Orchestrator, and Training/Inference control planes.</li>
<li>Design great developer experiences (paved-road templates, tooling, visualizations) to reduce time-to-production and ensure foundational AI systems are scalable and reliable.</li>
</ul>
<p><strong>Track 2: Distributed Inference &amp; Systems Optimization</strong></p>
<ul>
<li>Architect and implement scalable distributed inference systems for efficiently serving LLMs and Large Recommender Models at massive scale.</li>
<li>Conduct deep, low-level performance analysis and optimize ML models (using techniques like continuous batching, speculative decoding, and quantization) and systems on GPU architectures to maintain peak performance and stability.</li>
</ul>
<p><strong>Track 3: Information Retrieval &amp; RAG for Gen AI</strong></p>
<ul>
<li>Lead the design and development of Retrieval-Augmented Generation (RAG) systems.</li>
<li>Build and maintain core information retrieval infrastructure—vector databases and knowledge graphs—to enable accurate grounding of Gen AI models.</li>
<li>Ship language models and 3D objects as a service for the Roblox community, making creation easier.</li>
</ul>
<p><strong>You Have</strong></p>
<ul>
<li>A Ph.D. (completed or in progress) in Computer Science, Computer Engineering, Mathematics, Statistics, or a related technical field, with a thesis aligned to Roblox’s research areas.</li>
<li>Experience with high-performance distributed systems, ML infrastructure, LLM fine-tuning/RL, information retrieval, and Gen AI context generation.</li>
<li>Expertise in one or more of the following key areas:
<ul>
<li>AI/ML platform data stores: feature stores, vector DBs, and knowledge graphs.</li>
<li>LLMs: fine-tuning, safety.</li>
<li>Agentic systems: agent evaluation, context engineering.</li>
</ul>
</li>
<li>Experience building agentic applications with context for real-world applications.</li>
<li>Collaborative mindset and experience integrating and deploying optimized models with cross-functional teams, including data scientists and software engineers.</li>
<li>Experience with graph databases and large-scale GNNs (Graph Neural Networks)</li>
<li>Experience working with Kubernetes</li>
<li>Experience working with one or more cloud providers (e.g., AWS, Azure, GCP)</li>
<li>Experience working with high-availability systems</li>
<li>Experience working with ML models, LLMs, or other AI systems</li>
</ul>
<p>You may redact age, date of birth, and dates of attendance/graduation from your resume if you prefer.</p>
<p>As you apply, you can find more information about our process by signing up for Speak_. You&#39;ll gain access to our practice assessment, comprehensive guides, FAQs, and modules designed to help you ace the hiring process.</p>
<p>For roles that are based at our headquarters in San Mateo, CA: The starting base pay for this position is as shown below. The actual base pay is dependent upon a variety of job-related factors such as professional background, training, work experience, location, business needs and market demand. Therefore, in some circumstances, the actual salary could fall outside of this expected range. This pay range is subject to change and may be modified in the future. All full-time employees are also eligible for equity compensation and for benefits as described on <strong>this page</strong>.</p>
<p>Annual Salary Range</p>
<p>$195,780—$242,100 USD</p>
<p>Roles that are based in an office are onsite Tuesday, Wednesday, and Thursday, with optional presence on Monday and Friday (unless otherwise noted).</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$195,780—$242,100 USD</Salaryrange>
      <Skills>AI/ML Platform Data stores, LLMs, Agentic systems, Graph databases, Kubernetes, Cloud providers, High availability systems, ML models, LLMs, AI systems, Distributed systems, ML Infrastructure, RL, Information Retrieval, Gen AI context generation, Vector databases, Knowledge graphs</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Roblox</Employername>
      <Employerlogo>https://logos.yubhub.co/careers.roblox.com.png</Employerlogo>
      <Employerdescription>Roblox is a global online platform that allows users to create and play a wide variety of games and experiences. With tens of millions of users, it is one of the largest online gaming platforms in the world.</Employerdescription>
      <Employerwebsite>https://careers.roblox.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://careers.roblox.com/jobs/7403998</Applyto>
      <Location>San Mateo, CA</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
  </jobs>
</source>