<?xml version="1.0" encoding="UTF-8"?>
<source>
  <jobs>
    <job>
      <externalid>dab43521-cfa</externalid>
      <Title>Software Engineer, Robotics &amp; Autonomous Systems</Title>
      <Description><![CDATA[<p>In this role, you&#39;ll be a key contributor building production systems for robotics data collection, model training pipelines, and evaluation infrastructure. You&#39;ll have the opportunity to own critical parts of our robotics platform, work directly with cutting-edge robotics and AV customers, and shape the future of embodied AI systems.</p>
<p>Your responsibilities will include:</p>
<ul>
<li>Owning and architecting large-scale data processing pipelines for robotics and autonomous vehicle datasets</li>
<li>Building ML training and fine-tuning pipelines using Scale&#39;s robotics data</li>
<li>Working across backend (Python, Node.js, C++) and frontend (React, TypeScript) stacks to build end-to-end solutions</li>
<li>Developing tools and systems for robotics data collection, teleoperation, and model evaluation</li>
<li>Interacting directly with robotics and AV stakeholders to understand their technical needs and drive product development</li>
<li>Building real-time systems for robotic control, sensor fusion, and perception pipelines</li>
<li>Designing comprehensive monitoring and evaluation frameworks for robotics models and data quality</li>
<li>Collaborating with ML engineers and researchers to bring robotics research into production</li>
<li>Delivering features at high velocity while maintaining system reliability and performance</li>
</ul>
<p>Ideally, you have:</p>
<ul>
<li>3+ years of software engineering experience in robotics, autonomous vehicles, or related fields</li>
<li>Strong programming skills in Python and TypeScript/Node.js for production systems</li>
<li>Experience with React and modern frontend development for 3D interfaces</li>
<li>Practical experience with robotics frameworks (ROS/ROS2), simulation environments, or AV systems</li>
<li>Understanding of distributed systems, workflow orchestration, and cloud infrastructure (AWS, Temporal, Kubernetes, Docker)</li>
<li>Experience with databases (MongoDB, PostgreSQL) and data processing at scale</li>
<li>Track record of working with cross-functional teams including ML engineers, researchers, and customers</li>
<li>Strong communication skills and ability to operate with high autonomy</li>
</ul>
<p>Nice to have:</p>
<ul>
<li>Experience with C++</li>
<li>Experience with robotics hardware platforms (robotic arms, mobile robots, perception systems) with a focus on time synchronization</li>
<li>Background in computer vision, SLAM, motion planning, or imitation learning</li>
<li>Familiarity with autonomous vehicle data, lidar technologies, or 3D data processing</li>
<li>Experience with ML model deployment and serving frameworks</li>
<li>Knowledge of teleoperation systems (ALOHA, UMI, hand tracking) or VR interfaces</li>
<li>Experience with workflow orchestration systems (Temporal, Airflow)</li>
<li>Published research or open-source contributions in robotics or autonomous systems</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$180,000-$225,000 USD</Salaryrange>
      <Skills>Python, TypeScript, Node.js, React, C++, ROS/ROS2, simulation environments, AV systems, distributed systems, workflow orchestration, cloud infrastructure, databases, data processing, robotics hardware platforms, computer vision, SLAM, motion planning, imitation learning, autonomous vehicle data, lidar technologies, 3D data processing, ML model deployment, serving frameworks, teleoperation systems, VR interfaces, workflow orchestration systems</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions, providing high-quality data and full-stack technologies to power leading models.</Employerdescription>
      <Employerwebsite>https://www.scale.com/</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>180000</Compensationmin>
      <Compensationmax>225000</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4618065005</Applyto>
      <Location>San Francisco, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>d6fc00c5-564</externalid>
      <Title>Software Engineer, Robotics</Title>
      <Description><![CDATA[<p>We&#39;re seeking a skilled Software Engineer to join our Robotics business unit, focused on solving the data bottleneck in Physical AI across Robotics, Autonomous Vehicles, and Computer Vision. As a key contributor, you&#39;ll own and architect large-scale data processing pipelines, build ML training and fine-tuning pipelines, and develop tools and real-time systems for robotics data collection, teleoperation, model evaluation, data curation, and data annotation.</p>
<p>In this role, you&#39;ll interact directly with robotics and AV stakeholders to understand their technical needs and drive product development. You&#39;ll also design comprehensive monitoring and evaluation frameworks for robotics models and data quality, and collaborate with ML engineers and researchers to bring robotics research into production.</p>
<p>To succeed, you&#39;ll need at least 6 years of high-proficiency software engineering experience, with a strong background in complex systems and the ability to independently research, analyze, and unblock hard technical problems. You should have strong programming skills in Python and TypeScript/Node.js for production systems, experience with React and modern frontend development for 3D interfaces, and expertise in concurrent and real-time systems.</p>
<p>We&#39;re looking for someone who can deliver features at high velocity while maintaining system reliability and performance, and has a track record of working with cross-functional teams including ML engineers, researchers, and customers. Strong communication skills and the ability to operate with high autonomy are essential.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, TypeScript/Node.js, React, Concurrent and real-time systems, Distributed systems, Workflow orchestration, Cloud infrastructure, Databases, Data processing at large scale, C++, Robotics hardware platforms, Computer vision, SLAM, Motion planning, Imitation learning, Autonomous vehicle data, Lidar technologies, 3D data processing, ML model deployment and serving frameworks, Teleoperation systems, VR interfaces, Workflow orchestration systems</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions, providing high-quality data and full-stack technologies to power leading models.</Employerdescription>
      <Employerwebsite>https://scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4612282005</Applyto>
      <Location>Argentina; Uruguay</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>ef6605f2-fe0</externalid>
      <Title>Software Engineer, Robotics</Title>
      <Description><![CDATA[<p>We&#39;re looking for a skilled Software Engineer to join our Robotics business unit. As a key contributor, you&#39;ll build production systems for robotics data collection, model training pipelines, and evaluation infrastructure. You&#39;ll have the opportunity to own critical parts of our robotics platform, work directly with cutting-edge robotics and AV customers, and shape the future of embodied AI systems.</p>
<p>Your responsibilities will include:</p>
<ul>
<li>Owning and architecting large-scale data processing pipelines for robotics and autonomous vehicle datasets</li>
<li>Building ML training and fine-tuning pipelines using Scale&#39;s robotics data</li>
<li>Working across backend (Python, Node.js, C++) and frontend (React, TypeScript) stacks to build end-to-end solutions</li>
<li>Developing tools and real-time systems for robotics data collection, teleoperation, model evaluation, data curation, and data annotation</li>
<li>Interacting directly with robotics and AV stakeholders to understand their technical needs and drive product development</li>
<li>Designing comprehensive monitoring and evaluation frameworks for robotics models and data quality</li>
</ul>
<p>Ideal candidates will have:</p>
<ul>
<li>3+ years of high-proficiency software engineering experience, with a strong background in complex systems and the ability to independently research, analyze, and unblock hard technical problems</li>
<li>Strong programming skills in Python and TypeScript/Node.js for production systems</li>
<li>Experience with React and modern frontend development for 3D interfaces</li>
<li>Experience with concurrent and real-time systems, with special attention to timing constraints</li>
<li>Understanding of distributed systems, workflow orchestration, and cloud infrastructure (AWS, Temporal, Kubernetes, Docker)</li>
<li>Experience with databases (MongoDB, PostgreSQL) and data processing at large scale</li>
<li>Track record of working with cross-functional teams including ML engineers, researchers, and customers</li>
<li>Strong communication skills and ability to operate with high autonomy</li>
</ul>
<p>Nice to have:</p>
<ul>
<li>Experience with C++</li>
<li>Experience with robotics hardware platforms (robotic arms, mobile robots, perception systems) with a focus on time synchronization</li>
<li>Background in computer vision, SLAM, motion planning, or imitation learning</li>
<li>Familiarity with autonomous vehicle data, lidar technologies, or 3D data processing</li>
<li>Experience with ML model deployment and serving frameworks</li>
<li>Knowledge of teleoperation systems (ALOHA, UMI, hand tracking) or VR interfaces</li>
<li>Experience with workflow orchestration systems (Temporal, Airflow)</li>
<li>Published research or open-source contributions in robotics or autonomous systems</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, TypeScript, Node.js, C++, React, Distributed systems, Workflow orchestration, Cloud infrastructure, Databases, Data processing, Robotics hardware platforms, Computer vision, SLAM, Motion planning, Imitation learning, Autonomous vehicle data, Lidar technologies, 3D data processing, ML model deployment, Serving frameworks, Teleoperation systems, VR interfaces, Workflow orchestration systems</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions, providing high-quality data and full-stack technologies.</Employerdescription>
      <Employerwebsite>https://scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4655050005</Applyto>
      <Location>Mexico City, MX</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>a02999d2-33b</externalid>
      <Title>Staff Software Engineer - Backend</Title>
      <Description><![CDATA[<p>At Databricks, we are enabling data teams to solve the world&#39;s toughest problems by building and running the world&#39;s best data and AI infrastructure platform. As a software engineer with a backend focus, you will work with your team to build infrastructure and products for the Databricks platform at scale.</p>
<p>The impact you&#39;ll have is significant, spanning many domains across our essential service platforms. You might work on challenges such as:</p>
<ul>
<li>Distributed systems, at-scale service architecture and monitoring, workflow orchestration, and developer experience.</li>
<li>Delivering reliable and high-performance services and client libraries for storing and accessing humongous amounts of data on cloud storage backends, e.g., AWS S3 and Azure Blob Store.</li>
<li>Building reliable, scalable services (e.g., Scala, Kubernetes) and data pipelines (e.g., Spark, Databricks) to power the pricing infrastructure that serves millions of cluster-hours per day, and developing product features that empower customers to easily view and control platform usage.</li>
</ul>
<p>What we look for in a candidate includes:</p>
<ul>
<li>A Bachelor&#39;s degree (or higher) in Computer Science or a related field.</li>
<li>7+ years of production-level experience in one of Java, Scala, C++, or a similar language.</li>
<li>Experience developing large-scale distributed systems.</li>
<li>Experience working on a SaaS platform or with Service-Oriented Architectures.</li>
<li>Good knowledge of SQL.</li>
</ul>
<p>Benefits at Databricks include comprehensive benefits and perks that meet the needs of all employees. For specific details on the benefits offered in your region, please click here.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Java, Scala, C++, SQL, distributed systems, at-scale service architecture and monitoring, workflow orchestration, developer experience, cloud storage backends, AWS S3, Azure Blob Store, Kubernetes, Spark, Databricks</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a data and AI infrastructure platform for customers to use deep data insights to improve their business. It was founded by the original creators of Lakehouse, Apache Spark, Delta Lake, and MLflow.</Employerdescription>
      <Employerwebsite>https://databricks.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/7984907002</Applyto>
      <Location>Berlin, Germany</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>86696218-8f0</externalid>
      <Title>Staff Backend Engineer (Ruby on Rails/AI), Verify</Title>
      <Description><![CDATA[<p>As a Staff Backend Engineer (AI) in the Verify stage at GitLab, you&#39;ll help shape and scale the core infrastructure behind GitLab CI. You&#39;ll play a central role in how we integrate AI into CI/CD workflows. Your work will impact performance, reliability, and usability for people running millions of CI jobs, from small teams to the largest enterprises.</p>
<p>In this role, you&#39;ll go beyond using AI tools and help define how we design, build, and iterate on AI-assisted and agentic CI experiences. You&#39;ll set standards for what good looks like across our AI agent portfolio, including how we measure success, how we instrument behavior in production, and how we account for large language model limitations. You&#39;ll also help responsibly integrate GitLab&#39;s Duo Agent Platform into CI workflows at scale, on a foundation that&#39;s fast, reliable, secure, and observable.</p>
<p>We have ambitious goals for Agentic CI in FY27. As a Staff Engineer, you will:</p>
<ul>
<li>Partner with Engineering, Product, and UX leadership to pressure-test our priorities: where we can move faster, where we&#39;re missing data, and where there&#39;s whitespace to innovate. Part of this includes learning and growing with the Engineering team you will collaborate closely with.</li>
<li>Define what success looks like across our agent portfolio and make sure we&#39;re tracking against it: not just shipping, but learning.</li>
<li>Bring a sharp eye to the competitive landscape, helping us understand what it takes to keep GitLab CI best-in-class in an increasingly agentic world.</li>
</ul>
<p>Examples of Agentic CI work we have planned for the upcoming year:</p>
<ul>
<li>AI Pipeline Builder, the foundational CI agent that auto-creates pipelines for new projects and serves as the launchpad for onboarding new CI users.</li>
<li>Automate the Fix a Failing Pipeline flow at scale, from dogfooding on internal GitLab projects through to safe, controlled rollout for customers, solving real infrastructure and scalability challenges.</li>
<li>Build the instrumentation and observability layer that makes agentic CI trustworthy (trigger volume dashboards, retry rates, cost safeguards) so we can measure what&#39;s working, catch what isn&#39;t, and iterate with confidence.</li>
<li>Harden the CI pipeline execution infrastructure that these agents depend on: database access patterns, background processing, and job orchestration built to handle the additional load that AI-driven automation introduces at enterprise scale.</li>
</ul>
<p>In this role, you&#39;ll:</p>
<ul>
<li>Shape and scale GitLab CI backend infrastructure to improve performance, reliability, and usability for users running jobs at high volume.</li>
<li>Design and implement AI-powered features for Agentic CI, including agents, agentic flows, and LLM-backed tooling that integrates with GitLab&#39;s Duo Agent Platform.</li>
<li>Define what success looks like for AI in CI before you build, including baselines, measurable outcomes, and clear signals that help the team learn and iterate.</li>
<li>Build the instrumentation and observability needed to make AI-assisted CI trustworthy in production, including feature behavior metrics, dashboards, and safeguards.</li>
<li>Own and drive measurable performance improvements across CI systems (for example, database access patterns, background processing, and job orchestration) by forming hypotheses, running experiments, and validating results with data.</li>
<li>Write secure, well-tested, maintainable Ruby on Rails code in a large monolith, improving existing features while reducing technical debt and operational risk.</li>
<li>Lead cross-functional technical work with Product, UX, and Infrastructure, influencing architecture and execution across the Verify stage.</li>
<li>Share standards, patterns, and learnings with other engineers, raising the bar for responsible AI integration and evidence-driven engineering across CI.</li>
</ul>
<p>This role requires:</p>
<ul>
<li>Advanced proficiency with Ruby and Ruby on Rails, with experience building and maintaining reliable backend services in a large codebase.</li>
<li>Strong PostgreSQL skills, including data modeling, query tuning, and scaling large tables through proactive performance investigation and remediation.</li>
<li>Hands-on experience building, running, and debugging high-traffic production systems, ideally in CI, workflow orchestration, or adjacent infrastructure-heavy domains.</li>
<li>Practical experience designing and shipping AI-powered backend features and integrations, including sound judgment about large language model limitations and responsible use in production.</li>
<li>A data-driven approach to engineering: defining hypotheses, establishing baseline metrics, instrumenting changes, and measuring outcomes against clear success criteria.</li>
<li>Familiarity with observability patterns and tools (metrics, logging, tracing) to diagnose issues, improve reliability, and guide iteration.</li>
<li>Strong backend architecture and delivery practices, including secure design, well-tested code, and strategies for safe rollouts and zero-downtime changes.</li>
<li>Clear written and verbal communication skills, including writing technical proposals and documentation, and collaborating effectively in a remote, asynchronous, cross-functional environment.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Ruby, Ruby on Rails, PostgreSQL, Data modeling, Query tuning, Scaling large tables, High-traffic production systems, CI, Workflow orchestration, Infrastructure-heavy domains, AI-powered backend features, Large language model limitations, Responsible use in production, Data-driven approach to engineering, Observability patterns, Metrics, Logging, Tracing, Backend architecture, Delivery practices, Secure design, Well-tested code, Safe rollouts, Zero-downtime changes</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>GitLab</Employername>
      <Employerlogo>https://logos.yubhub.co/about.gitlab.com.png</Employerlogo>
      <Employerdescription>GitLab is an intelligent orchestration platform for DevSecOps, trusted by over 50 million registered users and more than 50% of the Fortune 100.</Employerdescription>
      <Employerwebsite>https://about.gitlab.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/gitlab/jobs/8448283002</Applyto>
      <Location>Remote, APAC; Remote, Canada; Remote, Ireland; Remote, Netherlands; Remote, United Kingdom; Remote, US; Remote, US-Southeast</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>f89bfa06-9c8</externalid>
      <Title>Staff Engineer - Salesforce Developer</Title>
      <Description><![CDATA[<p>We are looking for a Staff Engineer to join our growing team in Business Technology (BT) and to help scale our business solutions while providing an extra focus on security, enabling Okta to be the most efficient, scalable, and reliable company.</p>
<p>In this role, you will be responsible for designing and developing the customizations, extensions, configurations, and integrations required to meet the company&#39;s strategic business objectives. You will collaborate with Engineering Managers, business stakeholders, Product Owners, program analysts, and engineers on program design, development, deployment, and support.</p>
<p>Core competencies expected of a Staff Engineer include operating with a high degree of autonomy, technical leadership, and project ownership. This includes architectural ownership and design, project and delivery leadership, mentorship and technical bar-setting, cross-functional influence, and future-forward technical skills.</p>
<p>High-value skills include the ability to build agents using Agentforce or open-source libraries, proficiency with GitHub Copilot, Cursor, or AI workflow orchestration tools, and strategic influence on the technology roadmap.</p>
<p>Qualifications include 7+ years of software development experience with experience in Java, Python, or equivalent, 5+ years&#39; hands-on Salesforce development with solid knowledge of Apex, Process Automation, and LWC, and experience in architecture, design, and implementation of various high-complexity projects/programs for Sales Cloud, CPQ, Service Cloud Console, etc.</p>
<p>Our team is collaborative, innovative, and flexible, and we consider work-life balance a top priority.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Java, Python, Apex, Process Automation, LWC, Agentforce, GitHub Copilot, Cursor, AI workflow orchestration tools</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Okta</Employername>
      <Employerlogo>https://logos.yubhub.co/okta.com.png</Employerlogo>
      <Employerdescription>Okta is a software company that provides identity management and access control solutions.</Employerdescription>
      <Employerwebsite>https://www.okta.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/okta/jobs/7348510</Applyto>
      <Location>Bengaluru, India</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>15bec9eb-375</externalid>
      <Title>Staff Software Engineer - Backend</Title>
      <Description><![CDATA[<p>We are seeking a Staff Software Engineer - Backend to join our London site and contribute to our multi-year journey to build the best Lakehouse Platform. As a founding member of this team, you will be involved in the entire development cycle and exemplify all core Databricks values. Your impact will be significant: you will work on challenges such as distributed systems, at-scale service architecture and monitoring, workflow orchestration, and developer experience. You will also build reliable, secure, and high-performance services and client libraries for storing and accessing humongous amounts of data on cloud storage backends.</p>
<p>Your responsibilities will include:</p>
<ul>
<li>Developing product features that empower our customers to easily store and access their data</li>
<li>Solving reliability problems related to Lakebase</li>
<li>Actively finding causes of downtime and systematically improving or removing root causes</li>
<li>Helping the org define SLIs, meet SLOs, and drive long-term reliability improvements</li>
</ul>
<p>To succeed in this role, you will need:</p>
<ul>
<li>A BS degree (or higher) in Computer Science, or a related field</li>
<li>8+ years of production-level experience in one of Java, Scala, C++, or a similar language</li>
<li>Experience developing large-scale distributed systems</li>
<li>Experience working on a SaaS platform or with Service-Oriented Architectures</li>
<li>Knowledge of SQL</li>
</ul>
<p>At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of our employees. For specific details on the benefits offered in your region, please click here.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Java, Scala, C++, distributed systems, at-scale service architecture and monitoring, workflow orchestration, developer experience, cloud storage backends, SQL</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that builds and runs the world&apos;s best data and AI infrastructure platform.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8374611002</Applyto>
      <Location>London, United Kingdom</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>11099543-51f</externalid>
      <Title>Software Engineer L3 Phone Numbers</Title>
      <Description><![CDATA[<p>At Twilio, we&#39;re shaping the future of communications, all from the comfort of our homes. We deliver innovative solutions to hundreds of thousands of businesses and empower millions of developers worldwide to craft personalized customer experiences.</p>
<p>Join the team as Twilio&#39;s next Software Engineer L3. This is a Senior Software Engineer position on Twilio&#39;s Messaging Compliance Onboarding team. Programmable Messaging is Twilio&#39;s biggest product, and to keep pace with the evolving messaging compliance ecosystem, we need strong engineers who can create innovative solutions that ensure compliance with Twilio partners.</p>
<p>In this role, you&#39;ll build and maintain multiple compliance program workflows, carrier/ecosystem integrations and customer interactions in the Compliance domain. You will design and develop elegant and scalable solutions across a wide variety of compliance program types including frontend UI experiences and backend APIs, that are highly available and responsive.</p>
<p>You will work through ambiguity and deliver quickly with high quality, build toward the next generation of architecture vision that empowers expansion of Compliance programs, and interact cross-functionally with engineering teams across Twilio to align on and build the product and architecture vision.</p>
<p>Twilio values diverse experiences from all kinds of industries, and we encourage everyone who meets the required qualifications to apply. If your career is just starting or hasn&#39;t followed a traditional path, don&#39;t let that stop you from considering Twilio.</p>
<p>We are always looking for people who will bring something new to the table!</p>
<p>Required:</p>
<ul>
<li>5+ years of experience and a strong fundamental knowledge of software development using JVM languages.</li>
<li>Experience building web services incorporating best practices for external systems integration, including defensive and hardened approaches to mitigate downstream issues.</li>
<li>Experience working with highly scalable APIs, high-volume data pipelines, and large distributed systems.</li>
<li>Experience maintaining and operating cloud services.</li>
<li>An unwillingness to settle for &#39;good enough&#39;, instead staying focused on longevity through well-tested code and continuous improvement.</li>
<li>Demonstrated commitment to seeking diverse viewpoints and acting with intention to create an inclusive team environment.</li>
<li>Excellent written and verbal communication skills, including the ability to write down and present designs and decisions throughout the development lifecycle, collaborating with engineering and non-engineering roles.</li>
</ul>
<p>Desired:</p>
<ul>
<li>5+ years of engineering experience developing and maintaining high-traffic services.</li>
<li>Familiarity with DynamoDB, SQS, and data integration services like AWS Glue.</li>
<li>Familiarity with LLMs, prompt optimization to improve model accuracy, and setting up evaluation pipelines.</li>
<li>Familiarity with Kubernetes, Temporal, or similar workflow orchestration tools.</li>
<li>Experience working with frontend libraries like React or similar.</li>
</ul>
<p>Compensation:</p>
<p>*Please note this role is open to candidates outside of California, Colorado, Hawaii, Illinois, Maryland, Massachusetts, Minnesota, New Jersey, New York, Vermont, Washington D.C., and Washington State.</p>
<p>The estimated pay ranges for this role are as follows:</p>
<ul>
<li>Based in Colorado, Hawaii, Illinois, Maryland, Massachusetts, Minnesota, Vermont, or Washington D.C.: $138,700 - $173,400</li>
<li>Based in New York, New Jersey, Washington State, or California (outside of the San Francisco Bay area): $146,800 - $183,600</li>
<li>Based in the San Francisco Bay area, California: $163,100 - $203,900</li>
</ul>
<p>This role may be eligible to participate in Twilio&#39;s equity plan and corporate bonus plan. All roles are generally eligible for the following benefits: health care insurance, 401(k) retirement account, paid sick time, paid personal time off, paid parental leave.</p>
<p>Applications for this role are intended to be accepted until 4/10/2026, but may change based on business needs.</p>
<p>Twilio thinks big. Do you? We like to solve problems, take initiative, pitch in when needed, and are always up for trying new things. That&#39;s why we seek out colleagues who embody our values, something we call Twilio Magic. Additionally, we empower employees to build positive change in their communities by supporting their volunteering and donation efforts. So, if you&#39;re ready to unleash your full potential, do your best work, and be the best version of yourself, apply now!</p>
<p>If this role isn&#39;t what you&#39;re looking for, please consider other open positions.</p>
<p>Twilio is proud to be an equal opportunity employer. We do not discriminate based upon race, religion, color, national origin, sex (including pregnancy, childbirth, reproductive health decisions, or related medical conditions), sexual orientation, gender identity, gender expression, age, status as a protected veteran, status as an individual with a disability, genetic information, political views or activity, or other applicable legally protected characteristics. We also consider qualified applicants with criminal histories, consistent with applicable federal, state and local law. Qualified applicants with arrest or conviction records will be considered for employment in accordance with the Los Angeles County Fair Chance Ordinance for Employers and the California Fair Chance Act. Additionally, Twilio participates in the E-Verify program in certain locations, as required by law.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
<Salaryrange>$138,700 - $203,900 (varies by location)</Salaryrange>
      <Skills>software development using JVM languages, web services, external systems integration, highly scalable APIs, high volume data pipelines, large distributed systems, cloud services, well-tested code, continuous improvement, inclusive team environment, written and verbal communication skills, DynamoDB, SQS, AWS glue, LLMs, prompt optimizations, evaluation pipelines, Kubernetes, Temporal, workflow orchestration, React</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Twilio</Employername>
      <Employerlogo>https://logos.yubhub.co/twilio.com.png</Employerlogo>
      <Employerdescription>Twilio provides software solutions for communication and develops products for businesses.</Employerdescription>
      <Employerwebsite>https://www.twilio.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/twilio/jobs/7724877</Applyto>
      <Location>Remote - US</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>18ae1499-b22</externalid>
      <Title>Research Engineer, Discovery</Title>
<Description><![CDATA[<p>As a Research Engineer on our team, you will work end-to-end across the whole model stack, identifying and addressing key infrastructure blockers on the path to scientific AGI. Strong candidates should have familiarity with elements of language model training, evaluation, and inference, and an eagerness to dive in and quickly get up to speed in areas where they are not yet experts.</p>
<p>Responsibilities:</p>
<ul>
<li>Design and implement large-scale infrastructure systems to support AI scientist training, evaluation, and deployment across distributed environments</li>
<li>Identify and resolve infrastructure bottlenecks impeding progress toward scientific capabilities</li>
<li>Develop robust and reliable evaluation frameworks for measuring progress towards scientific AGI</li>
<li>Build scalable and performant VM/sandboxing/container architectures to safely execute long-horizon AI tasks and scientific workflows</li>
<li>Collaborate to translate experimental requirements into production-ready infrastructure</li>
<li>Develop large scale data pipelines to handle advanced language model training requirements</li>
<li>Optimize large scale training and inference pipelines for stable and efficient reinforcement learning</li>
</ul>
<p>You may be a good fit if you:</p>
<ul>
<li>Have 6+ years of highly-relevant experience in infrastructure engineering with demonstrated expertise in large-scale distributed systems</li>
<li>Are a strong communicator and enjoy working collaboratively</li>
<li>Possess deep knowledge of performance optimization techniques and system architectures for high-throughput ML workloads</li>
<li>Have experience with containerization technologies (Docker, Kubernetes) and orchestration at scale</li>
<li>Have proven track record of building large-scale data pipelines and distributed storage systems</li>
<li>Excel at diagnosing and resolving complex infrastructure challenges in production environments</li>
<li>Can work effectively across the full ML stack from data pipelines to performance optimization</li>
<li>Have experience collaborating with other researchers to scale experimental ideas</li>
<li>Thrive in fast-paced environments and can rapidly iterate from experimentation to production</li>
</ul>
<p>Strong candidates may also have:</p>
<ul>
<li>Experience with language model training infrastructure and distributed ML frameworks (PyTorch, JAX, etc.)</li>
<li>Background in building infrastructure for AI research labs or large-scale ML organizations</li>
<li>Knowledge of GPU/TPU architectures and language model inference optimization</li>
<li>Experience with cloud platforms (AWS, GCP) at enterprise scale</li>
<li>Familiarity with VM and container orchestration</li>
<li>Experience with workflow orchestration tools and experiment management systems</li>
<li>History working with large scale reinforcement learning</li>
<li>Comfort with large scale data pipelines (Beam, Spark, Dask, …)</li>
</ul>
<p>The annual compensation range for this role is $350,000-$850,000 USD.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$350,000-$850,000 USD</Salaryrange>
      <Skills>large-scale distributed systems, containerization technologies (Docker, Kubernetes), performance optimization techniques, system architectures for high-throughput ML workloads, data pipelines, distributed storage systems, ML frameworks (PyTorch, JAX, etc.), GPU/TPU architectures, cloud platforms (AWS, GCP), VM and container orchestration, workflow orchestration tools, experiment management systems, reinforcement learning, large scale data pipelines (Beam, Spark, Dask, …)</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation that creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/4669581008</Applyto>
      <Location>San Francisco, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>78a9b8f2-81c</externalid>
      <Title>Senior Software Engineer - Data Infrastructure</Title>
      <Description><![CDATA[<p>We believe that the way people interact with their finances will drastically improve in the next few years. We&#39;re dedicated to empowering this transformation by building the tools and experiences that thousands of developers use to create their own products.</p>
<p>Plaid powers the tools millions of people rely on to live a healthier financial life. We work with thousands of companies like Venmo, SoFi, several of the Fortune 500, and many of the largest banks to make it easy for people to connect their financial accounts to the apps and services they want to use.</p>
<p>Making data driven decisions is key to Plaid&#39;s culture. To support that, we need to scale our data systems while maintaining correct and complete data. We provide tooling and guidance to teams across engineering, product, and business and help them explore our data quickly and safely to get the data insights they need, which ultimately helps Plaid serve our customers more effectively.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Contribute to the long-term technical roadmap for data-driven and machine learning iteration at Plaid</li>
<li>Lead key data infrastructure projects such as improving ML development golden paths, implementing offline streaming solutions for data freshness, building net-new ETL pipeline infrastructure, and evolving data warehouse and data lakehouse capabilities</li>
<li>Work with stakeholders in other teams and functions to define technical roadmaps for key backend systems and abstractions across Plaid</li>
<li>Debug, troubleshoot, and reduce operational burden for our Data Platform</li>
<li>Grow the team via mentorship and leadership, reviewing technical documents and code changes</li>
</ul>
<p><strong>Qualifications</strong></p>
<ul>
<li>5+ years of software engineering experience</li>
<li>Extensive hands-on software engineering experience, with a strong track record of delivering successful projects within the Data Infrastructure or Platform domain at similar or larger companies.</li>
<li>Deep understanding of one of: ML Infrastructure systems, including Feature Stores, Training Infrastructure, Serving Infrastructure, and Model Monitoring OR Data Infrastructure systems, including Data Warehouses, Data Lakehouses, Apache Spark, Streaming Infrastructure, Workflow Orchestration.</li>
<li>Strong cross-functional collaboration, communication, and project management skills, with proven ability to coordinate effectively.</li>
<li>Proficiency in coding, testing, and system design, ensuring reliable and scalable solutions.</li>
<li>Demonstrated leadership abilities, including experience mentoring and guiding junior engineers.</li>
</ul>
<p><strong>Additional Information</strong></p>
<p>Our mission at Plaid is to unlock financial freedom for everyone. To support that mission, we seek to build a diverse team of driven individuals who care deeply about making the financial ecosystem more equitable.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$190,800-$286,800 per year</Salaryrange>
      <Skills>ML Infrastructure systems, Data Infrastructure systems, Apache Spark, Streaming Infrastructure, Workflow Orchestration, Feature Stores, Training Infrastructure, Serving Infrastructure, Model Monitoring, Data Warehouses, Data Lakehouses</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>Plaid</Employername>
      <Employerlogo>https://logos.yubhub.co/plaid.com.png</Employerlogo>
      <Employerdescription>Plaid builds tools and experiences that thousands of developers use to create their own products, connecting financial accounts to apps and services.</Employerdescription>
      <Employerwebsite>https://plaid.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/plaid/05b0ae3f-ec60-48d6-ae27-1bd89d928c47</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>7b750523-8ff</externalid>
      <Title>Staff Software Engineer, Data Engineering</Title>
      <Description><![CDATA[<p>We are seeking a Staff Software Engineer to lead the technical strategy and implementation of our enterprise data architecture, governance foundations, and analytics enablement tooling.</p>
<p>In this role, you will be the primary engineering counterpart to the Senior Product Manager for Data Enablement &amp; Governance, jointly shaping the roadmap for enterprise analytics, shared definitions, and the tools that help Omada answer questions faster and more reliably.</p>
<p>You will design and evolve core data products, define patterns and standards used across the company, and drive the technical execution of initiatives that ensure our metrics, reports, and data products are scalable, governed, and trustworthy.</p>
<p>This is a high-impact, cross-functional Staff role working across Data Engineering, Data Science, Analytics, Product, IT, and business leaders.</p>
<p><strong>Key Responsibilities:</strong></p>
<p><strong>Enterprise Data Architecture</strong></p>
<ul>
<li>Own the vision and technical roadmap for Omada&#39;s enterprise data architecture, spanning ingestion, storage, modeling, and serving layers for analytics and applied statistics use cases.</li>
<li>Design, implement, and evolve scalable, secure, and cost-efficient data solutions (data lakes, warehouses, marts, semantic layers) that support governed, cross-functional analytics and self-service.</li>
<li>Define and socialize architectural patterns, data contracts, and integration standards used by data and product teams across the organization.</li>
<li>Anticipate future needs (e.g., new product lines, new modalities, AI/ML workloads) and drive proactive architectural changes rather than reacting to incidents or point-in-time requests.</li>
</ul>
<p><strong>Data Modeling, Quality, and Governance Foundations</strong></p>
<ul>
<li>Lead the design of logical and physical data models to support enterprise metrics, dashboards, and ad hoc analytics, with a focus on reusability and clear ownership.</li>
<li>Implement robust data quality, validation, and monitoring frameworks that underpin trusted “single source of truth” definitions for core concepts (e.g., active member, MAU, GLP-1 member).</li>
<li>Partner with the Senior Product Manager, Data Enablement &amp; Governance to translate governance decisions (definitions, ownership, change-management processes) into concrete technical implementations in the data platform.</li>
<li>Set standards and review mechanisms to ensure new pipelines, marts, and reports align with enterprise definitions and governance policies.</li>
<li>Continuously improve performance, scalability, and cost-efficiency of data workflows and storage; lead deep dives and remediation for complex production issues.</li>
</ul>
<p><strong>Enterprise Data Products Lifecycle</strong></p>
<ul>
<li>In close partnership with the Senior PM, define and deliver core, reusable data products (e.g., engagement, clinical, financial, client, care delivery datasets) that power dashboards, reporting, and self-service analytics.</li>
<li>Co-Architect and implement technical foundations for AI-assisted analytics tools, governed semantic layers, and reporting applications that make analysts and business users more efficient.</li>
<li>Partner with Product and Engineering teams owning tools like Amplitude, Tableau, and internal reporting tools to ensure consistent instrumentation, mapping to enterprise definitions, and scalable access patterns.</li>
<li>Translate business and product requirements into resilient schemas, data services, and interfaces that are usable, maintainable, and auditable.</li>
<li>Ensure production data delivery meets defined SLAs and supports downstream BI, reporting apps, and applied statistics workloads.</li>
<li>Play a key role in cross-functional forums (e.g., Data Governance Committee, analytics communities) as the technical voice for feasibility, risk, and long-term platform health.</li>
</ul>
<p><strong>Technical Leadership, Mentorship, and Culture</strong></p>
<ul>
<li>Lead large, multi-team technical initiatives, from design to implementation and rollout, setting a high bar for design docs, reviews, and execution quality.</li>
<li>Mentor senior and mid-level engineers, elevating the team’s skills in data modeling, pipeline design, governance, and platform thinking.</li>
<li>Help shape playbooks for how product squads and spokes engage with central data teams on new metrics, data products, and applied stats projects.</li>
<li>Partner closely with Analytics, Data Science, Product, and business leaders to ensure data architecture and governance decisions are aligned with company OKRs and measurable business value.</li>
<li>Proactively identify complexity, duplication, and fragility in existing systems; drive simplification and standardization with sustainable solutions.</li>
<li>Model Omada’s values in day-to-day work, fostering a culture of trust, context-seeking, bold thinking, and high-impact delivery.</li>
</ul>
<p><strong>About You:</strong></p>
<ul>
<li>8+ years of experience building, maintaining, and orchestrating scalable data platforms and high-quality production pipelines, including significant experience in analytics or warehousing environments.</li>
<li>Demonstrated Staff-level impact: leading cross-team technical initiatives, making architectural decisions that shaped a multi-year roadmap, and influencing stakeholders beyond your immediate team.</li>
<li>Deep experience with cloud data ecosystems (e.g., AWS) and modern data warehouses (e.g., Redshift, Snowflake, BigQuery), including MPP query optimization.</li>
<li>Strong background in data modeling for OLTP and OLAP, and designing reusable data products for BI, reporting, and advanced analytics.</li>
<li>Hands-on experience implementing data quality, observability, and governance frameworks, ideally in a regulated or PHI/PII-sensitive environment.</li>
<li>Experience partnering with Product Management and Analytics to define and deliver platform capabilities, not just point solutions.</li>
</ul>
<p><strong>Technical Skills:</strong></p>
<ul>
<li>Strong proficiency in SQL (analytical and performance-tuned) and experience with relational and MPP databases.</li>
<li>Proficiency in at least one modern programming language used in data engineering (e.g., Python, Java, Scala) and comfort applying software engineering best practices (testing, CI/CD, code review).</li>
<li>Experience with workflow orchestration and data integration tools (e.g., Airflow) and event-driven or streaming patterns where appropriate.</li>
<li>Familiarity with BI and analytics tools (e.g., Tableau, Amplitude, or similar) and how they integrate with governed data layers.</li>
<li>Experience with data governance concepts (ownership, lineage, definitions, access controls) and their technical implementation in a modern data stack.</li>
<li>Familiarity with AI tools for development.</li>
</ul>
<p><strong>Communication &amp; Working Style:</strong></p>
<ul>
<li>Excellent communication and collaboration skills, with the ability to convey complex technical concepts to non-technical stakeholders.</li>
<li>Highly self-directed and comfortable operating in ambiguous, cross-functional problem spaces, creating clarity and direction where none exists.</li>
<li>Strong sense of ownership and bias for impact; you care about outcomes for members, customers, and internal users, not just elegant systems.</li>
</ul>
<p><strong>Benefits:</strong></p>
<ul>
<li>Competitive salary with generous annual cash bonus</li>
<li>Equity grants</li>
<li>Remote first work from home culture</li>
<li>Flexible Time Off to help you recharge</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>SQL, Cloud data ecosystems, Modern data warehouses, MPP query optimization, Data modeling, Data quality, Data governance, Workflow orchestration, Data integration, Event-driven or streaming patterns, BI and analytics tools, AI tools for development</Skills>
      <Category>Engineering</Category>
      <Industry>Healthcare</Industry>
      <Employername>Omada Health</Employername>
      <Employerlogo>https://logos.yubhub.co/omadahealth.com.png</Employerlogo>
      <Employerdescription>Omada Health is a healthcare technology company that provides digital therapeutics for chronic disease management.</Employerdescription>
      <Employerwebsite>https://www.omadahealth.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/omadahealth/jobs/7753330</Applyto>
      <Location>Remote, USA</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>2cf203a5-5c5</externalid>
      <Title>Platform Engineer, Document Intelligence</Title>
      <Description><![CDATA[<p>About Hebbia</p>
<hr>
<p>The AI platform for investors and bankers that generates alpha and drives upside.</p>
<p>Founded in 2020 by George Sivulka and backed by Peter Thiel and Andreessen Horowitz, Hebbia powers investment decisions for BlackRock, KKR, Carlyle, Centerview, and 40% of the world’s largest asset managers. Our flagship product, Matrix, delivers industry-leading accuracy, speed, and transparency in AI-driven analysis. It is trusted to help manage over $30 trillion in assets globally.</p>
<p>We deliver the intelligence that gives finance professionals a definitive edge. Our AI uncovers signals no human could see, surfaces hidden opportunities, and accelerates decisions with unmatched speed and conviction. We do not just streamline workflows. We transform how capital is deployed, how risk is managed, and how value is created across markets.</p>
<p>Hebbia is not a tool. Hebbia is the competitive advantage that drives performance, alpha, and market leadership.</p>
<hr>
<p>The Team</p>
<hr>
<p>The Document Intelligence team at Hebbia builds cutting-edge AI solutions that transform how users discover and interact with billions of private and public documents. Our products, including Hebbia’s Browse application, enable intelligent document exploration, powerful search capabilities, and deep insights extraction. We focus on developing advanced data ingestion and search technologies that deliver intuitive, explainable, and highly responsive experiences. Working closely with customers, our team continuously iterates to address real-world challenges and drive impactful, data-driven decisions. Our goal is to empower users by seamlessly turning vast and complex document repositories into actionable intelligence.</p>
<hr>
<p>The Role</p>
<hr>
<p>Platform engineering at Hebbia is about excellent, scalable enablement. You are responsible for the core distributed systems that power billions of tokens across millions of dollars of AUM. You will be responsible for deploying efficient systems and building software tightly coupled with state-of-the-art infrastructure and system design. Hebbia’s edge is built on operating at the edge of the tokenomics curve, and you will serve as a key contributor in this area. We value engineers who think on their feet, innovate, and can solve for exponential scale.</p>
<hr>
<p>Responsibilities</p>
<hr>
<ul>
<li>Own critical system components: Take complex requirements and turn them into robust, scaled solutions that solve real customer needs.</li>
<li>Unlock O(1) universal indexing: Build and iterate on our high-scale document build system that enables constant time latency for indexing any content in the world, regardless of data volume.</li>
<li>Drive performance optimization: Architect and implement performance-tuning solutions to ensure our systems operate efficiently at scale, minimizing latency and maximizing throughput across millions of documents.</li>
<li>Mentor and guide: Provide technical leadership, mentorship, and guidance to junior engineers, fostering a culture of learning and growth.</li>
</ul>
<hr>
<p>Who You Are</p>
<hr>
<ul>
<li>Bachelor&#39;s or Master&#39;s degree in Computer Science, Data Science, Statistics, or a related field. A strong academic background with coursework in data structures, algorithms, and software development is preferred.</li>
<li>5+ years of software development experience at a venture-backed startup or top technology firm, with a focus on distributed systems and platform engineering.</li>
<li>Proficiency in building backend and distributed systems using technologies such as Python, Java, or Go.</li>
<li>Deep understanding of scalable system design, performance optimization, and resilience engineering.</li>
<li>Extensive experience with cloud platforms (e.g., AWS).</li>
<li>Working experience with one or more of the following: Kafka, ElasticSearch, PostgreSQL, and/or Redis.</li>
<li>Knowledge of workflow orchestration and execution platforms like Airflow, Temporal or Prefect.</li>
<li>Proven experience enabling observability patterns.</li>
<li>Ability to analyze complex problems, propose innovative solutions, and effectively communicate technical concepts to both technical and non-technical stakeholders.</li>
<li>Proven experience in leading software development projects and collaborating with cross-functional teams. Strong interpersonal and communication skills to foster a collaborative and inclusive work environment.</li>
<li>Enthusiasm for continuous learning and professional growth. A passion for exploring new technologies, frameworks, and software development methodologies.</li>
<li>Autonomous and excited about taking ownership over major initiatives.</li>
</ul>
<hr>
<p>Bonuses:</p>
<ul>
<li>Experience building distributed systems leveraging technologies such as etcd or Apache Zookeeper.</li>
<li>Frequent user of AI products, especially during the development lifecycle (e.g., Cursor, Claude Code).</li>
</ul>
<hr>
<p>Compensation</p>
<hr>
<p>The salary range for this role is $160,000 to $300,000. This range may be inclusive of several career levels at Hebbia and will be narrowed during the interview process based on the candidate’s experience and qualifications. Adjustments outside of this range may be considered for candidates whose qualifications significantly differ from those outlined in the job description.</p>
<hr>
<p>Life @ Hebbia</p>
<hr>
<ul>
<li>PTO: Unlimited</li>
<li>Insurance: Medical + Dental + Vision + 401K</li>
<li>Eats: Catered lunch daily + DoorDash dinner credit if you ever need to stay late</li>
<li>Parental leave policy: 3 months non-birthing parent, 4 months for birthing parent</li>
<li>Fertility benefits: $15k lifetime benefit</li>
<li>New hire equity grant: competitive equity package with unmatched upside potential</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$160,000 to $300,000</Salaryrange>
      <Skills>backend and distributed systems, Python, Java, Go, scalable system design, performance optimization, resilience engineering, cloud platforms, AWS, Kafka, ElasticSearch, PostgreSQL, Redis, workflow orchestration and execution platforms, Airflow, Temporal, Prefect, observability patterns, etcd, Apache Zookeeper, AI products, Cursor, Claude Code</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Hebbia</Employername>
      <Employerlogo>https://logos.yubhub.co/hebbia.com.png</Employerlogo>
      <Employerdescription>Hebbia is an AI platform for investors and bankers that generates alpha and drives upside, backed by Peter Thiel and Andreessen Horowitz, and powers investment decisions for large asset managers.</Employerdescription>
      <Employerwebsite>https://hebbia.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/hebbia/jobs/4584750005</Applyto>
      <Location>New York City; San Francisco, CA</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>452a51e8-27a</externalid>
      <Title>Product Enablement Lead</Title>
<Description><![CDATA[<p>At Flow, we&#39;re reimagining what it means to live, work, and connect. As the Product Enablement Lead, you&#39;ll play a foundational role in shaping and delivering the products that drive our communities&#39; day-to-day experience. You&#39;ll oversee the full product lifecycle, from early discovery and collaborative roadmap definition to hands-on rollout and ongoing optimization.</p>
<p>This is a unique opportunity to work across both digital and physical touchpoints, bridging strategic product thinking with deep operational execution to solve complex, real-world problems. A critical part of this role is bringing genuine AI fluency, using LLMs, automation platforms, and AI agents to multiply the team&#39;s impact and build scalable enablement infrastructure.</p>
<p>We&#39;re looking for a leader who&#39;s excited by ambiguity, obsessed with outcomes, and eager to iterate quickly with a cross-functional team of designers, engineers, and operators in-market and around the world. The right leader will balance strategy with grit and demonstrate strong communication and influence skills to ensure success.</p>
<p>Responsibilities:</p>
<ul>
<li>Influence the roadmaps for critical workflows that support both residents (“neighbors”) and internal operators across leasing, resident experience, maintenance, and financial planning</li>
<li>Lead product discovery efforts through data analysis and close collaboration with on-the-ground teams</li>
<li>Collaborate with global tech and local ops teams to ensure tools are scalable, delivered right the first time, and tailored to market nuances</li>
<li>Partner with local operators and global tech teams to scale tools, improve workflows, and ensure product usability across diverse market contexts</li>
<li>Build strong feedback loops between field operators and product teams to surface unmet needs and drive continuous improvement</li>
<li>Operationalize product rollouts with strong change management, training, and performance tracking</li>
<li>Ensure tools and processes are not only functional but also resilient, scalable, and embedded in day-to-day operations</li>
<li>Champion a culture of rapid iteration, continuous learning, and tight collaboration between technical and operational teams</li>
<li>Identify automation opportunities across operator workflows and build or evaluate solutions using AI agents, LLMs, and workflow orchestration tools (e.g., n8n, Cursor, MCP)</li>
<li>Enable scale beyond human bandwidth by automating identified opportunities through repeatable processes, tooling, templates, or AI-assisted workflows, developed in collaboration with operators</li>
<li>Design and facilitate cross-functional alignment workshops to co-develop SOPs and drive process adoption at scale</li>
</ul>
<p>Ideal Background:</p>
<ul>
<li>5–8 years of product experience, ideally in complex business domains (finance, e-commerce, logistics, etc.)</li>
<li>Comfortable owning complex, ambiguous problem spaces, especially those that blend tech, service, and people</li>
<li>Adept at partnering across engineering, design, and operations, with strong communication and collaboration skills</li>
<li>You think in systems, care about details, and are energized by rolling up your sleeves to make things better every day</li>
<li>Bonus: experience with real estate, logistics, marketplace, or hospitality platforms</li>
</ul>
<p>Technical Skills:</p>
<ul>
<li>Strong communicator, especially in writing clear, actionable PRDs and process documents for stakeholders</li>
<li>Deep, hands-on AI fluency: prompt engineering, AI agent design and evaluation, in-product guidance platforms, and workflow orchestration (n8n, Cursor, MCP); able to prototype and ship AI-augmented workflows, not just describe them</li>
<li>Proficient in modern product tools: JIRA, Confluence, Figma, LLMs</li>
</ul>
<p>What Sets You Apart:</p>
<ul>
<li>You understand and embrace grit</li>
<li>You thrive in ambiguity and high-pressure environments; comfortable making informed trade-offs</li>
<li>You are a low-ego individual who is excited about working in an ever-changing space</li>
<li>You are passionate about elegant design and hypothesis-driven product development</li>
<li>You are an excellent communicator and natural leader</li>
<li>You are flexible and willing to pivot quickly based on new information or shifting priorities</li>
<li>You can shift seamlessly from strategic storytelling to tactical execution</li>
</ul>
<p>Additional Information:</p>
<p>Benefits</p>
<ul>
<li>Comprehensive benefits package (Medical / Dental / Vision / Disability / Life)</li>
<li>Paid time off and 13 paid holidays</li>
<li>401(k) retirement plan</li>
<li>Healthcare and Dependent Care Flexible Spending Accounts (FSAs)</li>
<li>Access to HSA-compatible plans</li>
<li>Pre-tax commuter benefits</li>
<li>Employee Assistance Program (EAP), free therapy through SpringHealth, acupuncture, and other wellness offerings</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Strong communicator, Deep hands-on AI fluency, Prompt engineering, AI agent design and evaluation, In-product guidance platforms, Workflow orchestration, Modern product tools, JIRA, Confluence, Figma, LLMs</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Flow</Employername>
      <Employerlogo>https://logos.yubhub.co/flow.com.png</Employerlogo>
      <Employerdescription>Flow is a real estate company that operates a technology platform and operations ecosystem spanning condominiums, hotels, multifamily residences, and office spaces.</Employerdescription>
      <Employerwebsite>https://flow.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/flowlife/3e121ac0-34e7-4467-9471-e0ba9a2ea7ba</Applyto>
      <Location>Miami</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>6638a9f5-f4b</externalid>
      <Title>Senior Software Engineer, Backend (Bucharest)</Title>
      <Description><![CDATA[<p>Join us on this thrilling journey to revolutionize the workforce with AI. The future of work is here, and it&#39;s at Cresta.</p>
<p><strong>About the role:</strong></p>
<p>At Cresta, the Voice Platform team is on a mission to transform real-time voice infrastructure and contact center automation through AI-powered backend systems. As a Senior Software Engineer on the Voice Platform team, you’ll be responsible for designing, scaling, and operating the distributed services that power Cresta’s voice ecosystem. You’ll drive major initiatives in areas such as SIP and WebRTC support, multilingual and translation pipelines, and real-time conversation intelligence.</p>
<p><strong>Responsibilities:</strong></p>
<ul>
<li>Lead the design and development of scalable, distributed backend microservices in Golang (with some Python for AI-related services).</li>
<li>Own and evolve voice platform integrations with large-scale enterprise communication and contact center systems.</li>
<li>Drive initiatives to expand platform capabilities, including bi-directional SIP, WebRTC integrations, multilingual support, advanced transcription, and real-time translation.</li>
<li>Build systems that power conversation intelligence for both remote and in-person interactions.</li>
<li>Improve observability, reliability, and self-service troubleshooting across the platform.</li>
<li>Ensure performance, scalability, and resilience of real-time voice pipelines running in the cloud.</li>
<li>Collaborate with cross-functional teams (ML, product, solution architects) to design end-to-end solutions for customer deployments.</li>
<li>Provide technical guidance, mentorship, and best practices to other engineers on the team.</li>
</ul>
<p><strong>Qualifications We Value:</strong></p>
<ul>
<li>Bachelor’s degree in Computer Science or related field.</li>
<li>5+ years of experience in backend system development, distributed systems, or cloud infrastructure.</li>
<li>Expertise in Go (or a similar systems language) with strong API and service design skills.</li>
<li>Proven experience with scalable architectures using microservices, workflow orchestration, distributed caching, and cloud databases.</li>
<li>Strong knowledge of Kubernetes, Docker, and modern cloud infrastructure.</li>
<li>Solid understanding of networking, real-time communication protocols, and cloud security best practices.</li>
<li>Demonstrated ability to lead complex technical projects from design through production.</li>
<li>Bonus: experience with voice systems, telephony, or real-time media platforms.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Golang, Python, API design, Service design, Microservices, Workflow orchestration, Distributed caching, Cloud databases, Kubernetes, Docker, Cloud infrastructure, Networking, Real-time communication protocols, Cloud security best practices</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Cresta</Employername>
      <Employerlogo>https://logos.yubhub.co/cresta.ai.png</Employerlogo>
      <Employerdescription>Cresta is a company that provides a platform combining AI and human intelligence to help contact centers discover customer insights and behavioral best practices.</Employerdescription>
      <Employerwebsite>https://www.cresta.ai/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/cresta/jobs/4928801008</Applyto>
      <Location>Bucharest, Romania (Hybrid)</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>2c01d9b5-3e0</externalid>
      <Title>AI Engineer</Title>
      <Description><![CDATA[<p>About Belong</p>
<p>We believe in a world where homes are owned by regular people, not corporations. Our mission is to provide authentic belonging experiences, empowering residents to become homeowners and homeowners to achieve financial freedom.</p>
<p>The Role</p>
<p>Belong is looking for an AI Automation Engineer to help transform real-world operations through practical, high-impact AI solutions. You’ll be building and shipping AI-powered workflows that directly improve how our teams operate and how our customers experience Belong.</p>
<p>Responsibilities</p>
<ul>
<li>Build AI-powered applications and workflows that automate and enhance real-world business operations, including evaluation and safety mechanisms.</li>
<li>Rapidly prototype AI-driven solutions, validate them in real scenarios, and evolve them into production-ready systems.</li>
<li>Integrate AI capabilities into backend services, internal tools, and external platforms through well-designed APIs and services.</li>
<li>Own AI-driven initiatives end to end, from early experimentation to production deployment, proactively leveraging AI code generation tools to confidently contribute across the backend and frontend stack when needed.</li>
<li>Work closely with product, operations, customer support and engineering teams to identify automation opportunities and deliver meaningful impact.</li>
</ul>
<p>What We’re Looking For</p>
<ul>
<li>Strong programming skills in Python and/or TypeScript.</li>
<li>Solid software engineering fundamentals and experience building and shipping production systems.</li>
<li>Experience deploying, operating, and iterating on AI-powered applications.</li>
<li>Familiarity with modern AI tooling, agent frameworks, and workflow orchestration tools.</li>
<li>A proactive mindset with a strong sense of ownership and the ability to drive initiatives forward.</li>
<li>Clear communication skills and a collaborative approach to working in cross-functional teams.</li>
</ul>
<p>Why Belong</p>
<ul>
<li>We’re tackling one of the biggest, most broken industries (housing) and creating something entirely new.</li>
<li>You’ll work alongside world-class founders and leaders who have scaled successful companies.</li>
<li>AI isn’t a side project here; it’s at the core of our strategy and product roadmap.</li>
<li>Competitive compensation, equity, and benefits.</li>
<li>Ownership, autonomy, and the opportunity to build something that matters.</li>
</ul>
<p>If you’re excited about building practical AI solutions, owning problems end to end, and pushing what’s possible in real-world operations, we’d love to talk. Apply now.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, TypeScript, AI tooling, Agent frameworks, Workflow orchestration tools, .NET, React, Next.js</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Belong</Employername>
      <Employerlogo>https://logos.yubhub.co/belong.com.png</Employerlogo>
      <Employerdescription>Belong is a company that provides authentic belonging experiences, empowering residents to become homeowners and homeowners to achieve financial freedom. It has over 200 employees.</Employerdescription>
      <Employerwebsite>https://www.belong.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/belong/50109bb9-7e26-4bcc-855d-87da77964fee</Applyto>
      <Location>Buenos Aires</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>50eb89c3-6b5</externalid>
      <Title>Staff Engineer MBSE</Title>
      <Description><![CDATA[<p><strong>Engineer the Future with Us</strong></p>
<p>We&#39;re seeking a Staff Engineer MBSE to join our team.</p>
<p><strong>Innovation Starts Here</strong></p>
<p>At Ansys, Part of Synopsys, we&#39;re the global leader in engineering simulation software, helping innovative companies solve complex design challenges. Our cutting-edge solutions power advancements across industries, from aerospace to consumer electronics.</p>
<p><strong>What You&#39;ll Be Doing:</strong></p>
<ul>
<li>Lead and drive technical engagements across the customer lifecycle, from problem definition through solution deployment.</li>
<li>Act as a trusted advisor to customers, driving adoption and scaling of MBSE and digital engineering practices across engineering teams and leadership levels.</li>
<li>Engage with customers to enable and guide definition of system architectures, requirements structures, and digital engineering strategies.</li>
<li>Develop and implement MBSE-driven solutions, enabling customers to adopt and scale these approaches using SysML-based system modeling approaches.</li>
<li>Drive integration of system architecture models, behavior execution engines, and engineering analyses into cohesive digital engineering workflows.</li>
<li>Define and enable implementation of end-to-end traceability frameworks across requirements, architecture, behavior, and verification.</li>
<li>Partner with product teams to influence roadmap and advance interoperability across MBSE and simulation ecosystems.</li>
</ul>
<p><strong>The Impact You Will Have:</strong></p>
<ul>
<li>Drive adoption of executable MBSE and digital engineering practices across complex organizations.</li>
<li>Influence the evolution of MBSE practices through real-world application and feedback.</li>
<li>Contribute directly to business growth through strategic, high-impact engagements.</li>
<li>Enable successful digital engineering transformations within customer environments, advancing industry capabilities.</li>
<li>Shape product direction and interoperability through collaboration with development teams and industry partners.</li>
<li>Elevate customer satisfaction and long-term partnerships through expert guidance and innovative solutions.</li>
</ul>
<p><strong>What You&#39;ll Need:</strong></p>
<ul>
<li>MS (or PhD) in Engineering, Systems Engineering, or related field.</li>
<li>5+ years of experience in systems engineering, MBSE, or system architecture development.</li>
<li>Experience with system modeling approaches (e.g., SysML or similar frameworks).</li>
<li>Experience with requirements-driven engineering and traceability.</li>
<li>Strong programming or scripting skills (Python preferred).</li>
<li>Experience integrating models, behavior, and/or engineering tools into automated workflows.</li>
<li>Strong analytical, problem-solving, and communication skills.</li>
<li>Ability to operate effectively in a customer-facing, consultative engineering role.</li>
<li>Experience with ModelCenter or similar workflow orchestration tools (preferred).</li>
</ul>
<p><strong>Who You Are:</strong></p>
<ul>
<li>Customer-focused and able to build trusted relationships.</li>
<li>Comfortable influencing and communicating with senior technical and executive stakeholders.</li>
<li>Able to bridge engineering depth with strategic decision-making.</li>
<li>Self-driven, organized, and capable of managing multiple priorities.</li>
<li>A collaborative team player who contributes to a culture of learning and innovation.</li>
</ul>
<p><strong>The Team You&#39;ll Be A Part Of:</strong></p>
<p>You will be part of a multidisciplinary engineering team focused on advancing model-based systems engineering, digital engineering, and system-level integration. The team works closely with customers, product development, and go-to-market functions to deliver scalable, high-impact solutions.</p>
<p><strong>Rewards and Benefits:</strong></p>
<p>We offer a comprehensive range of health, wellness, and financial benefits to cater to your needs. Our total rewards include both monetary and non-monetary offerings. Your recruiter will provide more details about the salary range and benefits during the hiring process.</p>
]]></Description>
      <Jobtype>employee</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$112000-$168000</Salaryrange>
      <Skills>model-based systems engineering, system architecture, digital engineering, SysML, requirements-driven engineering, traceability, programming, scripting, Python, workflow orchestration, ModelCenter, behavior execution engines, executable MBSE, Digital Mission Engineering, system-of-systems analysis, requirements management, DOORS, Jama, Teamcenter, digital thread, digital engineering strategies, SysMLv2, interoperability frameworks</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Ansys, Part of Synopsys</Employername>
      <Employerlogo>https://logos.yubhub.co/careers.synopsys.com.png</Employerlogo>
      <Employerdescription>Ansys, Part of Synopsys, is a global leader in engineering simulation software, helping companies solve complex design challenges.</Employerdescription>
      <Employerwebsite>https://careers.synopsys.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://careers.synopsys.com/job/michigan/staff-engineer-mbse/44408/93512568928</Applyto>
      <Location>United States</Location>
      <Country></Country>
      <Postedate>2026-04-05</Postedate>
    </job>
    <job>
      <externalid>1ba54c5b-026</externalid>
      <Title>Mid or Senior Full Stack Developer</Title>
      <Description><![CDATA[<p>We are seeking a talented Mid or Senior Full Stack Developer to join a focused team building the next generation of our internal content management platforms. You will be working on the CMS Proof, a critical project aimed at modernizing our editorial capabilities.</p>
<p>In this role, you will work closely with the Tech Lead (CMS Proof) and another Full Stack Developer to deliver a scalable, efficient, and forward-thinking platform. A significant part of your work will involve implementing new AI improvements to support the Editorial Efficiencies strategic initiative, helping to integrate cutting-edge AI tools directly into our content workflows.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Feature Development: Build robust, scalable features for the CMS Proof using PHP (backend) and ReactJS/NextJS with TypeScript (frontend), ensuring high performance and responsiveness.</li>
<li>AI Integration: Work on implementing AI-driven workflows and tools to enhance editorial efficiency, utilizing Temporal for orchestration.</li>
<li>Technical Collaboration: Collaborate closely with the Tech Lead to refine system architecture and ensure technical decisions align with the long-term product vision.</li>
<li>Code Quality: Write clean, maintainable, and well-documented code. Participate in code reviews to ensure standards are met and to mentor junior team members (for Senior applicants).</li>
<li>Testing &amp; QA: Write and maintain comprehensive test suites (unit, integration, E2E) to ensure platform stability and prevent regressions.</li>
<li>DevOps &amp; CI/CD: Utilize CI/CD pipelines for deployment and help improve developer tooling and local environment setups (containers).</li>
<li>Innovation: Participate in hack days and continuous learning to bring new ideas and technologies (specifically around AI and modern web dev) to the team.</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>Web Development: Strong experience in modern web development with PHP, NextJS/React, and TypeScript.</li>
<li>Core Technologies: Deep understanding of JavaScript and CSS fundamentals, including modern features and best practices.</li>
<li>TypeScript: Proficiency with TypeScript, including the ability to work with complex types.</li>
<li>Component Architecture: Ability to build performant, reusable UI components in modern JavaScript frameworks.</li>
<li>Backend Experience: Significant experience with API development and backend logic.</li>
<li>Performance Mindset: Ability to interpret performance metrics (e.g., Flame graphs, Core Web Vitals) to optimize applications.</li>
<li>Workflow Orchestration: Experience with, or a strong desire to learn, Temporal or similar workflow engines.</li>
<li>Data Handling: Experience retrieving and marshalling data from various sources, including databases (MongoDB) and external APIs.</li>
<li>Version Control: Good understanding of Git and collaborative workflows (e.g., GitFlow, PR reviews).</li>
<li>Communication: Ability to explain technical concepts clearly to colleagues and stakeholders.</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Uncapped leave, because we trust you to manage your workload and time</li>
<li>When we hit our targets, enjoy a share of our profits with a bonus</li>
<li>Refer a friend and get rewarded when they join Future</li>
<li>Well-being support with access to our Colleague Assistant Programmes</li>
<li>Opportunity to purchase shares in Future, with our Share Incentive Plan</li>
</ul>
<p><strong>Who We Are</strong></p>
<p>We&#39;re Future, the global leader in specialist media. With over 3,000 employees working across 200+ media brands, Future is a prime destination for passionate people worldwide looking to consume trusted, expert content that educates and inspires action - both online and off - through our specialist websites, magazines, events, newsletters, podcasts and social spaces.</p>
<p><strong>Our Future, Our Responsibility - Inclusion and Diversity at Future</strong></p>
<p>We embrace and celebrate diversity, making it part of who we are.</p>
<p>Different perspectives spark ideas, fuel creativity, and push us to innovate. That&#39;s why we&#39;re building a workplace where everyone feels valued, respected, and empowered to thrive.</p>
<p>When it comes to hiring, we keep it fair and inclusive, welcoming talent from every walk of life. It&#39;s not just about what you bring to the table — it&#39;s about making sure the table has room for everyone.</p>
<p>Because a diverse team isn&#39;t just good for business. It&#39;s the Future.</p>
<p>Find out more about Our Future, Our Responsibility on our website.</p>
<p>Please let us know if you need any reasonable adjustments made so we can give you the best experience!</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid|senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>£40,000 - £55,000</Salaryrange>
      <Skills>PHP, TypeScript, ReactJS, NextJS, MongoDB, Temporal, Git, JavaScript, CSS, API development, backend logic, performance metrics, workflow orchestration, data handling, version control, communication, Symfony PHP framework, MongoDB design and optimization, Laravel framework experience, Storybook for component development, AI Tooling, Design Patterns, DevOps, Observability, Event Sourcing</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Future</Employername>
      <Employerlogo>https://logos.yubhub.co/j.com.png</Employerlogo>
      <Employerdescription>Future is a global leader in specialist media, with over 3,000 employees working across 200+ media brands.</Employerdescription>
      <Employerwebsite>https://apply.workable.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://apply.workable.com/j/91E87BCD64</Applyto>
      <Location></Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
    <job>
      <externalid>7f345e34-fa0</externalid>
      <Title>Software Engineering Manager</Title>
      <Description><![CDATA[<p>At Ford Motor Company, we believe freedom of movement drives human progress. We are seeking a Software Engineering Manager to provide engineering leadership to multiple product lines within the Ford Customer Service Division (FCSD). FCSD is a true one-stop shop, offering comprehensive diagnostics, repair, and service capabilities for a full portfolio of electrified, hybrid, and internal combustion vehicles globally.</p>
<p><strong>Responsibilities</strong></p>
<p><strong>Provide Engineering Leadership</strong></p>
<ul>
<li>Provide engineering leadership to multiple product lines within FCSD</li>
<li>Help business partners understand our iterative development approach and focus on delivering a Minimum Viable Product (MVP) and releases</li>
<li>Design and deliver industry-leading products and services to maximize value and productivity for commercial customers</li>
</ul>
<p><strong>Ensure Software Engineering Excellence</strong></p>
<ul>
<li>Ensure software engineering excellence (e.g. best practices and quality) is achieved within the FCSD Tech product line</li>
<li>Collaborate with other Product Line Anchors to reduce complexity across the portfolio, enhance interoperability between services, and build reusable API services</li>
</ul>
<p><strong>Provide Thought Leadership</strong></p>
<ul>
<li>Provide thought leadership for the development, structure, technology, and tools used within FCSD</li>
<li>Innovate and operate with an iterative, agile, and user-centric perspective</li>
</ul>
<p><strong>Communicate Technology Strategy</strong></p>
<ul>
<li>Clearly communicate technology strategy and vision to team members and internal and external stakeholders</li>
<li>Demonstrate software engineering excellence through actively coding, pairing, and performing code and architecture reviews with the software engineers within the FCSD Tech product line</li>
</ul>
<p><strong>Qualifications</strong></p>
<ul>
<li>Bachelor&#39;s degree in Computer Science, Engineering, or a related field</li>
<li>5+ years experience with progressive leadership responsibilities in Software Engineering, Architecture, and Agile Framework</li>
<li>Experience with Lean methodology &amp; eXtreme Programming</li>
<li>Must be able to operationalize and assist teams with abstract technology concepts</li>
<li>Strong communication, collaborative, and influencing skills</li>
<li>Proven ability to work closely with senior leadership</li>
<li>Strong personal presence and capabilities to resolve technical concerns</li>
<li>Demonstrated ability to drive development of highly technical technology services and capabilities</li>
<li>Demonstrated understanding and ability to drive API economy and solutions</li>
<li>Demonstrated understanding and ability to drive highly available consumer-ready Internet properties and technical platforms</li>
<li>Experience collaborating with engineers, designers, and product owners</li>
<li>Excellent communication skills with the ability to adapt your communication style to the audience</li>
<li>Ability to work collaboratively and navigate complex decision making in a rapidly changing environment</li>
<li>Strong leadership and communication skills and the ability to teach others</li>
<li>3+ years of experience building and supporting cloud-native applications leveraging the Java, Spring Boot, and React tech stack</li>
<li>Experience with cloud services and platform knowledge</li>
<li>Modern databases (Relational and non-relational)</li>
<li>Continuous integration/continuous delivery tools and pipelines, such as Tekton, Jenkins, Terraform, SonarQube, Maven, Gradle, Harness, Apigee X, etc.</li>
<li>Experience developing and deploying to cloud platforms, such as Google Cloud Platform, Pivotal Cloud Foundry, Amazon Web Services, and Microsoft Azure</li>
<li>Experience with GCP Dataflow (Apache Beam) and workflow orchestration</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Immediate medical, dental, vision, and prescription drug coverage</li>
<li>Flexible family care days, paid parental leave, new parent ramp-up programs, subsidized back-up child care, and more</li>
<li>Family building benefits, including adoption and surrogacy expense reimbursement, fertility treatments, and more</li>
<li>Vehicle discount program for employees and family members and management leases</li>
<li>Tuition assistance</li>
<li>Established and active employee resource groups</li>
<li>Paid time off for individual and team community service</li>
<li>A generous schedule of paid holidays, including the week between Christmas and New Year&#39;s Day</li>
<li>Paid time off and the option to purchase additional vacation time</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>This position is within salary grade LL6.</Salaryrange>
      <Skills>Software Engineering, Agile Framework, Lean methodology, eXtreme Programming, Java, Spring Boot, REACT, Cloud services, Platform knowledge, Modern databases, Continuous integration/continuous delivery tools, Pipelines, GCP Dataflow, Apache Beam, Workflow orchestration, Cloud-native applications, Cloud platforms, API economy, Highly available consumer-ready Internet properties, Technical platforms</Skills>
      <Category>Engineering</Category>
      <Industry>Automotive</Industry>
      <Employername>Ford Motor Company</Employername>
      <Employerlogo></Employerlogo>
      <Employerdescription>Ford Motor Company is a multinational automaker that designs, manufactures, and markets vehicles and automotive-related products. It is one of the largest automakers in the world.</Employerdescription>
      <Employerwebsite>https://efds.fa.em5.oraclecloud.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://efds.fa.em5.oraclecloud.com/hcmUI/CandidateExperience/en/sites/CX_1/job/59597</Applyto>
      <Location>United States</Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
    <job>
      <externalid>da726093-b19</externalid>
      <Title>Research Engineer, Discovery</Title>
      <Description><![CDATA[<p><strong>About the Role</strong></p>
<p>As a Research Engineer on our team, you will work end to end across the whole model stack, identifying and addressing key infra blockers on the path to scientific AGI. Strong candidates should have familiarity with elements of language model training, evaluation, and inference, and an eagerness to dive in quickly and get up to speed in areas where they are not yet experts. This may include performance optimization, distributed systems, VM/sandboxing/container deployment, and large-scale data pipelines.</p>
<p><strong>Responsibilities:</strong></p>
<ul>
<li>Design and implement large-scale infrastructure systems to support AI scientist training, evaluation, and deployment across distributed environments</li>
<li>Identify and resolve infrastructure bottlenecks impeding progress toward scientific capabilities</li>
<li>Develop robust and reliable evaluation frameworks for measuring progress towards scientific AGI</li>
<li>Build scalable and performant VM/sandboxing/container architectures to safely execute long-horizon AI tasks and scientific workflows</li>
<li>Collaborate to translate experimental requirements into production-ready infrastructure</li>
<li>Develop large scale data pipelines to handle advanced language model training requirements</li>
<li>Optimize large scale training and inference pipelines for stable and efficient reinforcement learning</li>
</ul>
<p><strong>You may be a good fit if you:</strong></p>
<ul>
<li>Have 6+ years of highly-relevant experience in infrastructure engineering with demonstrated expertise in large-scale distributed systems</li>
<li>Are a strong communicator and enjoy working collaboratively</li>
<li>Possess deep knowledge of performance optimization techniques and system architectures for high-throughput ML workloads</li>
<li>Have experience with containerization technologies (Docker, Kubernetes) and orchestration at scale</li>
<li>Have a proven track record of building large-scale data pipelines and distributed storage systems</li>
<li>Excel at diagnosing and resolving complex infrastructure challenges in production environments</li>
<li>Can work effectively across the full ML stack from data pipelines to performance optimization</li>
<li>Have experience collaborating with other researchers to scale experimental ideas</li>
<li>Thrive in fast-paced environments and can rapidly iterate from experimentation to production</li>
</ul>
<p><strong>Strong candidates may also have:</strong></p>
<ul>
<li>Experience with language model training infrastructure and distributed ML frameworks (PyTorch, JAX, etc.)</li>
<li>Background in building infrastructure for AI research labs or large-scale ML organizations</li>
<li>Knowledge of GPU/TPU architectures and language model inference optimization</li>
<li>Experience with cloud platforms (AWS, GCP) at enterprise scale</li>
<li>Familiarity with VM and container orchestration</li>
<li>Experience with workflow orchestration tools and experiment management systems</li>
<li>History working with large scale reinforcement learning</li>
<li>Comfort with large scale data pipelines (Beam, Spark, Dask, …)</li>
</ul>
<p><strong>Logistics</strong></p>
<ul>
<li>Education requirements: We require at least a Bachelor&#39;s degree in a related field or equivalent experience.</li>
<li>Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</li>
<li>Visa sponsorship: We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</li>
</ul>
<p><strong>We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you&#39;re interested in this work.</strong></p>
<p><strong>Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you&#39;re ever unsure about a communication, don&#39;t click any links—visit anthropic.com/careers directly for confirmed position openings.</strong></p>
<p><strong>How we&#39;re different</strong></p>
<p>We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale projects, and we&#39;re committed to making a positive impact on the world.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$350,000 - $850,000 USD</Salaryrange>
      <Skills>infrastructure engineering, large-scale distributed systems, performance optimization, containerization technologies, orchestration at scale, data pipelines, distributed storage systems, complex infrastructure challenges, ML stack, workflow orchestration tools, experiment management systems, reinforcement learning, large scale data pipelines, language model training infrastructure, distributed ML frameworks, GPU/TPU architectures, language model inference optimization, cloud platforms, VM and container orchestration, large scale reinforcement learning</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a company that aims to create reliable, interpretable, and steerable AI systems. It has a team of researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</Employerdescription>
      <Employerwebsite>https://job-boards.greenhouse.io</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/4669581008</Applyto>
      <Location>San Francisco, CA</Location>
      <Country></Country>
      <Postedate>2026-03-08</Postedate>
    </job>
    <job>
      <externalid>4b87ca93-5dc</externalid>
      <Title>Senior Research Engineer - Data</Title>
      <Description><![CDATA[<p><strong>Senior Research Engineer - Data</strong></p>
<p><strong>About the role</strong></p>
<p>The Data team manages the complete lifecycle of data for researchers - from sourcing and large-scale processing to delivering datasets that power our models. Data sits at the heart of our Research efforts and enables all other teams. As part of the Data team, you’ll work with over a million hours of video and audio data.</p>
<p><strong>This role exists at the intersection of applied research, data engineering, and ML infrastructure rather than being a traditional research position</strong>.</p>
<p>You’ll build the world’s best human-centric data lake by collaborating closely with our model training teams. By understanding their requirements, you’ll extract new features and annotations that elevate our datasets. You should be passionate about enhancing model performance through high-quality, accurate datasets. Our infrastructure and pipelines are in great shape, and this role provides room to not only enhance them but also influence the team’s longer-term strategy.</p>
<p><strong>What we&#39;re looking for:</strong></p>
<ul>
<li>A strong background in data-centric, applied Machine Learning, with hands-on experience improving model performance through data quality, curation, labeling, and evaluation rather than model architecture alone</li>
<li>Experience working on the data layer of Generative AI products, particularly involving images, video, or audio</li>
<li>Excellent Python skills, with a strong focus on writing clean, maintainable, and well-tested code</li>
<li>Hands-on experience designing, building, and operating workflow orchestration systems and large-scale data processing pipelines</li>
</ul>
<p><strong>Why join us?</strong></p>
<p>We’re living in the golden age of AI. The next decade will yield the next iconic companies, and we dare to say we have what it takes to become one. Here’s why:</p>
<p><strong>Our culture</strong></p>
<p>At Synthesia we’re passionate about building, not talking, planning or politicising. We strive to hire the smartest, kindest and most unrelenting people and let them do their best work without distractions. Our work principles serve as our charter for how we make decisions, give feedback and structure our work to empower everyone to go as fast as possible. <strong>You can find out more about these principles here.</strong></p>
<p><strong>Serving 50,000+ customers (and 50% of the Fortune 500)</strong></p>
<p>We’re trusted by leading brands such as Heineken, Zoom, Xerox, McDonald’s and more. Read stories from happy customers and what 1,200+ people say on G2.</p>
<p><strong>Proprietary AI technology</strong></p>
<p>Since 2017, we’ve been pioneering advancements in Generative AI. Our AI technology is built in-house by a team of world-class AI researchers and engineers. Learn more about our AI Research Lab and the team behind it.</p>
<p><strong>AI Safety, Ethics and Security</strong></p>
<p>AI safety, ethics, and security are fundamental to our mission. While the full scope of Artificial Intelligence&#39;s impact on our society is still unfolding, our position is clear: <strong>People first. Always.</strong> Learn more about our commitments to AI Ethics, Safety &amp; Security.</p>
<p><strong>The good stuff...</strong></p>
<ul>
<li>Competitive compensation (salary + stock options + bonus)</li>
<li>Hybrid work setting with an office in London, Amsterdam, Zurich, Munich, or remote in Europe.</li>
<li>25 days of annual leave + public holidays</li>
<li>Great company culture with the option to join regular planning and socials at our hubs</li>
<li>Other benefits depending on your location</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>Competitive compensation (salary + stock options + bonus)</Salaryrange>
      <Skills>Python, Machine Learning, Data Engineering, Workflow Orchestration, Large-Scale Data Processing, Generative AI, AI Research, AI Ethics, AI Safety</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Synthesia</Employername>
      <Employerlogo>https://logos.yubhub.co/synthesia.io.png</Employerlogo>
      <Employerdescription>Synthesia is the world&apos;s leading AI video platform for business, used by over 90% of the Fortune 100. The company is headquartered in London, with offices and teams across Europe and the US.</Employerdescription>
      <Employerwebsite>https://www.synthesia.io/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/synthesia/aa69627f-0c29-4416-b0e5-87bef74c768c</Applyto>
      <Location>Europe</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>9278e637-313</externalid>
      <Title>Software Engineer, Core Services</Title>
      <Description><![CDATA[<p><strong>Job Posting</strong></p>
<p><strong>Software Engineer, Core Services</strong></p>
<p><strong>Location</strong></p>
<p>San Francisco</p>
<p><strong>Employment Type</strong></p>
<p>Full time</p>
<p><strong>Department</strong></p>
<p>Applied AI</p>
<p><strong>Compensation</strong></p>
<ul>
<li>$230K – $385K • Offers Equity</li>
</ul>
<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>
<ul>
<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>
<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>
<li>401(k) retirement plan with employer match</li>
<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>
<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>
<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>
<li>Mental health and wellness support</li>
<li>Employer-paid basic life and disability coverage</li>
<li>Annual learning and development stipend to fuel your professional growth</li>
<li>Daily meals in our offices, and meal delivery credits as eligible</li>
<li>Relocation support for eligible employees</li>
<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>
</ul>
<p>More details about our benefits are available to candidates during the hiring process.</p>
<p>This role is at-will and OpenAI reserves the right to modify base pay and other compensation components at any time based on individual performance, team or company results, or market conditions.</p>
<p><strong>About the Team</strong></p>
<p>The Core Services team is responsible for building and managing foundational services. It acts as the bridge between core infrastructure (e.g. compute, storage, networking) and product engineering teams, and enables product teams to move fast, build reliably, and scale efficiently.</p>
<p><strong>About the Role</strong></p>
<p>As a software engineer in the core services team, you will design and operate critical backend platforms such as caching systems, workflow orchestration, metadata stores, and file services. You’ll focus on building highly reliable, scalable, and performant systems that serve as the backbone of our products.</p>
<p>We’re looking for people who are passionate about building infrastructure that empowers product teams, love working on distributed systems challenges, and enjoy creating well-designed APIs and abstractions that accelerate development.</p>
<p>This role is based in San Francisco, CA. We use a hybrid work model of 3 days in the office per week and offer relocation assistance to new employees.</p>
<p><strong>In this role, you will:</strong></p>
<ul>
<li>Design, build, and maintain shared infrastructure services such as caching layers, workflow orchestration (Temporal), metadata stores, and file storage services.</li>
<li>Collaborate with product teams to provide scalable, reliable primitives that abstract the complexities of distributed systems.</li>
<li>Improve performance, resilience, and scalability of core services that power customer-facing applications.</li>
</ul>
<p><strong>You might thrive in this role if you:</strong></p>
<ul>
<li>Have experience with distributed systems, caching infrastructure (e.g., Redis, Memcached), metadata storage (e.g., FoundationDB), or workflow orchestration (e.g., Temporal, Cadence).</li>
<li>Have experience running containerized services in cloud environments and integrating them into automated build/test/release (CI/CD) workflows.</li>
<li>Understand trade-offs in consistency models, replication strategies, and performance optimization in multi-region systems.</li>
<li>Excel at communication and collaboration with cross-functional teams, and are obsessed with delivering customer success.</li>
</ul>
<p><strong>About OpenAI</strong></p>
<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$230K – $385K • Offers Equity</Salaryrange>
      <Skills>distributed systems, caching infrastructure, metadata storage, workflow orchestration, containerized services, cloud environments, automated build/test/release (CI/CD) workflows, consistency models, replication strategies, performance optimization, communication and collaboration, cross-functional teams, customer success</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>OpenAI</Employername>
      <Employerlogo>https://logos.yubhub.co/openai.com.png</Employerlogo>
      <Employerdescription>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products.</Employerdescription>
      <Employerwebsite>https://jobs.ashbyhq.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/openai/21bfde35-ffec-42d2-a2c6-8a03dad789d5</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>6215398a-2c4</externalid>
      <Title>Senior Software Engineer, Forward Deployed (U.S. Public Sector)</Title>
      <Description><![CDATA[<p><strong>About Invisible</strong></p>
<p>Invisible Technologies makes AI work. Our end-to-end AI platform structures messy data, automates digital workflows, deploys agentic solutions, measures outcomes, and integrates human expertise where it matters most.</p>
<p>Our platform cleans, labels, and structures company data so it is ready for AI. It adapts models to each business and adds human expertise when needed, the same approach we have used to improve models for more than 80% of the world’s top AI companies, including Microsoft, AWS, and Cohere.</p>
<p>Our successes span industries, from supply chain automation for Swiss Gear to AI-enabled naval simulations with SAIC, and validating NBA draft picks for the Charlotte Hornets.</p>
<p>Profitable for more than half a decade, Invisible reached $134M in revenue and ranked as the number two fastest growing AI company on the 2024 Inc. 5000. In September 2025, we raised $100M in growth capital to accelerate our mission of making AI actually work in the enterprise and to advance our platform technology.</p>
<p><strong>About The Role</strong></p>
<p>As a Senior Forward Deployed Engineer (FDE) for our U.S. Public Sector team at Invisible, you&#39;ll lead high-impact, AI-powered solutions that reshape how our clients operate their most critical workflows. You won’t just build and deploy — you’ll drive the strategy, architecture, and execution of end-to-end systems, working directly with client stakeholders and our internal delivery teams.</p>
<p>This is a hybrid role: equal parts AI architect, hands-on engineer, and technical advisor. You’ll work on the front lines with ambitious clients, turning operational challenges into scalable AI workflows. You’ll be trusted to lead complex engagements, make architectural calls, and mentor others across technical and non-technical domains.</p>
<p><strong>What You’ll Do</strong></p>
<ul>
<li>Scope, design, and lead implementation of AI-driven solutions in partnership with delivery teams and executive stakeholders</li>
<li>Translate ambiguous workflows and business needs into repeatable systems and production-ready technical architectures</li>
<li>Lead architecture design and trade-off discussions across performance, scalability, cost, and reliability</li>
<li>Build usable systems from messy data and incomplete or evolving requirements</li>
<li>Apply AI/ML solutions in highly regulated environments (e.g., defense, intelligence, healthcare, finance)</li>
<li>Own projects end-to-end—from initial discovery and scoping through implementation, deployment, and post-launch iteration</li>
<li>Build shared infrastructure, reusable components, and internal playbooks to improve delivery consistency and team velocity</li>
<li>Mentor mid-level engineers and contribute to the development of forward-deployed AI engineering practices at Invisible</li>
</ul>
<p><strong>What We Need</strong></p>
<ul>
<li>Active U.S. Department of Defense Secret clearance or higher</li>
<li>5+ years of software engineering experience, including work on data-intensive, ML, or backend systems</li>
<li>Ability to work on-site 2–3 days per week at offices located in the greater Washington, D.C. and Reston, VA area</li>
<li>Python &amp; ML/LLM frameworks: Hands-on experience with Python and modern ML/LLM tooling (e.g., Hugging Face, LangChain, OpenAI, Pinecone)</li>
<li>Deployment &amp; infrastructure: Experience building and operating API-based services using Docker, FastAPI, Kubernetes, and major cloud platforms (AWS, GCP)</li>
<li>Platform &amp; data management: Familiarity with workflow orchestration, pub/sub systems (e.g., Kafka), schema governance, data contracts, Unity Catalog, Delta/ETL pipelines, and replay processes</li>
<li>Experience leading requirements-gathering activities and translating stakeholder input into technical specifications</li>
</ul>
<p><strong>What’s In It For You</strong></p>
<p>Invisible is committed to fair and competitive pay, ensuring that compensation reflects both market conditions and the value each team member brings. Our salary structure accounts for regional differences in cost of living while maintaining internal equity.</p>
<p>For this position, the annual salary ranges by location are:</p>
<p>Tier 2 Salary Range: $164,000 – $240,000 USD</p>
<p>You can find more information about our geographic pay tiers here. During the interview process, your Invisible Talent Acquisition Partner will confirm which tier applies to your location. For candidates outside the U.S., compensation is adjusted to reflect local market conditions and cost of living.</p>
<p>Bonuses and equity are included in offers above entry level. Final compensation is determined by a combination of factors, including location, job-related experience, skills, knowledge, internal pay equity, and overall market conditions. Because of this, every offer is unique. Additional details on total compensation and benefits will be discussed during the hiring process.</p>
<p><strong>What It&#39;s Like to Work at Invisible:</strong></p>
<p>At Invisible, we’re not just redefining work—we’re reinventing it. We operate at the intersection of advanced AI and human ingenuity, pushing the boundaries of what’s possible to unlock productivity and scale. Ownership is at the core of everything we do. Here, you won’t just execute tasks—you’ll build, innovate, and shape the future alongside world-class clients pushing the boundaries of AI.</p>
<p>We expect bold ideas, relentless drive, and the ability to turn ambiguity into opportunity. The pace is fast, the challenges are big, and the growth is unmatched. We’re not for everyone, and we’re okay with that. If you’re looking for predictable routines, this isn’t the place for you. But if you’re driven to create, thrive in dynamic environments, and want a front-row seat to the AI revolution, you’ll fit right in.</p>
<p><strong>Country Hiring Guidelines:</strong> Invisible is a hybrid organization with offices and team members located around the world. While some roles may offer remote flexibility, most positions involve in-office collaboration and are tied to specific locations. Any location-based requirements will be clearly outlined in the job description.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$164,000 – $240,000 USD</Salaryrange>
      <Skills>Python, ML/LLM frameworks, Docker, FastAPI, Kubernetes, AWS, GCP, workflow orchestration, pub/sub systems, schema governance, data contracts, Unity Catalog, Delta/ETL pipelines, replay processes, Hugging Face, LangChain, OpenAI, Pinecone</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Invisible Technologies</Employername>
      <Employerlogo>https://logos.yubhub.co/invisible.co.png</Employerlogo>
      <Employerdescription>Invisible Technologies makes AI work. Our end-to-end AI platform structures messy data, automates digital workflows, deploys agentic solutions, measures outcomes, and integrates human expertise where it matters most. Our platform cleans, labels, and structures company data so it is ready for AI.</Employerdescription>
      <Employerwebsite>https://www.invisible.co/join-us/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.eu.greenhouse.io/invisibletech/jobs/4741723101</Applyto>
      <Location>Washington DC–Baltimore</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
  </jobs>
</source>