<?xml version="1.0" encoding="UTF-8"?>
<source>
  <jobs>
    <job>
      <externalid>5ab009a1-7b6</externalid>
      <Title>Engineering Manager - Mercury Accounting</Title>
      <Description><![CDATA[<p>We&#39;re looking for an engineering leader who is excited to lead their team to execute on our vision for real-time accounting and own a critical part of Mercury&#39;s product growth.</p>
<p>As part of this role, you will:</p>
<ul>
<li>Lead a team of 6+ exceptional software engineers, growing the team through performance management, mentorship, and hiring.</li>
<li>Contribute to a team culture that aligns to Mercury&#39;s values.</li>
<li>Help define the technical architecture for ingesting financial data from hundreds of sources and streaming it to Mercury&#39;s accounting service.</li>
<li>Work closely with designers, product leaders, and other cross-functional stakeholders to shape product strategy and execute on the Accounting team roadmap.</li>
<li>Collaborate with other engineering teams on overlapping work, optimizing for product cohesion and simplicity over strict division of responsibility.</li>
</ul>
<p>The ideal candidate for the role:</p>
<ul>
<li>Has 3+ years of engineering management leading full-stack product engineering teams, especially teams that have leveraged AI to build software.</li>
<li>Has made architectural decisions in the past and measured the impact of those decisions over time. You should be able to clearly articulate your technical opinions and lay out tradeoffs.</li>
<li>Has a strong sense of technical and product ownership and actively seeks responsibility – our engineers often act as product owners on small/medium projects, and we want someone who’s excited to help shape the future of our accounting products; previous experience in fintech/accounting or building consumer-facing AI products is a nice-to-have.</li>
<li>Is comfortable driving discussions in areas with ambiguous ownership, approaches them with empathy, and prioritizes reaching outcomes.</li>
</ul>
<p>Our work overlaps with many other teams – you’ll have a lot of autonomy in this role and are expected to use it to seek out ways to have an impact.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>The total rewards package at Mercury includes base salary, equity (stock options), and benefits.</Salaryrange>
      <Skills>engineering management, full-stack product engineering, AI, technical architecture, financial data ingestion, streaming, fintech, accounting, consumer-facing AI products</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>Mercury</Employername>
      <Employerlogo>https://logos.yubhub.co/demo.mercury.com.png</Employerlogo>
      <Employerdescription>Mercury provides real-time accounting services to businesses. It has a team of engineers working on its product.</Employerdescription>
      <Employerwebsite>https://demo.mercury.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/mercury/jobs/5979496004</Applyto>
      <Location>San Francisco, CA, New York, NY, Portland, OR, or Remote within Canada or United States</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>9f17f9e3-2bd</externalid>
      <Title>Senior Software Engineer - Backend (Platform)</Title>
      <Description><![CDATA[<p>As a Senior Software Engineer on the Platform team, you&#39;ll design and build services that autonomously control satellites, monitor telemetry for anomalies, and provide real-time situational awareness to keep our fleet safe and online. You&#39;ll also be building the core components and services that power the rest of our software organization, enabling every team to move faster and more reliably.</p>
<p>This role will contribute to both our commercial and US government programs. You will be shaping the foundation of software that has to work flawlessly – because our satellites depend on it.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Design and build high-performance, reliable, mission-critical software that is used to send commands to space</li>
<li>Take full ownership of features, working across backend and infrastructure</li>
<li>Collaborate with multidisciplinary teams to define software requirements, architectures, and designs</li>
<li>Continuously assess the evolving tech landscape and advocate for innovations that will improve our system</li>
<li>Mentor teammates, share knowledge, and help raise the technical bar</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>6+ years of professional experience as a software engineer</li>
<li>Bachelor&#39;s degree in Computer Science or related technical field</li>
<li>Strong proficiency in Python</li>
<li>Experience with distributed systems and microservice architectures</li>
<li>Experience and understanding of databases (Postgres, etc)</li>
<li>Experience and understanding of pub/sub and streaming systems (RabbitMQ, Flink, etc)</li>
<li>Track record of delivering high-impact features and improvements in a collaborative environment</li>
</ul>
<p><strong>Bonus</strong></p>
<ul>
<li>Experience with Kubernetes</li>
<li>Experience introducing metrics and monitoring to increase system stability</li>
<li>Proficiency in C++, Go, Rust</li>
<li>Experience building fleet management systems</li>
</ul>
<p><strong>What we offer</strong></p>
<p>All our positions offer a compensation package that includes equity and robust benefits. Base pay is just one component of Astranis&#39;s total rewards package. Your compensation also includes a significant equity package via incentive stock options, high-quality company-subsidized healthcare, disability and life insurance, 401(k) retirement planning, flexible PTO, and free on-site catered meals.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$145,000-$255,000 USD</Salaryrange>
      <Skills>Python, Distributed systems, Microservice architectures, Databases, Pub/sub and streaming systems, Kubernetes, C++, Go, Rust, Fleet management systems</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Astranis</Employername>
      <Employerlogo>https://logos.yubhub.co/astranis.com.png</Employerlogo>
      <Employerdescription>Astranis builds advanced satellites for high orbits, expanding humanity&apos;s reach into the solar system. The company has raised over $750 million from some of the world&apos;s best investors and employs a team of 450 engineers and entrepreneurs.</Employerdescription>
      <Employerwebsite>https://astranis.com/</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>145000</Compensationmin>
      <Compensationmax>255000</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/astranis/jobs/4597208006</Applyto>
      <Location>San Francisco</Location>
      <Country>United States</Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>0b7e613b-976</externalid>
      <Title>Software Engineer - Backend (Platform)</Title>
      <Description><![CDATA[<p>As a Software Engineer on the Platform team, you&#39;ll design and build services that autonomously control satellites, monitor telemetry for anomalies, and provide real-time situational awareness to keep our fleet safe and online. You&#39;ll also be building the core components and services that power the rest of our software organization, enabling every team to move faster and more reliably.</p>
<p>This role will contribute to both our commercial and US government programs. You will be shaping the foundation of software that has to work flawlessly – because our satellites depend on it.</p>
<p>Responsibilities:</p>
<ul>
<li>Design and build high-performance, reliable, mission-critical software that is used to send commands to space</li>
<li>Take full ownership of features, working across backend and infrastructure</li>
<li>Collaborate with multidisciplinary teams to define software requirements, architectures, and designs</li>
<li>Continuously assess the evolving tech landscape and advocate for innovations that will improve our system</li>
<li>Mentor teammates, share knowledge, and help raise the technical bar</li>
</ul>
<p>Requirements:</p>
<ul>
<li>2-5 years of professional experience as a software engineer</li>
<li>Bachelor’s degree in Computer Science or related technical field</li>
<li>Strong proficiency in Python</li>
<li>Experience with distributed systems and microservice architectures</li>
<li>Experience and understanding of databases (Postgres, etc)</li>
<li>Experience and understanding of pub/sub and streaming systems (RabbitMQ, Flink, etc)</li>
<li>Track record of delivering high-impact features and improvements in a collaborative environment</li>
</ul>
<p>Bonus:</p>
<ul>
<li>Experience with Kubernetes</li>
<li>Experience introducing metrics and monitoring to increase system stability</li>
<li>Proficiency in C++, Go, Rust</li>
<li>Experience building fleet management systems</li>
</ul>
<p>What we offer:</p>
<p>All our positions offer a compensation package that includes equity and robust benefits. Base pay is just one component of Astranis’s total rewards package. Your compensation also includes a significant equity package via incentive stock options, high-quality company-subsidized healthcare, disability and life insurance, 401(k) retirement planning, flexible PTO, and free on-site catered meals.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$125,000-$170,000 USD</Salaryrange>
      <Skills>Python, Distributed systems, Microservice architectures, Databases, Pub/sub and streaming systems, Kubernetes, C++, Go, Rust, Fleet management systems</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Astranis</Employername>
      <Employerlogo>https://logos.yubhub.co/astranis.com.png</Employerlogo>
      <Employerdescription>Astranis builds advanced satellites for high orbits, expanding humanity&apos;s reach into the solar system. The company has raised over $750 million from top investors and employs a team of 450 engineers and entrepreneurs.</Employerdescription>
      <Employerwebsite>https://astranis.com/</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>125000</Compensationmin>
      <Compensationmax>170000</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/astranis/jobs/4622097006</Applyto>
      <Location>San Francisco</Location>
      <Country>United States</Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>40c3120b-89c</externalid>
      <Title>Database Reliability Engineer</Title>
      <Description><![CDATA[<p>We are looking for a talented Database Reliability Engineer to join our team. As a Database Reliability Engineer, you will be responsible for ensuring the rock-solid reliability of our existing RDS footprint. This includes architecting automated strategies for seamless, multi-version upgrades and proactive performance tuning to minimize downtime across hundreds of instances.</p>
<p>Our ideal candidate will have extensive experience working with Postgres and a passion for running stateful workloads natively on Kubernetes. They will also have a natural &quot;reluctance for manual implementation&quot; and believe that infrastructure should be managed entirely via code, using Terraform to provision the foundation and custom APIs to handle the orchestration.</p>
<p>The successful candidate will be excited by the challenge of &quot;multi-everything&quot;: multi-tenant, multi-region, and multi-cloud, while ensuring rigorous data integrity and mobility. They will also believe security is paramount and focus on building deep observability (Prometheus/Grafana/OpenTelemetry/Humio) and automated guardrails so the fleet is secure by design without requiring manual intervention.</p>
<p>As a Database Reliability Engineer, you will work closely with our Data teams to deliver meaningful and impactful insights to both the business and our customers. You will also have the opportunity to contribute to the development of our data layer and help shape the future of our technology stack.</p>
<p>In return for your hard work and dedication, we offer a competitive salary and benefits package, including 25 days holiday, an extra day&#39;s holiday for your birthday, and a generous family-friendly policy. We also offer a range of training and development opportunities to help you grow your skills and advance your career.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Modernize and Scale the RDS Fleet</li>
<li>Architect Cross-Cloud Portability</li>
<li>Evolve Observability &amp; Monitoring</li>
<li>Support Replication &amp; Mobility</li>
<li>Fortify Business Continuity (BCP)</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>PostgreSQL &amp; Kubernetes Expert</li>
<li>Systems Thinker</li>
<li>Distributed Systems Enthusiast</li>
<li>A Security &amp; Observability Mindset</li>
<li>Engineering via Code</li>
</ul>
<p><strong>Nice to Have</strong></p>
<ul>
<li>Experience with Terraform and custom APIs</li>
<li>Familiarity with Prometheus, Grafana, OpenTelemetry, and Humio</li>
<li>Knowledge of cloud-native patterns and provider-agnostic deployment</li>
<li>Experience with data streaming and &quot;Zero-Downtime&quot; migration strategies</li>
<li>Familiarity with Business Continuity Planning and Disaster Recovery strategies</li>
</ul>
<p><strong>What We Offer</strong></p>
<ul>
<li>Competitive salary and benefits package</li>
<li>25 days holiday</li>
<li>An extra day&#39;s holiday for your birthday</li>
<li>Generous family-friendly policy</li>
<li>A range of training and development opportunities</li>
<li>Opportunity to contribute to the development of our data layer and shape the future of our technology stack</li>
</ul>
<p><strong>How to Apply</strong></p>
<p>If you are passionate about working with databases and are looking for a challenging and rewarding role, please apply now. We look forward to hearing from you!</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>£95,000-£120,000 GBP per year</Salaryrange>
      <Skills>PostgreSQL, Kubernetes, Terraform, Custom APIs, Prometheus, Grafana, OpenTelemetry, Humio, Cloud-native patterns, Provider-agnostic deployment, Data streaming, Zero-Downtime migration strategies, Business Continuity Planning, Disaster Recovery strategies</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Starling Bank</Employername>
      <Employerlogo>https://logos.yubhub.co/starlingbank.com.png</Employerlogo>
      <Employerdescription>Starling Bank is a digital bank that provides banking services to customers in the UK. It has over 3,000 employees across its offices in London, Southampton, Cardiff, and Manchester.</Employerdescription>
      <Employerwebsite>https://www.starlingbank.com/</Employerwebsite>
      <Compensationcurrency>GBP</Compensationcurrency>
      <Compensationmin>95000</Compensationmin>
      <Compensationmax>120000</Compensationmax>
      <Applyto>https://apply.workable.com/j/CCC0F3F287</Applyto>
      <Location>Dublin</Location>
      <Country>Ireland</Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>2c452765-84f</externalid>
      <Title>Site Reliability Data Engineer</Title>
      <Description><![CDATA[<p>For over 31,000 growing businesses and HR teams seeking a comprehensive, all-in-one HR suite, Workable emerges as the premier solution. We uniquely combine the world&#39;s most widely adopted Applicant Tracking System (Workable Recruiting) with a full-spectrum employee management system (Workable HR).</p>
<p>At Workable, we empower companies to focus on what truly matters: hiring the right people and fostering their growth. While we take HR seriously, we maintain a lighthearted and collaborative culture. At Workable, you&#39;ll find smart people who have fun, learn, innovate, and help others do the same.</p>
<p>We respect everyone, we hire the best, and make sure every experience is special.</p>
<p>As a Site Reliability Data Engineer based in Athens, you will play a critical role in ensuring the reliability, scalability, and performance of our data infrastructure and pipelines. You will collaborate closely with engineering teams to build and operate robust cloud-based systems, driving automation and observability across our platform.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Build, operate, and improve ETL/ELT pipelines, Spark workloads, and data warehouse components.</li>
<li>Develop tools and automations to simplify and harden data pipeline workflows and general operations.</li>
<li>Design, implement, and maintain scalable, highly available cloud infrastructure and services with a focus on automation and reliability.</li>
<li>Develop and operate observability tooling for monitoring, logging, tracing, and data-pipeline metrics (freshness, completeness, latency, error rates).</li>
<li>Collaborate with development teams to instrument, deploy, and troubleshoot production systems across microservices on Kubernetes.</li>
<li>Operate, deploy, and monitor data infrastructure and cloud services from development to production.</li>
<li>Own availability, scalability, and performance of systems, focusing on data pipelines and warehousing components.</li>
<li>Partner with peer SREs to roll out production changes and mitigate data-related and infrastructure incidents.</li>
<li>Troubleshoot issues across data pipelines and production systems; support capacity planning and analyze system and data workflow performance.</li>
<li>Provide data engineering expertise to engineering teams and work cross-functionally with developers and analysts on designing, releasing, and troubleshooting production systems.</li>
<li>Own team projects and ensure timely delivery.</li>
</ul>
<p>Requirements</p>
<ul>
<li>BS/MS degree in Computer Science, Engineering, or equivalent practical experience</li>
<li>2+ years of experience in site reliability engineering, data engineering, or a closely related role, including programming</li>
<li>Experience with a major cloud provider (AWS or GCP)</li>
<li>Hands-on experience with infrastructure-as-code or configuration management tools (Terraform or Ansible)</li>
<li>Experience with ETL/ELT concepts and tools (Airflow or dbt)</li>
<li>Experience with Apache Spark or similar distributed data processing frameworks</li>
<li>Experience with cloud data warehouses (BigQuery, Redshift, or Snowflake)</li>
<li>Proficiency in at least one programming language (Python, Go, or Scala)</li>
<li>Excellent written English proficiency</li>
<li>Legally authorized to work in Greece</li>
</ul>
<p>Preferred Qualifications</p>
<ul>
<li>Production experience with Kubernetes</li>
<li>Experience with centralized monitoring and logging systems</li>
<li>Experience with streaming systems (Kafka or Spark Streaming)</li>
</ul>
<p>Benefits</p>
<p>Our employees enjoy benefits that make them more productive and contribute directly to the development of their professional skills. We want to be able to attract the best of the best and make sure they keep getting better. On top of an exciting, vibrant and intellectually challenging environment, we are offering:</p>
<ul>
<li>Comprehensive Health Coverage: A robust health insurance plan that includes coverage for your dependents.</li>
<li>Competitive Compensation: An attractive salary paired with a performance-based bonus plan.</li>
<li>Flexible Work Model: Enjoy the best of both worlds with a hybrid setup: two days working from home and three in the office.</li>
<li>Top-Tier Tools: Apple gear and access to the latest productivity tools to help you excel.</li>
<li>Stay Connected: A mobile data plan to keep you online wherever you are.</li>
<li>Delicious Perks: Fresh, tasty food at the office to fuel your productivity.</li>
<li>Relocation Bonus: To help you settle in smoothly in Athens.</li>
</ul>
<p>Workable is most decidedly an equal opportunity employer. We want applicants of diverse backgrounds and hire without regard to colour, gender, religion, national origin, citizenship, disability, age, sexual orientation, or any other characteristic protected by law.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Cloud computing, Data engineering, ETL/ELT, Apache Spark, Cloud data warehouses, Kubernetes, Infrastructure-as-code, Configuration management, Observability tooling, Monitoring, Logging, Tracing, Data-pipeline metrics, Production experience with Kubernetes, Centralized monitoring and logging systems, Streaming systems (Kafka or Spark Streaming)</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Workable</Employername>
      <Employerlogo></Employerlogo>
      <Employerdescription>Workable provides a comprehensive, all-in-one HR suite for over 31,000 growing businesses and HR teams.</Employerdescription>
      <Employerwebsite></Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://apply.workable.com/j/273C8E852D</Applyto>
      <Location>Athens</Location>
      <Country>Greece</Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>8bd53be2-6cf</externalid>
      <Title>Senior Site Reliability Data Engineer</Title>
      <Description><![CDATA[<p>For over 31,000 growing businesses and HR teams seeking a comprehensive, all-in-one HR suite, Workable emerges as the premier solution. We uniquely combine the world’s most widely adopted Applicant Tracking System (Workable Recruiting) with a full-spectrum employee management system (Workable HR).</p>
<p>At Workable, we empower companies to focus on what truly matters: hiring the right people and fostering their growth. While we take HR seriously, we maintain a lighthearted and collaborative culture. At Workable, you’ll find smart people who have fun, learn, innovate, and help others do the same.</p>
<p>We respect everyone, we hire the best, and make sure every experience is special.</p>
<p>As a Senior Site Reliability Data Engineer based in Athens, Greece, you will play a critical role in ensuring the reliability, scalability, and performance of Workable&#39;s data and cloud infrastructure. This is a high-impact position where your expertise will directly influence the operational excellence and growth of our data platform.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Design, build, and maintain core data engineering infrastructure including ETL/ELT pipelines, Apache Spark workloads, and data warehouse systems.</li>
<li>Ensure availability, scalability, and performance of data infrastructure and pipelines with deep operational ownership.</li>
<li>Design, implement, and maintain scalable reliability tooling and automation to streamline deployment, monitoring, and incident response across distributed services.</li>
<li>Operate and optimize Kubernetes-based cloud infrastructure to ensure high availability, performance, and cost-efficiency.</li>
<li>Partner cross-functionally with developers and analysts to design, release, and troubleshoot production systems; provide data engineering expertise.</li>
<li>Lead cross-functional projects with development teams to improve system reliability, automate capacity planning, and enforce SRE best practices.</li>
<li>Develop and maintain centralized observability, including logging, metrics, tracing, and alerting pipelines; continuously improve incident detection and response workflows.</li>
<li>Own observability for data pipelines (freshness, completeness, latency, error rates) and ensure SLOs are met.</li>
<li>Plan platform growth and manage capacity for the data platform and related infrastructure.</li>
<li>Operate, deploy, and monitor data platform components and broader cloud services from development through production.</li>
<li>Develop tools and automation to simplify data operations and make deployments more robust and self-service.</li>
<li>Collaborate with peer SREs to roll out production changes and mitigate data/infrastructure incidents.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Apache Spark, ETL/ELT pipelines, cloud data warehouses, major cloud provider, infrastructure automation tools, centralized logging, monitoring, observability frameworks, production experience with Kubernetes, streaming systems, data quality, data observability tooling, relational and NoSQL databases, proficiency in programming languages, deep knowledge of Linux systems</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Workable</Employername>
      <Employerlogo></Employerlogo>
      <Employerdescription>Workable provides a comprehensive, all-in-one HR suite for over 31,000 growing businesses and HR teams.</Employerdescription>
      <Employerwebsite></Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://apply.workable.com/j/22CEAF6027</Applyto>
      <Location>Athens</Location>
      <Country>Greece</Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>af746432-e09</externalid>
      <Title>VP, Senior Full-Stack Engineer (Java &amp; Angular)</Title>
      <Description><![CDATA[<p>Are you interested in building innovative technology that shapes the financial markets? Do you like working at the speed of a startup, and tackling some of the world&#39;s most interesting challenges? At BlackRock, we are looking for Software Engineers who like to innovate and solve complex problems.</p>
<p>We recognize that strength comes from diversity, and will embrace your unique skills, curiosity, and passion while giving you the opportunity to grow technically and as an individual.</p>
<p>Aladdin by BlackRock manages over $30 trillion (USD) in assets, and its engineers have an extraordinary responsibility to our clients all over the world. Our technology empowers millions of investors to achieve their investment objectives, save for retirement, pay for college, buy a home, and improve their financial well-being.</p>
<p>This role will be responsible for all aspects of software development, testing and ensuring compatibility with enterprise and solutions architecture by harnessing modern development technologies.</p>
<p>The position is for a Vice President within the Investment and Trading engineering team within Aladdin Engineering and is responsible for delivering software solutions leveraged by Portfolio Managers, Traders, Researchers, Risk Managers, Compliance Officers and Investment Operations.</p>
<p>We are passionate about building quality software and scalable technology to meet the needs of tomorrow. We have strong Java expertise and work with a range of technologies such as Azure cloud, Kafka, Cassandra, Docker, Kubernetes, Angular and many others. We are committed to open source, and contributing back to the community. We write testable software every day, with a focus on agile innovation.</p>
<p>The team is looking for an ambitious, hands-on senior software engineer to work on an exciting strategic product that expands our Aladdin Portfolio Management capabilities. You will work with a global team and be part of an outstanding group of engineers setting and evolving the technology direction of our upcoming suite of Portfolio Management applications. The ideal candidate is passionate about every aspect of enterprise software development: performance, scale, resilience, usability, and maintainability. As a key member of our engineering team, you will be encouraged and empowered to bring your ideas forward to help shape technical solutions, becoming a strong team player in our distributed and diverse global team. You will also have opportunities to present your innovative ideas to leaders across the firm.</p>
<p>Responsibilities include:</p>
<ul>
<li>Develop and maintain institutional-grade investment functionality used by portfolio managers</li>
<li>Help design and build the next generation of our world-class investment platform</li>
<li>Contribute to an agile development team working with designers, product managers, and users</li>
<li>Apply a quality-first mindset: use sound software engineering practices through all phases of development and into production</li>
<li>Collaborate with team members in a multi-office, multi-country, global team environment</li>
<li>Ensure the resilience, stability, and high performance of software delivery through quality code reviews; unit, regression, and user acceptance testing; DevOps; and level-two production support</li>
<li>Nurture the talent around you and lead by example</li>
<li>As a senior engineer, set the example others look to, and take responsibility for driving an inclusive and competitive culture in the team</li>
</ul>
<p>Competencies include:</p>
<ul>
<li>Passionate about technology and user experience, with personal ownership of the work you do</li>
<li>Curious and eager to learn new business domains and tech skills, and willing to challenge the status quo</li>
<li>Know how to leverage AI tools to increase your productivity</li>
<li>Willing to embrace work outside of your comfort zone, and open to guidance from others</li>
<li>Data- and quality-focused, with an eye for the details that make great solutions</li>
<li>Always willing to learn from issues and incidents and to continuously improve</li>
<li>Experienced working in either Portfolio Management or Trading segments</li>
<li>Knowledgeable in Trading, Equity, FI, OTC, Exchange Traded Derivatives, Prime Brokerage, Compliance, and Portfolio Management processes</li>
</ul>
<p>Experience and Qualifications:</p>
<ul>
<li>Has designed and engineered enterprise financial solutions in production, with a strong foundation in Java and related technologies</li>
<li>Experience with distributed caching and computing, real-time, and highly scalable technologies (such as Apache Ignite, Kafka, Redis) and modern front-end web development (such as micro-frontends, web streaming, Angular/React, TypeScript)</li>
<li>Passionate about creating the best user experience</li>
<li>B.E. or M.S. degree in Computer Science, Engineering, or a related discipline</li>
<li>Excellent analytical, problem-solving, and communication skills</li>
<li>An ability to apply modern tech solutions to investment and trading problems</li>
<li>A track record of forging strong relationships and building trusted partnerships through open dialogue and continuous delivery</li>
<li>Experience working with UX designers, product managers, technical/enterprise leads, and architects across the SDLC: requirements, design, development, testing, deployment, and documentation</li>
</ul>
<p>Nice to have:</p>
<ul>
<li>Certification (e.g., CFA) or a passion for investment, portfolio management, or trading processes</li>
<li>Experience with MSSQL or Apache Cassandra</li>
<li>Experience with cloud platforms such as Microsoft Azure</li>
<li>Experience with AI models and tools</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Java, Angular, Azure cloud, Kafka, Cassandra, Docker, Kubernetes, Micro-frontends, Web-streaming, TypeScript, Apache Ignite, Redis, UI/UX, APIs, gRPC, Protocol Buffers, Spring, Node.js</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>BlackRock</Employername>
      <Employerlogo>https://logos.yubhub.co/blackrock.com.png</Employerlogo>
      <Employerdescription>BlackRock is a global investment management corporation that provides a range of investment products and services to institutional and retail clients. It has over $10 trillion in assets under management.</Employerdescription>
      <Employerwebsite>https://www.blackrock.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.workable.com/view/65fGJ5np3dAFaJEGL4T3Py/vp%2C-senior-full-stack-engineer-(java-%26amp%3B-angular)-in-london-at-blackrock</Applyto>
      <Location>London</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>6d7c5a10-755</externalid>
      <Title>Software Engineer: AdTech</Title>
      <Description><![CDATA[<p>As the gaming industry shifts towards a live-service-driven model, creating an engaging ads experience and connecting relevant brands and advertisers to players is key to EA&#39;s success. The AdTech team within EA&#39;s Dynamic Experience group is building industry-leading solutions that power end-to-end ads lifecycle management workflows through performant, scalable, and available services.</p>
<p><strong>Responsibilities:</strong></p>
<ul>
<li>Architect and build a Video Ads SDK for HD gaming platforms, including PlayStation, Xbox, Switch, and PC.</li>
<li>Champion the SDK&#39;s integration with leading game engines like Frostbite, Unreal, and Unity.</li>
<li>Cultivate partnerships with EA Game Studios and serve as a technical expert to lead the SDK&#39;s adoption.</li>
<li>Foster a developer community by ensuring our technology is a seamless part of their game development workflow.</li>
</ul>
<p><strong>Requirements:</strong></p>
<ul>
<li>7+ years of professional software engineering experience, with expert-level proficiency in C++</li>
<li>Expertise in streaming protocols (HLS, DASH) and knowledge of video codecs (H.264, VP9) for efficient HD video delivery</li>
<li>Expertise in at least one major game engine (Unreal Engine, Unity, or Frostbite), with an ability to integrate SDKs while maintaining a high frame rate in AAA games</li>
<li>Experience with network programming, including HTTP, RESTful APIs, and implementing communication protocols between the SDK and ad servers</li>
<li>Experience building and maintaining a single SDK that works across multiple platforms (PC, console, mobile)</li>
</ul>
<p>This is a hybrid remote/in-office role.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>C++, Streaming protocols (HLS, DASH), Video codecs (H.264, VP9), Game engines (Frostbite, Unreal, Unity), Network programming (HTTP, RESTful APIs)</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Electronic Arts</Employername>
      <Employerlogo>https://logos.yubhub.co/jobs.ea.com.png</Employerlogo>
      <Employerdescription>Electronic Arts is a multinational video game developer and publisher with a portfolio of games and experiences.</Employerdescription>
      <Employerwebsite>https://jobs.ea.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ea.com/en_US/careers/JobDetail/Software-Engineer-AdTech/212114</Applyto>
      <Location>Stockholm</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>bc7820fb-e11</externalid>
      <Title>Senior Full Stack Software Engineer</Title>
      <Description><![CDATA[<p>Electronic Arts creates next-level entertainment experiences that inspire players and fans around the world. Here, everyone is part of the story. Part of a community that connects across the globe. A place where creativity thrives, new perspectives are invited, and ideas matter. A team where everyone makes play happen.</p>
<p>EA SPORTS is one of the leading sports entertainment brands in the world, with top-selling videogame franchises, award-winning interactive technology, fan programs, and cross-platform digital experiences.</p>
<p>As one of the largest sports entertainment platforms in the world, EA SPORTS FC is redefining football with genre-leading interactive experiences, connecting a global community of fans to The World&#39;s Game through innovation and unrivaled authenticity.</p>
<p>With more opportunity than ever to design, innovate and create new, immersive experiences that bring joy, inclusivity, and connection to fans everywhere, we invite you to join our passionate and dynamic team as we pioneer the future of football fandom.</p>
<p>Senior Full Stack Engineers are key drivers behind the technology that powers our games and the experiences millions of players love. In this role, you’ll design and build scalable backend services, data pipelines, and APIs that move and process massive amounts of real-time game data. As part of the Gameplay Advance team at EA, you’ll work alongside world-class engineers to push gameplay technology forward, solving complex technical challenges, influencing architecture, and helping shape the future of how our games are built and played.</p>
<p>Your Responsibilities:</p>
<ul>
<li>Troubleshoot and resolve complex production issues quickly across the full technology stack (backend, frontend, and data).</li>
<li>Design, develop, and maintain scalable backend systems and data pipelines using Python and modern frameworks.</li>
<li>Set up and optimize data streaming solutions to ensure real-time data processing and reliability.</li>
<li>Design, build, and maintain cloud-hosted data pipelines and services used in production.</li>
<li>Collaborate closely with cross-functional partners, including central platform teams and DevOps, to deliver reliable solutions.</li>
</ul>
<p>Your Qualifications:</p>
<p>Please note that you do not need to qualify for all requirements to be considered. We encourage you to apply if you can meet most of the requirements and are comfortable opening a dialog to be considered.</p>
<ul>
<li>Bachelor’s or Master’s degree in Computer Science, or 8+ years of hands-on professional software development experience</li>
<li>Experience building, shipping, and supporting scalable, cloud-hosted services</li>
<li>Proficiency in multiple programming languages and frameworks, including Python and C++</li>
<li>Strong understanding of client/server architectures, HTTP, RESTful APIs, and WebSocket-based data streaming</li>
<li>Experience contributing to modern web application frontends</li>
<li>Hands-on experience with machine learning data frames, including tools like Polars</li>
<li>Experience deploying and operating services using Docker and Kubernetes</li>
<li>Experience with at least one major public cloud platform (GCP, AWS, Azure)</li>
<li>Working knowledge of modern database technologies</li>
<li>Proficiency with source control systems such as Git or Perforce</li>
<li>Experience load testing, troubleshooting, and optimizing cloud service performance</li>
<li>Ability to learn quickly and apply new technologies</li>
<li>Bonus: Experience with animation systems or pipelines</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$122,300 - $170,700 CAD</Salaryrange>
      <Skills>Python, C++, client/server architectures, HTTP, RESTful APIs, WebSocket-based data streaming, machine learning data frames, Docker, Kubernetes, public cloud platform, modern database technologies, source control systems, Git, Perforce, load testing, troubleshooting, optimizing cloud service performance</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Electronic Arts</Employername>
      <Employerlogo>https://logos.yubhub.co/jobs.ea.com.png</Employerlogo>
      <Employerdescription>Electronic Arts is a leading sports entertainment brand with top-selling videogame franchises and award-winning interactive technology.</Employerdescription>
      <Employerwebsite>https://jobs.ea.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ea.com/en_US/careers/JobDetail/Software-Engineer/212471</Applyto>
      <Location>Vancouver</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>2e1b76db-851</externalid>
      <Title>Senior Software Engineer</Title>
      <Description><![CDATA[<p>Electronic Arts creates next-level entertainment experiences that inspire players and fans around the world. As a Senior Software Engineer, you will lead the delivery of critical systems and services. You will collaborate across teams to build scalable, reliable, and efficient solutions and help shape engineering best practices.</p>
<p>The Data &amp; Insights (D&amp;I) Data Group develops a unified Big Data pipeline across all franchises at Electronic Arts. Our live service platform incorporates data collection, ingestion, processing, real-time streaming analytics, access, and visualisation - all built on a modern, cloud-based tech stack with modern tools. The Data Group provides the tools and platform that power the future of game development, marketing, sales, accounting, and customer experience.</p>
<p>Responsibilities:</p>
<ul>
<li>Lead the design, development, and operation of complex, scalable systems and services with high reliability and performance requirements.</li>
<li>Oversee major services, ensuring their long-term maintainability, scalability, and operational health.</li>
<li>Drive system architecture and design discussions, influencing technical direction with different teams.</li>
<li>Build large-scale data pipelines and real-time streaming systems using modern distributed technologies.</li>
<li>Implement monitoring, alerting, and observability practices.</li>
<li>Identify technical debt, driving improvements in system quality, performance, and developer productivity.</li>
</ul>
<p>Qualifications:</p>
<ul>
<li>7+ years of professional software engineering experience building and operating large-scale systems</li>
<li>Proficiency in Java</li>
<li>Experience designing and building scalable backend systems and APIs</li>
<li>Hands-on experience with data pipelines, real-time streaming technologies (e.g., Kafka, Flink, Storm), or large-scale data processing systems</li>
<li>Experience working with cloud platforms (preferably AWS) and distributed infrastructure</li>
<li>Understanding of system reliability, observability, and performance optimization techniques</li>
<li>Experience with database technologies (relational, NoSQL, or columnar) and data modelling at scale</li>
<li>Familiarity with containerization and orchestration tools (e.g., Docker, Kubernetes)</li>
<li>Experience with CI/CD systems and modern software development practices</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$141,400 - $204,400 CAD</Salaryrange>
      <Skills>Java, data pipelines, real-time streaming technologies, cloud platforms, distributed infrastructure, database technologies, containerization and orchestration tools, CI/CD systems, modern software development practices</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Electronic Arts</Employername>
      <Employerlogo>https://logos.yubhub.co/jobs.ea.com.png</Employerlogo>
      <Employerdescription>Electronic Arts is a multinational video game developer and publisher headquartered in Redwood City, California. The company has a diverse portfolio of games and experiences across various platforms.</Employerdescription>
      <Employerwebsite>https://jobs.ea.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ea.com/en_US/careers/JobDetail/Sr-Software-Engineer/213715</Applyto>
      <Location>Vancouver</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>0317f83c-08d</externalid>
      <Title>Sales Enablement Specialist (AI &amp; Automation)</Title>
      <Description><![CDATA[<p>We&#39;re seeking a Sales Enablement Specialist to join our team in London. In this role, you will:</p>
<ul>
<li>Partner with global account leadership to identify workflow inefficiencies and improvement opportunities.</li>
<li>Design and optimize sales processes to increase productivity and account growth.</li>
<li>Develop enablement materials, playbooks, and training to support new tools and ways of working.</li>
<li>Track performance metrics and provide insight-led recommendations to support strategic decision-making.</li>
<li>Identify opportunities to embed AI into account planning, forecasting, reporting, pricing analysis, customer insights, and internal coordination.</li>
<li>Implement and manage AI-powered tools that improve data visibility, response times, and strategic insight.</li>
<li>Automate repetitive administrative and reporting tasks to allow the account team to focus on high-value commercial activity.</li>
<li>Evaluate emerging AI solutions and recommend practical applications relevant to the global account.</li>
<li>Support effective adoption and responsible use of AI across the account team.</li>
<li>Act as a liaison between Sales, Operations, Finance, and Technology functions supporting the global account.</li>
<li>Improve reporting frameworks and dashboard visibility to support executive-level reviews.</li>
<li>Support governance, documentation, and process consistency across regions.</li>
<li>Lead change management initiatives related to new tools and workflow improvements.</li>
</ul>
<p>The ideal candidate will be:</p>
<ul>
<li>Commercially astute and results-oriented</li>
<li>A strategic thinker with a continuous improvement mindset</li>
<li>Curious and proactive in identifying AI innovation opportunities</li>
<li>Comfortable driving change within established teams</li>
<li>Highly organized, with strong attention to detail</li>
<li>Able to work under pressure and to deadlines</li>
</ul>
<p>Experience and technical knowledge of the PC Gaming or PC Hardware market is preferred but not required.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>AI, Automation, Sales Enablement, Data Analysis, Process Improvement, PC Gaming, PC Hardware, Cloud Computing, Digital Streaming, Artificial Intelligence</Skills>
      <Category>Sales</Category>
      <Industry>Technology</Industry>
      <Employername>Amazon Global</Employername>
      <Employerlogo>https://logos.yubhub.co/corssair.com.png</Employerlogo>
      <Employerdescription>Amazon Global is a multinational technology company that focuses on e-commerce, cloud computing, digital streaming, and artificial intelligence.</Employerdescription>
      <Employerwebsite>https://www.corssair.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://edix.fa.us2.oraclecloud.com/hcmUI/CandidateExperience/en/sites/CX_1/job/8747</Applyto>
      <Location>London</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>77076fa7-89a</externalid>
      <Title>Conferencing/Audio Visual (AV) Administrator</Title>
      <Description><![CDATA[<p>We are seeking an Audio Visual (AV) Administrator to help build, configure, and maintain Replit&#39;s conferencing and audio visual infrastructure at Replit&#39;s Foster City, CA office and future offices throughout the United States. The successful candidate will also support the technical needs of Replit&#39;s social media team&#39;s livestreaming, podcasting, and online media programs. You will be the subject matter expert and primary owner of all conferencing and shared meeting spaces across our offices. You oversee the operations of Zoom Rooms alongside our specialized multimedia and large-scale gathering areas.</p>
<ul>
<li>Configure, maintain, and continuously monitor over 100 conferencing spaces, ensuring maximum uptime and an excellent user experience</li>
<li>Deploy and troubleshoot conferencing hardware based on Neat and comparable hardware ecosystems</li>
<li>Oversee AV operations for event spaces</li>
<li>Manage and operate audio/video setups for company-wide All-Hands meetings, ensuring high-quality broadcasts</li>
<li>Administer and maintain AV control systems, DSPs, amplifiers, microphones, and content distribution systems</li>
<li>Support hardware and software associated with digital signage and digital content distribution</li>
<li>Assist in the technical maintenance and setup of specialized studio spaces dedicated to live streaming, podcasting, and video editing</li>
<li>Conduct regular sweeps and health checks of all AV equipment</li>
<li>Serve as subject matter expert and escalation point for AV-related IT tickets</li>
<li>Create user guides and standard operating procedures (SOPs) to train staff on how to operate rooms and equipment</li>
<li>Work with project managers, integrators, vendors, and Replit teams to manage design and upgrades and to spec new room builds as we continue to expand</li>
<li>Partner with Replit&#39;s Workplace Experience team to ensure optimal usage of meeting spaces</li>
<li>Provide white-glove support in executive conference spaces and the customer briefing center</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$130K - $175K</Salaryrange>
      <Skills>AV administration, Zoom Rooms, Neat hardware, DSPs, control systems, digital signage, live streaming, podcasting, video editing, Google Meet Rooms, Microsoft Teams Rooms, theatre, musical performance, road crew member, technical director</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Replit</Employername>
      <Employerlogo>https://logos.yubhub.co/replit.com.png</Employerlogo>
      <Employerdescription>Replit is an agentic software creation platform that enables anyone to build applications using natural language, with millions of users worldwide.</Employerdescription>
      <Employerwebsite>https://replit.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/replit/e1c59a55-1103-4c58-afc3-ddcd200550b4</Applyto>
      <Location>Foster City, CA (hybrid; in office Mon, Wed, Fri)</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>9eb594a6-97b</externalid>
      <Title>Product Manager 3</Title>
      <Description><![CDATA[<p>Join the team as our next Data Platform Product Manager in the Data Governance and Insights team.</p>
<p>This position drives Data Insights and Twilio&#39;s Data Governance initiatives across the company, and is based in India. You will work with many teams within Twilio to ensure safe customer data handling, supporting data privacy and compliance. This team manages data pipeline security, data reliability, and access controls, and is the bridge to the reporting systems trusted by customers, executives, and shareholders.</p>
<p>In this role, you’ll:</p>
<ul>
<li>Champion customer-facing product development that will reduce time to insights.</li>
<li>Own the cradle-to-grave product lifecycle for insights platforms.</li>
<li>Understand the needs of our end customers in the global communications market and build a platform to help internal teams manage and leverage their data to derive meaningful insights.</li>
<li>Support Data Governance initiative for data pipelines and insights products, working with product managers and engineering counterparts across various organizations and stakeholders.</li>
</ul>
<p>Twilio values diverse experiences from all kinds of industries, and we encourage everyone who meets the required qualifications to apply. If your career is just starting or hasn&#39;t followed a traditional path, don&#39;t let that stop you from considering Twilio.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>data platforms, customer engagement platforms, streaming applications, Kafka, ElasticSearch, Clickhouse, Spark, Presto/Athena, cloud, APIs, communications, enterprise software, data reliability, ETL techniques, collaborative approach, ability to work with distributed, cross-functional teams, great communication skills</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Twilio</Employername>
      <Employerlogo>https://logos.yubhub.co/twilio.com.png</Employerlogo>
      <Employerdescription>Twilio delivers innovative solutions to hundreds of thousands of businesses and empowers millions of developers worldwide to craft personalized customer experiences.</Employerdescription>
      <Employerwebsite>https://www.twilio.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/twilio/jobs/7424471</Applyto>
      <Location>Bengaluru, India</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>b5023ab2-eae</externalid>
      <Title>TL, Research Inference</Title>
      <Description><![CDATA[<p><strong>Compensation</strong></p>
<p>$380K – $555K • Offers Equity</p>
<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>
<ul>
<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>
<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>
<li>401(k) retirement plan with employer match</li>
<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>
<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>
<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>
<li>Mental health and wellness support</li>
<li>Employer-paid basic life and disability coverage</li>
<li>Annual learning and development stipend to fuel your professional growth</li>
<li>Daily meals in our offices, and meal delivery credits as eligible</li>
<li>Relocation support for eligible employees</li>
<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>
</ul>
<p><strong>About the Team</strong></p>
<p>The Foundations team focuses on how model behavior changes as we scale models, data, and compute. The team studies the interactions between model architecture, optimization, and training data, and uses those insights to guide how new models are designed and trained.</p>
<p><strong>About the Role</strong></p>
<p>In this role, you will build the systems that enable advanced AI models to run efficiently at scale. You will operate at the intersection of model research and systems engineering, translating new architectural ideas into high-performance inference systems that surface real tradeoffs in performance, memory, and scalability.</p>
<p>Your work will directly influence how models are designed, evaluated, and iterated on across the research organization. By developing and evolving high-performance inference infrastructure, you will enable researchers to explore new ideas with a clear understanding of their computational and systems implications.</p>
<p>This is not a product-serving role. Instead, it is a research-enabling systems role focused on performance, correctness, and realism - ensuring that AI research is grounded in what can actually scale.</p>
<p><strong>In this role, you will:</strong></p>
<ul>
<li>Design and build high-performance inference runtimes for large-scale AI models, with a focus on efficiency, reliability, and scalability.</li>
<li>Own and optimize core execution paths, including model execution, memory management, batching, and scheduling.</li>
<li>Develop and improve distributed inference across multiple GPUs, including parallelism strategies, communication patterns, and runtime coordination.</li>
<li>Implement and optimize inference-critical operators and kernels informed by real-world workloads.</li>
<li>Partner closely with research teams to ensure new model architectures are supported accurately and efficiently in inference systems.</li>
<li>Diagnose and resolve performance bottlenecks through profiling, benchmarking, and low-level debugging.</li>
<li>Contribute to observability, correctness, and reliability of large-scale AI systems.</li>
</ul>
<p><strong>You might thrive in this role if you:</strong></p>
<ul>
<li>Have experience building production inference systems, not just training or running models.</li>
<li>Are comfortable with GPU-centric performance engineering, including memory behavior and latency/throughput tradeoffs.</li>
<li>Have worked on multi-GPU or distributed systems involving batching, scheduling, or runtime coordination.</li>
<li>Can reason end-to-end about inference pipelines, from request handling through execution and output streaming.</li>
<li>Are able to understand research ideas and implement them within real system and performance constraints.</li>
<li>Enjoy solving hard, ambiguous systems problems that only emerge at scale.</li>
<li>Prefer hands-on technical ownership and execution over abstract design work.</li>
</ul>
<p><strong>About OpenAI</strong></p>
<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.</p>
<p><strong>Required Skills</strong></p>
<ul>
<li>Experience building production inference systems, not just training or running models</li>
<li>Comfortable with GPU-centric performance engineering, including memory behavior and latency/throughput tradeoffs</li>
<li>Multi-GPU or distributed systems involving batching, scheduling, or runtime coordination</li>
<li>Reasoning end-to-end about inference pipelines, from request handling through execution and output streaming</li>
<li>Understanding research ideas and implementing them within real system and performance constraints</li>
<li>Solving hard, ambiguous systems problems that only emerge at scale</li>
<li>Hands-on technical ownership and execution over abstract design work</li>
</ul>
<p><strong>Preferred Skills</strong></p>
<ul>
<li>Experience working with large-scale AI models</li>
<li>Distributed inference across multiple GPUs</li>
<li>Parallelism strategies, communication patterns, and runtime coordination</li>
<li>Implementing and optimizing inference-critical operators and kernels</li>
<li>Observability, correctness, and reliability of large-scale AI systems</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>Full time</Jobtype>
      <Experiencelevel></Experiencelevel>
      <Workarrangement></Workarrangement>
      <Salaryrange>$380K – $555K</Salaryrange>
      <Skills>Experience building production inference systems, not just training or running models, Comfortable with GPU-centric performance engineering, including memory behavior and latency/throughput tradeoffs, Multi-GPU or distributed systems involving batching, scheduling, or runtime coordination, Reasoning end-to-end about inference pipelines, from request handling through execution and output streaming, Understanding research ideas and implementing them within real system and performance constraints, Solving hard, ambiguous systems problems that only emerge at scale, Hands-on technical ownership and execution over abstract design work, Experience working with large-scale AI models, Distributed inference across multiple GPUs, Parallelism strategies, communication patterns, and runtime coordination, Implementing and optimizing inference-critical operators and kernels, Observability, correctness, and reliability of large-scale AI systems</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>OpenAI</Employername>
      <Employerlogo>https://logos.yubhub.co/openai.com.png</Employerlogo>
      <Employerdescription>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity.</Employerdescription>
      <Employerwebsite>https://openai.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/openai/50aab80a-fa60-4fcc-882d-18ea76db5f11</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>859c75b7-6fc</externalid>
      <Title>Engineering Manager, Multimodal (API)</Title>
      <Description><![CDATA[<p>We are seeking an Engineering Manager to lead our multimodal API product suite. Your team will be responsible for delivering innovative APIs across real-time processing, speech transcription, speech generation, and image creation.</p>
<p>You will own the product roadmap for how we evolve our multimodal API offerings, and you will build the products that allow developers to reach millions of end users through AI audio, video, and images.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Build, mentor, and grow a high-performing engineering team focused on multimodal API products – including our real-time API, our transcription models (Whisper), our speech generation models (TTS), and our image generation APIs (DALL·E and native 4o).</li>
<li>Collaborate closely with product managers, designers, and other stakeholders to define the strategic vision and product roadmap.</li>
<li>Work closely with our research teams to improve our core multimodal models for API customer use cases.</li>
<li>Guide technical and architectural decisions, emphasizing scalability, robustness, and user experience.</li>
<li>Foster a culture of innovation, continuous improvement, and accountability within your team.</li>
</ul>
<p><strong>Qualifications</strong></p>
<ul>
<li>Proven experience managing engineering teams that deliver complex, high-quality products at scale.</li>
<li>Strong technical background and proficiency in modern software engineering practices and system architecture.</li>
<li>Excellent collaboration and communication skills to effectively coordinate across diverse teams and stakeholders.</li>
<li>Familiarity with or strong interest in multimodal AI, including speech technologies, real-time systems, and image generation.</li>
<li>Ability to operate effectively in a fast-paced, ambiguous startup environment.</li>
</ul>
<p><strong>Preferred Qualifications</strong></p>
<ul>
<li>Experience developing multimodal systems or APIs in AI/ML domains, especially around image generation, audio generation, or speech transcription.</li>
<li>Familiarity with real-time streaming technologies, audio processing, and computer vision.</li>
<li>Hands-on experience with cloud platforms and distributed architectures.</li>
</ul>
]]></Description>
      <Jobtype>Full time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$293K – $385K</Salaryrange>
      <Skills>multimodal AI, speech technologies, real-time systems, image generation, cloud platforms, distributed architectures, audio generation, speech transcription, real-time streaming technologies, audio processing, computer vision</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>OpenAI</Employername>
      <Employerlogo>https://logos.yubhub.co/openai.com.png</Employerlogo>
      <Employerdescription>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity.</Employerdescription>
      <Employerwebsite>https://openai.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/openai/1d7f4747-54a3-4141-a39a-c6e7700e969b</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>f41f576e-c80</externalid>
      <Title>Enterprise Support Engineer</Title>
      <Description><![CDATA[<p>As an Enterprise Support Engineer at OpenRouter, you will serve as the technical anchor for our largest and most critical customers. This role exists at the intersection of Engineering, Support, and Customer Success. You will investigate root causes, distinguish between platform latency and upstream model provider errors, and help developers stabilize their AI applications. You will partner with Account Managers and Software Engineers to ensure our customers rely on OpenRouter as a stable, transparent, and critical part of their infrastructure.</p>
<p>Your responsibilities will include:</p>
<ul>
<li>Owning the technical resolution for inbound enterprise inquiries and incidents.</li>
<li>Analyzing logs, stack traces, and API usage patterns to identify whether errors originate from the customer&#39;s implementation, OpenRouter&#39;s infrastructure, or upstream providers.</li>
<li>Isolating reported bugs by creating minimal reproduction scripts to confirm defects before engaging the engineering team.</li>
<li>Assessing incoming enterprise requests not just by technical severity, but by business impact.</li>
<li>Assisting Account Managers in preserving account health by providing technical data for Quarterly Business Reviews (QBRs).</li>
<li>Providing clear, accurate, and calm updates to stakeholders during service disruptions.</li>
<li>Converting your investigations into public documentation, internal troubleshooting playbooks, and automated remediation tools.</li>
<li>Acting as the voice of the enterprise customer, channeling recurring friction points and feature requests back to the Product and Engineering teams.</li>
</ul>
<p>You will bring:</p>
<ul>
<li>3–5+ years of experience in an external-facing support role within a B2B SaaS or API-first environment.</li>
<li>Deep familiarity with RESTful APIs, HTTP status codes, Server-Sent Event streaming, authentication methods (OAuth, Bearer tokens), and tools like Postman or cURL.</li>
<li>Ability to read, interpret, and debug code in at least one common programming language in use by our customers (Python, TypeScript/Node.js, Go, Java, etc.).</li>
<li>Experience querying logging and monitoring platforms (e.g., Datadog, Grafana, Cloudflare logs, or GCP Cloud Logging) to trace request lifecycles.</li>
<li>Basic proficiency with SQL or similar query languages for investigations.</li>
</ul>
<p>You will actively use AI, going beyond simple support. You are genuinely enthusiastic about leveraging LLMs for debugging, workflow automation, and unique problem-solving, seeing AI as a utility to eliminate drudgery.</p>
<p>You maintain composure during outages and complex troubleshooting sessions, prioritizing systematic investigation and analysis.</p>
<p>You have a genuine interest in &#39;white box&#39; troubleshooting, and you are comfortable digging into the source of the problem rather than applying a workaround.</p>
]]></Description>
      <Jobtype>Full time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>RESTful APIs, HTTP status codes, Server-Sent Event streaming, authentication methods (OAuth, Bearer tokens), Postman or cURL, Python, TypeScript/Node.js, Go, Java, SQL or similar query languages, logging and monitoring platforms (e.g., Datadog, Grafana, Cloudflare logs, or GCP Cloud Logging)</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>OpenRouter</Employername>
      <Employerlogo>https://logos.yubhub.co/openrouter.com.png</Employerlogo>
      <Employerdescription>OpenRouter provides an open AI routing and infrastructure layer for enterprises to access, manage, and optimize large language models across providers.</Employerdescription>
      <Employerwebsite>https://openrouter.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/openrouter/506a2013-df65-4233-8b1d-fdd81a34d729</Applyto>
      <Location>Remote (US)</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>18203ae4-a51</externalid>
      <Title>Founding Product Designer</Title>
      <Description><![CDATA[<p><strong>The Opportunity</strong></p>
<p>Fifth Dimension is the world&#39;s most prominent AI company in real estate, bringing the magic of AI automation to document-heavy industries. Our AI workspace helps leading investment managers, owners, and operators across the US, UK, and APAC automate complex workflows, extract insights from critical documents, and make faster, smarter decisions.</p>
<p><strong>The Challenge</strong></p>
<p>We&#39;re leaving a huge amount of value uncaptured because we haven&#39;t yet taken a design-led view of the product. What should the product look like to immediately communicate intelligence and connectivity? How do the first 30 seconds of a sales demo show we&#39;re an intelligence platform, not a chatbot? What&#39;s the UI that makes users want to share their work and pull their colleagues in?</p>
<p><strong>What You&#39;ll Do</strong></p>
<ul>
<li>Design and ship. You own the design layer end-to-end. User research through to pixel-perfect delivery. There is no handoff. You&#39;ll work shoulder-to-shoulder with engineering in London, prototype rapidly, and use AI tools to collapse the gap between idea and implementation. We ship daily. You will too.</li>
<li>Define the language. You&#39;ll establish the visual standards, interaction patterns, component systems, and design principles the product grows into. Typography, hierarchy, density, motion, data visualisation. You set the bar and hold it.</li>
<li>Invent AI-native interaction. Agentic workflows, streaming outputs, document intelligence, confidence signals, human-AI collaboration. There&#39;s no playbook for this. You&#39;ll be writing it. Trust, explainability, responsive agentic UI, designing intelligence. These are the problems that will define your best work.</li>
<li>Talk to customers. Visit their offices. Watch them work. Run usability sessions. Sit with a portfolio manager and understand how they think about a $500M real estate portfolio. There is no research team. You are the research team.</li>
<li>Design to drive growth. The first 30 seconds of a demo, the onboarding flow that lands a new team, the interface that makes a customer expand from one use case to five. Your design decisions directly drive revenue. You&#39;ll see that in the numbers, not just the Figma comments.</li>
<li>Shape what gets built. You won&#39;t just execute on someone else&#39;s vision. You&#39;ll influence what the product becomes, not just how it looks. The best design work here is upstream, shaping product direction alongside the CPO and engineering.</li>
<li>Raise the bar. Help product engineers develop stronger design instincts. Champion frontend quality, accessibility, and the details that separate good products from great ones. Make everyone care more about craft.</li>
</ul>
<p><strong>About You</strong></p>
<p>You&#39;re a designer who ships. Exceptional taste, fast hands, strong opinions. You&#39;re equally comfortable in Figma and on a customer call. You don&#39;t wait for permission and you don&#39;t need months of discovery to produce something tangible. People around you would say you operate at a level above your title.</p>
<p>You&#39;ve designed complex, data-rich interfaces for demanding professional users. Tables, dashboards, document viewers, multi-step workflows. You know what it means to design something 200 people use for 8 hours a day. Real enterprise tools, not consumer apps, not marketing sites.</p>
<p>You use AI tools to amplify your output and you&#39;re excited about what that means for the craft of design. You&#39;ve designed AI interfaces: trust signals, agentic workflows, streaming UIs, human-AI collaboration. In production or as a side project, you&#39;ve been exploring by building.</p>
<p>You have sharp product instinct. You shape what gets built, not just how it looks. You make trade-offs, back them up, and run your own user research. You don&#39;t need a research team to understand your users.</p>
<p>You thrive with autonomy. Ambiguity doesn&#39;t scare you, it&#39;s where the interesting work lives. You&#39;re energised by early-stage: building something from zero, not inheriting a playbook.</p>
<p><strong>What We&#39;re Looking For</strong></p>
<ul>
<li>Deep experience in product design, with a portfolio that shows real taste and craft in complex information design</li>
<li>Shipped data-rich interfaces in production: tables, dashboards, document viewers, multi-step workflows for professional users</li>
<li>Runs your own user research: interviews, usability tests, site visits. Energised by sitting with domain experts</li>
<li>Systems thinker, designs in components and patterns, can build a design system from scratch</li>
<li>Proficient with AI tools in their own workflow, uses them to move faster, not as a novelty</li>
<li>At least one startup or early-stage experience where they owned design end-to-end</li>
<li>Experience designing AI interfaces, trust signals, agentic workflows, streaming UIs, human-AI collaboration patterns</li>
</ul>
<p><strong>Nice to Have</strong></p>
<ul>
<li>Has been a founding or sole designer before, knows the &#39;build from zero&#39; experience</li>
<li>Former founder, built businesses, wants to build product without the CEO overhead</li>
<li>Adjacent domain knowledge, financial services, legal tech, data platforms</li>
<li>Visible in the design community, writes, speaks, shares work publicly</li>
<li>Can touch code, enough to prototype or unblock themselves</li>
</ul>
<p><strong>Compensation</strong></p>
<p>We are a hybrid team based in a London office: everyone comes in every Wednesday, ideally more.</p>
<ul>
<li>Base: GBP 90,000–120,000</li>
<li>Meaningful equity on a standard vesting schedule</li>
</ul>
]]></Description>
      <Jobtype>Full time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>GBP 90,000 – 120,000 plus meaningful equity</Salaryrange>
      <Skills>Product design, Complex information design, Data-rich interfaces, User research, AI tools, Agentic workflows, Streaming outputs, Document intelligence, Confidence signals, Human-AI collaboration, Design systems, Component-based design, Pattern-based design</Skills>
      <Category>Design</Category>
      <Industry>Technology</Industry>
      <Employername>Fifth Dimension</Employername>
      <Employerlogo>https://logos.yubhub.co/careers.fifthdimensionai.com.png</Employerlogo>
      <Employerdescription>Fifth Dimension is the most prominent AI company in real estate, bringing AI automation to document-heavy industries.</Employerdescription>
      <Employerwebsite>https://careers.fifthdimensionai.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://careers.fifthdimensionai.com/jobs/7398322-founding-product-designer</Applyto>
      <Location>London</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>2b611428-9cc</externalid>
      <Title>Staff Software Engineer, AI Experiences</Title>
      <Description><![CDATA[<p>You&#39;ll lead the strategy, planning, and execution of Gamma&#39;s AI Product Experiences, including our cutting-edge deck generation that allows users to create presentations from a single prompt, our image generation and editing features, and Gamma Agent, our presentation assistant.</p>
<p>This means shaping the future of how millions of users interact with AI to bring their ideas to life while establishing technical direction for our most ambitious AI initiatives.</p>
<p>As a Staff Engineer focused on AI Product Experiences, you&#39;ll balance hands-on engineering with strategic leadership. You&#39;ll elevate engineering quality across the team through code review, design feedback, and mentorship while proactively identifying opportunities and misalignment within EPD.</p>
<p>You&#39;ll bring both data-driven rigor and strong intuition to decision-making, designing systems that balance security, usability, and performance.</p>
<p>Our team has a strong in-office culture and works in person 4–5 days per week in San Francisco. We love working together to stay creative and connected, with flexibility to work from home when focus matters most.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Ship production code while maintaining strategic perspective, focusing on high-leverage, architecturally challenging work</li>
<li>Partner with EM, PM, and cross-functional leads to set the roadmap and drive technical decision-making</li>
<li>Elevate engineering quality and effectiveness, setting technical direction and raising the bar through code review and mentorship</li>
<li>Proactively identify opportunities and misalignment within EPD and the roadmap, helping resolve them</li>
<li>Design systems that balance security, usability, and performance while building delightful user experiences</li>
<li>Operate in both data-driven, hypothesis-testing mode and act on strong intuition, switching back and forth effectively</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>7+ years of software engineering experience with at least 1 year building with generative AI technologies (LLMs)</li>
<li>Prior experience working with CRDTs/YJS, LLMs and image models, WebSockets, and streaming</li>
<li>Familiarity with context engineering, agent development, and AI patterns (RAG, embeddings, subagents, tool calling)</li>
<li>Deep expertise building complex, collaborative, real-time web apps with TypeScript and React</li>
<li>Strong sense of craft with drive to build delightful experiences</li>
<li>Strong communication skills and experience influencing technical strategy across teams</li>
<li>High EQ with an empathetic, reflective, self-aware growth mindset that actively promotes psychological safety for the team</li>
</ul>
<p><strong>Compensation Range</strong></p>
<p>The base salary for this full-time position, which spans multiple internal levels depending on qualifications, ranges between $230K - $340K plus benefits &amp; equity.</p>
]]></Description>
      <Jobtype>Full time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$230K - $340K</Salaryrange>
      <Skills>AI generative technologies, CRDTs/YJS, LLMs and image models, WebSockets, streaming, context engineering, agent development, AI patterns, TypeScript, React</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Gamma</Employername>
      <Employerlogo>https://logos.yubhub.co/gamma.com.png</Employerlogo>
      <Employerdescription>Gamma is a technology company that specialises in AI product experiences.</Employerdescription>
      <Employerwebsite>https://gamma.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/gamma/08b56086-395d-4b8a-8cf0-256618a1b7bc</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>64780097-d2c</externalid>
      <Title>Software Engineer, Backend</Title>
      <Description><![CDATA[<p>You&#39;ll build and scale the backend systems that power millions of users creating content every day on Gamma. This role is about solving real distributed systems challenges at scale while maintaining the performance and reliability users expect from a modern AI-powered product. You&#39;ll work across the full stack, shipping features that directly impact how people create and share their ideas.</p>
<p>While this role is backend focused, you&#39;ll work across the entire product with our frontend, product, and design teams. Our full TypeScript stack is built on modern technologies including React, Node.js, PostgreSQL, Redis, and cutting-edge AI models.</p>
<p>Our team has a strong in-office culture and works in person 4–5 days per week in San Francisco. We love working together to stay creative and connected, with flexibility to work from home when focus matters most.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Scale backend systems to hundreds of millions of users while maintaining high performance and availability</li>
<li>Build and optimize APIs that power real-time collaborative editing and AI content generation</li>
<li>Design and implement distributed systems that handle massive scale with reliability</li>
<li>Ship features across the full stack, working closely with frontend engineers to deliver polished experiences</li>
<li>Architect solutions for complex technical challenges in areas like data consistency, caching, and query optimization</li>
<li>Collaborate with product and design to turn ideas into production-ready features</li>
</ul>
<p><strong>What You&#39;ll Bring</strong></p>
<ul>
<li>3+ years building production backend systems with strong fundamentals in distributed systems, databases, and API design</li>
<li>Deep proficiency in TypeScript/Node.js or similar backend languages, with eagerness to work in our TypeScript stack</li>
<li>Experience scaling systems to handle millions of users and high throughput workloads</li>
<li>Strong understanding of PostgreSQL, Redis, or similar database technologies</li>
<li>Passion for building APIs, scaling complex systems, and creating excellent web applications</li>
<li>Curiosity and an attitude that matches your technical knowledge</li>
<li>Prior experience working with websockets, streaming, or scaling inference workloads (Nice to have)</li>
</ul>
<p><strong>Compensation Range</strong></p>
<p>The base salary for this full-time position, which spans multiple internal levels depending on qualifications, ranges between $180K - $275K plus benefits &amp; equity.</p>
]]></Description>
      <Jobtype>Full time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$180K - $275K</Salaryrange>
      <Skills>TypeScript, Node.js, PostgreSQL, Redis, API design, Distributed systems, Database design, Websockets, Streaming, Inference workloads</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Gamma</Employername>
      <Employerlogo>https://logos.yubhub.co/gamma.com.png</Employerlogo>
      <Employerdescription>Gamma is a modern AI-powered product with millions of users creating content every day.</Employerdescription>
      <Employerwebsite>https://gamma.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/gamma/fb12356a-e868-4a4a-801c-882a6b0ac83f</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>0fc4cba7-bdb</externalid>
      <Title>Software Engineer, Trust &amp; Safety</Title>
      <Description><![CDATA[<p>As a Trust &amp; Safety Engineer at Gamma, you&#39;ll architect systems to protect the platform from phishing, abuse, fraud, and malicious content. You&#39;ll join a small team building the foundation for trust at scale, designing infrastructure that serves millions, shipping improvements daily, and building internal tools for support teams.</p>
<p>You&#39;ll own detection and prevention systems for fraud, abuse, spam, and malicious content across millions of daily users. You&#39;ll design and build scalable trust infrastructure, including rate limiting, content scanning, anomaly detection, and account security. You&#39;ll also build tools that empower internal support teams to investigate and act on suspicious or malicious activity.</p>
<p>To succeed in this role, you&#39;ll need 5+ years of experience as a software engineer with a focus on abuse, spam, or fraud prevention. You&#39;ll have strong systems thinking and experience building highly available web APIs, with proficiency in at least one programming language and comfort working across a modern web stack. You&#39;ll also have hands-on experience implementing trust features like rate limiting, content detection, and fraud prevention.</p>
<p>Experience with event streaming systems, passion for security, user protection, and solving problems at scale are also essential. Familiarity with AI/LLMs for content moderation, TypeScript, Prisma, Apollo GraphQL, or AWS is a plus.</p>
<p>In terms of compensation, the base salary for this full-time position ranges between $180K - $275K plus benefits &amp; equity.</p>
]]></Description>
      <Jobtype>Full time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$180K - $275K</Salaryrange>
      <Skills>abuse prevention, spam prevention, fraud prevention, rate limiting, content scanning, anomaly detection, account security, event streaming systems, AI/LLMs for content moderation, TypeScript, Prisma, Apollo GraphQL, AWS</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Gamma</Employername>
      <Employerlogo>https://logos.yubhub.co/gamma.com.png</Employerlogo>
      <Employerdescription>Gamma is an AI platform that tackles challenges related to user safety and creative freedom.</Employerdescription>
      <Employerwebsite>https://gamma.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/gamma/274f4b95-d167-4ff2-ba98-89d2d6f114a3</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>03eeeb10-f6a</externalid>
      <Title>Staff Software Engineer, Full Stack</Title>
      <Description><![CDATA[<p>You&#39;ll lead the development of high-craft, intuitive AI features that power Gamma&#39;s presentation product, shipping complex systems that millions of users rely on every day. As a Staff Engineer, you&#39;ll balance hands-on engineering with technical leadership, designing systems that handle real-time collaboration and AI at scale, elevating engineering quality and effectiveness across the team, and influencing technical strategy alongside engineering leadership.</p>
<p>In this role, you&#39;ll:</p>
<ul>
<li>Ship high-craft, intuitive AI features (including agents) that power Gamma&#39;s presentation product, owning these systems end-to-end and ensuring reliability, performance, and scalability as we grow to hundreds of millions of users.</li>
<li>Design and implement sophisticated AI workflows that integrate LLMs and image models seamlessly into real-time collaborative experiences.</li>
<li>Elevate engineering quality and effectiveness across the team through code reviews, mentorship, and establishing best practices.</li>
<li>Shape Gamma&#39;s technical future in AI and product, influencing architecture decisions and technical strategy.</li>
<li>Build systems that balance security, usability, and performance at massive scale.</li>
</ul>
<p>We have a strong in-office culture and work in person 4–5 days per week in San Francisco. We love working together to stay creative and connected, with flexibility to work from home when focus matters most.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$230K - $310K plus benefits &amp; equity</Salaryrange>
      <Skills>CRDTs, LLMs, image models, WebSockets, streaming technologies, TypeScript, React</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Gamma</Employername>
      <Employerlogo>https://logos.yubhub.co/gamma.com.png</Employerlogo>
      <Employerdescription>Gamma is a technology company that powers presentation products used by millions of users.</Employerdescription>
      <Employerwebsite>https://www.gamma.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/gamma/e15ae459-f956-453f-91b7-08945a5af506</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>e3fb92a9-622</externalid>
      <Title>Principal Software Engineer - AI Ads</Title>
      <Description><![CDATA[<p>Microsoft AI is looking for a Principal Software Engineer – AI Ads to shape the future of online advertising. You&#39;ll lead the design and development of large-scale shopping ads infrastructure that powers billions of products worldwide. This is a rare opportunity to work on cutting-edge AI, big data, and deep learning systems while collaborating with world-class scientists and engineers to deliver solutions at massive scale.</p>
<p>Online advertising is one of the fastest-growing businesses on the Internet today, with about $70 billion of the $600 billion advertising market already online. Search engines, chatbots, web publishers, major ad networks, and ad exchanges serve billions of ad impressions daily and generate terabytes of user event data. This rapid growth has created enormous opportunities as well as technical challenges that demand advanced computational intelligence.</p>
<p>Computational Advertising has emerged as a new interdisciplinary field that combines information retrieval, machine learning, large-scale distributed systems, data mining, statistics, operations research, and microeconomics to solve complex problems. At its core, the challenge is to select an optimized set of eligible ads for each user in order to maximize overall utility, capturing expected revenue, user experience, and advertisers’ return on investment.</p>
<p>Microsoft is innovating rapidly in this space to expand its market share by delivering a state-of-the-art online advertising platform and service. The Shopping Ads Infrastructure &amp; Algorithm team in Microsoft AI is seeking a Principal Software Engineer – AI Ads to lead research and development for the next generation of shopping ads infrastructure.</p>
<p>In this role, you will design, develop, optimize, and operate the universal product graph infrastructure and manage large-scale datasets that span billions of products in multiple languages worldwide. This product graph powers critical scenarios including Bing Search Ads, Copilot, AI Agents, Product Insights, Selection, Relevance, Modeling, and Personalization.</p>
<p>The team leverages deep learning, LLMs/SLMs, AI, NLP, information retrieval, big data, and streaming systems to build high-performance engineering solutions aligned with Microsoft’s Commerce Strategy. You will collaborate with leading scientists and engineers across Microsoft’s global R&amp;D organization to deliver impactful solutions at massive scale.</p>
<p>At Microsoft AI, you’ll have the opportunity to grow your career while tackling some of the hardest problems in machine learning, large-scale distributed systems, and computational advertising. You’ll collaborate with world-class researchers and engineers, influence product direction, and take on leadership opportunities that expand your technical and professional impact.</p>
<p>Our team builds infrastructure that powers billions of products across the globe, directly shaping the future of shopping and online advertising. By joining us, you’ll contribute to cutting-edge AI innovation at massive scale and help transform how users and advertisers connect.</p>
<p>Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.</p>
<p>Starting January 26, 2026, MAI employees are expected to work from a designated Microsoft office at least four days a week if they live within 50 miles (U.S.) or 25 miles (non-U.S., country-specific) of that location. This expectation is subject to local law and may vary by jurisdiction.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$139,900 – $274,800 per year</Salaryrange>
      <Skills>C, C++, C#, Java, JavaScript, Python, PyTorch, TensorFlow, Kafka, Flink, Spark Streaming</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Microsoft</Employername>
      <Employerlogo>https://logos.yubhub.co/microsoft.ai.png</Employerlogo>
      <Employerdescription>Microsoft is a multinational technology company that develops, manufactures, licenses, and supports a wide range of software products, services, and devices.</Employerdescription>
      <Employerwebsite>https://microsoft.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://microsoft.ai/job/principal-software-engineer-ai-ads-2/</Applyto>
      <Location>Redmond</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>219effd6-71b</externalid>
      <Title>Artist &amp; Label Partnerships Manager</Title>
      <Description><![CDATA[<p>We have a great opportunity for an Artist &amp; Label Partnerships Manager to join our team in Sydney, Australia. As the face of Spotify to artists, managers, labels, and distributors across the globe, this role is designed for a self-driven individual with a deep passion for artists and the music industry. You will be responsible for nurturing and cultivating relationships with artists, managers, labels, and licensors across the region, executing impactful initiatives and global release campaigns that make Spotify the #1 partner for artist development.</p>
<p>Your key responsibilities will include:</p>
<ul>
<li>Managing the relationship between Spotify and key artists, managers, labels, and licensors, handling everything from technical support and troubleshooting to priority release marketing partnerships</li>
<li>Empowering partners with opportunities, insights, and mentorship to help them achieve their goals and grow their audiences, educating artists and their teams on Spotify tools, best practices, and resources</li>
<li>Acting as Spotify&#39;s representative in the music industry externally, and as an internal champion for the artists and labels you work with, seeking opportunities to help them grow their audience and their business</li>
<li>Working cross-functionally with local and global teams to deliver on strategic goals and consumer campaigns</li>
<li>Using a developed understanding of music culture to connect artists with compatible fandoms via Spotify&#39;s suite of targeting tools and music programs</li>
<li>Attending industry events, conferences, and shows as needed</li>
</ul>
<p>To succeed in this role, you will need at least 3 years of proven music industry experience, preferably in an artist- and partner-facing role. You should also have experience and strong knowledge across a broad spectrum of music and music cultures relevant to the AUNZ market, as well as a strong understanding of the AUNZ music industry and developed relationships within the local industry.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>music industry experience, artist and partner-facing role, cross-functional teamwork, strategic goal delivery, consumer campaign execution, music culture understanding, targeting tools and music programs, Spotify platform knowledge, music streaming service experience, artist development expertise, label management skills, marketing partnership development</Skills>
      <Category>Entertainment</Category>
      <Industry>Music</Industry>
      <Employername>Spotify</Employername>
      <Employerlogo>https://logos.yubhub.co/spotify.com.png</Employerlogo>
      <Employerdescription>Spotify is a music streaming service that provides access to millions of songs, podcasts, and videos. It has a global presence with users in over 180 markets worldwide.</Employerdescription>
      <Employerwebsite>https://www.spotify.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/spotify/84552c98-09a5-49e2-8b71-68a02c956fb0</Applyto>
      <Location>Sydney</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>3829d19f-c93</externalid>
      <Title>Machine Learning Engineer</Title>
      <Description><![CDATA[<p>Join Twilio&#39;s rapidly-growing AI &amp; Data Platform team as an Machine Learning Engineer. You will design, build, and operate the cloud-native data and ML infrastructure that powers every customer interaction, enabling Twilio&#39;s product teams and customers to move from raw events to real-time intelligence.</p>
<p>In this role, you&#39;ll:</p>
<ul>
<li>Architect, implement, and maintain scalable data pipelines and feature stores for batch and real-time workloads.</li>
<li>Build reproducible ML training, evaluation, and inference workflows using modern orchestration and MLOps tooling.</li>
<li>Integrate event streams from Twilio products (e.g., Messaging, Voice, Segment) into unified, analytics-ready datasets.</li>
<li>Monitor, test, and improve data quality, model performance, latency, and cost.</li>
<li>Partner with product, data science, and security teams to ship resilient, compliant services.</li>
<li>Automate deployment with CI/CD, infrastructure-as-code, and container orchestration best practices.</li>
<li>Produce clear documentation, dashboards, and runbooks; share knowledge through code reviews and brown-bag sessions.</li>
</ul>
<p>Twilio values diverse experiences from all kinds of industries, and we encourage everyone who meets the required qualifications to apply.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, SQL, ETL/ELT orchestration tools, cloud data warehouses, ML lifecycle tooling, Docker, Kubernetes, major cloud platform, data modeling, distributed computing concepts, streaming frameworks, Twilio Segment, Kafka/Kinesis, infrastructure-as-code, GitHub-based CI/CD pipelines, generative AI workflows, foundation-model fine-tuning, vector databases</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Twilio</Employername>
      <Employerlogo>https://logos.yubhub.co/twilio.com.png</Employerlogo>
      <Employerdescription>Twilio delivers innovative solutions to hundreds of thousands of businesses and empowers millions of developers worldwide to craft personalized customer experiences.</Employerdescription>
      <Employerwebsite>https://www.twilio.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/twilio/jobs/7059734</Applyto>
      <Location>Remote - US</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>b56b23e1-a47</externalid>
      <Title>IT/AV Production Events Specialist II</Title>
      <Description><![CDATA[<p>At Pinterest, we&#39;re on a mission to bring everyone the inspiration to create a life they love. As a Production Specialist on our AV team, you will produce successful events and studio projects of all sizes. This will be a technical role with a focus on making flawless, high-end productions.</p>
<p>As a Production Specialist, you will:</p>
<ul>
<li>Own every aspect of AV production as assigned, whether in the studio or at a live event</li>
<li>Execute on Studio Service project workloads, including shooting, editing, and delivery of files</li>
<li>Constantly communicate and collaborate with stakeholders on deliverables, timing, etc.</li>
<li>Level up existing gear and production methods</li>
<li>Work with our AV team and the company at large to refine the events function</li>
<li>Leverage Pinterest internal AI tools to create ‘how-to’ documentation for both user-facing and internal consumption</li>
</ul>
<p>We&#39;re looking for someone with 4+ years of AV/event production experience in a corporate environment, a Bachelor’s degree in broadcast, film, digital audio technology, or a related field, and a solid understanding of soundboards, camera controllers, video switchers, streaming codecs, etc.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$77,441-$159,436 USD</Salaryrange>
      <Skills>AV production, Event production, Soundboards, Camera controllers, Video switchers, Streaming codecs, Editing, Coloring, Cinematography, Lighting techniques, Crestron equipment, Audio DSPs, Dante microphone systems, Ross products, InfoComm CTS certification, AI tools</Skills>
      <Category>IT</Category>
      <Industry>Technology</Industry>
      <Employername>Pinterest</Employername>
      <Employerlogo>https://logos.yubhub.co/pinterest.com.png</Employerlogo>
      <Employerdescription>Pinterest is a social media platform where users share and discover visual content. It has millions of active users worldwide.</Employerdescription>
      <Employerwebsite>https://www.pinterest.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/pinterest/jobs/7838803</Applyto>
      <Location>San Francisco, CA, US; Remote, US</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>547d60f2-2ad</externalid>
      <Title>Staff Machine Learning Engineer</Title>
      <Description><![CDATA[<p>Join Twilio&#39;s rapidly-growing Trust Intelligence Platform team as an L4 Machine Learning Engineer. You will design, build, and operate the cloud-native data and ML infrastructure that powers every customer interaction, enabling Twilio&#39;s product teams and customers to move from raw events to real-time intelligence.</p>
<p>In this role, you&#39;ll:</p>
<ul>
<li>Architect, implement, and maintain scalable data pipelines and feature stores for batch and real-time workloads.</li>
<li>Build reproducible ML training, evaluation, and inference workflows using modern orchestration and MLOps tooling.</li>
<li>Integrate event streams from Twilio products (e.g., Messaging, Voice, Segment) into unified, analytics-ready datasets.</li>
<li>Monitor, test, and improve data quality, model performance, latency, and cost.</li>
<li>Partner with product, data science, and security teams to ship resilient, compliant services.</li>
<li>Automate deployment with CI/CD, infrastructure-as-code, and container orchestration best practices.</li>
<li>Produce clear documentation, dashboards, and runbooks; share knowledge through code reviews and brown-bag sessions.</li>
<li>Embrace Twilio&#39;s &#39;We are Builders&#39; values by taking ownership of problems and driving them to completion.</li>
</ul>
<p>Twilio values diverse experiences from all kinds of industries, and we encourage everyone who meets the required qualifications to apply.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, SQL, ETL/ELT orchestration tools, cloud data warehouses, ML lifecycle tooling, Docker, Kubernetes, major cloud platform, data modeling, distributed computing concepts, streaming frameworks, Twilio Segment, Kafka/Kinesis, infrastructure-as-code, GitHub-based CI/CD pipelines, generative AI workflows, foundation-model fine-tuning, vector databases</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Twilio</Employername>
      <Employerlogo>https://logos.yubhub.co/twilio.com.png</Employerlogo>
      <Employerdescription>Twilio is a cloud communication platform that provides software tools for developers to build, scale, and operate real-time communication and collaboration applications.</Employerdescription>
      <Employerwebsite>https://www.twilio.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/twilio/jobs/7061880</Applyto>
      <Location>Remote - US</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>c651c122-15f</externalid>
      <Title>Product Engineer (Staff/Principal)</Title>
      <Description><![CDATA[<p>Want to join us at a well-funded AI startup that&#39;s really going places? Build product that customers love, by AI, in production, for real revenue at 5D. Work in a low-ego, high-learning environment.</p>
<p>About Us: Fifth Dimension is bringing vibe working to document-heavy industries. Today we work with real estate businesses in the US, EU and APAC, automating complex tasks, extracting valuable insights from documents, and empowering professionals to focus on high-impact work. Our AI workspace transforms how large investment managers and developers handle leases, development documents, and investment decisions.</p>
<p>The Challenge: It&#39;s 9 AM on a Monday. You&#39;re reviewing usage analytics from last week&#39;s release and notice a drop-off in a key workflow; you sketch out a hypothesis and a fix before standup. By noon, you&#39;ve shipped a prototype using Claude Code that rethinks how users navigate large document sets, and you&#39;re demoing it to a product manager and a customer success lead. By Wednesday, you&#39;re on a call with a major real estate firm, listening to how they actually use the platform day-to-day, and you spot an opportunity nobody had articulated yet. By Thursday, you&#39;ve shaped that insight into a spec, got buy-in, and started building. By Friday, you&#39;re celebrating with the team as a customer tells you: &quot;This is exactly what we needed.&quot;</p>
<p>About You: You&#39;re an expert software engineer who thinks in terms of user outcomes, not just code. You don&#39;t just ship features; you understand the problem space deeply, make sharp product calls, and build solutions that customers didn&#39;t know they needed until they can&#39;t live without them. You&#39;ve honed your craft through years of practice across the full stack, from performant backends and well-designed APIs to responsive, polished frontends.</p>
<p>You have strong product intuition: you know how to talk to customers, read between the lines of a feature request, and make pragmatic trade-offs between scope, speed, and quality. You&#39;re proficient with AI coding assistance tools like Claude Code, leveraging them to accelerate development and focus on higher-level product and architectural challenges. You understand that modern engineering means effectively collaborating with AI to maximise your productivity and creative potential.</p>
<p>Details don&#39;t escape you. You take pride in the end-to-end user experience, from the API contract to the loading state to the edge case that only one customer hits. As someone who thrives in fast-paced environments, you adapt quickly and mentor other engineers while collaborating effectively with commercial teams and customers. You&#39;re passionate about your personal growth and see each complex problem as an opportunity to expand your capabilities. You actively seek challenges that push the boundaries of what&#39;s possible and value environments where you can both contribute your expertise and continue to evolve as an engineer.</p>
<p>Your Impact: Reporting to our CTO Chen, you&#39;ll own and shape core product areas end-to-end, from discovery and design through to architecture, implementation, deployment, and customer adoption. Working closely with our skilled engineering team and commercial stakeholders, you&#39;ll combine deep technical ability with strong product sense to build capabilities that deliver tangible value to enterprise customers and drive the business forward.</p>
<p>Day to day, you will:</p>
<ul>
<li>Think like a Product Manager+: own core product areas while balancing technical excellence with business impact; you don&#39;t wait for a spec, you help write it</li>
<li>Own end-to-end delivery of complex features from discovery and definition through to production deployment and customer adoption</li>
<li>Talk to customers regularly, understanding their workflows, pain points, and unarticulated needs to inform what you build</li>
<li>Collaborate with product, design, and commercial teams to shape the roadmap and translate customer insight into robust technical solutions</li>
<li>Leverage AI coding tools like Claude Code to accelerate development workflows</li>
<li>Lead development of the systems that underpin our platform: scalable APIs and backend services that power document processing, data extraction, and agentic workflows; reliable streaming and event-driven architectures for real-time user experiences; frontend experiences that make powerful AI capabilities intuitive and accessible; and integrations with enterprise systems and third-party platforms</li>
<li>Implement data privacy and security by design</li>
<li>Champion engineering best practices: testing, observability, CI/CD, and infrastructure-as-code</li>
<li>Mentor other engineers and help establish engineering and product-thinking best practices</li>
<li>Invest in your own growth by taking on ambitious technical challenges and expanding your expertise</li>
<li>Apply our engineering philosophy: intellectual honesty, effective time management, clear communication, and innovation</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>AI coding assistance tools, Claude Code, Scalable APIs, Backend services, Document processing, Data extraction, Agentic workflows, Reliable streaming, Event-driven architectures, Real-time user experiences, Frontend experiences, Integrations with enterprise systems, Third-party platforms, Data privacy, Security by design, Testing, Observability, CI/CD, Infrastructure-as-code</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Fifth Dimension</Employername>
      <Employerlogo>https://logos.yubhub.co/careers.fifthdimensionai.com.png</Employerlogo>
      <Employerdescription>Fifth Dimension is a London and New York based startup that works with real estate businesses in the US, EU and APAC, automating complex tasks, extracting valuable insights from documents, and empowering professionals to focus on high-impact work.</Employerdescription>
      <Employerwebsite>https://careers.fifthdimensionai.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://careers.fifthdimensionai.com/jobs/7609267-product-engineer-staff-principal-new-york</Applyto>
      <Location>New York</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>2c095439-13b</externalid>
      <Title>Principal Software Engineer</Title>
      <Description><![CDATA[<p>Microsoft Advertising is seeking a Principal Software Engineer to join our Ads Engineering Platform team and advance the core capabilities of our ad-serving infrastructure,the engine that powers advertising across Bing Search, MSN, Microsoft Start, and shopping experiences in the Edge browser.</p>
<p>Our serving stack operates at massive global scale, delivering millions of ad requests per second through a geo-distributed, low-latency system that combines large-scale GPU/CPU inference, real-time bidding, and intelligent ranking pipelines.</p>
<p>This role focuses on advancing the performance, efficiency, and scalability of the next generation of model serving and inference platforms for Ads.</p>
<p>As a senior technical leader, you’ll design and optimize high-performance serving systems and GPU inference frameworks that drive measurable latency improvements and cost efficiency across Microsoft’s ad ecosystem.</p>
<p>You’ll work across the stack, from CUDA kernel tuning and NUMA-aware threading to large-scale distributed orchestration and model deployment for deep learning and LLM workloads.</p>
<p>This is a rare opportunity to shape the architecture of one of the world’s most advanced, mission-critical online serving platforms, collaborating with world-class engineers to deliver innovation at Internet scale.</p>
<p>Microsoft’s mission is to empower every person and every organization on the planet to achieve more.</p>
<p>As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals.</p>
<p>Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.</p>
<p>Starting January 26, 2026, Microsoft AI (MAI) employees who live within a 50-mile commute of a designated Microsoft office in the U.S. or 25-mile commute of a non-U.S., country-specific location are expected to work from the office at least four days per week.</p>
<p>This expectation is subject to local law and may vary by jurisdiction.</p>
<p>Responsibilities:</p>
<ul>
<li>Design and lead the development of large-scale, distributed online serving systems, including GPU-accelerated and CPU-based ranking/inference pipelines, to process millions of ad requests per second with ultra-low latency, high throughput, and solid reliability.</li>
<li>Architect and optimize end-to-end inference infrastructure, including model serving, batching/streaming, caching, scheduling, and resource orchestration across heterogeneous hardware (GPU, CPU, and memory tiers).</li>
<li>Profile and optimize performance across the full stack, from CUDA kernels and GPU pipelines to CPU threads and OS-level scheduling, identifying bottlenecks, tuning latency tails, and improving cost efficiency through advanced profiling and instrumentation.</li>
<li>Own live-site reliability as a DRI: design telemetry, alerting, and fault-tolerance mechanisms; drive rapid diagnosis and mitigation of performance regressions or outages in globally distributed systems.</li>
<li>Collaborate and mentor across teams, driving architecture reviews, enforcing engineering excellence, promoting system-level optimization practices, and mentoring others in deep debugging, profiling, and performance engineering.</li>
</ul>
<p>Qualifications:</p>
<p>Required Qualifications:</p>
<p>Bachelor’s Degree in Computer Science or related technical field AND 6+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR equivalent experience.</p>
<p>Preferred Qualifications:</p>
<p>Master’s Degree in Computer Science or related technical field AND 8+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR Bachelor’s Degree in Computer Science or related technical field AND 12+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR equivalent experience.</p>
<p>Industry experience in advertising or search engine backend systems, such as large-scale ad ranking, real-time bidding (RTB), or relevance-serving infrastructure.</p>
<p>Hands-on experience with real-time data streaming systems (Kafka, Flink, Spark Streaming), feature-store integration, and multi-region deployment for low-latency, globally distributed services.</p>
<p>Familiarity with LLM inference optimization,model sharding, tensor/kv-cache parallelism, paged attention, continuous batching, quantization (AWQ/FP8), and hybrid CPU–GPU orchestration.</p>
<p>Demonstrated success operating large-scale systems with SLA-based capacity forecasting, autoscaling, and performance telemetry; proven leadership in cross-functional architecture initiatives and technical mentorship.</p>
<p>Passion for performance engineering, observability, and deep systems debugging, with a solid drive to push the limits of serving infrastructure for the next generation of ads and AI models.</p>
<p>Deep expertise in GPU inference frameworks such as NVIDIA Triton Inference Server, CUDA, and TensorRT, including hands-on experience implementing custom CUDA kernels, optimizing memory movement (H2D/D2H), overlapping compute and I/O, and maximizing GPU occupancy and kernel fusion for deep learning and LLM workloads.</p>
<p>Solid understanding of model-serving trade-offs,batching vs. streaming, latency vs. throughput, quantization (FP16/BF16/INT8), dynamic batching, continuous model rollout, and adaptive inference scheduling across CPU/GPU tiers.</p>
<p>Proven ability to profile and optimize GPU and system workloads, including tensor/memory alignment, compute–memory balancing, embedding table management, parameter servers, hierarchical caching, and vectorized inference for transformer/LLM architectures.</p>
<p>Expertise in low-level system and OS internals, including multi-threading, process scheduling, NUMA-aware memory allocation, lock-free data structures, context switching, I/O stack tuning (NVMe, RDMA), kernel bypass (DPDK, io_uring), and CPU/GPU affinity optimization for large-scale serving pipelines.</p>
<p>#MicrosoftAI Software Engineering IC5 – The typical base pay range for this role across the U.S. is USD $139,900 – $274,800 per year. A different range applies in specific work locations within the San Francisco Bay Area and the New York City metropolitan area; the base pay range for this role in those locations is USD $188,000 – $304,200 per year.</p>
<p>Certain roles may be eligible for benefits and other compensation.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$139,900 - $274,800 per year</Salaryrange>
      <Skills>C, C++, C#, Java, JavaScript, Python, NVIDIA Triton Inference Server, CUDA, TensorRT, Kafka, Flink, Spark Streaming, GPU inference frameworks, LLM inference optimization, model sharding, tensor/kv-cache parallelism, paged attention, continuous batching, quantization, AWQ/FP8, hybrid CPU–GPU orchestration, SLA-based capacity forecasting, autoscaling, performance telemetry, cross-functional architecture initiatives, technical mentorship, performance engineering, observability, deep systems debugging, low-level system and OS internals, multi-threading, process scheduling, NUMA-aware memory allocation, lock-free data structures, context switching, I/O stack tuning, kernel bypass, CPU/GPU affinity optimization</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Microsoft AI</Employername>
      <Employerlogo>https://logos.yubhub.co/microsoft.ai.png</Employerlogo>
      <Employerdescription>Microsoft AI is an American multinational technology company that develops, manufactures, licenses, and supports a wide range of software products, services, and devices.</Employerdescription>
      <Employerwebsite>https://microsoft.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://microsoft.ai/job/principal-software-engineer-41/</Applyto>
      <Location>Redmond</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>1ac19f03-f0e</externalid>
      <Title>Senior Software Engineer</Title>
      <Description><![CDATA[<p>Help reinvent how advertising outcomes are measured in the AI era. You’ll join Microsoft Advertising in building the always-on measurement foundations that power performance, trust, and optimization at global scale. You’ll work on systems that transform raw conversion events into actionable attribution signals,so advertisers can understand what worked, what didn’t, and how to improve as customer journeys evolve.</p>
<p>As a Senior Software Engineer on the Conversion &amp; Attribution engineering team, you will strengthen the near real-time conversion pipeline, collecting and processing conversion events with low latency, and producing attribution signals used for reporting and campaign optimization. You’ll raise the bar on reliability, observability, and operational excellence for business-critical services, and help evolve our attribution platform toward a more unified, configurable architecture that supports cross-scenario needs.</p>
<p>Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.</p>
<p>Starting January 26, 2026, Microsoft AI (MAI) employees who live within a 50-mile commute of a designated Microsoft office in the U.S. or 25-mile commute of a non-U.S., country-specific location are expected to work from the office at least four days per week. This expectation is subject to local law and may vary by jurisdiction.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$119,800 - $234,700 per year</Salaryrange>
      <Skills>C, C++, C#, Java, JavaScript, Python, streaming/near real-time processing, data pipeline reliability patterns, ads measurement/conversion tracking/attribution systems</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Microsoft</Employername>
      <Employerlogo>https://logos.yubhub.co/microsoft.ai.png</Employerlogo>
      <Employerdescription>Microsoft is a multinational technology company that develops, manufactures, licenses, and supports a wide range of software products, services, and devices.</Employerdescription>
      <Employerwebsite>https://microsoft.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://microsoft.ai/job/senior-software-engineer-146/</Applyto>
      <Location>Redmond</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>3cee3c4e-b4c</externalid>
      <Title>Senior Systems Engineer, OIC (BizTech)</Title>
      <Description><![CDATA[<p>We are seeking a highly motivated Senior Systems Engineer to join our Business Technology organization. As a Senior Systems Engineer, you will be responsible for designing, building, and maintaining Oracle integrations and reports, as well as supporting application infrastructure.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Architect, build, and manage integrations and microservices in Oracle Integration Cloud.</li>
<li>Design, code and execute on projects, continuously improving systems and operations.</li>
<li>Document designs and runbooks, and participate in the full Software Development Life Cycle (SDLC).</li>
<li>Provide guidance and share knowledge with other members in the team.</li>
<li>Support the application infrastructure, including OCI and related systems.</li>
<li>Build and maintain BIP reports on Oracle.</li>
</ul>
<p>Requirements include:</p>
<ul>
<li>Bachelor&#39;s degree in Computer Science or Engineering.</li>
<li>7+ years of experience in programming languages like Java, Python, or TypeScript.</li>
<li>Experience in using different ERP modules and cloud solutions including Financial, Procurement, Planning, and Analytics.</li>
<li>Experience in securing data; understands PGP, SSH, OAuth, HTTPS, SFTP.</li>
<li>Full life cycle implementation experience from requirements gathering/analysis to Go-Live and Post production support.</li>
<li>Expert in GitHub and CI/CD.</li>
<li>Good understanding of messaging infrastructure, data streaming and storage solutions.</li>
<li>Experience with Relational and NoSQL databases including ability to write, review and suggest PL/SQL code optimization.</li>
<li>Good to have experience in OIC, SQL, PL/SQL and Infrastructure.</li>
<li>Nice to have exposure in automations and end-to-end integrations using Oracle ERP Cloud, Oracle PaaS technologies like OIC, VBCS.</li>
<li>Experience in publishing and consuming web services, and their governance and administration.</li>
<li>Nice to have experience on customization related to Redwood pages in Oracle ERP cloud.</li>
<li>Nice to have experience on Oracle BIP, BICC extracts and OTBI reports and its customizations.</li>
<li>Able to tune BIP queries for optimal performance.</li>
<li>Good to have understanding of Oracle Cloud Infrastructure (OCI) and PaaS architecture with working experience on Object storage, Compute instances, ATP and IAM/IDCS.</li>
<li>Ability to manage multiple projects simultaneously.</li>
<li>The candidate should demonstrate a willingness to learn new technologies and possess skills in designing MCP systems or AI solutions.</li>
<li>Eager to take responsibility, accountability and ownership of systems and processes.</li>
<li>Excited to learn and build on more tools the team uses.</li>
<li>Good understanding in Infrastructure as Code using Terraform and Configuration Management using Chef.</li>
<li>Possesses strong verbal and written communication skills.</li>
<li>Exposure to Oracle ERP cloud AI Agents and building ERP MCP servers would be an added advantage.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Java, Python, TypeScript, Oracle Integration Cloud, ERP modules, Cloud solutions, Financial, Procurement, Planning, Analytics, PGP, SSH, OAuth, HTTPS, SFTP, GitHub, CI/CD, Messaging infrastructure, Data streaming, Storage solutions, Relational databases, NoSQL databases, PL/SQL, OIC, SQL, Infrastructure</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Airbnb</Employername>
      <Employerlogo>https://logos.yubhub.co/airbnb.com.png</Employerlogo>
      <Employerdescription>Airbnb is a global online marketplace for short-term vacation rentals. It was founded in 2007 and has since grown to become one of the largest online marketplaces for accommodations.</Employerdescription>
      <Employerwebsite>https://www.airbnb.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/airbnb/jobs/7556182</Applyto>
      <Location>Remote - Bangalore, India</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>88132c81-446</externalid>
      <Title>Staff Software Engineer, Data Platform</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Staff Software Engineer to lead the design and development of core data storage, streaming, caching, and indexing platforms and underlying systems. As a key member of the Platform Engineering team, you&#39;ll drive the architecture, design, implementation, and reliability of our foundational data platforms and systems, working closely with stakeholders and internal customers to understand and refine requirements.</p>
<p>In this role, you&#39;ll collaborate with cross-functional teams to define, design, and deliver new features, proactively identifying opportunities for, and driving improvements to, current programming practices, including process enhancements and tool upgrades. You&#39;ll present technical information to teams and stakeholders, providing guidance and insight on development processes and technologies.</p>
<p>Ideally, you&#39;d have 8+ years of full-time engineering experience, post-graduation, with specialties in back-end systems, specifically related to building large-scale data storage, streaming, and warehousing systems. You&#39;ll need extensive experience in various database technologies, streaming/processing solutions, indexing/caching, and data query engines.</p>
<p>As a Staff Software Engineer, you&#39;ll provide technical leadership, upholding and upleveling engineering standards across the organization and mentoring junior engineers. You&#39;ll possess excellent communication and collaboration skills, and the ability to translate complex technical concepts to non-technical stakeholders.</p>
<p>Experience working fluently with standard containerization &amp; deployment technologies like Kubernetes and various public cloud offerings is essential. You&#39;ll also need extensive experience in software development and a deep understanding of distributed systems, cloud platforms, and data systems.</p>
<p>You&#39;ll drive cross-functional collaboration and communication at an organizational or broader level, and be excited to work with AI technologies.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$252,000-$315,000 USD</Salaryrange>
      <Skills>database technologies, streaming/processing solutions, indexing/caching, data query engines, containerization &amp; deployment technologies, public cloud offerings, software development, distributed systems, cloud platforms, data systems, performance tuning, cost optimizations, data lifecycle strategy, data privacy, hyper-growth startups, AI technologies</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions, providing high-quality data and full-stack technologies to power leading models.</Employerdescription>
      <Employerwebsite>https://scale.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4649903005</Applyto>
      <Location>San Francisco, CA; New York, NY</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>460d00aa-b48</externalid>
      <Title>Senior / Staff+ Software Engineer, Voice Platform</Title>
      <Description><![CDATA[<p>About the role</p>
<p>We&#39;re building the infrastructure that lets people talk to Claude: real-time, bidirectional voice conversations that feel natural, responsive, and safe. This is foundational work for how millions of people will interact with AI.</p>
<p>The Voice Platform team designs and operates the serving systems, streaming pipelines, and APIs that bring Anthropic&#39;s audio models from research into production across Claude.ai, our mobile apps, and the Anthropic API. You&#39;ll work at the intersection of real-time media, low-latency inference, and distributed systems, building infrastructure where every millisecond of latency is felt by the user.</p>
<p>We partner closely with the Audio research team, who train the speech understanding and generation models, and with product teams shipping voice experiences to users. Your job is to make those models fast, reliable, and delightful to talk to at scale.</p>
<p>Responsibilities</p>
<ul>
<li>Design and build the real-time streaming infrastructure that powers voice conversations with Claude: ingesting microphone audio, orchestrating model inference, and streaming synthesized speech back with minimal latency</li>
<li>Build low-latency serving systems for speech models, optimizing time-to-first-audio and end-to-end conversational responsiveness</li>
<li>Develop the public and internal APIs that expose voice capabilities to Claude.ai, mobile clients, and third-party developers</li>
<li>Own the audio transport layer (codecs, jitter buffers, adaptive bitrate, packet loss recovery) so conversations stay smooth across unreliable networks</li>
<li>Build observability and quality-measurement systems for voice: latency distributions, audio quality metrics, interruption handling, and turn-taking accuracy</li>
<li>Partner with Audio research to move new model architectures from experiment to production, and feed real-world performance data back into research</li>
<li>Collaborate with mobile and product engineering on client-side audio capture, playback, and the end-to-end user experience</li>
</ul>
<p>You may be a good fit if you</p>
<ul>
<li>Have 6+ years of experience building distributed systems, real-time infrastructure, or platform services at scale</li>
<li>Have shipped production systems where latency is measured in tens of milliseconds and users notice when you miss</li>
<li>Are comfortable working across the stack, from transport protocols and serving infrastructure up to the APIs product teams build on</li>
<li>Are results-oriented, with a bias toward flexibility and impact</li>
<li>Pick up slack, even if it goes outside your job description</li>
<li>Enjoy pair programming (we love to pair!)</li>
<li>Care about the societal impacts of voice AI and want to help shape how these systems are developed responsibly</li>
<li>Are comfortable with ambiguity: voice is a fast-moving space, and you&#39;ll help define the architecture as we learn what works</li>
</ul>
<p>Strong candidates may also have experience with</p>
<ul>
<li>Real-time media protocols and stacks: WebRTC, RTP, gRPC bidirectional streaming, or WebSockets at scale</li>
<li>Audio engineering fundamentals: codecs (Opus, AAC), voice activity detection, echo cancellation, jitter buffering, or audio DSP</li>
<li>Low-latency ML inference serving, streaming model outputs, or GPU-based serving infrastructure</li>
<li>Telephony, live streaming, video conferencing, or voice assistant platforms</li>
<li>Mobile audio pipelines on iOS (AVAudioEngine, AudioUnits) or Android (Oboe, AAudio)</li>
<li>Working alongside ML researchers to productionize models; speech experience is a plus but not required</li>
</ul>
<p>Representative projects</p>
<ul>
<li>Driving time-to-first-audio below human perceptual thresholds by co-designing the serving pipeline with the Audio research team</li>
<li>Building a streaming inference orchestrator that interleaves speech recognition, LLM reasoning, and speech synthesis with overlapping execution</li>
<li>Designing the voice mode API surface for the Anthropic API so developers can build their own voice agents on Claude</li>
<li>Implementing graceful barge-in and interruption handling so users can cut Claude off mid-sentence naturally</li>
<li>Instrumenting end-to-end audio quality metrics and building dashboards that catch regressions before users do</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$320,000-$485,000 USD</Salaryrange>
      <Skills>Real-time media protocols and stacks, Audio engineering fundamentals, Low-latency ML inference serving, Distributed systems, Streaming pipelines, APIs, WebRTC, RTP, gRPC bidirectional streaming, WebSockets, Opus, AAC, Voice activity detection, Echo cancellation, Jitter buffering, Audio DSP, GPU-based serving infrastructure, Telephony, Live streaming, Video conferencing, Voice assistant platforms, Mobile audio pipelines on iOS, Android, Working alongside ML researchers</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a technology company that creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5172245008</Applyto>
      <Location>San Francisco, CA | New York City, NY | Seattle, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>cee7bd81-c81</externalid>
      <Title>UI Software Engineer, Claude.ai Consumer Product</Title>
      <Description><![CDATA[<p>We&#39;re looking for a talented engineer to join the team that builds the consumer web app , the interfaces, interactions, and moments that turn Claude from a capable model into a product people genuinely enjoy using.</p>
<p>This is a product engineering role first and foremost. You&#39;ll work closely with designers and product managers to bring features from concept to shipped experience, obsessing over the details that make the difference: how something feels when you first land on it, how smoothly a new interaction flows, how an interface holds up under real-world use.</p>
<p>The pace is fast, the product is evolving quickly, and the opportunity to have a visible, direct impact on how millions of people use AI is real. If you love building consumer products and care about getting the details right, this could be a great fit.</p>
<p>Responsibilities:</p>
<ul>
<li>Build and ship user-facing features for claude.ai&#39;s web experience, working closely with designers to bring detailed, polished interactions to life</li>
<li>Translate design intent into high-quality implementations, paying close attention to accessibility and the small details that add up to a great product feel</li>
<li>Build responsive applications that work well across devices and screen sizes, and actively care about the performance characteristics that shape how the product feels to use; latency, responsiveness, and reliability are first-class concerns, not afterthoughts</li>
<li>Collaborate tightly with product managers to understand user needs, shape feature scope, and make informed tradeoffs as you build</li>
<li>Iterate quickly based on user feedback and internal testing, improving the experience on a continuous basis</li>
<li>Work with the UI Platform team to consume shared components and tooling effectively, and flag gaps or pain points that would help the broader team move faster</li>
<li>Help maintain a high bar for code quality and consistency within the consumer product codebase</li>
</ul>
<p>You may be a good fit if you:</p>
<ul>
<li>Have 5+ years of experience building consumer-facing web products, with a strong emphasis on UI quality and user experience</li>
<li>Are proficient in React, Next.js, and TypeScript, and have experience with Node.js on the backend side of the stack</li>
<li>Genuinely care about the user experience: not just how something looks, but how fast it loads, how reliably it works, and how it feels across devices. You think about latency and responsiveness the same way you think about design</li>
<li>Collaborate well with designers and product managers, and enjoy the iterative process of turning a design into something that works beautifully in the browser</li>
<li>Are comfortable working in a fast-moving environment where priorities shift and shipping quickly matters</li>
<li>Pick up slack, even if it goes outside your job description</li>
</ul>
<p>Strong candidates may also have experience with:</p>
<ul>
<li>Accessibility best practices and building inclusive user interfaces</li>
<li>Performance optimization for consumer web apps: profiling, reducing bundle size, improving rendering performance, and understanding where latency comes from end-to-end</li>
<li>Designing and implementing responsive layouts that work well across screen sizes and devices</li>
<li>Working on products with real-time or streaming interactions (chat interfaces, live updates, etc.)</li>
<li>User research or usability testing, or a track record of incorporating user feedback into product decisions</li>
<li>Working on AI/ML products or in fast-moving consumer product environments</li>
</ul>
<p>Candidates need not have:</p>
<ul>
<li>100% of the skills needed to perform the job</li>
<li>Formal certifications or education credentials</li>
</ul>
<p>Annual compensation range for this role is $320,000-$405,000 USD.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$320,000-$405,000 USD</Salaryrange>
      <Skills>React, Next.js, TypeScript, Node.js, Accessibility best practices, Performance optimization, Responsive layout design, Real-time or streaming interactions, User research or usability testing, AI/ML products, Fast-moving consumer product environments</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5026097008</Applyto>
      <Location>San Francisco, CA | New York City, NY | Seattle, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>ad5c420d-b2d</externalid>
      <Title>Senior Solutions Architect - Lakebase</Title>
      <Description><![CDATA[<p>The Solutions Architect (Lakebase) team executes on Databricks&#39; strategic Product Operating Model that provides enhanced focus on earlier stage, highly prioritised product lines in order to establish product market fit, and set the course for rapid revenue growth.</p>
<p>They are part of a global go-to-market team mandate, though individually will cover a specific, local region. Clients may span across one or more business units and verticals.</p>
<p>By working in partnership with direct account teams, they will jointly engage clients, foster the necessary relationships, position in-depth the specific product line, so as to provide compelling reasons for clients to adopt and grow the usage of the given product.</p>
<p>The Solutions Architect (Lakebase) is paired with an Account Executive aligned to a given product line with specific targets accordingly. Together, they will devise and implement a strategy across their assigned set of accounts, develop presentations, demos and other assets and deliver them such that clients make an informed decision as they decide to adopt the product-line in a meaningful way.</p>
<p>The Lakebase product-line requires the following core technical competencies:</p>
<ul>
<li>10+ years of transactional database (OLTP) expertise across engineering, product development, administration, and pre-sales, with a proven track record of designing and delivering client-facing solutions.</li>
<li>Credibility in influencing OLTP products with the market insight needed to shape and prioritise roadmap capabilities.</li>
<li>Experience architecting solutions that integrate transactional data systems within broader Big Data, Lakehouse, and AI ecosystems.</li>
<li>Infrastructure, platform and administration expertise around disaster recovery, high availability, backup and recovery, scale-out methods, identity and security management, migrations (vendor-to-vendor, on-prem to cloud)</li>
</ul>
<p>Impact</p>
<p>Collaborate with GTM leadership and account teams to design and execute high-impact engagement strategies across your territory.</p>
<p>As a trusted advisor, serve as an expert Solutions Architect and &quot;champion,&quot; building technical credibility with stakeholders to drive product adoption and vision.</p>
<p>Enable clients at scale through workshops and developing customer-facing collateral that helps increase technical knowledge and thought leadership.</p>
<p>Influence product roadmap by translating field-derived, data-driven insights into strategic recommendations for Product and Engineering teams.</p>
<p>Handle the most complex technical challenges in this product line by acting as the tier-3 escalation point for the field, ensuring customer success in mission-critical environments.</p>
<p>Competencies &amp; Responsibilities</p>
<ul>
<li>6+ years in a customer-facing, pre-sales or consulting role influencing technical executives, driving high-level data strategy and product adoption.</li>
<li>Proven ability to co-plan large territories with Account Executives and operate in a highly coordinated, cross-functional effort across GTM and R&amp;D teams.</li>
<li>Experience collaborating with Global System Integrators (GSIs) and third-party consulting organisations to drive customer outcomes.</li>
<li>Proficient in programming, debugging, and problem-solving using SQL and Python.</li>
<li>Hands-on experience building solutions within major public cloud environments (AWS, Azure, or GCP).</li>
<li>Broad experience (in two or more) and understanding across the fields of data engineering, data warehousing, AI, ML, governance, transactional systems, app development, and streaming.</li>
<li>Undergraduate degree (or higher) in a technical field such as Computer Science, Applied Mathematics, Engineering or similar.</li>
<li>A track record of driving complex projects to completion in fast-paced, customer-facing environments.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Transactional database (OLTP), Cloud infrastructure, Data engineering, Data warehousing, AI, ML, Governance, Transactional systems, App development, Streaming</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified and democratized data, analytics, and AI platform.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8407181002</Applyto>
      <Location>London, United Kingdom</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>c458163f-94a</externalid>
      <Title>Member of Technical Staff – X Core Product</Title>
      <Description><![CDATA[<p><strong>About the Role:</strong></p>
<p>As a Software Engineer for X Product/Platform, you&#39;ll join a thirty-person team responsible for building and scaling X. You will be tasked with independently owning significant parts of the system end-to-end: from intuitive user interfaces to robust backend services, data infrastructure, and deep AI integrations.</p>
<p><strong>Responsibilities:</strong></p>
<ul>
<li>Develop backend services, APIs, and data models to support high-volume, multi-user environments.</li>
<li>Work with iOS, Android &amp; Web client engineers to ship products.</li>
<li>Design robust infrastructure and microservices for payments, transactions, growth, monetization, and engagement across platforms.</li>
<li>Build and maintain fullstack features, including user dashboards, personalized experiences, content delivery, interactive tools, assessments, and real-time analytics.</li>
<li>Lead architecture, scalability, and reliability decisions for high-concurrency, low-latency systems.</li>
<li>Uphold engineering excellence via testing, monitoring, deployment, and secure data handling.</li>
</ul>
<p><strong>Basic Qualifications:</strong></p>
<ul>
<li>Proficiency in distributed systems for high-scale, low-latency environments; languages like Rust, Go, Python &amp; Java, and high volume streaming systems.</li>
<li>2+ years of experience working on large scale consumer applications.</li>
</ul>
<p><strong>Compensation and Benefits:</strong></p>
<p>$180,000 - $440,000 USD</p>
<p>Base salary is just one part of our total rewards package at xAI, which also includes equity, comprehensive medical, vision, and dental coverage, access to a 401(k) retirement plan, short &amp; long-term disability insurance, life insurance, and various other discounts and perks.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$180,000 - $440,000 USD</Salaryrange>
      <Skills>distributed systems, Rust, Go, Python, Java, high volume streaming systems, rapid prototyping, user-centric design, AI solutions</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>xAI</Employername>
      <Employerlogo>https://logos.yubhub.co/xai.com.png</Employerlogo>
      <Employerdescription>xAI creates AI systems to understand the universe and aid humanity in its pursuit of knowledge.</Employerdescription>
      <Employerwebsite>https://www.xai.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/xai/jobs/5063929007</Applyto>
      <Location>Palo Alto, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>d5f768d1-df6</externalid>
      <Title>Full-Stack Engineer, AI Data Platform</Title>
      <Description><![CDATA[<p>Shape the Future of AI</p>
<p>At Labelbox, we&#39;re building the critical infrastructure that powers breakthrough AI models at leading research labs and enterprises. Since 2018, we&#39;ve been pioneering data-centric approaches that are fundamental to AI development, and our work becomes even more essential as AI capabilities expand exponentially.</p>
<p>We&#39;re the only company offering three integrated solutions for frontier AI development:</p>
<ul>
<li>Enterprise Platform &amp; Tools: Advanced annotation tools, workflow automation, and quality control systems that enable teams to produce high-quality training data at scale</li>
<li>Frontier Data Labeling Service: Specialized data labeling through Alignerr, leveraging subject matter experts for next-generation AI models</li>
<li>Expert Marketplace: Connecting AI teams with highly skilled annotators and domain experts for flexible scaling</li>
</ul>
<p>Why Join Us</p>
<ul>
<li>High-Impact Environment: We operate like an early-stage startup, focusing on impact over process. You&#39;ll take on expanded responsibilities quickly, with career growth directly tied to your contributions.</li>
<li>Technical Excellence: Work at the cutting edge of AI development, collaborating with industry leaders and shaping the future of artificial intelligence.</li>
<li>Innovation at Speed: We celebrate those who take ownership, move fast, and deliver impact. Our environment rewards high agency and rapid execution.</li>
<li>Continuous Growth: Every role requires continuous learning and evolution. You&#39;ll be surrounded by curious minds solving complex problems at the frontier of AI.</li>
<li>Clear Ownership: You&#39;ll know exactly what you&#39;re responsible for and have the autonomy to execute. We empower people to drive results through clear ownership and metrics.</li>
</ul>
<p>Role Overview</p>
<p>We’re looking for a Full-Stack AI Engineer to join our team, where you’ll build the next generation of tools for developing, evaluating, and training state-of-the-art AI systems. You will own features end to end, from user-facing experiences and APIs to backend services, data models, and infrastructure.</p>
<p>You’ll be at the heart of our applied AI efforts, with a particular focus on human-in-the-loop systems used to generate high-quality training data for Large Language Models (LLMs) and AI agents. This includes building a platform that enables us and our customers to create and evaluate data, as well as systems that leverage LLMs to assist with reviewing, scoring, and improving human submissions.</p>
<p>Your Impact</p>
<ul>
<li><strong>Own End-to-End Product Features:</strong> Design, build, and ship complete workflows spanning frontend UI, APIs, backend services, databases, and production infrastructure.</li>
<li><strong>Enable Human-in-the-Loop AI Training:</strong> Build systems that allow humans to efficiently create, review, and curate high-quality training and evaluation data used in AI model development.</li>
<li><strong>Support RLHF and Preference Data Workflows:</strong> Design and implement tooling that supports RLHF-style pipelines, including task generation, human review, scoring, aggregation, and dataset versioning.</li>
<li><strong>Leverage LLMs in the Review Loop:</strong> Build systems that use LLMs to assist human reviewers, such as automated checks, critiques, ranking suggestions, or quality signals, while maintaining human oversight.</li>
<li><strong>Advance AI Evaluation:</strong> Design and implement evaluation frameworks and interactive tools for LLMs and AI agents across multiple data modalities (text, images, audio, video).</li>
<li><strong>Create Intuitive, Reviewer-Focused Interfaces:</strong> Build thoughtful, efficient user interfaces (e.g., in React) optimized for high-throughput human review, quality control, and operational workflows.</li>
<li><strong>Architect Scalable Data &amp; Service Layers:</strong> Design APIs, backend services, and data schemas that support large-scale data creation, review, and iteration with strong guarantees around correctness and traceability.</li>
<li><strong>Solve Ambiguous, Real-World Problems:</strong> Translate loosely defined operational and research needs into practical, scalable, end-to-end systems.</li>
<li><strong>Ensure System Reliability:</strong> Participate in on-call rotations to monitor, troubleshoot, and resolve issues across the full stack.</li>
<li><strong>Elevate the Team:</strong> Improve engineering practices, development processes, and documentation. Share knowledge through technical writing and design discussions.</li>
</ul>
<p>What You Bring</p>
<ul>
<li>Bachelor’s degree in Computer Science, Data Engineering, or a related field.</li>
<li>2+ years of experience in a software or machine learning engineering role.</li>
<li>A proactive, product-focused mindset and a high degree of ownership, with a passion for building solutions that empower users.</li>
<li>Experience with frontend frameworks like React/Redux and backend technologies like Python, Java, and GraphQL; familiarity with NodeJS and NestJS is a plus.</li>
<li>Knowledge of designing and managing scalable database systems, including relational databases (e.g., PostgreSQL, MySQL), NoSQL stores (e.g., MongoDB, Cassandra), and cloud-native solutions (e.g., Google Spanner, AWS DynamoDB).</li>
<li>Familiarity with cloud infrastructure like GCP (GCS, PubSub) and containerization (Kubernetes) is a plus.</li>
<li>Excellent communication and collaboration skills.</li>
<li>High proficiency in leveraging AI tools for daily development (e.g., Cursor, GitHub Copilot).</li>
<li>Comfort and enthusiasm for working in a fast-paced, agile environment where rapid problem-solving is key.</li>
</ul>
<p>Bonus Points</p>
<ul>
<li>Experience building tools for AI/ML applications, particularly for data annotation, monitoring, or agent evaluation.</li>
<li>Familiarity with data infrastructure components such as data pipelines, streaming systems, and storage architectures (e.g., Cloud Buckets, Key-Value Stores).</li>
<li>Previous experience with search engines (e.g., ElasticSearch).</li>
<li>Experience in optimizing databases for performance (e.g., schema design, indexing, query tuning) and integrating them with broader data workflows.</li>
</ul>
<p>Engineering at Labelbox</p>
<p>At Labelbox Engineering, we&#39;re building a comprehensive platform that powers the future of AI development. Our team combines deep technical expertise with a passion for innovation, working at the intersection of AI infrastructure, data systems, and user experience. We believe in pushing technical boundaries while maintaining high standards of code quality and system reliability. Our engineering culture emphasizes autonomous decision-making, rapid iteration, and collaborative problem-solving. We&#39;ve cultivated an environment where engineers can take ownership of significant challenges, experiment with cutting-edge technologies, and see their solutions directly impact how leading AI labs and enterprises build the next generation of AI systems.</p>
<p>Our Technology Stack</p>
<p>Our engineering team works with a modern tech stack designed for scalability, performance, and developer efficiency:</p>
<ul>
<li>Frontend: React.js with Redux, TypeScript</li>
<li>Backend: Node.js, TypeScript, Python, some Java &amp; Kotlin</li>
<li>APIs: GraphQL</li>
<li>Cloud &amp; Infrastructure: Google Cloud Platform (GCP), Kubernetes</li>
<li>Databases: MySQL, Spanner, PostgreSQL</li>
<li>Queueing / Streaming: Kafka, PubSub</li>
</ul>
<p>Labelbox strives to ensure pay parity across the organization and to discuss compensation transparently. The expected annual base salary range for United States-based candidates is below. This range does not include any potential equity packages or additional benefits. Exact compensation varies based on a variety of factors, including skills and competencies, experience, and geographical location.</p>
<p>Annual base salary range: $130,000-$200,000 USD</p>
<p>Life at Labelbox</p>
<ul>
<li>Location: Join our dedicated tech hubs in San Francisco or Wrocław, Poland</li>
<li>Work Style: Hybrid model with 2 days per week in office, combining collaboration and flexibility</li>
<li>Environment: Fast-paced and high-intensity, perfect for ambitious individuals who thrive on ownership and quick decision-making</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$130,000-$200,000 USD</Salaryrange>
      <Skills>React, Redux, Node.js, TypeScript, Python, Java, GraphQL, MySQL, PostgreSQL, Spanner, Kafka, PubSub, GCP, Kubernetes, Cloud computing, Containerization, Database management, Cloud infrastructure, API design, Backend services, Data models, Infrastructure, AI tools, Cursor, GitHub Copilot, Data annotation, Monitoring, Agent evaluation, Data infrastructure, Data pipelines, Streaming systems, Storage architectures, Search engines, ElasticSearch, Database optimization, Schema design, Indexing, Query tuning</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Labelbox</Employername>
      <Employerlogo>https://logos.yubhub.co/labelbox.com.png</Employerlogo>
      <Employerdescription>Labelbox is a company that provides data-centric approaches for AI development.</Employerdescription>
      <Employerwebsite>https://www.labelbox.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/labelbox/jobs/5019254007</Applyto>
      <Location>San Francisco Bay Area</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>e355a4a3-c92</externalid>
      <Title>Senior Database Reliability Engineer (DBRE) - PostgreSQL</Title>
      <Description><![CDATA[<p>We are looking for a highly skilled Database Reliability Engineer (DBRE) with deep expertise in PostgreSQL at scale and solid experience with MySQL. In this role, you will design, operationalize, and optimize the data persistence layer that powers our large-scale, mission-critical systems.</p>
<p>You will work closely with SRE, Platform, and Engineering teams to ensure performance, reliability, automation, and operational excellence across our database environment. This is a hands-on engineering role focused on building resilient data infrastructure, not just administering it.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Design, implement, and operate highly available PostgreSQL clusters (physical replication, logical replication, sharding/partitioning, failover automation).</li>
<li>Optimise query performance, indexing strategies, schema design, and storage engines.</li>
<li>Perform capacity planning, growth forecasting, and workload modelling.</li>
<li>Own high-availability strategies including automatic failover, multi-AZ/multi-region setups, and disaster recovery.</li>
</ul>
<p><strong>Automation &amp; Tooling</strong></p>
<ul>
<li>Develop automation for tasks including provisioning, configuration, backups, failovers, vacuum tuning, and schema management, using tools such as Terraform, Ansible, Kubernetes Operators, or custom tooling.</li>
<li>Build monitoring, alerting, and self-healing systems for PostgreSQL and MySQL.</li>
</ul>
<p><strong>Operations &amp; Incident Response</strong></p>
<ul>
<li>Lead response during database incidents: performance regressions, replication lag, deadlocks, bloat issues, storage failures, etc.</li>
<li>Conduct root-cause analysis and implement permanent fixes.</li>
</ul>
<p><strong>Cross-Functional Collaboration</strong></p>
<ul>
<li>Partner with software engineers to review SQL, optimise schemas, and ensure efficient use of PostgreSQL features.</li>
<li>Provide guidance on database-related design patterns, migrations, version upgrades, and best practices.</li>
</ul>
<p><strong>Required Qualifications</strong></p>
<ul>
<li>4+ years of hands-on PostgreSQL experience in high-volume, distributed, or large-scale production environments.</li>
<li>Strong knowledge of PostgreSQL internals (WAL, MVCC, bloat/vacuum tuning, query planner, indexing, logical replication).</li>
<li>Production experience with MySQL (InnoDB internals, replication, performance tuning).</li>
<li>Advanced SQL and strong understanding of schema design and query optimisation.</li>
<li>Experience with Linux systems, networking fundamentals, and systems troubleshooting.</li>
<li>Experience building automation with Go or Python.</li>
<li>Production experience with monitoring tools (Prometheus, Grafana, Datadog, PMM, pg_stat_statements, etc.).</li>
<li>Hands-on experience with cloud environments (AWS or GCP).</li>
</ul>
<p><strong>Preferred/Bonus Qualifications</strong></p>
<ul>
<li>Experience with PgBouncer, HAProxy, or other connection-pooling/load-balancing layers.</li>
<li>Exposure to event streaming (Kafka, Debezium) and change data capture.</li>
<li>Experience supporting 24/7 production environments with on-call rotation.</li>
<li>Contributions to open-source PostgreSQL ecosystem.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$152,000-$228,000 USD</Salaryrange>
      <Skills>PostgreSQL, MySQL, SQL, Linux, Networking, Automation, Cloud Environments, Monitoring Tools, PgBouncer, HAProxy, Event Streaming, Change Data Capture</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Okta</Employername>
      <Employerlogo>https://logos.yubhub.co/okta.com.png</Employerlogo>
      <Employerdescription>Okta secures AI by building the trusted, neutral infrastructure that enables organisations to safely embrace this new era.</Employerdescription>
      <Employerwebsite>https://www.okta.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/okta/jobs/7437947</Applyto>
      <Location>San Francisco, California</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>7e28478b-c37</externalid>
      <Title>Research, Audio Expertise</Title>
      <Description><![CDATA[<p>We&#39;re seeking a researcher to advance the frontier of audio capabilities. You&#39;ll explore how audio models enable more natural and efficient communication/collaboration, preserving more information and capturing user intent.</p>
<p>This is a highly collaborative role. You&#39;ll work closely across pre-training, post-training, and product with world-class researchers, infrastructure engineers, and designers.</p>
<p>As a researcher in this role, you&#39;ll be expected to:</p>
<ul>
<li>Own research projects on audio training, low-latency inference, and conversational responsiveness.</li>
<li>Design and train large-scale models that natively support audio input and output.</li>
<li>Investigate scaling behaviour such as how data, model size, and compute affect capability and efficiency.</li>
<li>Build and maintain audio data pipelines, including preprocessing, filtering, segmentation, and alignment for training and evaluation.</li>
<li>Collaborate with data and infrastructure teams to scale audio training efficiently across distributed systems.</li>
<li>Publish and present research that moves the entire community forward.</li>
<li>Share code, datasets, and insights that accelerate progress across industry and academia.</li>
</ul>
<p>This role blends fundamental research and practical engineering; we do not distinguish between the two internally. You will be expected both to write high-performance code and to read technical reports.</p>
<p>It&#39;s an excellent fit for someone who enjoys both deep theoretical exploration and hands-on experimentation, and who wants to shape the foundations of how AI learns.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$350,000 - $475,000 USD</Salaryrange>
      <Skills>Python, PyTorch, TensorFlow, JAX, Machine Learning, Deep Learning, Distributed Compute Environments, Probability, Statistics, Real-time Inference, Streaming Architectures, Optimization for Low Latency, Large-Scale Audio or Multimodal Models, Speech, Audio, Voice, or Similar Areas</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Thinking Machines Lab</Employername>
      <Employerlogo>https://logos.yubhub.co/thinkingmachines.ai.png</Employerlogo>
      <Employerdescription>Thinking Machines Lab is a research organisation that focuses on advancing collaborative general intelligence through AI products and open-source projects.</Employerdescription>
      <Employerwebsite>https://thinkingmachines.ai/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/thinkingmachines/jobs/5002212008</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>477d343e-e37</externalid>
      <Title>Customer Success Architect</Title>
      <Description><![CDATA[<p>About Mixpanel</p>
<p>Mixpanel turns data clarity into innovation. Trusted by more than 29,000 companies, including Workday, Pinterest, LG, and Rakuten Viber, Mixpanel’s AI-first digital analytics help teams accelerate adoption, improve retention, and ship with confidence. Powering this is an industry-leading platform that combines product and web analytics, session replay, experimentation, feature flags, and metric trees.</p>
<p>About the Customer Success Team:</p>
<p>Mixpanel’s Customer Success &amp; Solutions Engineering teams are analytics consultants who embed themselves within our enterprise customer teams to drive our customers’ business outcomes. We work with prospects and customers throughout the customer journey to understand what drives value and serve as the technical counterpart to our Sales organization to deliver on that value.</p>
<p>You will partner closely with Account Executives, Account Managers, Product, Engineering, and Support to successfully roll out self-serve analytics within our customers’ organizations, help the customer manage change, execute on technical projects and services that delight our customers, and ultimately drive ROI on the customer’s Mixpanel investment.</p>
<p>About the Role:</p>
<p>As a CSA, you will partner with customers throughout the customer journey to understand what drives value, from pre-sales, where you run proofs of concept to demonstrate quick time to value, to post-sales onboarding and implementation, where you set customers up for long-term success with scalable implementation and data governance best practices. Throughout the entire customer lifecycle, you will work to understand how analytics can drive business value for your customers and will consult them on how to maximize the value of Mixpanel, including managing change during Mixpanel’s rollout, defining and achieving ROI, and identifying areas of improvement in their current usage of analytics.</p>
<p>For large enterprise customers, post onboarding, you will continue to work alongside the Account Managers to drive data trust and product adoption for 100+ end-user teams through a change management rollout approach.</p>
<p>Responsibilities:</p>
<ul>
<li>Serve as a trusted technical advisor for prospects/customers to provide strategic consultation on data architecture, governance, instrumentation, and business outcomes</li>
<li>Effectively communicate at most levels of the customer’s organization to influence business outcomes via Mixpanel, design and execute a comprehensive analytics strategy, and unblock technical and organizational roadblocks</li>
<li>Own the customer’s success with Mixpanel, documenting and delivering ROI to the customer throughout their journey to transform their business with self-serve analytics</li>
<li>Own onboarding and data health for your assigned customers/projects, including ongoing enhancements to their data quality and overall tech stack integration</li>
<li>Engage with customers’ engineering, product management, and marketing teams to handle technical onboarding, optimize Mixpanel deployments, and improve data trust</li>
<li>Deliver a variety of technical services ranging from data architecture consultations to adoption and change management best practices</li>
<li>Leverage modern data architecture expertise to create scalable data governance practices and data trust for our customers, including data optimization and re-implementation projects</li>
<li>Successfully execute on success outcomes whilst balancing project timelines, scope creep, and unanticipated issues</li>
<li>Bridge the technical-business gap with your customers, working with business stakeholders to define a strategic vision for Mixpanel and then working with the right business and technical contacts to execute that vision</li>
<li>Collaborate with our technical and solutions partners as needed on data optimization and onboarding projects</li>
<li>Be a technical sponsor for internal engagements with Mixpanel product and engineering teams to prioritize product and systems tasks from clients</li>
</ul>
<p>We&#39;re Looking For Someone Who Has</p>
<ul>
<li>3 to 5 years of experience consulting on defining and delivering ROI through new tool implementations</li>
<li>Experience working with Director-level members of the customer organization to define a strategic vision and successfully leveraging those members to deliver on that vision</li>
<li>The ability to communicate with stakeholders at most levels of an organization, from talking with developers about the ins and outs of an API to talking to a Director of Data Science/Product Management about organizational efficiency</li>
<li>The ability to manage complex projects with assorted client stakeholders, working across teams and departments to execute real change</li>
<li>A demonstrated record of success in customer success, client-facing professional services, consulting, or technical project management roles</li>
<li>Excellent written, analytical, and communication skills</li>
<li>Strong process and/or project delivery discipline</li>
<li>Eagerness to learn new technologies and adapt to evolving customer needs</li>
</ul>
<p>We&#39;d Be Extra Excited For Someone Who Has</p>
<ul>
<li>Experience in data querying, modeling, and transformation in at least one core tool, such as SQL, dbt, Python, Business Intelligence tools, or Product Analytics tools</li>
<li>Familiarity with databases and cloud data warehouses like Google Cloud, Amazon Redshift, Microsoft Azure, Snowflake, Databricks, etc.</li>
<li>Familiarity with product analytics implementation methods like SDKs, Customer Data Platforms (CDPs), Event Streaming, Reverse ETL, etc.</li>
<li>Familiarity with analytics best practices across business segments and verticals</li>
</ul>
<p>Benefits and Perks</p>
<ul>
<li>Comprehensive Medical, Vision, and Dental Care</li>
<li>Mental Wellness Benefit</li>
<li>Generous Vacation Policy &amp; Additional Company Holidays</li>
<li>Enhanced Parental Leave</li>
<li>Volunteer Time Off</li>
<li>Additional US Benefits: Pre-Tax Benefits including 401(K), Wellness Benefit, Holiday Break</li>
</ul>
<p>Culture Values</p>
<ul>
<li><strong>Make Bold Bets:</strong> We choose courageous action over comfortable progress.</li>
<li><strong>Innovate with Insight:</strong> We tackle decisions with rigor and judgment, combining data, experience, and collective wisdom to drive powerful outcomes.</li>
<li><strong>One Team:</strong> We collaborate across boundaries to achieve far greater impact than any of us could accomplish alone.</li>
<li><strong>Candor with Connection:</strong> We build meaningful relationships that enable honest feedback and direct conversations.</li>
<li><strong>Champion the Customer:</strong> We seek to deeply understand our customers’ needs, ensuring their success is our north star.</li>
<li><strong>Powerful Simplicity:</strong> We find elegant solutions to complex problems, making sophisticated things accessible.</li>
</ul>
<p>Why choose Mixpanel?</p>
<p>We’re a leader in analytics with over 9,000 customers and $277M raised from prominent investors like Andreessen Horowitz, Sequoia, YC, and, most recently, Bain Capital.</p>
<p>Mixpanel’s pioneering event-based data analytics platform offers a powerful yet simple solution for companies to understand user behaviors and easily track overarching company success metrics.</p>
<p>Our accomplished teams continuously facilitate our expansion by tackling the ever-evolving challenges tied to scaling, reliability, design, and service.</p>
<p>Choosing to work at Mixpanel means you’ll be helping the world’s most innovative companies learn from their data so they can make better decisions.</p>
<p>Mixpanel is an equal opportunity employer supporting workforce diversity.</p>
<p>At Mixpanel, we are focused on the things that really matter: our people, our customers, and our partners, out of a recognition that those relationships are the most valuable assets we have.</p>
<p>We actively encourage women, people with disabilities, veterans, underrepresented minorities, and LGBTQ+ people to apply.</p>
<p>We do not discriminate on the basis of race, religion, color, national origin, gender, gender identity or expression, sexual orientation, age, marital status, or any other protected characteristic.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>data architecture, governance, instrumentation, business outcomes, data querying, modeling, transforming, SQL, dbt, Python, Business Intelligence tools, Product Analytics tools, databases, cloud data warehouses, Google Cloud, Amazon Redshift, Microsoft Azure, Snowflake, Databricks, SDKs, Customer Data Platforms, Event Streaming, Reverse ETL</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Mixpanel</Employername>
      <Employerlogo>https://logos.yubhub.co/mixpanel.com.png</Employerlogo>
      <Employerdescription>Mixpanel is a leading provider of digital analytics software, serving over 29,000 companies worldwide.</Employerdescription>
      <Employerwebsite>https://mixpanel.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/mixpanel/jobs/7506821</Applyto>
      <Location>Bengaluru, India (Hybrid)</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>b3cf0ff9-4c6</externalid>
      <Title>Support Engineer II</Title>
      <Description><![CDATA[<p>About Mixpanel</p>
<p>Mixpanel turns data clarity into innovation. Trusted by more than 29,000 companies, including Workday, Pinterest, LG, and Rakuten Viber, Mixpanel’s AI-first digital analytics help teams accelerate adoption, improve retention, and ship with confidence.</p>
<p>Powering this is an industry-leading platform that combines product and web analytics, session replay, experimentation, feature flags, and metric trees. Mixpanel delivers insights that customers trust.</p>
<p>Visit mixpanel.com to learn more.</p>
<p>About The Support Team</p>
<p>Mixpanel Support is a team of talented problem-solvers from diverse backgrounds. We care deeply about helping our customers be successful and enabling them to get value from their data.</p>
<p>We are located all over the world in San Francisco, Barcelona, London, and Singapore...</p>
<p>About The Role</p>
<p>The right candidate is an avid learner, an advocate for customers, and a collaborative teammate. The main responsibility of a Support Engineer is to help users solve technical challenges and use Mixpanel to make impactful product decisions.</p>
<p>We’ve had team members focus on developing their technical skills to join the product and engineering teams, hone their customer-facing skills to become customer success managers or sales engineers, and take on leadership roles in the Support organization.</p>
<p>Responsibilities</p>
<p>The core responsibility of a Support Engineer is to support our customers at every turn in the Mixpanel journey by providing answers to product questions, sharing best practices, and debugging technical issues.</p>
<p>You&#39;ll also develop your technical skills, collaborate with our Product team to improve our product, learn product analytics, and mentor new team members.</p>
<ul>
<li>Become a Mixpanel product expert - you will help users understand our reports and features, help them use our APIs and SDKs, share best practices, and resolve account issues</li>
<li>Respond to customer inquiries via Zendesk email, chat, Slack, and phone calls</li>
<li>Investigate and document bugs and feature requests to share with our Product and Engineering teams</li>
<li>Provide feedback on internal support processes, product functionality, and customer education resources to improve the customer experience</li>
<li>Shape the product by working closely with PMs, engineers, and designers to turn customer learnings into product improvements</li>
</ul>
<p>We&#39;re Looking For Someone Who Has</p>
<ul>
<li>Experience providing customer-facing SaaS support (in customer support, professional services, technical account management, or similar)</li>
<li>Ability to communicate technical concepts effectively in a clear, friendly writing style</li>
<li>Excellent problem-solving and analytical skills</li>
<li>Programming experience, plus an understanding of web &amp; mobile technologies and of interacting with APIs</li>
<li>Experience debugging and collaborating with engineering to resolve complex technical issues, especially with JavaScript, Python, or mobile technologies</li>
<li>Ability to be resourceful and resilient when faced with ambiguity and new challenges</li>
<li>Dedication to developing expertise in a complex and constantly evolving product</li>
<li>Interest and aptitude to develop technical skills and learn new technologies</li>
<li>Experience providing SLA-based support and/or dedicated support to strategic customers</li>
<li>Fluency in Hebrew and English</li>
</ul>
<p>Bonus Points</p>
<ul>
<li>Experience with Mixpanel or other analytics tools</li>
<li>Familiar with databases and cloud data warehouses like Google Cloud, Amazon Redshift, Microsoft Azure, Snowflake, Databricks, etc.</li>
<li>Familiar with product analytics implementation methods like SDKs, Customer Data Platforms (CDPs), Event Streaming, Reverse ETL, etc.</li>
</ul>
<p>Benefits and Perks</p>
<ul>
<li>Comprehensive Medical, Vision, and Dental Care</li>
<li>Mental Wellness Benefit</li>
<li>Generous Vacation Policy &amp; Additional Company Holidays</li>
<li>Enhanced Parental Leave</li>
<li>Volunteer Time Off</li>
<li>Additional US Benefits: Pre-Tax Benefits including 401(K), Wellness Benefit, Holiday Break</li>
</ul>
<p>Culture Values</p>
<ul>
<li>Make Bold Bets: We choose courageous action over comfortable progress.</li>
<li>Innovate with Insight: We tackle decisions with rigor and judgment - combining data, experience, and collective wisdom to drive powerful outcomes.</li>
<li>One Team: We collaborate across boundaries to achieve far greater impact than any of us could accomplish alone.</li>
<li>Candor with Connection: We build meaningful relationships that enable honest feedback and direct conversations.</li>
<li>Champion the Customer: We seek to deeply understand our customers’ needs, ensuring their success is our north star.</li>
</ul>
<p>Why choose Mixpanel?</p>
<p>We’re a leader in analytics with over 9,000 customers and $277M raised from prominent investors like Andreessen Horowitz, Sequoia, YC, and, most recently, Bain Capital.</p>
<p>Mixpanel’s pioneering event-based data analytics platform offers a powerful yet simple solution for companies to understand user behaviors and easily track overarching company success metrics.</p>
<p>Our accomplished teams continuously facilitate our expansion by tackling the ever-evolving challenges tied to scaling, reliability, design, and service.</p>
<p>Choosing to work at Mixpanel means you’ll be helping the world’s most innovative companies learn from their data so they can make better decisions.</p>
<p>Mixpanel is an equal opportunity employer supporting workforce diversity.</p>
<p>At Mixpanel, we are focused on the things that really matter: our people, our customers, and our partners - out of a recognition that those relationships are the most valuable assets we have.</p>
<p>We actively encourage women, people with disabilities, veterans, underrepresented minorities, and LGBTQ+ people to apply.</p>
<p>We do not discriminate on the basis of race, religion, color, national origin, gender, gender identity or expression, sexual orientation, age, marital status, veteran status, or disability status.</p>
<p>Pursuant to the San Francisco Fair Chance Ordinance or other similar laws that may be applicable, we will consider for employment qualified applicants with arrest and conviction records.</p>
<p>We’ve immersed ourselves in our Culture and Values as our guiding principles for the impact we want to have and the future we are building.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel></Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>customer-facing SaaS support, technical concepts, problem-solving, programming experience, web &amp; mobile technologies, APIs, debugging, collaboration, SLA-based support, dedicated support, Hebrew, English, Mixpanel, analytics tools, databases, cloud data warehouses, product analytics implementation methods, SDKs, Customer Data Platforms, Event Streaming, Reverse ETL</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Mixpanel</Employername>
      <Employerlogo>https://logos.yubhub.co/mixpanel.com.png</Employerlogo>
      <Employerdescription>Mixpanel is a digital analytics platform that helps teams accelerate adoption, improve retention, and ship with confidence. It has over 29,000 customers, including Workday, Pinterest, LG, and Rakuten Viber.</Employerdescription>
      <Employerwebsite>https://mixpanel.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/mixpanel/jobs/7650541</Applyto>
      <Location>Tel Aviv, Israel (Hybrid)</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>53024247-9d6</externalid>
      <Title>Senior Solutions Architect - Lakewatch</Title>
      <Description><![CDATA[<p>We are seeking a Senior Solutions Architect to join our Lakewatch team in London. As a Senior Solutions Architect, you will provide technical leadership to guide strategic customers to successful implementations on big data projects, ranging from architectural design to data engineering to model deployment.</p>
<p>Collaborate with GTM leadership and account teams to design and execute high-impact engagement strategies across your territory, driving Lakewatch adoption from initial data offload through full SIEM augmentation or replacement.</p>
<p>As a trusted advisor, serve as an expert Solutions Architect building technical credibility with CISOs, security architects, SOC leadership, and security analysts to drive product adoption and vision.</p>
<p>Enable clients at scale through workshops, POC execution, and developing customer-facing collateral that increases technical knowledge and demonstrates the value of an open agentic SIEM architecture.</p>
<p>Influence product roadmap by translating field-derived, data-driven insights into strategic recommendations for Product and Engineering teams.</p>
<p>Handle the most complex technical challenges in this product line by acting as the tier-3 escalation point for the field, ensuring customer success in mission-critical security environments.</p>
<p>Establish and refine the sales qualification and POC intake process, ensuring well-scoped engagements that maximize customer success and minimize friction for R&amp;D.</p>
<p>The ideal candidate will have 5+ years of experience in a customer-facing, pre-sales or consulting role influencing technical executives, driving high-level security strategy and product adoption.</p>
<p>Experience with design and implementation of data and AI applications in cybersecurity, including anomaly detection, behavioral analytics, and agentic AI workflows for triage and investigation.</p>
<p>Proficient in programming, debugging, and problem-solving using SQL and Python and with AI tools.</p>
<p>Experience collaborating with Global System Integrators (GSIs) and third-party consulting organizations to drive customer outcomes in cybersecurity.</p>
<p>Hands-on experience building solutions within major public cloud environments (AWS, Azure, or GCP), with an understanding of cloud-native security logging and monitoring.</p>
<p>Deep experience in security operations, with broad familiarity across one or more of the following: data engineering, data warehousing, AI/ML for security, data governance, and streaming.</p>
<p>Undergraduate degree (or higher) in a technical field such as Computer Science, Cybersecurity, Applied Mathematics, Engineering or similar.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>cybersecurity engineering, security operations, security architecture, design and implementation of data and AI applications, anomaly detection, behavioral analytics, agentic AI workflows, SQL, Python, AI tools, cloud-native security logging and monitoring, data engineering, data warehousing, AI/ML for security, data governance, streaming</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that unifies and democratizes data, analytics, and AI for over 10,000 organizations worldwide.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8493140002</Applyto>
      <Location>London, United Kingdom</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>d9b7d5ae-6bf</externalid>
      <Title>Software Engineer, Distributed Systems</Title>
      <Description><![CDATA[<p>We&#39;re growing our team of passionate creatives and builders on a mission to make design accessible to all. Our platform helps teams bring ideas to life, whether you&#39;re brainstorming, creating a prototype, translating designs into code, or iterating with AI. From idea to product, Figma empowers teams to streamline workflows, move faster, and work together in real time from anywhere in the world.</p>
<p>As a Software Engineer on our Infrastructure team, you’ll help design, build, and operate the systems that power our real-time collaborative design tools used by millions of people worldwide. We’re scaling fast, and we’re looking for experienced distributed systems engineers across a variety of teams. Whether you’re passionate about storage, compute orchestration, developer tooling, networking, or real-time data systems, this role offers an opportunity to shape the technical foundation of one of the most beloved design platforms in the world.</p>
<p>Responsibilities:</p>
<ul>
<li>Design, build, and maintain scalable and reliable infrastructure systems that support product innovation and user collaboration at scale.</li>
<li>Architect and evolve distributed systems including storage platforms, streaming infrastructure, and compute orchestration.</li>
<li>Improve developer experience by building internal platforms, CI/CD systems, build tools, and APIs.</li>
<li>Collaborate across product and infrastructure teams to design secure, maintainable, and performant systems.</li>
<li>Participate in shaping platform strategy, roadmaps, and engineering best practices across the organization.</li>
<li>Debug and resolve complex production issues that span services and layers of the stack.</li>
<li>Mentor engineers and foster a culture of collaboration, inclusivity, and technical excellence.</li>
</ul>
<p>Requirements:</p>
<ul>
<li>5+ years of Software Engineering experience, specifically in backend or infrastructure engineering.</li>
<li>Deep understanding of distributed systems concepts such as sharding, replication, consistency, and eventual convergence.</li>
<li>Experience with cloud-native environments (AWS, GCP, or Azure), infrastructure-as-code, and container orchestration.</li>
<li>Proficiency in languages such as Go, TypeScript, Python, Rust, or Ruby.</li>
<li>Strong system design skills and a track record of architecting resilient production systems.</li>
<li>Excellent communication skills, with experience collaborating across teams and mentoring others.</li>
</ul>
</ul>
<p>Preferred Qualifications:</p>
<ul>
<li>Experience scaling storage platforms (e.g., Postgres, Redis, S3, DynamoDB) or operating streaming systems like Kafka.</li>
<li>Background in traffic management, DDoS mitigation, or service mesh technologies (e.g., Envoy, Istio).</li>
<li>A history of developing complex, real-time distributed systems at scale.</li>
<li>A passion for building developer productivity tools, including development environments, CI/CD pipelines, and build systems.</li>
<li>Experience with evolving large-scale, shared developer platforms to improve reliability and developer velocity.</li>
<li>Strong problem-solving skills and a bias for action, especially when tackling high-impact, gritty challenges.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$153,000-$376,000 USD</Salaryrange>
      <Skills>distributed systems, cloud-native environments, infrastructure-as-code, container orchestration, Go, TypeScript, Python, Rust, Ruby, system design, resilient production systems, storage platforms, streaming infrastructure, compute orchestration, developer tooling, networking, real-time data systems, traffic management, DDoS mitigation, service mesh technologies, complex distributed systems, developer productivity tools</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Figma</Employername>
      <Employerlogo>https://logos.yubhub.co/figma.com.png</Employerlogo>
      <Employerdescription>Figma is a design platform that helps teams bring ideas to life through real-time collaboration.</Employerdescription>
      <Employerwebsite>https://www.figma.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/figma/jobs/5552549004</Applyto>
      <Location>San Francisco, CA • New York, NY • United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>879783fa-e08</externalid>
      <Title>Sr. Product Manager, Data Engineering</Title>
      <Description><![CDATA[<p>At Databricks, we are enabling data teams to solve the world&#39;s toughest problems by building and running the world&#39;s best data and AI infrastructure platform. Data Engineering is foundational and among the largest-scale workloads on the Databricks Data Intelligence Platform. We are reinventing Data Engineering with Lakeflow - a unified product and experience for simple data ingestion, declarative data transformation, and real-time streaming.</p>
<p>In this role, you will lead product management for a core Lakeflow product area. You will own and drive all aspects of product management including vision, strategy, roadmap, execution, and go-to-market. In addition, you will partner closely with various Databricks product teams to enable Data Engineering across the overall Databricks product portfolio, including data science, data warehousing, business intelligence, and machine learning products.</p>
<p>The impact you will have:</p>
<ul>
<li>Lead product management for one of the fastest-growing products and businesses at Databricks.</li>
<li>Make company-wide impact by driving Data Engineering across the Databricks product portfolio.</li>
<li>Develop and deepen expertise in Data Engineering.</li>
<li>Define, shape, and drive the future of data processing, data applications, and data pipelines.</li>
<li>Own the full life cycle of product development, from ideation to requirements, development, pricing, launch, and go-to-market.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$115,400-$204,200 USD</Salaryrange>
      <Skills>product management, data engineering, Lakeflow, data ingestion, declarative data transformation, real-time streaming, product vision, product strategy, roadmap development, execution, go-to-market</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks builds and runs the world&apos;s best data and AI infrastructure platform.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/6322654002</Applyto>
      <Location>San Francisco, California</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>7bde3fd8-78f</externalid>
      <Title>Principal VM Engineer – Workers Runtime Team</Title>
      <Description><![CDATA[<p>About Us</p>
<p>At Cloudflare, we are on a mission to help build a better Internet. Today the company runs one of the world&#39;s largest networks that powers millions of websites and other Internet properties for customers ranging from individual bloggers to SMBs to Fortune 500 companies.</p>
<p>We protect and accelerate any Internet application online without adding hardware, installing software, or changing a line of code. Internet properties powered by Cloudflare all have web traffic routed through its intelligent global network, which gets smarter with every request. As a result, they see significant improvement in performance and a decrease in spam and other attacks.</p>
<p>We were named to Entrepreneur Magazine&#39;s Top Company Cultures list and ranked among the World&#39;s Most Innovative Companies by Fast Company.</p>
<p><strong>Available Locations:</strong></p>
<p>Remote in US and Europe</p>
<p><strong>Principal VM Engineer – Workers Runtime Team</strong></p>
<p>About the Department</p>
<p>The Emerging Technologies &amp; Incubation (ETI) team at Cloudflare builds and launches bold, new products that push the boundaries of what&#39;s possible on the internet. By leveraging Cloudflare&#39;s massive network and edge computing capabilities, we solve complex problems at a scale few others can achieve.</p>
<p>About the Team</p>
<p>The Workers Runtime team is responsible for the execution environment that runs customer code at the edge. We focus on performance, security, and scalability, enhancing JavaScript APIs, WebAssembly support, and system optimizations to prepare for the next 10x scale increase. Our runtime operates in a resource-constrained, highly secure environment, requiring careful management of memory, CPU, and I/O.</p>
<p>What You&#39;ll Do</p>
<p>We are looking for a VM Engineer to help improve and embed the V8 virtual machine in our runtime. You&#39;ll work on low-level optimizations, performance enhancements, garbage collection, and language support to ensure our platform remains cutting-edge. This role is ideal for engineers who love tackling high-performance, low-latency challenges in distributed environments.</p>
<p>Key Responsibilities</p>
<ul>
<li>Optimize and embed the V8 VM within Cloudflare&#39;s Workers Runtime.</li>
<li>Improve JavaScript execution performance and WebAssembly integration.</li>
<li>Debug, optimize, and enhance low-latency, real-time environments.</li>
<li>Ensure the reliability and efficiency of large-scale, Linux-based distributed systems.</li>
<li>Collaborate with engineers across runtime, security, and networking teams to push the boundaries of edge computing.</li>
</ul>
<p>What We&#39;re Looking For</p>
<ul>
<li>6+ years of professional experience with C++.</li>
<li>4+ years of hands-on VM/compiler experience, ideally with V8.</li>
<li>Strong knowledge of computer science fundamentals, including data structures, algorithms, and system architecture.</li>
<li>Experience with low-latency environments (e.g., game streaming, trading systems, high-performance computing).</li>
<li>Operational mindset – you build scalable, production-ready solutions.</li>
<li>Deep understanding of web technologies (HTTP, JavaScript, WASM).</li>
</ul>
<p>Bonus Points</p>
<ul>
<li>Experience working with Rust in high-performance distributed systems.</li>
<li>Familiarity with serverless platforms and cloud computing.</li>
<li>Deep knowledge of JS engine internals (V8, SpiderMonkey, JavaScriptCore).</li>
<li>Experience with standalone WebAssembly runtimes (Wasmtime, Wasmer, Lucet).</li>
<li>Strong expertise in Linux/UNIX systems, kernels, and networking.</li>
<li>Contributions to large open-source projects.</li>
</ul>
<p>This is an exciting opportunity to work on cutting-edge compiler and runtime technologies at an unmatched scale. If you&#39;re passionate about high-performance computing, distributed systems, and compilers, we’d love to hear from you!</p>
<p>What Makes Cloudflare Special?</p>
<p>We’re not just a highly ambitious, large-scale technology company. We’re a highly ambitious, large-scale technology company with a soul. Fundamental to our mission to help build a better Internet is protecting the free and open Internet.</p>
<p>Project Galileo: Since 2014, we&#39;ve equipped more than 2,400 journalism and civil society organizations in 111 countries with powerful tools to defend themselves against attacks that would otherwise censor their work - technology already used by Cloudflare’s enterprise customers - at no cost.</p>
<p>Athenian Project: In 2017, we created the Athenian Project to ensure that state and local governments have the highest level of protection and reliability for free, so that their constituents have access to election information and voter registration. Since the project began, we&#39;ve provided services to more than 425 local government election websites in 33 states.</p>
<p>1.1.1.1: We released 1.1.1.1 to help fix the foundation of the Internet by building a faster, more secure and privacy-centric public DNS resolver. This is available publicly for everyone to use - it is the first consumer-focused service Cloudflare has ever released.</p>
<p>Here’s the deal - we don’t store client IP addresses. Never, ever. We will continue to abide by our privacy commitment and ensure that no user data is sold to advertisers or used to target consumers.</p>
<p>Sound like something you’d like to be a part of? We’d love to hear from you!</p>
<p>This position may require access to information protected under U.S. export control laws, including the U.S. Export Administration Regulations. Please note that any offer of employment may be conditioned on your authorization to receive software or technology controlled under these U.S. export laws without sponsorship for an export license.</p>
<p>Cloudflare is proud to be an equal opportunity employer. We are committed to providing equal employment opportunity for all people and place great value in both diversity and inclusiveness. All qualified applicants will be considered for employment without regard to their, or any other person&#39;s, perceived or actual race, color, religion, sex, gender, gender identity, gender expression, sexual orientation, national origin, ancestry, citizenship, age, physical or mental disability, medical condition, family care status, or any other basis protected by law. We are an AA/Veterans/Disabled Employer. Cloudflare provides reasonable accommodations to qualified individuals with disabilities. Please tell us if you require a reasonable accommodation to apply for a job. Examples of reasonable accommodations include, but are not limited to, changing the application process, providing documents in an alternate format, using a sign language interpreter, or using specialized equipment. If you require a reasonable accommodation to apply for a job, please contact us via e-mail at hr@cloudflare.com or via mail at 101 Townsend St. San Francisco, CA 94107.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>C++, VM/compiler experience, V8, computer science fundamentals, data structures, algorithms, system architecture, low-latency environments, game streaming, trading systems, high-performance computing, web technologies, HTTP, JavaScript, WASM, Rust, serverless platforms, cloud computing, JS engine internals, WebAssembly runtimes, Linux/UNIX systems, kernels, networking, open-source projects</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Cloudflare</Employername>
      <Employerlogo>https://logos.yubhub.co/cloudflare.com.png</Employerlogo>
      <Employerdescription>Cloudflare is a technology company that runs one of the world&apos;s largest networks, powering millions of websites and other Internet properties.</Employerdescription>
      <Employerwebsite>https://www.cloudflare.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/cloudflare/jobs/6718312</Applyto>
      <Location>Distributed</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>ce9f3d34-c8a</externalid>
      <Title>Senior / Staff+ Software Engineer, Voice Platform</Title>
      <Description><![CDATA[<p>We&#39;re building the infrastructure that lets people talk to Claude: real-time, bidirectional voice conversations that feel natural, responsive, and safe. This is foundational work for how millions of people will interact with AI.</p>
<p>The Voice Platform team designs and operates the serving systems, streaming pipelines, and APIs that bring Anthropic&#39;s audio models from research into production across Claude.ai, our mobile apps, and the Anthropic API. You&#39;ll work at the intersection of real-time media, low-latency inference, and distributed systems, building infrastructure where every millisecond of latency is felt by the user.</p>
<p>We partner closely with the Audio research team, who train the speech understanding and generation models, and with product teams shipping voice experiences to users. Your job is to make those models fast, reliable, and delightful to talk to at scale.</p>
<p>Responsibilities:</p>
<ul>
<li>Design and build the real-time streaming infrastructure that powers voice conversations with Claude: ingesting microphone audio, orchestrating model inference, and streaming synthesized speech back with minimal latency</li>
<li>Build low-latency serving systems for speech models, optimizing time-to-first-audio and end-to-end conversational responsiveness</li>
<li>Develop the public and internal APIs that expose voice capabilities to Claude.ai, mobile clients, and third-party developers</li>
<li>Own the audio transport layer (codecs, jitter buffers, adaptive bitrate, packet loss recovery) so conversations stay smooth across unreliable networks</li>
<li>Build observability and quality-measurement systems for voice: latency distributions, audio quality metrics, interruption handling, and turn-taking accuracy</li>
<li>Partner with Audio research to move new model architectures from experiment to production, and feed real-world performance data back into research</li>
<li>Collaborate with mobile and product engineering on client-side audio capture, playback, and the end-to-end user experience</li>
</ul>
</ul>
<p>You may be a good fit if you</p>
<ul>
<li>Have 6+ years of experience building distributed systems, real-time infrastructure, or platform services at scale</li>
<li>Have shipped production systems where latency is measured in tens of milliseconds and users notice when you miss</li>
<li>Are comfortable working across the stack, from transport protocols and serving infrastructure up to the APIs product teams build on</li>
<li>Are results-oriented, with a bias toward flexibility and impact</li>
<li>Pick up slack, even if it goes outside your job description</li>
<li>Enjoy pair programming (we love to pair!)</li>
<li>Care about the societal impacts of voice AI and want to help shape how these systems are developed responsibly</li>
<li>Are comfortable with ambiguity; voice is a fast-moving space, and you&#39;ll help define the architecture as we learn what works</li>
</ul>
</ul>
<p>Strong candidates may also have experience with</p>
<ul>
<li>Real-time media protocols and stacks: WebRTC, RTP, gRPC bidirectional streaming, or WebSockets at scale</li>
<li>Audio engineering fundamentals: codecs (Opus, AAC), voice activity detection, echo cancellation, jitter buffering, or audio DSP</li>
<li>Low-latency ML inference serving, streaming model outputs, or GPU-based serving infrastructure</li>
<li>Telephony, live streaming, video conferencing, or voice assistant platforms</li>
<li>Mobile audio pipelines on iOS (AVAudioEngine, AudioUnits) or Android (Oboe, AAudio)</li>
<li>Working alongside ML researchers to productionize models; speech experience is a plus but not required</li>
</ul>
</ul>
<p>Representative projects</p>
<ul>
<li>Driving time-to-first-audio below human perceptual thresholds by co-designing the serving pipeline with the Audio research team</li>
<li>Building a streaming inference orchestrator that interleaves speech recognition, LLM reasoning, and speech synthesis with overlapping execution</li>
<li>Designing the voice mode API surface for the Anthropic API so developers can build their own voice agents on Claude</li>
<li>Implementing graceful barge-in and interruption handling so users can cut Claude off mid-sentence naturally</li>
<li>Instrumenting end-to-end audio quality metrics and building dashboards that catch regressions before users do</li>
</ul>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$320,000-$485,000 USD</Salaryrange>
      <Skills>Real-time media protocols and stacks, Audio engineering fundamentals, Low-latency ML inference serving, Distributed systems, API design, WebRTC, RTP, gRPC bidirectional streaming, WebSockets, Opus, AAC, voice activity detection, echo cancellation, jitter buffering, audio DSP, GPU-based serving infrastructure, telephony, live streaming, video conferencing, voice assistant platforms, mobile audio pipelines on iOS, Android, pair programming</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a technology company that aims to create reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5172245008</Applyto>
      <Location>San Francisco, CA | New York City, NY | Seattle, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>c76014f6-557</externalid>
      <Title>Senior Software Engineer, Backend (AI Agent Runtime)</Title>
      <Description><![CDATA[<p>Build real-time AI agent infrastructure: Design and operate the stateful, low-latency runtime that powers voice and chat AI agents , from LLM streaming and conversation state management to graceful recovery and multi-channel support.</p>
<p>Solve distributed systems problems: Own session management across scaled-out workers , including affinity, checkpointing, crash recovery, and consistency under concurrent access.</p>
<p>Build a function execution platform: Own a serverless-style runtime where customers deploy custom logic , build orchestration, container lifecycle, autoscaling, and versioned rollouts.</p>
<p>Own developer experience and test infrastructure: Build CLI tools, local development environments, and test execution frameworks that let engineers iterate quickly and ship with confidence.</p>
<p>Raise the bar on production quality: Drive observability, incident response, and engineering best practices across the team.</p>
<p>We&#39;re looking for a senior software engineer with 5+ years of experience in infrastructure, platform, or systems work. You should have strong Python and Go skills, as well as a deep understanding of distributed systems, consistency, fault tolerance, state management, and concurrency.</p>
<p>Experience with Kubernetes and cloud-native infrastructure is also required. You should be able to build developer-facing tooling, such as CLIs, SDKs, local dev environments, or internal platforms.</p>
<p>A high bar for code quality, thorough testing, thoughtful code review, and sustainable engineering practices is essential. You should be comfortable operating what you build, including on-call rotations, incident response, and production ownership.</p>
<p>An AI-native workflow is a must: you should actively use LLMs and AI-assisted tools in your daily development.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, Go, Distributed systems, Kubernetes, Cloud-native infrastructure, Developer-facing tooling, Code quality, Testing, Code review, Sustainable engineering practices, LLMs, AI-assisted tools, Real-time voice or streaming media systems, Hands-on with LLM integration, Serverless or function-as-a-service platforms, Workflow engines, Infrastructure-as-code and GitOps workflows</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Cresta</Employername>
      <Employerlogo>https://logos.yubhub.co/cresta.ai.png</Employerlogo>
      <Employerdescription>Cresta is a company that combines AI and human intelligence to help contact centers discover customer insights and behavioural best practices.</Employerdescription>
      <Employerwebsite>https://www.cresta.ai/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/cresta/jobs/4675293008</Applyto>
      <Location>Canada (Remote)</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>e8c62558-752</externalid>
      <Title>Software Engineer, Android</Title>
      <Description><![CDATA[<p>We&#39;re looking for seasoned Android engineers to join our Claude mobile team and help build apps that harness the transformative power of advanced language models. Our mission is to unlock the potential of advanced AI through elegant, user-friendly mobile applications that put unprecedented capabilities at users&#39; fingertips. You will work with a talented team of engineers, researchers, design and Product teams to design and implement key components of our products.</p>
<p>Responsibilities:</p>
<ul>
<li>Architect and implement cutting-edge Android applications</li>
<li>Develop novel solutions leveraging AI technologies</li>
<li>Optimize performance at all levels of the mobile stack</li>
<li>Champion best practices in mobile development</li>
<li>Bring obsessive attention to detail to the app experience</li>
<li>Contribute to backend systems as needed</li>
</ul>
<p>You might be a good fit if you have:</p>
<ul>
<li>7+ years of Android development experience and proficiency with the latest mobile platform capabilities and intricacies</li>
<li>Expertise in Kotlin, Jetpack Compose, Android SDK and the Android ecosystem</li>
<li>Practical experience with full-stack development and comfort working with backend technologies</li>
<li>0 to 1 experience building successful products in early stage environments</li>
<li>A proven track record of shipping impactful, high-adoption mobile applications</li>
<li>Experience building applications that utilize modern ML/AI technology</li>
<li>Excellent communication and mentorship skills</li>
<li>Thrive in a fast-paced, collaborative environment and enjoy working closely with cross-functional partners and teammates</li>
</ul>
<p>Strong candidates may have:</p>
<ul>
<li>Experience with 3D graphics, visual effects, and audio and video streaming on mobile</li>
<li>A vision for the future of human-machine interaction and a drive to make that vision a reality</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$320,000-$405,000 USD</Salaryrange>
      <Skills>Android development, Kotlin, Jetpack Compose, Android SDK, full-stack development, backend technologies, 3D graphics, visual effects, audio and video streaming, ML/AI technology</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation that creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/4899511008</Applyto>
      <Location>San Francisco, CA | New York City, NY | Seattle, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>7c2b1fd1-6ca</externalid>
      <Title>Staff Software Engineer- AI Workload Orchestration</Title>
      <Description><![CDATA[<p>As a Staff Software Engineer on the AI Workload Orchestration Platform team, you will act as a technical leader for CoreWeave&#39;s Kubernetes-native orchestration strategy for AI workloads.</p>
<p>You will define and evolve the architecture for how AI workloads are admitted, scheduled, and governed across large GPU clusters using frameworks such as Kueue, Volcano, and Ray. This platform serves as a strategic complement to SUNK (Slurm on Kubernetes) and underpins both training and inference workloads across the CoreWeave cloud.</p>
<p>This role requires strong systems thinking, cross-team influence, and a long-term view of platform scalability, reliability, and developer experience.</p>
<p>You will:</p>
<ul>
<li>Own the technical vision and architecture for major portions of the AI Workload Orchestration Platform</li>
<li>Design scalable, reliable orchestration primitives for AI workloads across multiple schedulers and runtimes</li>
<li>Lead cross-team architecture reviews and drive alignment across infrastructure, CKS, and managed inference teams</li>
<li>Define platform standards for reliability, observability, capacity management, and operational excellence</li>
<li>Identify and resolve systemic performance, scalability, and fairness issues across large GPU clusters</li>
<li>Mentor senior engineers and grow technical leadership within the organization</li>
<li>Represent the platform in technical reviews and influence broader CoreWeave platform strategy</li>
</ul>
<p>You will lead technical initiatives across teams without direct authority, own mission-critical systems at scale, and bring a strong operational mindset. You will also have the opportunity to mentor senior engineers and grow technical leadership within the organization.</p>
<p>If you&#39;re a strong systems thinker with a passion for AI and cloud computing, this could be the perfect opportunity for you to join a team of innovators and help shape the future of AI workload orchestration.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$188,000 to $275,000</Salaryrange>
      <Skills>Go, Kubernetes, Distributed systems, Cloud platforms, Kueue, Volcano, Ray, AI infrastructure, ML platforms, HPC, Large-scale batch and streaming systems, Scheduling concepts, Fairness, Pre-emption, Quota management, Multi-tenant isolation</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave is a cloud computing company that provides a platform for building and scaling AI applications.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4647586006</Applyto>
      <Location>Sunnyvale, CA / Bellevue, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>6b0282a9-9ee</externalid>
      <Title>Staff Software Engineer, Observability</Title>
      <Description><![CDATA[<p>We are seeking a highly experienced Staff Software Engineer to lead our efforts in building, maintaining, and optimizing highly scalable, reliable, and secure systems. The Observability team is responsible for deploying and maintaining critical infrastructure at CoreWeave including our logging, tracing, and metrics platforms as well as the pipelines that feed them.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Lead and mentor engineers, fostering a culture of collaboration and continuous improvement.</li>
<li>Scale logging, tracing, and metrics platforms to support a global datacenter footprint.</li>
<li>Develop and refine monitoring and alerting to enhance system reliability.</li>
<li>Advise engineers across CoreWeave on optimal usage of Observability systems.</li>
<li>Automate interactions with CoreWeave&#39;s Compute Infrastructure layer.</li>
<li>Manage production clusters and ensure development teams follow best practices for deployments.</li>
</ul>
<p>Required Qualifications:</p>
<ul>
<li>7+ years of experience in Software Engineering, Site Reliability Engineering, DevOps, or a related field.</li>
<li>Deep expertise across all observability pillars using tools like ClickHouse, Elastic, Loki, Victoria Metrics, Prometheus, Thanos and/or Grafana.</li>
<li>Expertise in Kubernetes, containerization, and microservices architectures.</li>
<li>Proven track record of leading incident management and post-mortem analysis.</li>
<li>Excellent problem-solving, analytical, and communication skills.</li>
</ul>
<p>Preferred Qualifications:</p>
<ul>
<li>Experience running and scaling observability tools as a cloud provider.</li>
<li>Experience administering large-scale kubernetes clusters.</li>
<li>Deep understanding of data-streaming systems.</li>
</ul>
<p>The base salary range for this role is $188,000 to $250,000.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$188,000 to $250,000</Salaryrange>
      <Skills>ClickHouse, Elastic, Loki, Victoria Metrics, Prometheus, Thanos, Grafana, Kubernetes, containerization, microservices architectures, Experience running and scaling observability tools as a cloud provider, Experience administering large-scale kubernetes clusters, Deep understanding of data-streaming systems</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave is a cloud platform provider for AI, founded in 2017 and listed on Nasdaq since March 2025.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4577361006</Applyto>
      <Location>Livingston, NJ / New York, NY / Sunnyvale, CA / Bellevue, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>9be280f4-cbc</externalid>
      <Title>Software Engineer, Data Infrastructure</Title>
      <Description><![CDATA[<p>We&#39;re looking for an engineer to join our small, high-impact team responsible for architecting and scaling the core infrastructure behind distributed training pipelines, multimodal data catalogs, and intelligent processing systems that operate over petabytes of data.</p>
<p>As a software engineer on our data infrastructure team, you&#39;ll design, build, and operate scalable, fault-tolerant infrastructure for LLM Research: distributed compute, data orchestration, and storage across modalities. You&#39;ll develop high-throughput systems for data ingestion, processing, and transformation, including training data catalogs, deduplication, quality checks, and search. You&#39;ll also build systems for traceability, reproducibility, and robust quality control at every stage of the data lifecycle.</p>
<p>You&#39;ll collaborate with research teams to unlock new features, improve data quality, and accelerate training cycles. You&#39;ll implement and maintain monitoring and alerting to support platform reliability and performance.</p>
<p>If you&#39;re excited by distributed systems, large-scale data mining, open-source tools like Spark, Kafka, Beam, Ray, and Delta Lake, and enjoy building from the ground up, we&#39;d love to hear from you.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel></Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$350,000 - $475,000 USD</Salaryrange>
      <Skills>backend language (Python or Rust), distributed compute frameworks (Apache Spark or Ray), cloud infrastructure, data lake architectures, batch and streaming pipelines, Kafka, dbt, Terraform, Airflow, web crawler, deduplication, data mining, search, file formats and storage systems</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Thinking Machines Lab</Employername>
      <Employerlogo>https://logos.yubhub.co/thinkingmachines.ai.png</Employerlogo>
      <Employerdescription>Thinking Machines Lab is a research organisation that focuses on developing collaborative general intelligence.</Employerdescription>
      <Employerwebsite>https://thinkingmachines.ai/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/thinkingmachines/jobs/5013919008</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>dff28c0f-d33</externalid>
      <Title>Senior Software Engineer, Workers Runtime</Title>
      <Description><![CDATA[<p>About Us</p>
<p>At Cloudflare, we are on a mission to help build a better Internet. Today the company runs one of the world’s largest networks that powers millions of websites and other Internet properties for customers ranging from individual bloggers to SMBs to Fortune 500 companies.</p>
<p>We protect and accelerate any Internet application online without adding hardware, installing software, or changing a line of code. Internet properties powered by Cloudflare all have web traffic routed through its intelligent global network, which gets smarter with every request. As a result, they see significant improvement in performance and a decrease in spam and other attacks.</p>
<p><strong>Available Locations:</strong></p>
<p>Austin, TX | Lisbon, Portugal | London, UK</p>
<p><strong>About the Department</strong></p>
<p>Emerging Technologies &amp; Incubation (ETI) is where new and bold products are built and released within Cloudflare. Rather than being constrained by the structures which make Cloudflare a massively successful business, we are able to leverage them to deliver entirely new tools and products to our customers.</p>
<p>Cloudflare’s edge and network make it possible to solve problems at massive scale and efficiency which would be impossible for almost any other organization.</p>
<p><strong>About the Team</strong></p>
<p>The Workers Runtime team delivers features and improvements to our Runtime which actually executes customer code at the edge. We care deeply about increasing performance, improving JS API surface area and compiled language support through WebAssembly, and optimizing to meet the next 10x increase in scale.</p>
<p>The Runtime is a hostile environment: system resources such as memory, CPU, and I/O need to be managed extremely carefully, and security must be foundational in everything we do.</p>
<p><strong>What you&#39;ll do</strong></p>
<p>We are looking for a Systems Engineer to join our team. You will work with a team of passionate, talented engineers that are building innovative products that bring security and speed to millions of internet users each day.</p>
<p>You will play an active part in shaping product features based on what’s technically possible. You will make sure our company hits our ambitious goals from an engineering standpoint.</p>
<p>You bring a passion for meeting business needs while building technically innovative solutions, and excel at shifting between the two, understanding how big-picture goals inform technical details and vice versa.</p>
<p>You thrive in a fast-paced iterative engineering environment.</p>
<p><strong>Examples of desirable skills, knowledge and experience</strong></p>
<ul>
<li>At least 2 years of recent professional experience with C++ or Rust.</li>
<li>Solid understanding of computer science fundamentals including data structures, algorithms, and object-oriented or functional design.</li>
<li>An operational mindset - we don&#39;t just write code, we also own it in production.</li>
<li>Deep understanding of the web and technologies such as web browsers, HTTP, JavaScript and WebAssembly.</li>
<li>Experience working in low-latency, real-time environments such as game streaming, game engine architecture, high-frequency trading, or payment systems.</li>
<li>Experience debugging, optimizing and identifying failure modes in a large-scale Linux-based distributed system.</li>
</ul>
<p><strong>Bonus Points</strong></p>
<ul>
<li>Experience building high performance distributed systems in Rust.</li>
<li>Experience working with cloud platforms, especially server-less platforms.</li>
<li>Experience with the internals of JS engines such as V8, SpiderMonkey, or JavaScriptCore.</li>
<li>Experience with standalone WebAssembly runtimes such as Wasmtime, Wasmer, Lucet, etc.</li>
<li>Deep Linux/UNIX systems, kernel, or networking knowledge.</li>
<li>Contributions to large open source projects.</li>
</ul>
<p><strong>What Makes Cloudflare Special?</strong></p>
<p>We’re not just a highly ambitious, large-scale technology company. We’re a highly ambitious, large-scale technology company with a soul. Fundamental to our mission to help build a better Internet is protecting the free and open Internet.</p>
<p>Project Galileo: Since 2014, we&#39;ve equipped more than 2,400 journalism and civil society organizations in 111 countries with powerful tools to defend themselves against attacks that would otherwise censor their work, technology already used by Cloudflare’s enterprise customers--at no cost.</p>
<p>Athenian Project: In 2017, we created the Athenian Project to ensure that state and local governments have the highest level of protection and reliability for free, so that their constituents have access to election information and voter registration. Since launching the project, we&#39;ve provided services to more than 425 local government election websites in 33 states.</p>
<p>1.1.1.1: We released 1.1.1.1 to help fix the foundation of the Internet by building a faster, more secure and privacy-centric public DNS resolver. This is available publicly for everyone to use - it is the first consumer-focused service Cloudflare has ever released.</p>
<p>Here’s the deal: we never, ever store client IP addresses. We will continue to abide by our privacy commitment and ensure that no user data is sold to advertisers or used to target consumers.</p>
<p>Sound like something you’d like to be a part of? We’d love to hear from you!</p>
<p>This position may require access to information protected under U.S. export control laws, including the U.S. Export Administration Regulations. Please note that any offer of employment may be conditioned on your authorization to receive software or technology controlled under these U.S. export laws without sponsorship for an export license.</p>
<p>Cloudflare is proud to be an equal opportunity employer. We are committed to providing equal employment opportunity for all people and place great value in both diversity and inclusiveness. All qualified applicants will be considered for employment without regard to their, or any other person&#39;s, perceived or actual race, color, religion, sex, gender, gender identity, gender expression, sexual orientation, national origin, ancestry, citizenship, age, physical or mental disability, medical condition, family care status, or any other basis protected by law.</p>
<p>We are an AA/Veterans/Disabled Employer. Cloudflare provides reasonable accommodations to qualified individuals with disabilities. Please tell us if you require a reasonable accommodation to apply for a job. Examples of reasonable accommodations include, but are not limited to, changing the application process, providing documents in an alternate format, using a sign language interpreter, or using specialized equipment. If you require a reasonable accommodation to apply for a job, please contact us via e-mail at hr@cloudflare.com or via mail at 101 Townsend St. San Francisco, CA 94107.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>C++, Rust, computer science fundamentals, data structures, algorithms, object-oriented or functional design, web browsers, HTTP, JavaScript, WebAssembly, low-latency real time environments, game streaming, game engine architecture, high frequency trading, payment systems, Linux-based distributed system, experience building high performance distributed systems in Rust, experience working with cloud platforms, experience with the internals of JS engines, experience with standalone WebAssembly runtimes, deep Linux/UNIX systems, kernel, or networking knowledge, contributions to large open source projects</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Cloudflare</Employername>
      <Employerlogo>https://logos.yubhub.co/cloudflare.com.png</Employerlogo>
      <Employerdescription>Cloudflare is a technology company that helps build a better Internet by protecting and accelerating any Internet application online without adding hardware, installing software, or changing a line of code.</Employerdescription>
      <Employerwebsite>https://www.cloudflare.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/cloudflare/jobs/6578726</Applyto>
      <Location>Hybrid</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>5918b6e2-8d0</externalid>
      <Title>Senior Audio Visual Engineer</Title>
      <Description><![CDATA[<p>We&#39;re seeking a talented Senior Audio Visual Engineer to join our team. As a key member of our AV team, you will be responsible for designing, implementing, and maintaining cutting-edge AV systems that enhance our collaboration, presentation, and event spaces. Your expertise will help create immersive and engaging environments that inspire our teams and leave a lasting impression on our guests.</p>
<p>Responsibilities:</p>
<ul>
<li>Oversee the design, installation, configuration, and integration of AV systems.</li>
<li>Perform hands-on setup, operation, troubleshooting, and maintenance of AV equipment for live events, meetings, and presentations.</li>
<li>Plan and support events ranging from 300+ person internal/external events to small team meetings.</li>
<li>Act as the primary escalation point for complex technical issues, ensuring minimal downtime and quick resolution.</li>
<li>Collaborate with clients, event planners, IT teams, and vendors to assess needs, plan layouts, and execute AV setups for small to large-scale events.</li>
<li>Develop and maintain documentation, including maintenance schedules, troubleshooting guides, inventory tracking, and system diagrams.</li>
<li>Ensure compliance with safety standards, company policies, and industry best practices during installations and operations.</li>
<li>Manage project timelines, budgets (for small projects), and resource allocation while maintaining profitability and quality.</li>
<li>Stay current with emerging AV technologies and recommend upgrades or improvements.</li>
<li>Provide exceptional customer service, including on-site support and post-event debriefs.</li>
</ul>
<p>Basic Qualifications:</p>
<ul>
<li>Associate&#39;s or Bachelor&#39;s degree in Audio Visual Technology, Electronics, Information Technology, or a related field preferred (or equivalent experience).</li>
<li>5+ years in audio-visual technical roles, with at least 2 years in a lead, senior, or supervisory position. Experience in corporate, educational, or event environments is a plus.</li>
<li>CTS (Certified Technology Specialist) from AVIXA strongly preferred; additional certifications like Crestron, Extron, Dante, or Biamp are highly desirable.</li>
<li>Proficiency with AV control systems (Crestron, AMX, Extron), video conferencing (Zoom, Microsoft Teams, Cisco Webex), digital signal processing (DSP), and networking basics.</li>
<li>Strong knowledge of audio/video signal flow, cabling, rigging, lighting, and projection systems.</li>
<li>Experience with live event production, including mixing boards, cameras, and streaming.</li>
<li>Excellent leadership, communication, and problem-solving abilities; ability to work under pressure and manage multiple priorities. Strong customer service orientation and team collaboration.</li>
<li>Physical Requirements: Ability to lift heavy equipment (up to 50 lbs), climb ladders, and work in varied environments (including evenings/weekends for events). A valid driver&#39;s license may be required.</li>
</ul>
<p>Additional Requirements:</p>
<ul>
<li>Must be physically fit enough to regularly lift up to 30 lbs. for duties such as delivering computers, unpacking and rack-mounting equipment, etc.</li>
<li>Must be willing to work extended hours and weekends as needed.</li>
<li>Must be willing to travel</li>
</ul>
<p>Compensation and Benefits: $115,000 - $145,000 Base salary is just one part of our total rewards package at xAI, which also includes equity, comprehensive medical, vision, and dental coverage, access to a 401(k) retirement plan, short &amp; long-term disability insurance, life insurance, and various other discounts and perks.</p>
<p>ITAR Requirements: To conform to U.S. Government export regulations, applicant must be a (i) U.S. citizen or national, (ii) U.S. lawful, permanent resident (aka green card holder), (iii) Refugee under 8 U.S.C. § 1157, or (iv) Asylee under 8 U.S.C. § 1158, or be eligible to obtain the required authorizations from the U.S. Department of State. Learn more about the ITAR here.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$115,000 - $145,000</Salaryrange>
      <Skills>AV control systems, video conferencing, digital signal processing, networking basics, audio/video signal flow, cabling, rigging, lighting, projection systems, live event production, mixing boards, cameras, streaming</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>xAI</Employername>
      <Employerlogo>https://logos.yubhub.co/xai.com.png</Employerlogo>
      <Employerdescription>xAI creates AI systems to understand the universe and aid humanity in its pursuit of knowledge.</Employerdescription>
      <Employerwebsite>https://www.xai.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/xai/jobs/5035931007</Applyto>
      <Location>Palo Alto, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>7f80914c-588</externalid>
      <Title>Distributed Systems Engineer - Data Platform (Delivery, Database, Retrieval)</Title>
      <Description><![CDATA[<p>About Us</p>
<p>At Cloudflare, we are on a mission to help build a better Internet. Today the company runs one of the world’s largest networks that powers millions of websites and other Internet properties for customers ranging from individual bloggers to SMBs to Fortune 500 companies.</p>
<p>We protect and accelerate any Internet application online without adding hardware, installing software, or changing a line of code. Internet properties powered by Cloudflare all have web traffic routed through its intelligent global network, which gets smarter with every request. As a result, they see significant improvement in performance and a decrease in spam and other attacks.</p>
<p>We were named to Entrepreneur Magazine’s Top Company Cultures list and ranked among the World’s Most Innovative Companies by Fast Company.</p>
<p>About the Role</p>
<p>We are looking for experienced and highly motivated engineers to join our DATA Org and help build the future of data at Cloudflare. Our organisation is responsible for the entire data lifecycle - from ingestion and processing to storage and retrieval - powering the critical logs and analytics that provide our customers with real-time visibility into the health and performance of their online properties.</p>
<p>Our mission is to empower customers to leverage their data to drive better outcomes for their business. We build and maintain a suite of high-performance, scalable systems that handle more than a billion events per second.</p>
<p>As an engineer in our organisation, you will have the opportunity to work on complex distributed systems challenges across different parts of our data stack.</p>
<p><strong>Responsibilities</strong></p>
<p>As a Software Engineer in our Data Organisation, depending on the team you join, you will focus on a subset of the following areas:</p>
<ul>
<li>Design, develop, and maintain scalable and reliable distributed systems across the entire data lifecycle.</li>
<li>Build and optimise key components of our high-throughput data delivery platform to ensure data integrity and low-latency delivery.</li>
<li>Develop new and improve existing components for the Cloudflare Analytical Platform to extend functionality and performance.</li>
<li>Scale, monitor, and maintain the performance of our large-scale database clusters to accommodate the growing volume of data.</li>
<li>Develop and enhance our customer-facing GraphQL APIs, log delivery, and alerting solutions, focusing on performance, reliability, and user experience.</li>
<li>Work to identify and remove bottlenecks across our data platforms, from streamlining data ingestion processes to optimizing query performance.</li>
<li>Collaborate with other teams across Cloudflare to understand their data needs and build solutions that empower them to make data-driven decisions.</li>
<li>Collaborate with the ClickHouse open-source community to add new features and contribute to the upstream codebase.</li>
<li>Participate in the development of the next generation of our data platforms, including researching and evaluating new technologies and approaches.</li>
</ul>
<p><strong>Key Qualifications</strong></p>
<ul>
<li>3+ years of experience working in software development covering distributed systems and databases.</li>
<li>Strong programming skills (Golang is preferable), as well as a deep understanding of software development best practices and principles.</li>
<li>Hands-on experience with modern observability stacks, including Prometheus and Grafana, and a strong understanding of handling high-cardinality metrics at scale.</li>
<li>Strong knowledge of SQL and database internals, including experience with database design, optimisation, and performance tuning.</li>
<li>A solid foundation in computer science, including algorithms, data structures, distributed systems, and concurrency.</li>
<li>Strong analytical and problem-solving skills, with a willingness to debug, troubleshoot, and learn about complex problems at high scale.</li>
<li>Ability to work collaboratively in a team environment and communicate effectively with other teams across Cloudflare.</li>
<li>Experience with ClickHouse is a plus.</li>
<li>Experience with data streaming technologies (e.g., Kafka, Flink) is a plus.</li>
<li>Experience developing and scaling APIs, particularly GraphQL, is a plus.</li>
<li>Experience with Infrastructure as Code tools like Salt or Terraform is a plus.</li>
<li>Experience with Linux container technologies, such as Docker and Kubernetes, is a plus.</li>
</ul>
<p>If you&#39;re passionate about building scalable and performant data platforms using cutting-edge technologies and want to work with a world-class team of engineers, then we want to hear from you!</p>
<p>Join us in our mission to help build a better internet for everyone!</p>
<p>This role requires flexibility to be on-call outside of standard working hours to address technical issues as needed.</p>
<p>What Makes Cloudflare Special?</p>
<p>We’re not just a highly ambitious, large-scale technology company. We’re a highly ambitious, large-scale technology company with a soul.</p>
<p>Fundamental to our mission to help build a better Internet is protecting the free and open Internet.</p>
<p>Project Galileo: Since 2014, we&#39;ve equipped more than 2,400 journalism and civil society organisations in 111 countries with powerful tools to defend themselves against attacks that would otherwise censor their work, technology already used by Cloudflare’s enterprise customers--at no cost.</p>
<p>Athenian Project: In 2017, we created the Athenian Project to ensure that state and local governments have the highest level of protection and reliability for free, so that their constituents have access to election information and voter registration.</p>
<p>Since the project launched, we&#39;ve provided services to more than 425 local government election websites in 33 states.</p>
<p>1.1.1.1: We released 1.1.1.1 to help fix the foundation of the Internet by building a faster, more secure and privacy-centric public DNS resolver.</p>
<p>This is available publicly for everyone to use - it is the first consumer-focused service Cloudflare has ever released.</p>
<p>Here’s the deal - we don’t store client IP addresses. Never, ever.</p>
<p>We will continue to abide by our privacy commitment and ensure that no user data is sold to advertisers.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Golang, Distributed systems, SQL, Database internals, Prometheus, Grafana, ClickHouse, Linux container technologies, Docker, Kubernetes, Data streaming technologies, API development, Infrastructure as Code tools, Graphql</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Cloudflare</Employername>
      <Employerlogo>https://logos.yubhub.co/cloudflare.com.png</Employerlogo>
      <Employerdescription>Cloudflare is a technology company that provides a global network that powers millions of websites and other Internet properties.</Employerdescription>
      <Employerwebsite>https://www.cloudflare.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/cloudflare/jobs/7267602</Applyto>
      <Location>Hybrid</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>6c661277-505</externalid>
      <Title>Customer Solutions Engineer</Title>
      <Description><![CDATA[<p>About Us</p>
<p>At Cloudflare, we are on a mission to help build a better Internet. Today the company runs one of the world’s largest networks that powers millions of websites and other Internet properties for customers ranging from individual bloggers to SMBs to Fortune 500 companies.</p>
<p>Cloudflare protects and accelerates any Internet application online without adding hardware, installing software, or changing a line of code. Internet properties powered by Cloudflare all have web traffic routed through its intelligent global network, which gets smarter with every request. As a result, they see significant improvement in performance and a decrease in spam and other attacks.</p>
<p>As a Customer Solutions Engineer (CSE), you will be the trusted technical advisor throughout a customer’s lifecycle. You are a product expert and will leverage your knowledge to ensure our Enterprise customers understand and utilize the Cloudflare platform to its fullest extent.</p>
<p>Responsibilities</p>
<p>As a critical member of the Account Team you will serve as a trusted technical advisor, help expand existing business, and ensure the success of our customers:</p>
<ul>
<li>You will be part of a regional team and will work closely with CSMs supporting the regional book of business</li>
<li>From a technical perspective, as part of the account team, your primary responsibilities will be to deliver a timely and organized onboarding for customers, ensure customers see the full value in Cloudflare&#39;s products, and advise on technical best practices</li>
<li>Ensure customer retention and expansion through relationship building and participation in periodic account reviews to contribute your expertise on technical topics</li>
<li>Provide customers with clear, proactive technical guidance and expertise across all our products</li>
<li>Collaborate with Customer Support, Engineering, and other teams to assist with technical escalations</li>
<li>Proactively identify opportunities for expansion for existing customers</li>
<li>Promote retention by capturing and communicating gaps in product or features</li>
<li>Contribute towards the success of the CSE organization through knowledge-sharing activities such as contributing to internal and external documentation, answering technical Q&amp;A, and helping iterate on best practices</li>
</ul>
<p>Experiences might include a combination of the skills below:</p>
<ul>
<li>10 years of prior post-sales customer relationship management</li>
<li>Deep understanding of how the internet works and the desire to expand that knowledge. For example:</li>
<li>Layers and protocols of the OSI model, such as TCP/IP, TLS, DNS, HTTP</li>
<li>Reverse and forward proxies and the applications of both</li>
<li>Security aspects of an internet property, such as Firewalls, WAFs, Bot Management, Rate Limiting, (M)TLS, Zero Trust</li>
<li>Performance aspects of an internet property, such as Speed, Latency, Caching, Video Streaming, HTTP/2, TLSv1.3</li>
<li>Enjoying the adventure of troubleshooting and solving technical problems</li>
<li>Understanding why Cloudflare plays an increasingly important role on today’s internet</li>
<li>Ability to proactively identify and solve problems, then build sustainable solutions to prevent recurrence</li>
<li>Demonstrated experience with a scripting language (e.g. Python, JavaScript, Bash) and a desire to expand those skills</li>
<li>Technical curiosity and passion: Cloudflare is at the cutting edge of internet technology, and our CSEs are viewed as subject-matter experts. It’s incumbent on us to stay up to date not only with Cloudflare’s specific products, but with industry trends.</li>
<li>Ability to manage a project, work to deadlines, and prioritize between competing demands</li>
<li>Fluency in English plus Russian, Ukrainian, or Hebrew is a must.</li>
</ul>
<p>What Makes Cloudflare Special?</p>
<p>We’re not just a highly ambitious, large-scale technology company. We’re a highly ambitious, large-scale technology company with a soul. Fundamental to our mission to help build a better Internet is protecting the free and open Internet.</p>
<p>Project Galileo: Since 2014, we&#39;ve equipped more than 2,400 journalism and civil society organizations in 111 countries with powerful tools to defend themselves against attacks that would otherwise censor their work, technology already used by Cloudflare’s enterprise customers--at no cost.</p>
<p>Athenian Project: In 2017, we created the Athenian Project to ensure that state and local governments have the highest level of protection and reliability for free, so that their constituents have access to election information and voter registration. Since the project launched, we&#39;ve provided services to more than 425 local government election websites in 33 states.</p>
<p>1.1.1.1: We released 1.1.1.1 to help fix the foundation of the Internet by building a faster, more secure and privacy-centric public DNS resolver. This is available publicly for everyone to use - it is the first consumer-focused service Cloudflare has ever released.</p>
<p>Here’s the deal - we don’t store client IP addresses. Never, ever. We will continue to abide by our privacy commitment and ensure that no user data is sold to advertisers or used to target consumers.</p>
<p>Sound like something you’d like to be a part of? We’d love to hear from you!</p>
<p>This position may require access to information protected under U.S. export control laws, including the U.S. Export Administration Regulations. Please note that any offer of employment may be conditioned on your authorization to receive software or technology controlled under these U.S. export laws without sponsorship for an export license.</p>
<p>Cloudflare is proud to be an equal opportunity employer. We are committed to providing equal employment opportunity for all people and place great value on diversity and inclusion.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>TCP/IP, TLS, DNS, HTTP, Reverse and forward proxies, Firewalls, WAFs, Bot Management, Rate Limiting, (M)TLS, Zero Trust, Speed, Latency, Caching, Video Streaming, HTTP/2, TLSv1.3, Python, JavaScript, Bash</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Cloudflare</Employername>
      <Employerlogo>https://logos.yubhub.co/cloudflare.com.png</Employerlogo>
      <Employerdescription>Cloudflare is a technology company that provides a network of cloud-based services to protect and accelerate internet applications.</Employerdescription>
      <Employerwebsite>https://www.cloudflare.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/cloudflare/jobs/7612243</Applyto>
      <Location>Hybrid</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>649f0f59-66f</externalid>
      <Title>Senior Software Engineer, Applied AI</Title>
      <Description><![CDATA[<p>As a Senior Software Engineer, you will design and build production-grade, full-stack applications that make data accessible, actionable, and embedded within CoreWeave&#39;s core workflows. You will develop AI-enabled user experiences, scalable backend services, and intuitive interfaces that abstract away the complexity of underlying data systems.</p>
<p>Day-to-day, you&#39;ll work across the stack - from React-based frontends to backend services running on Kubernetes - while integrating AI/LLM capabilities into real-world applications. This role offers high visibility and the opportunity to directly influence how data is consumed and operationalized across the company.</p>
<p>The ideal candidate has 7+ years of experience building production-grade software applications, including both backend services and modern web frontends. They should have strong proficiency in backend programming languages (Python, Go, Java, C#) and frontend programming languages (JavaScript, TypeScript).</p>
<p>In addition to technical skills, the successful candidate will be a curious and creative problem-solver who is passionate about building user-facing applications that turn complex data into intuitive experiences. They should be able to take ownership of complex systems end-to-end, from design through deployment and iteration.</p>
<p>At CoreWeave, we work hard, have fun, and move fast! We&#39;re in an exciting stage of hyper-growth that you will not want to miss out on. We&#39;re not afraid of a little chaos, and we&#39;re constantly learning. Our team cares deeply about how we build our product and how we work together, which is represented through our core values:</p>
<ul>
<li>Be Curious at Your Core</li>
<li>Act Like an Owner</li>
<li>Empower Employees</li>
<li>Deliver Best-in-Class Client Experiences</li>
<li>Achieve More Together</li>
</ul>
<p>We support and encourage an entrepreneurial outlook and independent thinking. We foster an environment that encourages collaboration and provides the opportunity to develop innovative solutions to complex problems.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$165,000 to $242,000</Salaryrange>
      <Skills>backend programming languages (Python, Go, Java, C#), frontend programming languages (JavaScript, TypeScript), React, Kubernetes, AI/LLM capabilities, text-to-SQL interfaces, copilots, automated insight-generation systems, real-time data processing or streaming architectures</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave is a cloud computing company that provides a platform for building and scaling AI applications. It was founded in 2017 and became a publicly traded company in March 2025.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4671525006</Applyto>
      <Location>New York, NY/Bellevue, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>a1ba5c28-9ce</externalid>
      <Title>Senior Software Engineer, Observability</Title>
      <Description><![CDATA[<p>Join CoreWeave&#39;s Observability team, responsible for building the systems that give our customers and internal teams unparalleled visibility into complex AI workloads.</p>
<p>Our team empowers engineers to understand, troubleshoot, and optimize high-performance infrastructure at massive scale.</p>
<p>As a Senior Software Engineer on the Observability team, you will design, build, and maintain core observability infrastructure spanning metrics, logging, tracing, and telemetry pipelines.</p>
<p>Your day-to-day will involve developing highly reliable and scalable systems, collaborating with internal engineering teams to embed observability best practices, and tackling performance and reliability challenges across clusters of thousands of GPUs.</p>
<p>You&#39;ll also contribute to platform strategy and participate in on-call rotations to ensure critical production systems remain robust and operational.</p>
<p>The base salary range for this role is $139,000 to $220,000.</p>
<p>In addition to base salary, our total rewards package includes a discretionary bonus, equity awards, and a comprehensive benefits program (all based on eligibility).</p>
<p>We offer a variety of benefits to support your needs, including medical, dental, and vision insurance, 100% paid for by CoreWeave, company-paid Life Insurance, voluntary supplemental life insurance, short and long-term disability insurance, flexible Spending Account, Health Savings Account, tuition reimbursement, ability to participate in Employee Stock Purchase Program (ESPP), mental wellness benefits through Spring Health, family-forming support provided by Carrot, paid parental leave, flexible, full-service childcare support with Kinside, 401(k) with a generous employer match, flexible PTO, catered lunch each day in our office and data center locations, a casual work environment, and a work culture focused on innovative disruption.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$139,000 to $220,000</Salaryrange>
      <Skills>Go, Python, Kubernetes, containerization, microservices architectures, Helm, YAML-based configurations, automated testing, progressive release strategies, on-call rotations, designing, operating, or scaling logging, metrics, or tracing platforms, data streaming systems for observability pipelines, automating infrastructure provisioning, OpenTelemetry for unified telemetry collection and instrumentation, exposure to modern AI workloads and GPU-based infrastructure</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave is a cloud computing company that provides a platform for building and scaling AI applications.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4554201006</Applyto>
      <Location>New York, NY / Sunnyvale, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>6e0ce11b-ddf</externalid>
      <Title>Senior Software Engineer - Live Pay</Title>
      <Description><![CDATA[<p>We&#39;re seeking an experienced backend software engineer to join our Live Pay team. As a member of our team, you&#39;ll work cross-functionally with various teams to design and develop key platform services. You&#39;ll need to be strong in JVM programming languages and event-driven architecture, in addition to AWS.</p>
<p>Your responsibilities will include driving the design and implementation of new features, creating high-quality, maintainable code, and collaborating with other engineers. You&#39;ll also work cross-functionally with other teams, including data science, design, product, marketing, and analytics.</p>
<p>To succeed in this role, you&#39;ll need 4+ years of software engineering experience, proficiency in at least one JVM programming language, and experience with major frameworks like Spring and Spring Boot. You&#39;ll also need hands-on experience with SQL databases, cloud environments, and streaming and messaging technologies.</p>
<p>This is a full-time position with a salary range of $199,000-$244,000, plus equity and benefits. The role will be hybrid from our Vancouver office, with 2 days a week in the office required.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$199,000-$244,000</Salaryrange>
      <Skills>JVM programming languages, Event-driven architecture, AWS, Spring, Spring Boot, SQL databases, Cloud environments, Streaming and messaging technologies</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>EarnIn</Employername>
      <Employerlogo>https://logos.yubhub.co/earnin.com.png</Employerlogo>
      <Employerdescription>EarnIn is a pioneer of earned wage access, providing financial flexibility for individuals living paycheck to paycheck.</Employerdescription>
      <Employerwebsite>https://www.earnin.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/earnin/jobs/7747628</Applyto>
      <Location>Vancouver, Canada</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>7eb73baf-db6</externalid>
      <Title>Engineering Manager - Streaming</Title>
      <Description><![CDATA[<p>We are seeking a dedicated Engineering Leader to spearhead Spark Structured Streaming development initiatives. Your primary mission will be to make Spark Structured Streaming state of the art Stream Processing engine by adding advanced features such as sophisticated state management, new operators and making the engine performance both from latency and throughput point of view by reimagining engine architecture.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Leading a talented engineering team that develops Spark Structured Streaming and promotes the engine in OSS and the Databricks Data Intelligence Platform</li>
<li>Overseeing sustained recruitment of top-tier talent, and upskilling talent on the team</li>
<li>Implementing robust processes to efficiently execute product vision, strategy, and roadmap in alignment with organisational goals and priorities</li>
<li>Building software that is not just high quality but also easy to operate</li>
<li>Making company-wide impact by driving stream processing adoption across the Databricks product portfolio</li>
<li>Managing technical debt, including long-term technical architecture decisions, and balancing the product roadmap</li>
</ul>
<p>What we look for:</p>
<ul>
<li>5+ years of experience working on related systems such as streaming, query processing, or query optimisation, including the big-data ecosystem, Apache Spark, or database internals</li>
<li>A passion for database systems, storage systems, distributed systems, language design, or performance optimisation</li>
<li>Can ensure the team builds high quality and reliable infrastructure services. Experience being responsible for testing, quality, and SLAs of a product</li>
<li>Previous experience building and leading teams in a complex technical domain, such as on distributed data systems or database internals</li>
<li>Ability to attract, hire, and coach engineers who meet the Databricks hiring standards. Can up-level the existing team by hiring top-notch senior talent, growing leaders, and helping struggling members. Can gain the trust of the team and guide their careers</li>
<li>Comfortable working cross functionally with product management and directly with customers; ability to deeply understand product and customer personas</li>
</ul>
<p>Pay Range Transparency</p>
<p>Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role is listed below and represents the expected salary range for non-commissionable roles or on-target earnings for commissionable roles. Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location. Based on the factors above, Databricks anticipates utilizing the full width of the range. The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$181,000-$253,750 USD</Salaryrange>
      <Skills>Apache Spark, Streaming, Query processing, Query optimisation, Big-data ecosystem, Database internal</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that enables data teams to solve complex problems. It has over 10,000 organisations worldwide as customers.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8324875002</Applyto>
      <Location>Bellevue, Washington</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>c1375568-5a9</externalid>
      <Title>Customer Solutions Engineer, ANZ</Title>
      <Description><![CDATA[<p>As a Customer Solutions Engineer (CSE), you will be the trusted technical advisor throughout a customer&#39;s lifecycle. You are a product expert and will leverage your knowledge to ensure our Enterprise customers understand and utilize the Cloudflare platform to its fullest extent.</p>
<p>Your goal is to help customers be successful and derive the most value possible from their Cloudflare investment. As a CSE, you strive to understand customer requirements and issues at the molecular level. No matter your background, you have natural curiosity and desire to identify root causes of unique problems and find the most elegant and efficient solutions.</p>
<p>Fundamentally, you are enamored with how the internet works. You will work closely with the Customer Success Manager (CSM) as well as every other team at Cloudflare, from Sales and Product to Engineering and Customer Support.</p>
<p>Responsibilities:</p>
<ul>
<li>Serve as a trusted technical advisor, help expand existing business, and ensure the success of our customers</li>
<li>Be part of a regional team and work closely with CSMs supporting the regional book of business</li>
<li>From a technical perspective, as part of the account team, your primary responsibilities are to deliver a timely and organized onboarding for customers, ensure customers see the full value in Cloudflare&#39;s products, and advise on technical best practices</li>
<li>Ensure customer retention and expansion through relationship building and participation in periodic account reviews to contribute your expertise on technical topics</li>
<li>Provide customers with clear, proactive technical guidance and expertise across all our products</li>
<li>Collaborate with Customer Support, Engineering, and other teams to assist with technical escalations</li>
<li>Proactively identify opportunities for expansion for existing customers</li>
<li>Promote retention by capturing and communicating gaps in product or features</li>
<li>Contribute towards the success of the CSE organization through knowledge-sharing activities such as contributing to internal and external documentation, answering technical Q&amp;A, and helping iterate on best practices</li>
<li>The role requires 20-50% travel to attend meetings with customers, attend conferences and other industry events, and to collaborate with your Cloudflare teammates</li>
</ul>
<p>Experiences might include a combination of the skills below:</p>
<ul>
<li>5+ years of prior post-sales customer relationship management</li>
<li>Deep understanding of how the internet works and the desire to expand that knowledge. For example:
<ul>
<li>Layers and protocols of the OSI model, such as TCP/IP, TLS, DNS, HTTP</li>
<li>Reverse and forward proxies and the applications of both</li>
<li>Security aspects of an internet property, such as Firewalls, WAFs, Bot Management, Rate Limiting, (M)TLS, Zero Trust</li>
<li>Performance aspects of an internet property, such as Speed, Latency, Caching, Video Streaming, HTTP/2, TLSv1.3</li>
</ul>
</li>
<li>Enjoying the adventure of troubleshooting and solving technical problems</li>
<li>Understanding why Cloudflare plays an increasingly important role on today’s internet</li>
<li>Ability to proactively identify and solve problems, then build sustainable solutions to prevent recurrence</li>
<li>Demonstrated experience with a scripting language (e.g. Python, JavaScript, Bash) and a desire to expand those skills</li>
<li>Technical curiosity and passion: Cloudflare is at the cutting edge of internet technology, and our CSEs are viewed as subject-matter experts. It’s incumbent on us to stay up to date not only with Cloudflare’s specific products, but with industry trends.</li>
<li>Ability to manage a project, work to deadlines, and prioritize between competing demands</li>
</ul>
<p>Bonus!</p>
<ul>
<li>Understanding of, or experience with, regulatory requirements such as PCI DSS, HIPAA, and SOC-2</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>deep understanding of how the internet works, TCP/IP, TLS, DNS, HTTP, reverse and forward proxies, Firewalls, WAFs, Bot Management, Rate Limiting, (M)TLS, Zero Trust, Speed, Latency, Caching, Video Streaming, HTTP/2, TLSv1.3, scripting language (e.g. Python, JavaScript, Bash), project management, deadlines, prioritization</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Cloudflare</Employername>
      <Employerlogo>https://logos.yubhub.co/cloudflare.com.png</Employerlogo>
      <Employerdescription>Cloudflare is a technology company that helps build a better Internet by protecting and accelerating any Internet application online.</Employerdescription>
      <Employerwebsite>https://www.cloudflare.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/cloudflare/jobs/7667911</Applyto>
      <Location>Hybrid</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>9583eabd-79a</externalid>
      <Title>Technical Support Engineer, Application Performance (Mexico City)</Title>
      <Description><![CDATA[<p>About Us</p>
<p>At Cloudflare, we are on a mission to help build a better Internet. We run one of the world&#39;s largest networks that powers millions of websites and other Internet properties for customers ranging from individual bloggers to SMBs to Fortune 500 companies.</p>
<p>As a Technical Support Engineer, you will work directly with our customers and cross-functionally to solve a variety of technical issues. This is a position where you will learn the inner workings of Cloudflare&#39;s products and gain a deeper understanding of internet technologies.</p>
<p>Responsibilities</p>
<p>Do you love solving complex technical problems and interacting with people? Are you passionate about helping customers and are a standout colleague? Cloudflare is seeking a Technical Customer Support Engineer to join our team.</p>
<p>You will work directly with our customers and cross-functionally to solve a variety of technical issues. This is a position where you will learn the inner workings of Cloudflare&#39;s products and gain a deeper understanding of internet technologies.</p>
<p>Requirements</p>
<ul>
<li>Experience: 2+ years of experience in a Technical Support, Web Developer Support, or similar role with a proven track record of resolving diverse technical issues; or foundational experience gained through a relevant degree and projects, internships, or academic coursework that demonstrates strong technical aptitude and problem-solving skills.</li>
<li>Community Engagement: Active participation in web development communities, with a demonstrated commitment to staying current with industry trends and sharing knowledge with peers.</li>
<li>Internet Fundamentals: Fundamental understanding of how the Internet works (OSI Model), with knowledge of Cloudflare&#39;s products that impact Layers 3, 4, and 7.</li>
<li>Technical Proficiency: Skilled in analyzing and troubleshooting DNS, SSL/TLS, HTTP, and optimizing website performance.</li>
<li>Tooling Expertise: Proficient in command line interfaces and experienced with tools such as browser developer tools, cURL, Git/Wrangler/npm, Postman, TCPDump/Wireshark, SSH, OpenSSL, and similar utilities.</li>
<li>Video Technology: Experienced with video encoding and streaming solutions, understanding the associated technical challenges.</li>
<li>Scripting Skills: Competent in writing scripts in Bash, Python, JavaScript, or other scripting languages.</li>
<li>Customer-Centric Communication: Comfortable communicating through various support channels, with a strong commitment to putting the customer first in every interaction.</li>
</ul>
<p>Bonus</p>
<ul>
<li>You have a site actively using our platform</li>
<li>You are familiar with any of the following Cloudflare products: Cloudflare Workers, Stream, Pages, R2</li>
</ul>
<p>Availability And Schedule Requirements</p>
<ul>
<li>Flexibility to work varying schedules, including Pacific time, holidays, weekends, more than 5 days in a row, or additional hours</li>
<li>Ability to work on-site as needed out of one of our US-based offices</li>
</ul>
<p>What Makes Cloudflare Special?</p>
<p>We&#39;re not just a highly ambitious, large-scale technology company. We&#39;re a highly ambitious, large-scale technology company with a soul. Fundamental to our mission to help build a better Internet is protecting the free and open Internet.</p>
<p>Project Galileo: Since 2014, we&#39;ve equipped more than 2,400 journalism and civil society organizations in 111 countries with powerful tools to defend themselves against attacks that would otherwise censor their work, technology already used by Cloudflare’s enterprise customers, at no cost.</p>
<p>Athenian Project: In 2017, we created the Athenian Project to ensure that state and local governments have the highest level of protection and reliability for free, so that their constituents have access to election information and voter registration. Since the project, we&#39;ve provided services to more than 425 local government election websites in 33 states.</p>
<p>1.1.1.1: We released 1.1.1.1 to help fix the foundation of the Internet by building a faster, more secure and privacy-centric public DNS resolver. This is available publicly for everyone to use - it is the first consumer-focused service Cloudflare has ever released.</p>
<p>Here’s the deal: we don’t store client IP addresses. Never, ever. We will continue to abide by our privacy commitment and ensure that no user data is sold to advertisers or used to target consumers.</p>
<p>Sound like something you’d like to be a part of? We’d love to hear from you!</p>
<p>This position may require access to information protected under U.S. export control laws, including the U.S. Export Administration Regulations. Please note that any offer of employment may be conditioned on your authorization to receive software or technology controlled under these U.S. export laws without sponsorship for an export license.</p>
<p>Cloudflare is proud to be an equal opportunity employer. We are committed to providing equal employment opportunity for all people and place great value in both diversity and inclusiveness. All qualified applicants will be considered for employment without regard to their, or any other person&#39;s, perceived or actual race, color, religion, sex, gender, gender identity, gender expression, sexual orientation, national origin, ancestry, citizenship, age, physical or mental disability, medical condition, family care status, or any other basis protected by law.</p>
<p>We are an AA/Veterans/Disabled Employer. Cloudflare provides reasonable accommodations to qualified individuals with disabilities. Please tell us if you require a reasonable accommodation to apply for a job. Examples of reasonable accommodations include, but are not limited to, changing the application process, providing documents in an alternate format, using a sign language interpreter, or using specialized equipment. If you require a reasonable accommodation to apply for a job, please contact us via e-mail at hr@cloudflare.com or via mail at 101 Townsend St. San Francisco, CA 94107.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>DNS, SSL/TLS, HTTP, optimizing website performance, command line interfaces, browser developer tools, cURL, Git/Wrangler/npm, Postman, TCPDump/Wireshark, SSH, OpenSSL, video encoding and streaming solutions, Bash, Python, JavaScript, customer-centric communication</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Cloudflare</Employername>
      <Employerlogo>https://logos.yubhub.co/cloudflare.com.png</Employerlogo>
      <Employerdescription>Cloudflare is a technology company that runs one of the world&apos;s largest networks, powering millions of websites and other Internet properties.</Employerdescription>
      <Employerwebsite>https://www.cloudflare.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/cloudflare/jobs/7075269</Applyto>
      <Location>Hybrid</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>ec7cc743-ef4</externalid>
      <Title>Senior Software Engineer II, Inference</Title>
      <Description><![CDATA[<p>We&#39;re seeking a senior software engineer to join our team and lead the design and development of our Kubernetes-native inference platform. As a senior engineer, you will be responsible for leading design reviews, driving architecture, and ensuring the reliability and scalability of our platform.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Leading design reviews and driving architecture within the team</li>
<li>Defining and owning SLIs/SLOs and ensuring post-incident actions land and reliability improves release-over-release</li>
<li>Implementing advanced optimizations such as micro-batch schedulers, speculative decoding, and KV-cache reuse</li>
<li>Strengthening incident posture through capacity planning, autoscaling policy, and rollback/traffic-shift strategies</li>
<li>Mentoring IC1/IC2 engineers and reviewing cross-team designs to elevate coding/testing standards</li>
</ul>
<p>We&#39;re looking for someone with strong coding skills in Python or Go, deep familiarity with networked systems and performance, and hands-on experience with Kubernetes at production scale. If you have experience with inference internals, batching, caching, mixed precision, and streaming token delivery, that&#39;s a plus.</p>
<p>In addition to a competitive salary, we offer a range of benefits including medical, dental, and vision insurance, company-paid life insurance, and flexible PTO. We&#39;re committed to creating a work environment that&#39;s inclusive, diverse, and supportive of our employees&#39; well-being.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$165,000 to $242,000</Salaryrange>
      <Skills>Python, Go, Kubernetes, Networked systems, Performance, Inference internals, Batching, Caching, Mixed precision, Streaming token delivery, CUDA kernels, NCCL/SHARP, RDMA/NUMA, GPU interconnect topologies, Contributions to inference frameworks, Experience with multi-team initiatives</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave is a cloud computing company that provides a platform for building and scaling AI applications.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4604832006</Applyto>
      <Location>Sunnyvale, CA / Bellevue, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>b47cf70c-31a</externalid>
      <Title>Director, Technical Solutions (Big Data/AI)</Title>
      <Description><![CDATA[<p><strong>Job Description</strong></p>
<p>The Director of Data &amp; AI Support Engineering - Bangalore will lead and grow a regional team of Data &amp; AI technical experts in India, focused on providing resiliency and smooth operation of customer production workloads.</p>
<p>This leader will oversee support operations during APJ and EMEA business hours with close alignment with other global teams to ensure 24x7 support coverage through coordination with other regions.</p>
<p>The team resolves complex and long-running data engineering use cases raised by Databricks customers to support the success of live use cases - which includes performance optimization, ensuring resiliency of production jobs, helping customers stabilize workloads on new products and features, and more.</p>
<p>Reporting to the Global Lead of Frontline Support Engineering - Data &amp; AI, you will understand the real-world business problems our customers are solving with data and be committed to helping them achieve the reliability and performance their systems need to meet their goals.</p>
<p><strong>The Impact You Will Have:</strong></p>
<ul>
<li>Serve as the India site leader for an elite team of Data &amp; AI specialists that can provide coverage of customers across EMEA &amp; APJ business hours.</li>
<li>Grow the technical expertise of the team to support successful adoption of new products and features of the Databricks platform for customer production workloads.</li>
<li>Engage with top customers to understand how to support their business needs with their Data &amp; AI strategy, in collaboration with field engineering and sales when required.</li>
<li>Partner with internal product engineering teams to make Databricks products better and more supportable.</li>
<li>Understand how to maintain high reliability of the Databricks platform to successfully achieve customer business goals.</li>
</ul>
<p><strong>Competencies &amp; Requirements:</strong></p>
<ul>
<li>Proven people leadership experience: 6+ years as a manager of managers.</li>
<li>18+ years in the IT industry, with a strong background in Software Engineering with specialization in Data Engineering, ideally with Big Data &amp; Cloud experience.</li>
<li>Experience leading large teams (100+ employees) in engineering, technical support, or consulting. Support experience is not required, but customer-facing experience is highly desirable.</li>
<li>Hands-on experience in at least two of the following at production scale:
<ul>
<li>Big Data (Spark, Hadoop, Kafka)</li>
<li>Machine Learning / Artificial Intelligence projects</li>
<li>Data Science / Streaming use cases</li>
</ul>
</li>
<li>Spark expertise is a big advantage.</li>
<li>Strong background in customer-facing support leadership roles.</li>
<li>Excellent troubleshooting skills across distributed systems.</li>
<li>Strong ownership mindset with the ability to thrive in a fast-paced, startup-like environment with evolving needs.</li>
<li>Bachelor’s/Master’s in Computer Science or equivalent technical field.</li>
</ul>
<p><strong>Benefits:</strong></p>
<p>At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of our employees. For specific details on the benefits offered in your region click here.</p>
<p><strong>Our Commitment to Diversity and Inclusion:</strong></p>
<p>We are committed to fostering an inclusive culture where everyone feels valued, respected, and empowered to contribute their best work.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>executive</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Big Data, Machine Learning, Artificial Intelligence, Data Science, Streaming use cases, Spark, Hadoop, Kafka</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified and democratized data, analytics, and AI platform to over 10,000 organizations worldwide.</Employerdescription>
      <Employerwebsite>https://databricks.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8409447002</Applyto>
      <Location>Bengaluru, India</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>760c3e88-e35</externalid>
      <Title>Senior Product Manager, Data</Title>
      <Description><![CDATA[<p>Job Title: Senior Product Manager, Data</p>
<p>We are seeking a Senior Product Manager to support the development of CoreWeave&#39;s Enterprise Data Platform within the CIO organization. This role will contribute to building a scalable, high-performance data lake and data architecture, integrating data from key sources across Operations, Engineering, Sales, Finance, and other IT partners.</p>
<p>As a Senior Product Manager for Data Infrastructure and Analytics, you will help drive data ingestion, transformation, governance, and analytics enablement. You will collaborate with engineering, analytics, finance, and business teams to help deliver data lake and pipeline orchestration solutions, ensuring accessible data for business insights.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Own and evangelize Data Platform and Business Analytics roadmap and strategy across CoreWeave</li>
<li>Assist with the execution of CoreWeave&#39;s enterprise data architecture, helping enable the data lake and domain-driven data layer</li>
<li>Support the development and enhancement of data ingestion, transformation, and orchestration pipelines for scalability, efficiency, and reliability</li>
<li>Work with the Engineering and Data teams to maintain and enhance data pipelines for both structured and unstructured data, enabling efficient data movement across the organization</li>
<li>Collaborate with Finance, GTM, Infrastructure, Data Center, and Supply Chain teams to help unify and model data from core systems (ERP, CRM, Asset Mgmt, Supply Chain systems, etc.)</li>
<li>Contribute to data governance and quality initiatives, focusing on data consistency, lineage tracking, and compliance with security standards</li>
<li>Support the BI and analytics layer by partnering with stakeholders to enable data products, dashboards, and reporting capabilities</li>
<li>Help prioritize data-driven initiatives, ensuring alignment with business goals and operational needs in coordination with leadership</li>
</ul>
<p>Requirements:</p>
<ul>
<li>5+ years of experience in data product management, data architecture, or enterprise data engineering roles</li>
<li>Familiarity with data lakes, data warehouses, ETL/ELT and streaming pipelines, and data governance frameworks</li>
<li>Hands-on experience with modern data stack technologies (such as Snowflake, BigQuery, Databricks, Apache Spark, Airflow, DBT, Kafka)</li>
<li>Understanding of data modeling, domain-driven design, and creating scalable data platforms</li>
<li>Experience supporting the end-to-end data product lifecycle, including requirements gathering and implementation</li>
<li>Strong collaboration skills with engineering, analytics, and business teams to help deliver data initiatives</li>
<li>Awareness of data security, compliance, and governance best practices</li>
<li>Understanding of BI and analytics platforms (such as Tableau, Looker, Power BI) and supporting self-service analytics</li>
</ul>
<p>Why CoreWeave?</p>
<p>At CoreWeave, we work hard, have fun, and move fast! We&#39;re in an exciting stage of hyper-growth that you will not want to miss out on. We&#39;re not afraid of a little chaos, and we&#39;re constantly learning. Our team cares deeply about how we build our product and how we work together, which is represented through our core values:</p>
<ul>
<li>Be Curious at Your Core</li>
<li>Act Like an Owner</li>
<li>Empower Employees</li>
<li>Deliver Best-in-Class Client Experiences</li>
<li>Achieve More Together</li>
</ul>
<p>We support and encourage an entrepreneurial outlook and independent thinking. We foster an environment that encourages collaboration and provides the opportunity to develop innovative solutions to complex problems. As we get set for takeoff, the growth opportunities within the organization are constantly expanding. You will be surrounded by some of the best talent in the industry, who will want to learn from you, too. Come join us!</p>
<p>Salary Range: $143,000 to $210,000</p>
<p>Benefits:</p>
<ul>
<li>Medical, dental, and vision insurance - 100% paid for by CoreWeave</li>
<li>Company-paid Life Insurance</li>
<li>Voluntary supplemental life insurance</li>
<li>Short and long-term disability insurance</li>
<li>Flexible Spending Account</li>
<li>Health Savings Account</li>
<li>Tuition Reimbursement</li>
<li>Ability to Participate in Employee Stock Purchase Program (ESPP)</li>
<li>Mental Wellness Benefits through Spring Health</li>
<li>Family-Forming support provided by Carrot</li>
<li>Paid Parental Leave</li>
<li>Flexible, full-service childcare support with Kinside</li>
<li>401(k) with a generous employer match</li>
<li>Flexible PTO</li>
<li>Catered lunch each day in our office and data center locations</li>
<li>A casual work environment</li>
<li>A work culture focused on innovative disruption</li>
</ul>
<p>Workplace:</p>
<p>While we prioritize a hybrid work environment, remote work may be considered for candidates located more than 30 miles from an office, based on role requirements for specialized skill sets. New hires will be invited to attend onboarding at one of our hubs within their first month. Teams also gather quarterly to support collaboration.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$143,000 to $210,000</Salaryrange>
      <Skills>data product management, data architecture, enterprise data engineering, data lakes, data warehouses, ETL/ELT and streaming pipelines, data governance frameworks, modern data stack technologies, Snowflake, BigQuery, Databricks, Apache Spark, Airflow, DBT, Kafka, data modeling, domain-driven design, scalable data platforms, BI and analytics platforms, Tableau, Looker, Power BI</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave is a cloud-based platform that enables innovators to build and scale AI with confidence.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4649824006</Applyto>
      <Location>Livingston, NJ / New York, NY / Sunnyvale, CA / Bellevue, WA / San Francisco, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>67b4ccd7-51d</externalid>
      <Title>Senior Software Engineer, Observability Insights</Title>
      <Description><![CDATA[<p>Join CoreWeave&#39;s Observability team, where we are building the next-generation insights layer for AI systems.</p>
<p>Our team empowers internal and external users to understand, troubleshoot, and optimize complex AI workloads by transforming telemetry into actionable insights.</p>
<p>As a Senior Software Engineer on the Observability Insights team, you will lead the development of agentic interfaces and product experiences that sit atop CoreWeave&#39;s telemetry layer.</p>
<p>You&#39;ll design multi-tenant APIs, managed Grafana experiences, and MCP-based tool servers to help customers and internal teams interact with data in innovative ways.</p>
<p>Collaborating closely with PMs and engineering leadership, your work will shape the end-to-end observability experience and influence how people engage with cutting-edge AI infrastructure.</p>
<p><strong>About the role</strong></p>
<ul>
<li>6+ years of experience in software or infrastructure engineering building production-grade backend systems and distributed APIs.</li>
<li>Strong focus on developer-facing infrastructure, with a customer-obsessed approach to SDKs, CLIs, and APIs.</li>
<li>Proficient in reliability engineering, including fault-tolerant design, SLOs, error budgets, and multi-tenant system resilience.</li>
<li>Familiar with observability systems such as ClickHouse, Loki, VictoriaMetrics, Prometheus, and Grafana.</li>
<li>Experienced in agentic applications or LLM-based features, including grounding, tool calling, and operational safety.</li>
<li>Comfortable writing production code primarily in Go, with the ability to integrate Python components when needed.</li>
<li>Collaborative experience in agile teams delivering end-to-end telemetry-to-insights pipelines.</li>
</ul>
<p><strong>Preferred</strong></p>
<ul>
<li>Experience operating Kubernetes clusters at scale, especially for AI workloads.</li>
<li>Hands-on experience with logging, tracing, and metrics platforms in production, with deep knowledge of cardinality, indexing, and query optimization.</li>
<li>Experienced in running distributed systems or API services at cloud scale, including event streaming and data pipeline management.</li>
<li>Familiarity with LLM frameworks, MCP, and agentic tooling (e.g., Langchain, AgentCore).</li>
</ul>
<p><strong>Why CoreWeave?</strong></p>
<p>At CoreWeave, we work hard, have fun, and move fast!</p>
<p>We&#39;re in an exciting stage of hyper-growth that you will not want to miss out on.</p>
<p>We&#39;re not afraid of a little chaos, and we&#39;re constantly learning.</p>
<p>Our team cares deeply about how we build our product and how we work together, which is represented through our core values:</p>
<ul>
<li>Be Curious at Your Core</li>
<li>Act Like an Owner</li>
<li>Empower Employees</li>
<li>Deliver Best-in-Class Client Experiences</li>
<li>Achieve More Together</li>
</ul>
<p>We support and encourage an entrepreneurial outlook and independent thinking.</p>
<p>We foster an environment that encourages collaboration and enables the development of innovative solutions to complex problems.</p>
<p>As we get set for takeoff, the organization&#39;s growth opportunities are constantly expanding.</p>
<p>You will be surrounded by some of the best talent in the industry, who will want to learn from you, too.</p>
<p>Come join us!</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$165,000 to $242,000</Salaryrange>
      <Skills>software engineering, infrastructure engineering, backend systems, distributed APIs, reliability engineering, fault-tolerant design, SLOs, error budgets, multi-tenant system resilience, observability systems, ClickHouse, Loki, VictoriaMetrics, Prometheus, Grafana, agentic applications, LLM-based features, grounding, tool calling, operational safety, Go, Python, Kubernetes, logging, tracing, metrics platforms, cardinality, indexing, query optimization, event streaming, data pipeline management, LLM frameworks, MCP, agent tooling, operating Kubernetes clusters</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave is a cloud computing company that provides a platform for building and scaling AI.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4650163006</Applyto>
      <Location>New York, NY / Sunnyvale, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>9701c504-1a6</externalid>
      <Title>Senior Software Engineer I, Inference</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Senior Software Engineer I to join our team. As a senior engineer, you&#39;ll lead designs, raise engineering standards, and deliver measurable improvements to latency, throughput, and reliability across multiple services. You&#39;ll partner with product, orchestration, and hardware teams to evolve our Kubernetes-native inference platform and meet strict P99 SLAs at scale.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Lead design reviews and drive architecture within the team; decompose multi-service work into clear milestones.</li>
<li>Define and own SLIs/SLOs; ensure post-incident actions land and reliability improves release-over-release.</li>
<li>Implement advanced optimizations (e.g., micro-batch schedulers, speculative decoding, KV-cache reuse) and quantify impact.</li>
<li>Strengthen incident posture: capacity planning, autoscaling policy, graceful degradation, rollback/traffic-shift strategies.</li>
<li>Mentor IC1/IC2 engineers; review cross-team designs and elevate coding/testing standards.</li>
</ul>
<p>Requirements include:</p>
<ul>
<li>3-5 years of industry experience building distributed systems or cloud services.</li>
<li>Strong coding in Python or Go (C++ a plus) and deep familiarity with networked systems and performance.</li>
<li>Hands-on experience with Kubernetes at production scale, CI/CD, and observability stacks (Prometheus, Grafana, OpenTelemetry).</li>
<li>Practical knowledge of inference internals: batching, caching, mixed precision (BF16/FP8), streaming token delivery.</li>
<li>Proven track record improving tail latency (P95/P99) and service reliability through metrics-driven work.</li>
</ul>
<p>Preferred qualifications include contributions to inference frameworks, experience with CUDA kernels, NCCL/SHARP, RDMA/NUMA, or GPU interconnect topologies, and leading multi-team initiatives or partnering with customers on mission-critical launches.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$139,000 to $204,000</Salaryrange>
      <Skills>Python, Go, Kubernetes, CI/CD, Observability stacks, Inference internals, Batching, Caching, Mixed precision, Streaming token delivery, Contributions to inference frameworks, CUDA kernels, NCCL/SHARP, RDMA/NUMA, GPU interconnect topologies</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave is a cloud computing company that provides a platform for building and scaling AI. It was founded in 2017 and became a publicly traded company in March 2025.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4647603006</Applyto>
      <Location>Sunnyvale, CA / Bellevue, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>ece4c581-f94</externalid>
      <Title>Senior Database Reliability Engineer (DBRE) - PostgreSQL</Title>
      <Description><![CDATA[<p>We are looking for a highly skilled Database Reliability Engineer (DBRE) with deep expertise in PostgreSQL at scale and solid experience with MySQL. In this role, you will design, operationalize, and optimize the data persistence layer that powers our large-scale, mission-critical systems.</p>
<p>You will work closely with SRE, Platform, and Engineering teams to ensure performance, reliability, automation, and operational excellence across our database environment. This is a hands-on engineering role focused on building resilient data infrastructure, not just administering it.</p>
<p>Responsibilities:</p>
<ul>
<li>Design, implement, and operate highly available PostgreSQL clusters (physical replication, logical replication, sharding/partitioning, failover automation).</li>
<li>Optimize query performance, indexing strategies, schema design, and storage engines.</li>
<li>Perform capacity planning, growth forecasting, and workload modeling.</li>
<li>Own high-availability strategies including automatic failover, multi-AZ/multi-region setups, and disaster recovery.</li>
</ul>
<p>Automation &amp; Tooling:</p>
<ul>
<li>Develop automation for tasks including provisioning, configuration, backups, failovers, vacuum tuning, and schema management, using tools such as Terraform, Ansible, Kubernetes Operators, or custom tooling.</li>
<li>Build monitoring, alerting, and self-healing systems for PostgreSQL and MySQL.</li>
</ul>
<p>Operations &amp; Incident Response:</p>
<ul>
<li>Lead response during database incidents: performance regressions, replication lag, deadlocks, bloat issues, storage failures, etc.</li>
<li>Conduct root-cause analysis and implement permanent fixes.</li>
</ul>
<p>Cross-Functional Collaboration:</p>
<ul>
<li>Partner with software engineers to review SQL, optimize schemas, and ensure efficient use of PostgreSQL features.</li>
<li>Provide guidance on database-related design patterns, migrations, version upgrades, and best practices.</li>
</ul>
<p>Required Qualifications:</p>
<ul>
<li>4+ years of hands-on PostgreSQL experience in high-volume, distributed, or large-scale production environments.</li>
<li>Strong knowledge of PostgreSQL internals (WAL, MVCC, bloat/vacuum tuning, query planner, indexing, logical replication).</li>
<li>Production experience with MySQL (InnoDB internals, replication, performance tuning).</li>
<li>Advanced SQL and strong understanding of schema design and query optimization.</li>
<li>Experience with Linux systems, networking fundamentals, and systems troubleshooting.</li>
<li>Experience building automation with Go or Python.</li>
<li>Production experience with monitoring tools (Prometheus, Grafana, Datadog, PMM, pg_stat_statements, etc.).</li>
<li>Hands-on experience with cloud environments (AWS or GCP).</li>
</ul>
<p>Preferred/Bonus Qualifications:</p>
<ul>
<li>Experience with PgBouncer, HAProxy, or other connection-pooling/load-balancing layers.</li>
<li>Exposure to event streaming (Kafka, Debezium) and change data capture.</li>
<li>Experience supporting 24/7 production environments with on-call rotation.</li>
<li>Contributions to open-source PostgreSQL ecosystem.</li>
</ul>
<p>This position requires the ability to access federal environments and/or have access to protected federal data. As a condition of employment for this position, the successful candidate must be able to submit documentation establishing U.S. Person status (e.g., a U.S. Citizen, National, Lawful Permanent Resident, Refugee, or Asylee; 22 CFR 120.15) upon hire.</p>
<p>Requires in-person onboarding and travel to our San Francisco, CA HQ office or our Chicago office during the first week of employment.</p>
<p>#LI-Hybrid #LI-LSS1 | Requisition ID: P5979_3307978</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$152,000-$228,000 USD (San Francisco Bay area), $136,000-$204,000 USD (California, excluding San Francisco Bay Area, Colorado, Illinois, New York, and Washington)</Salaryrange>
      <Skills>PostgreSQL, MySQL, Linux systems, Networking fundamentals, Systems troubleshooting, Go, Python, Monitoring tools (Prometheus, Grafana, Datadog, PMM, pg_stat_statements, etc.), Cloud environments (AWS or GCP), PgBouncer, HAProxy, Event streaming (Kafka, Debezium), Change data capture</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Okta</Employername>
      <Employerlogo>https://logos.yubhub.co/okta.com.png</Employerlogo>
      <Employerdescription>Okta is a provider of identity and access management solutions.</Employerdescription>
      <Employerwebsite>https://www.okta.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/okta/jobs/7774364</Applyto>
      <Location>New York, New York</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>beda8a58-d75</externalid>
      <Title>Staff Technical Producer</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Staff Technical Producer to lead the execution and evolution of our internal live and hybrid production capabilities. This role is focused on delivering high-quality experiences for employee communications, executive broadcasts, training, and internal events.</p>
<p>You&#39;ll operate at the intersection of live production, broadcast engineering, and internal tooling, ensuring our teams can communicate effectively at scale. In addition to running critical productions, you&#39;ll define the workflows and standards that power internal media across offices and remote environments globally.</p>
<p>This is a highly cross-functional role requiring strong technical depth, operational excellence, and the ability to support a wide range of internal stakeholders.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li><strong>Lead Internal Productions</strong>
<ul>
<li>Own technical production for internal events including all-hands, executive communications, trainings, and internal broadcasts</li>
<li>Serve as Technical Director or Lead Producer for high-visibility internal moments</li>
<li>Ensure seamless delivery across in-room, livestream, and video conferencing audiences</li>
</ul>
</li>
<li><strong>Build &amp; Scale Internal Production Systems</strong>
<ul>
<li>Design and standardize workflows across internal event spaces, studios, and remote setups</li>
<li>Define and maintain a production playbook to ensure consistency across internal events</li>
<li>Improve and scale internal production infrastructure to support global teams</li>
</ul>
</li>
<li><strong>Operate at a Deep Technical Level</strong>
<ul>
<li>Lead multi-destination signal workflows (in-room, live stream, recording, conferencing platforms)</li>
<li>Integrate AV systems with internal tools (e.g., video conferencing platforms, collaboration tools)</li>
<li>Troubleshoot live production issues quickly with minimal disruption to internal audiences</li>
</ul>
</li>
<li><strong>Partner Across the Company</strong>
<ul>
<li>Work directly with internal stakeholders (Execs, Comms, HR, Enablement, Events) to define production needs</li>
<li>Translate business requirements into clear, executable production plans</li>
<li>Coordinate internal teams and external vendors when needed</li>
</ul>
</li>
<li><strong>Elevate Internal Experience Quality</strong>
<ul>
<li>Continuously improve the quality, reliability, and scalability of internal productions</li>
<li>Create training materials and documentation to enable self-service and team growth</li>
<li>Mentor team members and elevate production capabilities across offices</li>
</ul>
</li>
</ul>
<p><strong>What We&#39;re Looking For</strong></p>
<ul>
<li><strong>Core Requirements</strong>
<ul>
<li>8+ years in live production, broadcast, or media production roles</li>
<li>Strong expertise in:
<ul>
<li>Audio/video signal flow</li>
<li>Video switching / technical direction</li>
<li>Live streaming and remote production tools (e.g., vMix or similar)</li>
<li>Experience supporting distributed or multi-location production environments</li>
</ul>
</li>
<li>Strong communication skills, especially when working with non-technical stakeholders</li>
<li>Experience operating in fast-paced, high-growth environments</li>
<li>BA/BS or equivalent practical experience</li>
</ul>
</li>
<li><strong>Staff-Level Signals</strong>
<ul>
<li>You&#39;ve supported executive-facing or high-visibility internal communications</li>
<li>You design systems and workflows that scale across teams and locations</li>
<li>You operate with high ownership and autonomy in ambiguous environments</li>
<li>You proactively identify risks and ensure reliability for critical internal moments</li>
<li>You influence stakeholders across functions without direct authority</li>
</ul>
</li>
<li><strong>Nice to Have</strong>
<ul>
<li>Experience supporting internal communications, corporate events, or enablement programs</li>
<li>Familiarity with enterprise video conferencing and collaboration tools</li>
<li>Experience with IP-based video (NDI, SRT, 2110) or cloud-based production workflows</li>
<li>Background in motion graphics or internal content production</li>
</ul>
</li>
</ul>
<p><strong>Why This Role Matters</strong></p>
<p>Internal communication is critical to how Databricks operates and scales. In this role, you&#39;ll ensure that employees, from executives to new hires, can connect, learn, and align through high-quality live experiences.</p>
<p>You won&#39;t just run internal events; you&#39;ll build the systems that make internal communication seamless, reliable, and scalable across the company.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$177,700-$244,300 USD</Salaryrange>
      <Skills>Audio/video signal flow, Video switching / technical direction, Live streaming and remote production tools, Experience supporting distributed or multi-location production environments, Strong communication skills, Experience supporting internal communications, corporate events, or enablement programs, Familiarity with enterprise video conferencing and collaboration tools, Experience with IP-based video (NDI, SRT, 2110) or cloud-based production workflows, Background in motion graphics or internal content production</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data, analytics, and AI. It was founded by the original creators of Apache Spark, Delta Lake, and MLflow, and pioneered the lakehouse architecture.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8472697002</Applyto>
      <Location>United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>9aa81908-c43</externalid>
      <Title>Senior Database Reliability Engineer (DBRE) - PostgreSQL</Title>
      <Description><![CDATA[<p>We are looking for a highly skilled Database Reliability Engineer (DBRE) with deep expertise in PostgreSQL at scale and solid experience with MySQL. In this role, you will design, operationalize, and optimize the data persistence layer that powers our large-scale, mission-critical systems.</p>
<p>You will work closely with SRE, Platform, and Engineering teams to ensure performance, reliability, automation, and operational excellence across our database environment. This is a hands-on engineering role focused on building resilient data infrastructure, not just administering it.</p>
<p>Responsibilities:</p>
<ul>
<li>Design, implement, and operate highly available PostgreSQL clusters (physical replication, logical replication, sharding/partitioning, failover automation).</li>
<li>Optimize query performance, indexing strategies, schema design, and storage engines.</li>
<li>Perform capacity planning, growth forecasting, and workload modeling.</li>
<li>Own high-availability strategies including automatic failover, multi-AZ/multi-region setups, and disaster recovery.</li>
</ul>
<p>Automation &amp; Tooling:</p>
<ul>
<li>Develop automation for tasks including provisioning, configuration, backups, failovers, vacuum tuning, and schema management, using tools such as Terraform, Ansible, Kubernetes Operators, or custom tooling.</li>
<li>Build monitoring, alerting, and self-healing systems for PostgreSQL and MySQL.</li>
</ul>
<p>Operations &amp; Incident Response:</p>
<ul>
<li>Lead response during database incidents: performance regressions, replication lag, deadlocks, bloat issues, storage failures, etc.</li>
<li>Conduct root-cause analysis and implement permanent fixes.</li>
</ul>
<p>Cross-Functional Collaboration:</p>
<ul>
<li>Partner with software engineers to review SQL, optimize schemas, and ensure efficient use of PostgreSQL features.</li>
<li>Provide guidance on database-related design patterns, migrations, version upgrades, and best practices.</li>
</ul>
<p>Required Qualifications:</p>
<ul>
<li>4+ years of hands-on PostgreSQL experience in high-volume, distributed, or large-scale production environments.</li>
<li>Strong knowledge of PostgreSQL internals (WAL, MVCC, bloat/vacuum tuning, query planner, indexing, logical replication).</li>
<li>Production experience with MySQL (InnoDB internals, replication, performance tuning).</li>
<li>Advanced SQL and strong understanding of schema design and query optimization.</li>
<li>Experience with Linux systems, networking fundamentals, and systems troubleshooting.</li>
<li>Experience building automation with Go or Python.</li>
<li>Production experience with monitoring tools (Prometheus, Grafana, Datadog, PMM, pg_stat_statements, etc.).</li>
<li>Hands-on experience with cloud environments (AWS or GCP).</li>
</ul>
<p>Preferred/Bonus Qualifications:</p>
<ul>
<li>Experience with PgBouncer, HAProxy, or other connection-pooling/load-balancing layers.</li>
<li>Exposure to event streaming (Kafka, Debezium) and change data capture.</li>
<li>Experience supporting 24/7 production environments with on-call rotation.</li>
<li>Contributions to open-source PostgreSQL ecosystem.</li>
</ul>
<p>This position requires the ability to access federal environments and/or have access to protected federal data. As a condition of employment for this position, the successful candidate must be able to submit documentation establishing U.S. Person status (e.g., a U.S. Citizen, National, Lawful Permanent Resident, Refugee, or Asylee; 22 CFR 120.15) upon hire.</p>
<p>Requires in-person onboarding and travel to our San Francisco, CA HQ office or our Chicago office during the first week of employment.</p>
<p>#LI-Hybrid #LI-LSS1 | Requisition ID: P5979_3307978</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$152,000-$228,000 USD (San Francisco Bay Area), $136,000-$204,000 USD (California, excluding San Francisco Bay Area, Colorado, Illinois, New York, and Washington)</Salaryrange>
      <Skills>PostgreSQL, MySQL, Linux, Networking fundamentals, Systems troubleshooting, Go, Python, Monitoring tools, Cloud environments, PgBouncer, HAProxy, Event streaming, Change data capture, Open-source PostgreSQL ecosystem</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Okta</Employername>
      <Employerlogo>https://logos.yubhub.co/okta.com.png</Employerlogo>
      <Employerdescription>Okta provides identity and access management solutions for businesses.</Employerdescription>
      <Employerwebsite>https://www.okta.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/okta/jobs/7437974</Applyto>
      <Location>Washington, DC</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>60aae9e8-e8b</externalid>
      <Title>Software Engineer, Observability</Title>
      <Description><![CDATA[<p>We&#39;re looking for a skilled Software Engineer to join our Observability team. As a member of this team, you will be responsible for designing and evolving logging, metrics, and tracing pipelines to handle massive data volumes. You will also evaluate and integrate new technologies to enhance Airtable&#39;s observability posture.</p>
<p>Your responsibilities will include guiding and mentoring a growing team of infrastructure engineers, defining and upholding coding standards, partnering with other teams to embed observability throughout the development lifecycle, and owning end-to-end reliability for observability tools.</p>
<p>You will also extend observability to LLM and AI features by instrumenting prompts, model calls, and RAG pipelines to capture latency, reliability, cost, and safety signals. You will design online and offline evaluation loops for LLM quality, build dashboards and alerts for token usage, error rates, and model performance, and connect these signals to tracing for prompt lineage.</p>
<p>To succeed in this role, you will need 6+ years of software engineering experience, with 3+ years focused on observability or infrastructure at scale. You will also need demonstrated success implementing and running production-grade logging, metrics, or tracing systems, proficiency in distributed systems concepts, data streaming pipelines, and container orchestration, and deep hands-on knowledge of tools such as Prometheus, Grafana, Datadog, OpenTelemetry, ELK Stack, Loki, or ClickHouse.</p>
<p>This is a high-impact role that will allow you to lead the modernization of Airtable&#39;s observability stack, influence how every engineer monitors and debugs mission-critical systems, and drive major projects across the engineering organization to build platforms and services that solve observability problems.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Distributed systems concepts, Data streaming pipelines, Container orchestration, Prometheus, Grafana, Datadog, OpenTelemetry, ELK Stack, Loki, ClickHouse</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Airtable</Employername>
      <Employerlogo>https://logos.yubhub.co/airtable.com.png</Employerlogo>
      <Employerdescription>Airtable is a no-code app platform that empowers people to accelerate their most critical business processes. More than 500,000 organisations, including 80% of the Fortune 100, rely on it.</Employerdescription>
      <Employerwebsite>https://airtable.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/airtable/jobs/8400374002</Applyto>
      <Location>San Francisco, CA; New York, NY; Remote (Seattle, WA only)</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>cbeabfab-916</externalid>
      <Title>Software Engineer, Observability</Title>
      <Description><![CDATA[<p>As a Software Engineer on the Observability team, you will design, build, and maintain scalable systems that process and surface telemetry data across distributed environments.</p>
<p>You&#39;ll contribute production-quality code in languages like Go and Python, while improving system reliability through enhanced monitoring, alerting, and incident response practices.</p>
<p>Day to day, you&#39;ll collaborate with cross-functional engineering teams to implement observability best practices, support production systems, and help optimize performance across large-scale infrastructure.</p>
<p>You will also participate in on-call rotations and contribute to continuous improvements based on real-world system behavior.</p>
<p>In addition to your technical skills, you should be able to collaborate effectively with cross-functional teams and communicate complex technical concepts to non-technical stakeholders.</p>
<p>If you&#39;re passionate about building scalable systems and improving system reliability, we&#39;d love to hear from you!</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$109,000 to $145,000</Salaryrange>
      <Skills>Go, Python, Kubernetes, containerization, microservices architectures, observability systems, metrics, logging, tracing, ClickHouse, Elastic, Loki, VictoriaMetrics, Prometheus, Thanos, OpenTelemetry, Grafana, Terraform, modern testing frameworks, deployment strategies, data streaming technologies, AI/ML infrastructure</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave is a cloud computing company that provides a platform for building and scaling AI workloads.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4587675006</Applyto>
      <Location>New York, NY / Sunnyvale, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>ab2d4d68-d1c</externalid>
      <Title>Member of Technical Staff - X Money</Title>
      <Description><![CDATA[<p>We are seeking a talented Software Engineer to join our X Money team, focused on building a revolutionary global payment network that will serve over 600 million users and rival the world&#39;s largest financial institutions.</p>
<p>In this role, you will specialise in backend development, designing and optimising robust microservices to ensure scalability, security, and reliability. You will support full-stack efforts, collaborate with cross-functional teams on payments, fraud detection, and compliance initiatives, and contribute to the creation of a high-scale financial products platform.</p>
<p>Responsibilities:</p>
<ul>
<li>Develop and optimise microservices for high-concurrency transactions using Go, Postgres, and Kafka.</li>
<li>Collaborate on system architecture, testing, and monitoring to ensure uptime and performance.</li>
<li>Build internal tools using frontend technologies as needed to support operational efficiency.</li>
<li>Support the Technical Lead in risk mitigation and align with engineering, product, and compliance teams to drive project success.</li>
<li>Contribute to the development of secure, scalable systems for handling financial data and transactions.</li>
<li>Iterate rapidly on feedback to deliver high-quality solutions in a dynamic environment.</li>
</ul>
<p>Basic Qualifications:</p>
<ul>
<li>5+ years of software engineering experience, with a strong focus on backend development.</li>
<li>Proficiency in Go or similar languages and experience with databases (e.g., Postgres) and streaming systems (e.g., Kafka).</li>
<li>Familiarity with building distributed systems for high-scale, low-latency environments.</li>
<li>Knowledge of handling secure financial data.</li>
<li>Ability to contribute to frontend development for internal tools when required.</li>
<li>Strong communication and problem-solving skills, with a collaborative mindset.</li>
</ul>
<p>Preferred Skills and Experience:</p>
<ul>
<li>Experience in fintech, payments, or regulatory frameworks (e.g., PCI-DSS, AML/KYC).</li>
<li>Prior work in a fast-paced, startup-like environment on greenfield projects.</li>
<li>Comfort navigating ambiguous requirements and iterating based on feedback.</li>
<li>Passion for leveraging AI to transform financial systems.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Go, Postgres, Kafka, backend development, databases, streaming systems, secure financial data, frontend development, fintech, payments, regulatory frameworks, PCI-DSS, AML/KYC, fast-paced environment, greenfield projects, AI transformation</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>xAI</Employername>
      <Employerlogo>https://logos.yubhub.co/xai.com.png</Employerlogo>
      <Employerdescription>xAI creates AI systems to understand the universe and aid humanity in its pursuit of knowledge.</Employerdescription>
      <Employerwebsite>https://www.xai.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/xai/jobs/5007310007</Applyto>
      <Location>Tokyo, JP</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>64989723-d54</externalid>
      <Title>Staff Software Engineer, Platform Streaming (Auth0)</Title>
      <Description><![CDATA[<p>We are looking for a Staff Software Engineer to join our Streaming Foundations team. As a Staff Software Engineer, you will help set the technical direction for the team and influence the engineering roadmap for the Platform&#39;s streaming capabilities. You will design and lead the implementation of our most complex and critical systems for data-intensive use cases. You will research and champion new technologies and architectural patterns to solve strategic challenges and scale the platform.</p>
<p>Your responsibilities will include:</p>
<ul>
<li>Helping set the technical direction for the team and influencing the engineering roadmap for the Platform&#39;s streaming capabilities</li>
<li>Designing and leading the implementation of our most complex and critical systems for data-intensive use cases</li>
<li>Researching and championing new technologies and architectural patterns to solve strategic challenges and scale the platform</li>
<li>Leading and influencing cross-functional initiatives, ensuring technical alignment and successful execution across multiple teams</li>
<li>Improving the operational posture of our systems by designing for observability, reliability, and scalability, and by mentoring others in operational best practices</li>
<li>Coaching and mentoring senior engineers and acting as a technical leader across the engineering organization</li>
</ul>
<p>You will bring to our teams:</p>
<ul>
<li>5+ years of software development experience in a fast-paced, agile environment</li>
<li>Experience working with Golang or Java is preferred</li>
<li>Hands-on experience designing, developing and tuning highly-scalable, event-driven systems</li>
<li>Solid understanding of database fundamentals and experience with event streaming technologies such as Kafka</li>
<li>A passion and interest to work on systems that are highly reliable, maintainable, scalable and secure</li>
</ul>
<p>Extra points:</p>
<ul>
<li>Experience with front-end technologies such as TypeScript and React</li>
<li>Familiarity with cloud providers (AWS, Azure) and container technologies such as Kubernetes, Docker</li>
<li>Familiarity with or interest in the Identity and Access Management (IAM) business domain</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$160,000-$220,000 CAD</Salaryrange>
      <Skills>Golang, Java, database fundamentals, event streaming technologies, Kafka, scalable systems, secure systems, TypeScript, React, cloud providers, container technologies, Kubernetes, Docker, Identity and Access Management</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Auth0</Employername>
      <Employerlogo>https://logos.yubhub.co/okta.com.png</Employerlogo>
      <Employerdescription>Okta is a technology company that provides identity and access management solutions.</Employerdescription>
      <Employerwebsite>https://www.okta.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/okta/jobs/7630523</Applyto>
      <Location>Toronto, Ontario, Canada</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>f18e7306-00c</externalid>
      <Title>Resident Solutions Architect - Financial Services</Title>
      <Description><![CDATA[<p>As a Senior Big Data Solutions Architect (Sr Resident Solutions Architect) in our Professional Services team, you will work with clients on short to medium term customer engagements on their big data challenges using the Databricks platform.</p>
<p>You will deliver data engineering, data science, and cloud technology projects that require integrating with client systems, training, and other technical work to help customers get the most value out of their data.</p>
<p>RSAs are billable and know how to complete projects according to specification with excellent customer service.</p>
<p>You will report to the regional Manager/Lead.</p>
<p>The impact you will have:</p>
<ul>
<li>Work on a variety of impactful customer technical projects which may include designing and building reference architectures, creating how-to&#39;s and productionalizing customer use cases</li>
<li>Work with engagement managers to scope a variety of professional services work with input from the customer</li>
<li>Guide strategic customers as they implement transformational big data projects, 3rd party migrations, including end-to-end design, build and deployment of industry-leading big data and AI applications</li>
<li>Consult on architecture and design, and bootstrap hands-on projects that lead to customers&#39; successful understanding, evaluation, and adoption of Databricks.</li>
<li>Provide an escalated level of support for customer operational issues.</li>
<li>Work with the Databricks technical team, Project Manager, Architect and Customer team to ensure the technical components of the engagement are delivered to meet customer&#39;s needs.</li>
<li>Work with Engineering and Databricks Customer Support to provide product and implementation feedback and to guide rapid resolution for engagement specific product and support issues.</li>
</ul>
<p>What we look for:</p>
<ul>
<li>9+ years of experience in data engineering, data platforms &amp; analytics</li>
<li>Comfortable writing code in either Python or Scala</li>
<li>Working knowledge of two or more common Cloud ecosystems (AWS, Azure, GCP) with expertise in at least one</li>
<li>Deep experience with distributed computing with Apache Spark and knowledge of Apache Spark runtime internals</li>
<li>Familiarity with CI/CD for production deployments</li>
<li>Working knowledge of MLOps</li>
<li>Capable of design and deployment of highly performant end-to-end data architectures</li>
<li>Experience with technical project delivery - managing scope and timelines.</li>
<li>Documentation and white-boarding skills.</li>
<li>Experience working with clients and managing conflicts.</li>
<li>Experience in building scalable streaming and batch solutions using cloud-native components</li>
<li>Travel to customers up to 20% of the time</li>
</ul>
<p>Nice to have:</p>
<ul>
<li>Databricks Certification</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$180,656-$248,360 USD</Salaryrange>
      <Skills>data engineering, data science, cloud technology, Apache Spark, Databricks, CI/CD, MLOps, technical project delivery, documentation, white-boarding, client management, conflict management, scalable streaming, batch solutions, cloud-native components</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a company that provides data and AI solutions. It was founded by the original creators of Lakehouse, Apache Spark, Delta Lake, and MLflow.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8461325002</Applyto>
      <Location>Philadelphia, Pennsylvania</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>06fc58d3-e57</externalid>
      <Title>Senior Software Engineer - Payments</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Senior Software Engineer to join our Payments AI/ML Foundation team. As a member of this team, you will design and own core platform components that power AI across Airbnb Payments.</p>
<p>The Payments AI/ML Foundation team builds the shared platforms, services, and guardrails that enable product and data teams across Airbnb Payments to deliver trustworthy, efficient, and scalable AI capabilities.</p>
<p>As a Senior Software Engineer on Payments AI/ML Foundation, you will:</p>
<ul>
<li>Design and own core platform components that power AI across Airbnb Payments</li>
<li>Partner with product, infra, data science, and operations to translate ambiguous requirements into robust systems with clear SLAs</li>
<li>Build tooling and automation that makes AI development safer and faster</li>
<li>Raise the bar on reliability, performance, and governance for models and agents in production</li>
</ul>
<p>A typical day for this role will involve envisioning, championing, and supporting the development of novel ML systems, product integrations, and performance optimizations to solve real-world problems.</p>
<p>You will work collaboratively with cross-functional partners including software engineers, product managers, operations and data scientists, identify opportunities for business impact, understand, refine, and prioritize requirements for AI/ML models, drive engineering decisions, and quantify impact.</p>
<p>You will productionize and operate AI/ML solutions and pipelines at scale, hands-on, including both batch and real-time use cases.</p>
<p>You will lead, mentor, challenge, and grow an enthusiastic, collaborative AI/ML culture within the organization.</p>
<p>Your expertise should include:</p>
<ul>
<li>A degree in Computer Science or equivalent qualification</li>
<li>7+ years of industry experience in backend/platform engineering (or equivalent), including ownership of production systems</li>
<li>Strong programming skills in Python, plus solid data engineering foundations</li>
<li>Good to have: experience with AI frameworks (e.g., LangGraph, Strands), orchestration (Airflow/Kubeflow), and streaming/processing (Kafka/Spark/Ray)</li>
<li>Proven track record building observability for AI systems (metrics/logging/traces), with automated alerting, dashboards, and SLO management</li>
<li>Experience designing model governance and safety guardrails (prompt/version control, red-teaming, policy checks, approvals)</li>
<li>Demonstrated ability to drive performance/cost optimizations (profiling, batching, caching, quantization, autoscaling) in high-traffic environments</li>
<li>Practical familiarity with vector search/embedding pipelines (indexing strategies, consistency, reindexing, retention)</li>
</ul>
<p>Our commitment to inclusion and belonging:</p>
<p>Airbnb is committed to working with the broadest talent pool possible. We believe diverse ideas foster innovation and engagement, and allow us to attract creatively-led people, and to develop the best products, services and solutions.</p>
<p>All qualified individuals are encouraged to apply.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, AI frameworks, Orchestration, Streaming/processing, Observability, Model governance, Performance/cost optimizations, Vector search/embedding pipelines</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Airbnb</Employername>
      <Employerlogo>https://logos.yubhub.co/airbnb.com.png</Employerlogo>
      <Employerdescription>Airbnb is a global online marketplace for short-term vacation rentals, with over 5 million hosts and 2 billion guest arrivals.</Employerdescription>
      <Employerwebsite>https://www.airbnb.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/airbnb/jobs/7581839</Applyto>
      <Location>Bangalore, India</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>c6d7f1a0-882</externalid>
      <Title>Resident Solutions Architect - Mumbai</Title>
      <Description><![CDATA[<p>We are seeking an experienced Resident Solution Architect (RSA) to join our Professional Services team and work directly with strategic customers on their data and AI transformation initiatives using the Databricks platform.</p>
<p>As an RSA, you will serve as a trusted technical advisor and hands-on expert, guiding customers to solve complex big data challenges using the Databricks platform.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Collaborating with customers to understand their data and AI transformation goals and developing tailored solutions using the Databricks platform</li>
<li>Designing and implementing scalable and secure data architectures using Apache Spark, Delta Lake, and other Databricks technologies</li>
<li>Providing expert-level technical guidance and support to customers during the implementation process</li>
<li>Identifying and addressing potential roadblocks and providing creative solutions to overcome them</li>
</ul>
<p>Requirements include:</p>
<ul>
<li>10+ years of experience with Big Data Technologies such as Apache Spark, Kafka, and Data Lakes in a customer-facing post-sales, technical architecture, or consulting role</li>
<li>4+ years of experience as a Solution Architect creating designs and solving Big Data challenges for customers</li>
<li>Expertise in Apache Spark, distributed computing, and Databricks platform capabilities</li>
<li>Comfortable writing code in Python, PySpark, and Scala</li>
<li>Exceptional SQL, Spark SQL, Spark-streaming skills</li>
<li>Advanced knowledge of Spark optimizations, Delta, Databricks Lakehouse Platforms</li>
<li>Expertise in Azure</li>
<li>Expertise in NoSQL databases (MongoDB, Redis, HBase)</li>
<li>Expertise in data governance and security (Unity Catalog, RBAC)</li>
<li>Ability to work with Partner Organization and deliver complex programs</li>
<li>Ability to lead large technical delivery teams</li>
<li>Understands the larger competitive landscape, such as EMR, Snowflake, and SageMaker</li>
<li>Experience of migration from On-prem / Cloud to Databricks is a plus</li>
<li>Excellent communication and client-facing consulting skills, with the ability to simplify complex technical concepts</li>
<li>Willingness to travel for onsite customer engagements within India</li>
<li>Documentation and white-boarding skills</li>
</ul>
<p>Good-to-have Skills:</p>
<ul>
<li>Experience with ML libraries/frameworks: Scikit-learn, TensorFlow, PyTorch</li>
<li>Familiarity with MLOps tools and processes, including MLflow for tracking and deployment</li>
<li>Experience delivering LLM and GenAI solutions at scale (RAG architectures, prompt engineering)</li>
<li>Extensive experience on Hadoop, Trino, Ranger and other open-source technology stack</li>
<li>Expertise on cloud platforms like AWS and GCP</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Apache Spark, Kafka, Data Lakes, Python, PySpark, Scala, SQL, Spark SQL, Spark-streaming, Azure, NoSQL databases, data governance, security, Unity Catalog, RBAC, ML libraries/frameworks, MLOps tools and processes, LLM and GenAI solutions, Hadoop, Trino, Ranger, AWS, GCP</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data, analytics, and AI. It was founded by the original creators of Lakehouse, Apache Spark, Delta Lake, and MLflow.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8107166002</Applyto>
      <Location>Mumbai, India</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>aae5c27d-20b</externalid>
      <Title>Senior Database Reliability Engineer (DBRE) - PostgreSQL</Title>
      <Description><![CDATA[<p>We are looking for a highly skilled Database Reliability Engineer (DBRE) with deep expertise in PostgreSQL at scale and solid experience with MySQL. In this role, you will design, operationalize, and optimize the data persistence layer that powers our large-scale, mission-critical systems.</p>
<p>You will work closely with SRE, Platform, and Engineering teams to ensure performance, reliability, automation, and operational excellence across our database environment. This is a hands-on engineering role focused on building resilient data infrastructure, not just administering it.</p>
<p>Responsibilities:</p>
<ul>
<li>Design, implement, and operate highly available PostgreSQL clusters (physical replication, logical replication, sharding/partitioning, failover automation).</li>
<li>Optimize query performance, indexing strategies, schema design, and storage engines.</li>
<li>Perform capacity planning, growth forecasting, and workload modeling.</li>
<li>Own high-availability strategies including automatic failover, multi-AZ/multi-region setups, and disaster recovery.</li>
</ul>
<p>Automation &amp; Tooling:</p>
<ul>
<li>Develop automation for any and all tasks including but not limited to: provisioning, configuration, backups, failovers, vacuum tuning, and schema management using tools such as Terraform, Ansible, Kubernetes Operators, or custom tooling.</li>
<li>Build monitoring, alerting, and self-healing systems for PostgreSQL and MySQL.</li>
</ul>
<p>Operations &amp; Incident Response:</p>
<ul>
<li>Lead response during database incidents: performance regressions, replication lag, deadlocks, bloat issues, storage failures, etc.</li>
<li>Conduct root-cause analysis and implement permanent fixes.</li>
</ul>
<p>Cross-Functional Collaboration:</p>
<ul>
<li>Partner with software engineers to review SQL, optimize schemas, and ensure efficient use of PostgreSQL features.</li>
<li>Provide guidance on database-related design patterns, migrations, version upgrades, and best practices.</li>
</ul>
<p>Required Qualifications:</p>
<ul>
<li>4+ years of hands-on PostgreSQL experience in high-volume, distributed, or large-scale production environments.</li>
<li>Strong knowledge of PostgreSQL internals (WAL, MVCC, bloat/vacuum tuning, query planner, indexing, logical replication).</li>
<li>Production experience with MySQL (InnoDB internals, replication, performance tuning).</li>
<li>Advanced SQL and strong understanding of schema design and query optimization.</li>
<li>Experience with Linux systems, networking fundamentals, and systems troubleshooting.</li>
<li>Experience building automation with Go or Python.</li>
<li>Production experience with monitoring tools (Prometheus, Grafana, Datadog, PMM, pg_stat_statements, etc.).</li>
<li>Hands-on experience with cloud environments (AWS or GCP).</li>
</ul>
<p>Preferred/Bonus Qualifications:</p>
<ul>
<li>Experience with PgBouncer, HAProxy, or other connection-pooling/load-balancing layers.</li>
<li>Exposure to event streaming (Kafka, Debezium) and change data capture.</li>
<li>Experience supporting 24/7 production environments with on-call rotation.</li>
<li>Contributions to open-source PostgreSQL ecosystem.</li>
</ul>
<p>This position requires the ability to access federal environments and/or have access to protected federal data. As a condition of employment for this position, the successful candidate must be able to submit documentation establishing U.S. Person status (e.g. a U.S. Citizen, National, Lawful Permanent Resident, Refugee, or Asylee. 22 CFR 120.15) upon hire.</p>
<p>Requires in-person onboarding and travel to our San Francisco, CA HQ office or our Chicago office during the first week of employment.</p>
<p>Requisition ID: P5979_3307978</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$152,000-$228,000 USD</Salaryrange>
      <Skills>PostgreSQL, MySQL, SQL, Linux, Go, Python, Monitoring tools, Cloud environments, PgBouncer, HAProxy, Event streaming, Change data capture</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Okta</Employername>
      <Employerlogo>https://logos.yubhub.co/okta.com.png</Employerlogo>
      <Employerdescription>Okta is a technology company that provides identity and access management solutions.</Employerdescription>
      <Employerwebsite>https://www.okta.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/okta/jobs/7436028</Applyto>
      <Location>Bellevue, Washington</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>456f029f-2e2</externalid>
      <Title>Principal Software Engineer</Title>
      <Description><![CDATA[<p>As a Principal Software Engineer on our Go To Market Store (GTM Store) and ZoomInfo Data Platform (ZDP) team, you&#39;ll play a pivotal role in developing ZoomInfo&#39;s next-generation unified data platform.</p>
<p>You&#39;ll architect and implement infrastructure that powers our GraphQL-based federated query system for seamless data access across platforms including BigTable, BigQuery, and Solr+.</p>
<p>This is a unique opportunity to influence the technical direction of ZoomInfo&#39;s core data infrastructure, addressing complex challenges such as data freshness, multi-tenant isolation, and real-time data processing at scale.</p>
<p>Responsibilities:</p>
<ul>
<li>Design and build scalable infrastructure for GTM Store and ZDP with sub-second query latency.</li>
<li>Architect and implement metadata-driven GraphQL APIs for dynamic schema generation and query federation.</li>
<li>Develop asynchronous secondary indexing systems for scaling capacity and reducing primary data store load.</li>
<li>Design real-time analytics streaming data pipelines from BigTable to BigQuery.</li>
<li>Develop data mutation and deletion frameworks supporting GDPR compliance and schema evolution.</li>
<li>Implement CDC pipelines and calculated field processing for derived data views.</li>
<li>Build observability and monitoring solutions for real-time issue diagnosis across distributed data systems.</li>
<li>Create batch and streaming data processing workflows for complex relationships at scale.</li>
<li>Collaborate with engineering leaders and product managers to define the technical roadmap.</li>
<li>Mentor engineers and establish best practices for cloud-native data infrastructure development.</li>
<li>Partner with cross-functional teams to address data platform requirements and challenges.</li>
<li>Drive solutions for data freshness, query performance, and system reliability challenges.</li>
</ul>
<p>Qualifications:</p>
<ul>
<li>Bachelor&#39;s degree in Computer Science, Software Engineering, or related field (or equivalent experience).</li>
<li>10+ years of software engineering experience building large-scale data platforms.</li>
<li>Expertise with distributed NoSQL databases and data warehousing systems.</li>
<li>Strong experience with Java 8+, Scala, Kotlin, GoLang for data systems development.</li>
<li>Proven experience with GCP or AWS and cloud-native architectures.</li>
<li>Experience with streaming/real-time data processing technologies.</li>
<li>Strong system design skills for architecting multi-tenant, distributed systems.</li>
<li>Hands-on experience with Google Cloud Platform services.</li>
<li>Knowledge of CDC patterns, event sourcing, and streaming architectures.</li>
<li>Experience solving data freshness and consistency challenges in distributed systems.</li>
<li>Background in building observability and monitoring solutions for data platforms.</li>
<li>Familiarity with metadata management and schema evolution.</li>
<li>Experience with Kubernetes for deploying data services.</li>
<li>SQL query optimization and performance tuning expertise.</li>
<li>Experience building GraphQL APIs with federated or metadata-driven schema generation.</li>
<li>Strong problem-solving skills and the ability to debug complex distributed systems issues.</li>
<li>Excellent communication skills for explaining technical decisions to diverse audiences.</li>
<li>Self-directed with the ability to drive initiatives independently while collaborating with teams.</li>
<li>Passion for building reliable, observable, and maintainable systems.</li>
<li>Experience promoting diverse, inclusive work environments.</li>
</ul>
<p>Actual compensation offered will be based on factors such as the candidate’s work location, qualifications, skills, experience and/or training. Your recruiter can share more information about the specific salary range for your desired work location during the hiring process.</p>
<p>We want our employees and their families to thrive. In addition to comprehensive benefits we offer holistic mind, body and lifestyle programs designed for overall well-being. Learn more about ZoomInfo benefits here.</p>
<p>Below is the US base salary for this position. Additional compensation such as Bonus, Commission, Equity and other benefits may also apply.</p>
<p>$163,800-$257,400 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$163,800-$257,400 USD</Salaryrange>
      <Skills>Java 8+, Scala, Kotlin, GoLang, GCP, AWS, cloud-native architectures, streaming/real-time data processing technologies, distributed NoSQL databases, data warehousing systems, metadata management, schema evolution, Kubernetes, SQL query optimization, performance tuning, GraphQL APIs, federated or metadata-driven schema generation</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>ZoomInfo</Employername>
      <Employerlogo>https://logos.yubhub.co/zoominfo.com.png</Employerlogo>
      <Employerdescription>ZoomInfo is a Go-To-Market Intelligence Platform that provides AI-ready insights, trusted data, and advanced automation to businesses.</Employerdescription>
      <Employerwebsite>https://www.zoominfo.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/zoominfo/jobs/8243004002</Applyto>
      <Location>Remote-US-CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>bd9625d9-99b</externalid>
      <Title>ML Infrastructure Engineer, Safeguards</Title>
      <Description><![CDATA[<p>We are seeking a Machine Learning Infrastructure Engineer to join our Safeguards organization, where you&#39;ll build and scale the critical infrastructure that powers our AI safety systems.</p>
<p>As part of the Safeguards team, you&#39;ll design and implement ML infrastructure that powers Claude safety. Your work will directly contribute to making AI systems more trustworthy and aligned with human values, ensuring our models operate safely as they become more capable.</p>
<p>Responsibilities:</p>
<ul>
<li>Design and build scalable ML infrastructure to support real-time and batch classifier and safety evaluations across our model ecosystem</li>
<li>Build monitoring and observability tools to track model performance, data quality, and system health for safety-critical applications</li>
<li>Collaborate with research teams to productionize safety research, translating experimental safety techniques into robust, scalable systems</li>
<li>Optimize inference latency and throughput for real-time safety evaluations while maintaining high reliability standards</li>
<li>Implement automated testing, deployment, and rollback systems for ML models in production safety applications</li>
<li>Partner with Safeguards, Security, and Alignment teams to understand requirements and deliver infrastructure that meets safety and production needs</li>
<li>Contribute to the development of internal tools and frameworks that accelerate safety research and deployment</li>
</ul>
<p>You may be a good fit if you:</p>
<ul>
<li>Have 5+ years of experience building production ML infrastructure, ideally in safety-critical domains like fraud detection, content moderation, or risk assessment</li>
<li>Are proficient in Python and have experience with ML frameworks like PyTorch, TensorFlow, or JAX</li>
<li>Have hands-on experience with cloud platforms (AWS, GCP) and container orchestration (Kubernetes)</li>
<li>Understand distributed systems principles and have built systems that handle high-throughput, low-latency workloads</li>
<li>Have experience with data engineering tools and building robust data pipelines (e.g., Spark, Airflow, streaming systems)</li>
<li>Are results-oriented, with a bias towards reliability and impact in safety-critical systems</li>
<li>Enjoy collaborating with researchers and translating cutting-edge research into production systems</li>
<li>Care deeply about AI safety and the societal impacts of your work</li>
</ul>
<p>Strong candidates may have experience with:</p>
<ul>
<li>Working with large language models and modern transformer architectures</li>
<li>Implementing A/B testing frameworks and experimentation infrastructure for ML systems</li>
<li>Developing monitoring and alerting systems for ML model performance and data drift</li>
<li>Building automated labeling systems and human-in-the-loop workflows</li>
<li>Experience in trust &amp; safety, fraud prevention, or content moderation domains</li>
<li>Knowledge of privacy-preserving ML techniques and compliance requirements</li>
<li>Contributing to open-source ML infrastructure projects</li>
</ul>
<p>We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$320,000-$405,000 USD</Salaryrange>
      <Skills>Python, PyTorch, TensorFlow, JAX, Cloud platforms (AWS, GCP), Container orchestration (Kubernetes), Distributed systems principles, Data engineering tools (Spark, Airflow, streaming systems), Large language models and modern transformer architectures, A/B testing frameworks and experimentation infrastructure for ML systems, Monitoring and alerting systems for ML model performance and data drift, Automated labeling systems and human-in-the-loop workflows, Trust &amp; safety, fraud prevention, or content moderation domains, Privacy-preserving ML techniques and compliance requirements, Open-source ML infrastructure projects</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation that focuses on creating reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/4778843008</Applyto>
      <Location>San Francisco, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>5aacaad3-05b</externalid>
      <Title>Senior Machine Learning Engineer, Payments</Title>
      <Description><![CDATA[<p>Job Title: Senior Machine Learning Engineer, Payments</p>
<p>Location: Remote-USA</p>
<p>The Payments team at Airbnb is responsible for everything related to settling money in Airbnb&#39;s global marketplace. As a Senior Machine Learning Engineer for Payments, you will be the catalyst that transforms bold AI innovation into production systems that make the Airbnb Payments experience feel effortless and secure.</p>
<p>Responsibilities:</p>
<ul>
<li>Spearhead LLM agents, real-time anomaly detectors, and other breakthrough solutions that solve real-world problems and create product magic.</li>
<li>Collaborate with product, engineering, ops, and data science to spot high-leverage opportunities, refine AI/ML requirements, make principled architecture choices, and measure business value with clear, data-driven metrics.</li>
<li>Design, train, deploy, and operate large-scale AI applications for both batch and streaming workloads, ensuring low latency, high reliability, and continuous improvement via automated monitoring and retraining loops.</li>
<li>Mentor and inspire teammates, fostering a collaborative, experimentation-driven environment where cutting-edge research meets production excellence and every engineer is empowered to push AI boundaries at Airbnb.</li>
</ul>
<p>Your Expertise:</p>
<ul>
<li>5+ years of industry experience in applied AI/ML, inclusive of an MS or PhD in relevant fields.</li>
<li>Strong programming (Python/Java) and data engineering skills.</li>
<li>Proven mastery of modern AI/LLM workflows: prompt engineering, fine-tuning (LoRA, RLHF), hallucination mitigation, safety guardrails, and rigorous online/offline testing to minimize training/inference drift and ensure reliable outcomes.</li>
<li>Hands-on experience with at least three of the following: PyTorch/TensorFlow, scalable inference stacks, vector search, orchestration/MLOps platforms (Kubeflow, Airflow), large-scale data streaming &amp; processing (Spark, Ray, Kafka).</li>
<li>Demonstrated success designing, deploying, and monitoring production AI systems (e.g., personalization engines, generative content services), complete with drift/cost/latency monitoring, automated retraining triggers, and cross-functional collaboration that translates ambiguous business needs into measurable AI impact.</li>
<li>Prior knowledge of AI/ML applications in the Payments domain is highly desirable.</li>
</ul>
<p>Our Commitment To Inclusion &amp; Belonging:</p>
<p>Airbnb is committed to working with the broadest talent pool possible. We believe diverse ideas foster innovation and engagement, and allow us to attract creatively led people, and to develop the best products, services, and solutions.</p>
<p>How We&#39;ll Take Care of You:</p>
<p>Our job titles may span more than one career level. The actual base pay is dependent upon many factors, such as training, transferable skills, work experience, business needs, and market demands. The base pay range is subject to change and may be modified in the future. This role may also be eligible for bonus, equity, benefits, and Employee Travel Credits.</p>
<p>Pay Range: $191,000-$223,000 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$191,000-$223,000 USD</Salaryrange>
      <Skills>Python, Java, PyTorch, TensorFlow, scalable inference stacks, vector search, orchestration/MLOps platforms, large-scale data streaming &amp; processing</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Airbnb</Employername>
      <Employerlogo>https://logos.yubhub.co/airbnb.com.png</Employerlogo>
      <Employerdescription>Airbnb is a global online marketplace for short-term vacation rentals. It was founded in 2007 and has since grown to become one of the largest online marketplaces in the world.</Employerdescription>
      <Employerwebsite>https://www.airbnb.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/airbnb/jobs/7755758</Applyto>
      <Location>Remote-USA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>0296d297-399</externalid>
      <Title>Engineering Manager, SSCS: AI Governance</Title>
      <Description><![CDATA[<p>As the Engineering Manager, AI Governance, you&#39;ll lead the team building a paid SKU that helps regulated enterprise customers govern GitLab Duo agent activity across the software development lifecycle.</p>
<p>This role sits at the center of GitLab&#39;s AI and security strategy: you&#39;ll build and support the engineering team, create predictable delivery across a multi-phase roadmap, and help bring visibility, control, and audit evidence into GitLab for customers with strict compliance needs.</p>
<p>You&#39;ll report to the SSCS Senior Engineering Manager and work closely with Product and Design partners to turn a fast-moving market need into a reliable product.</p>
<p>In your first year, you&#39;ll shape how the team operates, grow the organization, and drive delivery across core areas including the audit event system, policy enforcement capabilities, and governance reporting experiences.</p>
<p>This is a strong fit if you&#39;re energized by building teams and products at the same time, especially in areas where AI, compliance, and software supply chain security come together.</p>
<p>Responsibilities:</p>
<ul>
<li>Lead the AI Governance engineering team and support its growth as the product and roadmap expand, building a high-performing organization that delivers roadmap commitments on schedule.</li>
<li>Own delivery planning and execution across the AI Governance roadmap, including audit events, registry and policy controls, and governance reporting, to ship key milestones on schedule and keep roadmap delivery predictable.</li>
<li>Build the team by partnering with Talent Acquisition, running hiring processes, and helping attract backend engineering talent across levels to meet hiring goals tied to roadmap needs.</li>
<li>Partner with Product, Design, and peer engineering leaders to prioritize work, plan capacity, and maintain clear alignment on scope and sequencing to reduce delivery delays and tradeoffs.</li>
<li>Collaborate with the Duo Agent Platform team and other adjacent teams to deliver systems that work reliably across product boundaries and reduce integration issues in production.</li>
<li>Develop engineers through regular 1:1s, performance feedback, and career development conversations in an all-remote environment to support team growth and improve retention.</li>
<li>Drive engineering quality through strong testing practices, sound architecture, and a delivery cadence that builds customer trust and reduces production defects.</li>
<li>Represent the team in stage planning and section-level leadership reviews, providing clear updates on progress, risks, and tradeoffs to support timely decisions and keep roadmap execution on track.</li>
</ul>
<p>Requirements:</p>
<ul>
<li>Over 3 years of experience leading backend product engineering teams in areas such as security, compliance, observability, or AI-related systems.</li>
<li>Technical knowledge of audit systems, event streaming, policy enforcement, or compliance tooling, with the ability to guide architectural decisions.</li>
<li>Track record of hiring, developing, and supporting engineers across different levels and helping teams grow sustainably.</li>
<li>Comfort working in an asynchronous, documentation-focused organization with collaborators across multiple time zones.</li>
<li>Ability to manage cross-functional work involving Product, Design, Legal, and adjacent engineering teams.</li>
<li>Familiarity with compliance, audit, or governance products, especially in environments serving regulated organizations.</li>
<li>Understanding of AI agent infrastructure, large language model orchestration, or Model Context Protocol tooling, with the ability to apply that knowledge to technical direction and team planning.</li>
<li>Ability to recognize transferable experience and evaluate candidates based on relevant skills across enterprise software, distributed systems, or regulated product environments.</li>
</ul>
<p>About the team: The AI Governance team is part of GitLab&#39;s Software Supply Chain Security stage and focuses on making Duo agent activity inside GitLab auditable, policy-governed, and reportable for enterprise compliance use cases.</p>
<p>We work closely with a peer Engineering Manager, a Product Manager, and a Designer, and collaborate asynchronously with partner teams across regions to deliver governance capabilities that fit naturally into GitLab&#39;s platform.</p>
<p>Our work is centered on helping regulated customers adopt AI with confidence while GitLab expands its AI-powered offerings.</p>
<p>For more on how related teams work, see Team Handbook Page.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>audit systems, event streaming, policy enforcement, compliance tooling, backend product engineering, security, compliance, observability, AI-related systems, large language model orchestration, Model Context Protocol tooling</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>GitLab</Employername>
      <Employerlogo>https://logos.yubhub.co/about.gitlab.com.png</Employerlogo>
      <Employerdescription>GitLab is an intelligent orchestration platform for DevSecOps, used by over 50 million registered users and more than 50% of the Fortune 100.</Employerdescription>
      <Employerwebsite>https://about.gitlab.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/gitlab/jobs/8477935002</Applyto>
      <Location>Remote, India</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>9cd0420a-99d</externalid>
      <Title>Network Engineer, Capacity and Efficiency</Title>
      <Description><![CDATA[<p><strong>About the Role</strong></p>
<p>We&#39;re looking for a network engineer who thinks in metrics first. You will use deep networking knowledge and rigorous measurement to figure out where and how bandwidth, latency, and dollars are being spent, find optimization opportunities, and land them.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Build the network observability stack. Design and deploy telemetry pipelines (sFlow/IPFIX, gNMI streaming, eBPF host probes) that turn packet counters into per-flow, per-tenant, per-workload cost and utilization data. Own the SLIs for backbone and DCN fabric health.</li>
<li>Hunt for efficiency. Analyze inter-region traffic patterns, identify hot links and stranded capacity, and quantify the dollar impact. Build the models that tell us whether we should buy more capacity or move the workload.</li>
<li>Own QoS and traffic engineering. Design and operate traffic classification, marking, and shaping across the backbone. Make sure bulk checkpoint transfers don’t starve latency-sensitive inference, and that we’re not paying premium cross-region rates for traffic that could take the cheap path.</li>
<li>Drive cost attribution. Tie network spend (egress, interconnect ports, transit, optical leases) back to the teams and workloads that generate it. Make network cost a first-class input to capacity planning and workload placement decisions.</li>
<li>Influence decisions you don&#39;t own. A large fraction of this role is convincing other teams to act on what your data shows: making the case to research that a traffic pattern needs to change, to finance that an interconnect tranche is worth buying, to Systems Networking that a QoS policy needs rewriting.</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>Have 5+ years operating large-scale production networks: data center fabrics (spine-leaf, Clos), backbone/WAN, or hyperscaler-adjacent environments.</li>
<li>Are genuinely fluent across the stack: BGP (including policy and communities), ECMP, VXLAN/EVPN or equivalent overlays, QoS (DSCP, queuing, shaping), and L1/optical basics (DWDM, coherent, LAGs).</li>
<li>Know at least one major CSP’s networking model deeply, whether AWS (VPC, TGW, Direct Connect, Gateway Load Balancer) or GCP (Shared VPC, Interconnect, Cloud Router, Network Connectivity Center), and understand how their overlays interact with physical underlays.</li>
<li>Have built or operated network telemetry at scale: streaming telemetry (gNMI/OpenConfig), flow export (sFlow, IPFIX, NetFlow), or eBPF-based host-side instrumentation. You can reason about sampling, cardinality, and storage tradeoffs.</li>
<li>Are comfortable writing Python or Go to build tooling you’ll ship to production: telemetry pipelines, infrastructure-as-code, and config management and automation for network devices.</li>
<li>Think quantitatively by default. You reach for a notebook or a Grafana query before you reach for an opinion, and you can turn messy counter data into a defensible cost model.</li>
<li>Communicate crisply. You can explain to a finance partner why a 10% egress reduction matters, and to a network engineer why a specific ECMP imbalance is costing real money.</li>
</ul>
<p><strong>Nice to Have</strong></p>
<ul>
<li>SRE experience for large-scale network infrastructure: designing for reliability, defining SLOs/SLIs for network services, capacity planning with error budgets, and incident response for network-impacting outages at scale.</li>
<li>Background on a cloud provider&#39;s networking team or a cloud networking product team: building or operating the interconnect, backbone, or SDN control plane from the provider side, not just consuming it as a customer.</li>
<li>Familiarity with AI/ML infrastructure traffic patterns, such as collective communication (all-reduce, all-gather), checkpoint/weight transfer, and inference serving, and how these stress networks differently than traditional workloads in terms of burst behavior, flow synchronization, and bandwidth symmetry.</li>
<li>Experience with HPC fabrics like InfiniBand, RoCE v2, lossless Ethernet, or custom high-radix topologies and an understanding of how job placement, congestion management, and adaptive routing interact at scale.</li>
<li>Background in traffic engineering for large backbones and the operational judgment to know when TE is worth the complexity.</li>
<li>Hands-on time with multi-cloud connectivity: cross-cloud peering, private interconnect products, and the billing models that come with them.</li>
<li>Experience building cost/chargeback systems for shared infrastructure, or FinOps exposure in a large cloud environment.</li>
</ul>
<p><strong>Representative Projects</strong></p>
<ul>
<li>Build a per-flow cost attribution pipeline that traces every byte of cross-region egress back to the team and workload that generated it</li>
<li>Design QoS policy for the private backbone that prevents bulk checkpoint transfers from starving inference traffic</li>
<li>Model whether it&#39;s cheaper to buy an additional 1.6Tb interconnect tranche or to re-route traffic through existing capacity</li>
<li>Instrument DCN fabric utilization with streaming telemetry and build the Grafana dashboards that become the team&#39;s source of truth for network observability</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>network engineering, network observability, telemetry pipelines, sFlow/IPFIX, gNMI streaming, eBPF host probes, BGP, ECMP, VXLAN/EVPN, QoS, DSCP, queuing, shaping, L1/optical basics, DWDM, coherent, LAGs, AWS, GCP, cloud networking, infrastructure-as-code, config management, automation, Python, Go, quantitative analysis, cost modeling, communication, SRE, cloud provider&apos;s networking team, cloud networking product team, AI/ML infrastructure traffic patterns, HPC fabrics, traffic engineering, multi-cloud connectivity, cost/chargeback systems, FinOps</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a technology company that creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://anthropic.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5177143008</Applyto>
      <Location>San Francisco, CA | New York City, NY</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>1485bbbe-6b5</externalid>
      <Title>Member of Technical Staff - X Money</Title>
      <Description><![CDATA[<p>Join our X Money team as a talented Software Engineer to build a revolutionary global payment network serving over 600 million users. You will specialise in backend development, designing and optimising robust microservices for scalability, security, and reliability. Collaborate with cross-functional teams on payments, fraud detection, and compliance initiatives, and contribute to the creation of a high-scale financial products platform.</p>
<p>Responsibilities:</p>
<ul>
<li>Develop backend services, APIs, and data models to support high-volume, multi-user environments.</li>
<li>Work with iOS, Android &amp; Web client engineers to ship products.</li>
<li>Design robust infrastructure and microservices for payments, transactions, growth, monetization, and engagement across platforms.</li>
<li>Build and maintain fullstack features, including user dashboards, personalised experiences, content delivery, interactive tools, assessments, and real-time analytics.</li>
<li>Lead architecture, scalability, and reliability decisions for high-concurrency, low-latency systems.</li>
<li>Uphold engineering excellence via testing, monitoring, deployment, and secure data handling.</li>
</ul>
<p>Basic Qualifications:</p>
<ul>
<li>Proficiency in distributed systems for high-scale, low-latency environments; languages like Rust, Go, Python &amp; Java, and high volume streaming systems.</li>
<li>2+ years of experience working on large scale consumer applications.</li>
</ul>
<p>Preferred Skills and Experience:</p>
<ul>
<li>5+ years of experience working on large scale consumer applications or early-mid stage startup experience as a founding engineer, emphasising rapid prototyping, user-centric design, and AI solutions.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$180,000 - $440,000 USD</Salaryrange>
      <Skills>distributed systems, Rust, Go, Python, Java, high volume streaming systems, rapid prototyping, user-centric design, AI solutions</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>xAI</Employername>
      <Employerlogo>https://logos.yubhub.co/xai.com.png</Employerlogo>
      <Employerdescription>xAI creates AI systems to understand the universe and aid humanity in its pursuit of knowledge.</Employerdescription>
      <Employerwebsite>https://www.xai.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/xai/jobs/5107958007</Applyto>
      <Location>New York, NY; Palo Alto, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>5f44ff2b-0f4</externalid>
      <Title>Member of Technical Staff - X Money</Title>
      <Description><![CDATA[<p>Join our X Money team as a Software Engineer to build a revolutionary global payment network serving over 600 million users. You will specialise in backend development, designing and optimising robust microservices for scalability, security, and reliability. Collaborate with cross-functional teams on payments, fraud detection, and compliance initiatives, and contribute to the creation of a high-scale financial products platform.</p>
<p>Responsibilities:</p>
<ul>
<li>Develop backend services, APIs, and data models to support high-volume, multi-user environments.</li>
<li>Work with iOS, Android &amp; Web client engineers to ship products.</li>
<li>Design robust infrastructure and microservices for payments, transactions, growth, monetization, and engagement across platforms.</li>
<li>Build and maintain fullstack features, including user dashboards, personalised experiences, content delivery, interactive tools, assessments, and real-time analytics.</li>
<li>Lead architecture, scalability, and reliability decisions for high-concurrency, low-latency systems.</li>
<li>Uphold engineering excellence via testing, monitoring, deployment, and secure data handling.</li>
</ul>
<p>Basic Qualifications:</p>
<ul>
<li>Proficiency in distributed systems for high-scale, low-latency environments; languages like Rust, Go, Python &amp; Java, and high volume streaming systems.</li>
<li>2+ years of experience working on large scale consumer applications.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Rust, Go, Python, Java, high volume streaming systems, distributed systems, low-latency environments, rapid prototyping, user-centric design, AI solutions</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>xAI</Employername>
      <Employerlogo>https://logos.yubhub.co/xai.com.png</Employerlogo>
      <Employerdescription>xAI creates AI systems to understand the universe and aid humanity in its pursuit of knowledge.</Employerdescription>
      <Employerwebsite>https://www.xai.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/xai/jobs/5108231007</Applyto>
      <Location>London, UK</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>ccc87f8c-abf</externalid>
      <Title>Member of Technical Staff – X Core Product</Title>
      <Description><![CDATA[<p><strong>About the Role</strong></p>
<p>As a Member of Technical Staff on the X Core Product team, you&#39;ll join a thirty-person team responsible for building and scaling X. You will be tasked with independently owning significant parts of the system end-to-end: from intuitive user interfaces to robust backend services, data infrastructure, and deep AI integrations.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Develop backend services, APIs, and data models to support high-volume, multi-user environments.</li>
<li>Work with iOS, Android &amp; Web client engineers to ship products.</li>
<li>Design robust infrastructure and microservices for payments, transactions, growth, monetization, and engagement across platforms.</li>
<li>Build and maintain fullstack features, including user dashboards, personalized experiences, content delivery, interactive tools, assessments, and real-time analytics.</li>
<li>Lead architecture, scalability, and reliability decisions for high-concurrency, low-latency systems.</li>
<li>Uphold engineering excellence via testing, monitoring, deployment, and secure data handling.</li>
</ul>
<p><strong>Basic Qualifications</strong></p>
<ul>
<li>Proficiency in distributed systems for high-scale, low-latency environments; languages like Rust, Go, Python &amp; Java, and high volume streaming systems.</li>
<li>2+ years of experience working on large scale consumer applications.</li>
</ul>
<p><strong>Preferred Skills and Experience</strong></p>
<ul>
<li>5+ years of experience working on large scale consumer applications or early-mid stage startup experience as a founding engineer, emphasizing rapid prototyping, user-centric design, and AI solutions.</li>
</ul>
<p><strong>Compensation and Benefits</strong></p>
<p>$180,000 - $440,000 USD</p>
<p>Base salary is just one part of our total rewards package at xAI, which also includes equity, comprehensive medical, vision, and dental coverage, access to a 401(k) retirement plan, short &amp; long-term disability insurance, life insurance, and various other discounts and perks.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$180,000 - $440,000 USD</Salaryrange>
      <Skills>distributed systems, Rust, Go, Python, Java, high volume streaming systems, rapid prototyping, user-centric design, AI solutions</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>xAI</Employername>
      <Employerlogo>https://logos.yubhub.co/xai.com.png</Employerlogo>
      <Employerdescription>xAI creates AI systems to understand the universe and aid humanity in its pursuit of knowledge.</Employerdescription>
      <Employerwebsite>https://www.xai.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/xai/jobs/5063929007</Applyto>
      <Location>Palo Alto, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>e2392ba0-1bc</externalid>
      <Title>Staff Engineer AI Agents</Title>
      <Description><![CDATA[<p>About Zuma</p>
<p>Zuma is pioneering the future of agentic AI in property management. We build AI agents that act as property managers, handling the full spectrum of interactions with both prospects and current residents on behalf of our clients.</p>
<p>Our agents don’t just assist human workflows; they own them end-to-end, operating across leasing, collections, and resident communications. Zuma aims to keep expanding into adjacent areas of property management.</p>
<p>This is a rare chance to shape the future of how an entire industry operates: not in theory, but in production, at scale, touching real customers and physical assets every day. At Zuma, human and AI agents work side by side, and you&#39;ll help define what that collaboration looks like at its best.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Own E2E projects that cross all areas of software development, including full stack web apps, agentic AI solutions across multiple work activities, extensive integrations with PMS and CRM systems, infrastructure, and internal tooling.</li>
<li>Architect, build, and deploy production AI agents using modern agent frameworks, owning the full lifecycle from design to reliability in production.</li>
<li>Define the technical patterns and standards for how software is built across the engineering org; you will be setting the playbook others follow.</li>
<li>Strengthen our core systems, including our onboarding/configuration system, integration frameworks, and AI performance analytics infrastructure.</li>
<li>Collaborate directly with the VPE and product leadership to translate product vision into delivery, making high-stakes technical trade-offs with confidence.</li>
<li>Own system reliability, observability, and continuous improvement, defining how we measure, monitor, and iterate on our agents and web products in production.</li>
<li>Work across the stack (backend services, LLM orchestration, integrations, data pipelines, frontends) to ship agents and products that are robust and scalable.</li>
<li>Tame legacy code and lay down new foundations; the patterns and architecture you create will be inherited by the engineers who come after you.</li>
<li>Be a close partner to the product and operations teams, turning their domain needs into intelligent automated workflows without requiring domain expertise upfront.</li>
</ul>
<p><strong>Nice to Have</strong></p>
<ul>
<li>Prior experience at a startup or high-growth company; comfort shipping fast and iterating in production.</li>
<li>AWS experience with IaC (Terraform) and comfort working with infrastructure/DevOps.</li>
<li>Background in building self-serve platforms or integration infrastructure.</li>
<li>Experience with workflow automation platforms or business process orchestration.</li>
<li>Experience with telephony integrations (Twilio or similar) and building voice-capable agents or chatbots across text and voice channels.</li>
<li>Familiarity with speech-to-text, text-to-speech, or real-time audio streaming pipelines in production AI systems.</li>
<li>Classical ML experience: supervised/unsupervised learning, feature engineering, model training and evaluation outside of LLM contexts.</li>
</ul>
<p><strong>Our Stack</strong></p>
<ul>
<li>Python, TypeScript/Node.js</li>
<li>OpenAI, Anthropic</li>
<li>LangGraph, OpenAI Agents SDK, custom orchestration layers</li>
<li>AWS, AWS ECS, PostgreSQL, Redis</li>
<li>RealPage, Entrata, Yardi, and other property management systems</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
<Salaryrange>$180,000-$220,000 per year</Salaryrange>
      <Skills>Python, TypeScript, OpenAI, Anthropic, LangGraph, OpenAI Agents SDK, AWS, AWS ECS, PostgreSQL, Redis, RealPage, Entrata, Yardi, AWS IaC (Terraform), Infrastructure / Dev Ops, Self-serve platforms, Integration infrastructure, Workflow automation platforms, Business process orchestration, Telephony integrations (Twilio), Voice-capable agents or chatbots, Speech-to-text, Text-to-speech, Real-time audio streaming pipelines, Classical ML</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Zuma</Employername>
      <Employerlogo>https://logos.yubhub.co/zuma.com.png</Employerlogo>
      <Employerdescription>Zuma is a company that builds AI agents for property management, with a flagship product that is a multichannel leasing agent.</Employerdescription>
      <Employerwebsite>https://www.zuma.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/getzuma/16961f6d-ab02-469d-8f99-3a68bf5a5026</Applyto>
      <Location>San Francisco Bay Area</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>b1440e0a-37e</externalid>
      <Title>Livestream Hosts/Live Streamer, Live Commerce (Multiple) (Temporary Part-time)</Title>
      <Description><![CDATA[<p>We&#39;re looking for enthusiastic individuals to join our fast-paced live commerce team as Temporary Part-Time Livestream Hosts. All hosts must be able to be onsite in our Long Island City office. Breaking cards is a huge part of the role, and we&#39;re excited to bring people on board who are eager to dive into the action.</p>
<p>As a host, you&#39;ll be at the center of VeeFriends live broadcasts, engaging with our audience, providing entertainment, driving sales, and learning how to master multiple live streaming platforms like Whatnot, TikTok Live, eBay Live, and Fanatics Live. You&#39;ll also need to be adaptable and learn new software as we expand our live-streaming initiatives.</p>
<p>Responsibilities:</p>
<ul>
<li>Set up products, check video/audio, and ensure everything is ready for a seamless broadcast.</li>
<li>&quot;Break&quot; collectible cards and mystery items on screen for a high-anticipation audience.</li>
<li>Bring intense energy and charisma to the audience while presenting and explaining collectibles.</li>
<li>Interact with viewers in real time while simultaneously presenting and promoting VeeFriends products and educating the audience about VeeFriends&#39; wider mission, core values, characters, and more.</li>
</ul>
<p>Qualifications:</p>
<ul>
<li>You&#39;re enthusiastic about both the VeeFriends brand and LiveCommerce as a channel opportunity.</li>
<li>You thrive in fast-paced environments and can quickly learn new tools and technologies.</li>
<li>You have a proven passion for and expertise in collectibles, including comics, TCGs, apparel, sports memorabilia, and cards.</li>
<li>You&#39;re passionate and curious about driving audience and viewership growth and engagement.</li>
<li>You understand that entertainment and community engagement are instrumental to sustainable revenue growth.</li>
</ul>
<p>Additional Information:</p>
<p>Kind Reminder: Only full-time and part-time permanent employees are eligible to enroll in VeeFriends benefit plans. We encourage you to apply even if you do not meet all of the requirements listed above.</p>
]]></Description>
      <Jobtype>part-time</Jobtype>
      <Experiencelevel></Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>USD 30-45 per hour</Salaryrange>
      <Skills>live streaming, collectibles, cards, TCGs, Apparel, Sports Memorabilia, entertainment, community engagement, Whatnot, TikTok Live, eBay Live, Fanatics Live, adaptable, quick learner</Skills>
      <Category>Other</Category>
      <Industry>Entertainment</Industry>
      <Employername>VeeFriends</Employername>
      <Employerlogo>https://logos.yubhub.co/veefriends.com.png</Employerlogo>
      <Employerdescription>VeeFriends is an entertainment company that creates a universe of 251 characters, combining storytelling, content, collectibles, and community-driven experiences.</Employerdescription>
      <Employerwebsite>https://veefriends.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/VeeFriends/b7018787-7932-4ada-8341-0577e9952093</Applyto>
      <Location>New York</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>b0b187c5-575</externalid>
      <Title>VP, Artist and Label Marketing</Title>
      <Description><![CDATA[<p><strong>Job Title: VP, Artist and Label Marketing</strong></p>
<p><strong>Department: Administration</strong></p>
<p><strong>Job Description:</strong></p>
<p>UnitedMasters is building a marketplace that connects artists, brands, and fans - empowering artists to earn and grow. UnitedMasters has taken the bold step of building a music distribution service that, radically, puts artists first - disrupting the legacy music business by letting up-and-coming artists distribute their music directly to fans through streaming services while maintaining ownership of their master recording rights and up to 100% of royalties.</p>
<p>Through the combination of UnitedMasters&#39; music distribution platform and its deep ties to brands, UnitedMasters enables independent artists and change-makers to grow and earn unlike any other platform.</p>
<p>The UnitedMasters team is made up of musicians, marketers, engineers, and storytellers with backgrounds from YouTube, SoundCloud, Pandora, Facebook, Uber, Dropbox, Complex, VICE, and more. We work hand in hand with the award-winning creative teams that forge those innovative partnerships at Translation (our in-house creative advertising agency).</p>
<p><strong>What&#39;s the Role</strong></p>
<p>UnitedMasters is seeking a Vice President, Artist &amp; Label Marketing to lead marketing strategy and execution across our exclusive artist roster and growing label services business. Some of our exclusive artists include BigXThaPlug, Brent Faiyaz, and FloyyMenor. This is a senior leadership role responsible for defining the marketing vision for artists and labels while building and leading a high-performing team.</p>
<p>This role blends long-term strategy with day-to-day execution. You will set the roadmap, lead major campaigns and releases, develop talent, and partner closely with artists, managers, and cross-functional teams to drive impact and results. This role sets the creative bar for UnitedMasters&#39; artist and label marketing by defining what &#39;great&#39; looks like across brand, storytelling, and campaign execution.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Own the marketing vision and execution for UnitedMasters&#39; exclusive artist roster and label services business including BigXThaPlug, Brent Faiyaz, and FloyyMenor</li>
<li>Set and uphold a high creative standard across all campaigns, ensuring each release reflects strong storytelling, cultural relevance, and artistic integrity</li>
<li>Develop long-term strategies while balancing excellence and accountability in day-to-day execution</li>
<li>Serve as a strategic thought partner to senior leadership on artist growth, brand positioning, and market opportunity</li>
</ul>
<p><strong>Drive Marketing Excellence Across Releases</strong></p>
<ul>
<li>Oversee campaign strategy, release planning, and marketing execution across all exclusive artists</li>
<li>Ensure excellence in operations including timelines, deliverables, and cross-functional coordination</li>
<li>Build and manage project budgets in partnership with Finance and maintain accountability</li>
<li>Partner with Creative, Digital, Commerce, A&amp;R, Publicity, and Sync teams to ensure seamless execution</li>
<li>Maintain campaign visibility and accountability through reporting, updates, and performance tracking</li>
</ul>
<p><strong>Lead, Build &amp; Inspire a Team</strong></p>
<ul>
<li>Lead and invest in growing a high-performing Artist &amp; Label Marketing organization</li>
<li>Mentor and develop marketing talent</li>
<li>Foster a culture of creativity, accountability, and high performance</li>
<li>Promote collaboration without sacrificing individual ownership or excellence</li>
</ul>
<p><strong>Partner with Artists, Managers &amp; External Stakeholders</strong></p>
<ul>
<li>Serve as a senior marketing advisor to artists and their teams</li>
<li>Present strategies clearly and persuasively to artists, managers, and partners</li>
<li>Build trust-based relationships across the roster</li>
<li>Engage in pitching and strategic conversations as needed</li>
</ul>
<p><strong>Collaborate Across the Enterprise</strong></p>
<ul>
<li>Partner closely with Digital, Commerce, Brand Partnerships, Product, and International teams (Brazil)</li>
<li>Collaborate with Brand Partnerships, Sync, and Product teams to unlock additional artist opportunities</li>
<li>Develop integrated campaigns and content strategies that extend beyond streaming</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>8-12+ years of experience in music marketing across artist and/or label environments</li>
<li>NY-Based or willing to relocate</li>
<li>Senior leadership experience building and managing teams</li>
<li>Track record of developing and executing successful marketing strategies</li>
<li>Experience working cross-functionally at an executive level</li>
<li>Strong operational and financial acumen</li>
<li>Experience in fast-moving, high-growth organizations preferred</li>
</ul>
<p><strong>Salary Hiring Range:</strong></p>
<p>$200,000 - $260,000 + bonus eligibility</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$200,000 - $260,000 + bonus eligibility</Salaryrange>
      <Skills>music marketing, artist and label marketing, marketing strategy, campaign execution, long-term strategy, day-to-day execution, team management, talent development, cross-functional coordination, project budgeting, accountability, performance tracking, collaboration, individual ownership, excellence, creativity, high performance, trust-based relationships, pitching, strategic conversations, integration, content strategy, streaming, digital marketing, commerce, brand partnerships, product, international teams, Brazil, fast-moving, high-growth, operational, financial acumen</Skills>
      <Category>Marketing</Category>
      <Industry>Technology</Industry>
      <Employername>UnitedMasters</Employername>
      <Employerlogo>https://logos.yubhub.co/unitedmasters.com.png</Employerlogo>
      <Employerdescription>UnitedMasters is a marketplace that connects artists, brands, and fans, enabling artists to earn and grow. It offers a music distribution service that allows artists to distribute their music directly to fans through streaming services while maintaining ownership of their master recording rights and up to 100% of royalties.</Employerdescription>
      <Employerwebsite>https://unitedmasters.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/unitedmasterstranslation/jobs/8345671002</Applyto>
      <Location>Brooklyn, New York</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>ceba9e5b-250</externalid>
      <Title>Senior Backend Engineer, Product and Infra</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Senior Backend Engineer to build the systems and services that power our product experience. You&#39;ll own the backend infrastructure that makes our content discoverable, our features responsive, and our platform reliable at scale.</p>
<p>Your work will directly shape what users experience: designing APIs that serve rich content, building services that handle real-time interactions, implementing content-matching systems for rights and safety, and ensuring our platform performs under load. You&#39;ll architect systems that are fast, correct, and maintainable.</p>
<p>You&#39;ll collaborate closely with Product, ML Research, and Mobile/Web teams to ship features that matter. We use Python, Go, BigQuery, Pub/Sub, and a microservices architecture, but we care more about good judgment than specific tool experience.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Design and maintain application-level data models that organize rich content into canonical structures optimized for product features, search, and retrieval.</li>
<li>Build high-reliability ETLs and streaming pipelines to process usage events, analytics data, behavioral signals, and application logs.</li>
<li>Develop data services that expose unified content to the application, such as metadata access APIs, indexing workflows, and retrieval-ready representations.</li>
<li>Implement and refine fingerprinting pipelines used for deduplication, rights attribution, safety checks, and provenance validation.</li>
<li>Own data consistency between ingestion systems, application surfaces, metadata storage, and downstream reporting environments.</li>
<li>Define and track key operational metrics, including latency, completeness, accuracy, and event health.</li>
<li>Collaborate with Product teams to ensure content structures and APIs support evolving features and high-quality user experiences.</li>
<li>Partner with Analytics and Research teams to deliver clean usage datasets for experimentation, model evaluation, reporting, and internal insights.</li>
<li>Operate large analytical workloads in BigQuery and build reusable Dataflow/Beam components for structured processing.</li>
<li>Improve reliability and scale by designing robust schema evolution strategies, idempotent pipelines, and well-instrumented operational flows.</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>Experience building production backend services and APIs at scale</li>
<li>Experience building ETL/ELT pipelines, event processing systems, and structured data models for applications or analytics</li>
<li>Strong background in data modeling, metadata systems, indexing, or building canonical representations for heterogeneous content</li>
<li>Proficiency in Python, Go, SQL, and scalable data-processing frameworks (Dataflow/Beam, Spark, or similar)</li>
<li>Familiarity with BigQuery or other analytical data warehouses and strong comfort optimizing large queries and schemas</li>
<li>Experience with event-driven architectures, Pub/Sub, or Kafka-like systems</li>
<li>Strong understanding of data quality, schema evolution, lineage, and operational reliability</li>
<li>Ability to design pipelines that balance cost, latency, correctness, and scale</li>
<li>Clear communication skills and an ability to collaborate closely with Product, Research, and Analytics stakeholders</li>
</ul>
<p><strong>Nice to Have</strong></p>
<ul>
<li>Experience building application-facing APIs or microservices that expose structured content</li>
<li>Background in information retrieval, indexing systems, or search infrastructure</li>
<li>Experience with fingerprinting, perceptual hashing, audio similarity metrics, or content-matching algorithms</li>
<li>Familiarity with ML workflows and how downstream analytics and usage data feed back into research pipelines</li>
<li>Understanding of batch + streaming architectures and how to blend them effectively</li>
<li>Experience with Go, Next.js, or React Native for occasional full-stack contributions</li>
</ul>
<p><strong>Why Join Us</strong></p>
<p>You will design the core data services and pipelines that power our product experience, analytics, and business operations. You’ll work on high-impact data challenges involving real-time signals, large-scale metadata systems, and cross-platform consistency. You’ll join a small, fast-moving team where you’ll shape the structure, reliability, and intelligence of our downstream data ecosystem.</p>
<p><strong>Benefits</strong></p>
<ul>
<li>Highly competitive salary and equity</li>
<li>Quarterly productivity budget</li>
<li>Flexible time off</li>
<li>Fantastic office location in Manhattan</li>
<li>Productivity package, including ChatGPT Plus, Claude Code, and Copilot</li>
<li>Top-notch private health, dental, and vision insurance for you and your dependents</li>
<li>401(k) plan options with employer matching</li>
<li>Concierge medical/primary care through One Medical and Rightway</li>
<li>Mental health support from Spring Health</li>
<li>Personalized life insurance, travel assistance, and many other perks</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$180,000 - $220,000</Salaryrange>
      <Skills>Python, Go, BigQuery, Pub/Sub, Data modeling, Metadata systems, Indexing, Canonical representations, ETL/ELT pipelines, Event processing systems, Structured data models, Scalable data-processing frameworks, Analytical data warehouses, Event-driven architectures, Kafka-like systems, Data quality, Schema evolution, Lineage, Operational reliability, Application-facing APIs, Microservices, Information retrieval, Indexing systems, Search infrastructure, Fingerprinting, Perceptual hashing, Audio similarity metrics, Content-matching algorithms, ML workflows, Batch + streaming architectures</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Udio</Employername>
      <Employerlogo>https://logos.yubhub.co/udio.com.png</Employerlogo>
      <Employerdescription>Udio is an AI music creation company building the product experience around generating and sharing music.</Employerdescription>
      <Employerwebsite>https://www.udio.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/udio/jobs/4987729008</Applyto>
      <Location>New York</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>4d14bef3-77e</externalid>
      <Title>Staff Software Engineer - AI Applications</Title>
      <Description><![CDATA[<p>We believe that the way people interact with their finances will drastically improve in the next few years. We&#39;re dedicated to empowering this transformation by building the tools and experiences that thousands of developers use to create their own products. Plaid powers the tools millions of people rely on to live a healthier financial life.</p>
<p>We work with thousands of companies like Venmo, SoFi, several of the Fortune 500, and many of the largest banks to make it easy for people to connect their financial accounts to the apps and services they want to use. Plaid&#39;s network covers 12,000 financial institutions across the US, Canada, UK and Europe.</p>
<p><strong>The AI Applications Team</strong></p>
<p>You will have the opportunity to join as one of the founding members of this newly formed team, dedicated to consolidating and rapidly scaling our successful bets so far, and to grow with the team in our quest to accelerate Plaid&#39;s transformation into an AI-first company.</p>
<p>In this role you will lead projects that enable and scale our business with our largest AI customers and partners, starting with personal finance use cases and expanding into many others; examples include:</p>
<ul>
<li>Develop and evolve the preferred integration pattern for Plaid with AI providers - from API adaptations to building the official Plaid MCP Servers, and beyond</li>
<li>Redefine how Plaid&#39;s consumer Link experience embeds into conversational interfaces in the most seamless way</li>
<li>Architect the trust layer for the future of agentic commerce that will become the industry standard</li>
</ul>
<p>Additionally you will be expected to scale and extend our existing successful bets on AI-powered customer experience; examples include:</p>
<ul>
<li>Make the next step-function improvement in our homegrown customer support agent</li>
<li>Land our multi-turn and multi-agent system that powers a truly delightful experience for our customers; define how to scalably run offline evaluation for complex multi-turn, open-ended tasks; research and prototype how human-in-the-loop reinforcement learning (RLHF) can power an insights flywheel; pioneer the architecture for customer-specific long-term memory; and more</li>
<li>Extend our agentic system to support other critical parts of the customer journey, starting with the areas with the highest ROI - top-of-funnel product recommendation, customer onboarding and risk diligence, customer activation and assistance for faster productionization, as well as upselling and cross-selling of Plaid products</li>
</ul>
<p>You will have a front row seat to all the latest industry developments. Over time, with the skills and experience you develop and hone on this team, you can become an influential voice in defining where AI x Fintech will be heading longer term.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Build across the stack. Design, develop, and maintain scalable backend services and APIs, as well as intuitive, high-quality frontend applications that bring those systems to life.</li>
<li>Work with other AI engineers, software engineers, and machine learning engineers to architect, design, and implement GenAI-powered products and features</li>
<li>Collaborate across functions to understand user needs, and propose and implement AI-powered solutions where they&#39;re expected to have the highest impact</li>
<li>Design and execute rapid experiments to push the boundaries on potential business impact from emerging AI capabilities, with a focus on minimal viable testing approaches</li>
<li>Balance creative exploration of possibilities with rigorous evaluation of technical feasibility, product potential, and business impact</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>Experience building backend services and working with microservices or service-oriented architectures</li>
<li>Strong working knowledge of HTML, CSS, JavaScript, and modern frontend frameworks or libraries, with comfort building user-facing experiences</li>
<li>Hands-on experience using LLMs to build products, ship them to production, and iterate with real user feedback - including but not limited to:
<ul>
<li>Prompt engineering</li>
<li>Fine-tuning</li>
<li>Retrieval augmented generation (RAG)</li>
<li>Semantic search</li>
<li>Vector databases and embedding models</li>
<li>Agent orchestration frameworks</li>
<li>Evaluation and monitoring frameworks for open-ended tasks</li>
<li>Streaming and SSE</li>
<li>Common UX and design patterns for GenAI-powered products</li>
</ul>
</li>
<li>Strong debugging and monitoring experience for production systems</li>
<li>Ability to deeply understand customer and user needs through user research and rapid experimentation - be your own technical PM</li>
<li>Ability to balance divergent thinking (exploring possibilities) with convergent thinking (evaluating feasibility), which is critical for driving 0 -&gt; 1 projects</li>
<li>Extremely curious and passionate about working in the GenAI applications space</li>
</ul>
<p><strong>Nice-to-Haves</strong></p>
<ul>
<li>Experience training and/or serving ML models in production, or fine-tuning LLMs for domain-specific use cases</li>
<li>Comfortable operating in privacy/PII-sensitive environments and applying compliance mitigations</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$228,360-$369,800 per year</Salaryrange>
      <Skills>backend services, microservices, service-oriented architectures, HTML, CSS, JavaScript, modern frontend frameworks, LLMs, prompt engineering, fine-tuning, retrieval augmented generation, semantic search, vector database, embedding models, agent orchestration framework, evaluation and monitoring framework, streaming, SSE, UX and design patterns, debugging, monitoring</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>Plaid</Employername>
      <Employerlogo>https://logos.yubhub.co/plaid.com.png</Employerlogo>
      <Employerdescription>Plaid is a financial technology company that provides tools and experiences for developers to create their own products. It was founded in 2013 and is headquartered in San Francisco.</Employerdescription>
      <Employerwebsite>https://plaid.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/plaid/a6bf6eeb-6486-4e45-a3b2-e712f32523d3</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>78a9b8f2-81c</externalid>
      <Title>Senior Software Engineer - Data Infrastructure</Title>
      <Description><![CDATA[<p>We believe that the way people interact with their finances will drastically improve in the next few years. We&#39;re dedicated to empowering this transformation by building the tools and experiences that thousands of developers use to create their own products.</p>
<p>Plaid powers the tools millions of people rely on to live a healthier financial life. We work with thousands of companies like Venmo, SoFi, several of the Fortune 500, and many of the largest banks to make it easy for people to connect their financial accounts to the apps and services they want to use.</p>
<p>Making data driven decisions is key to Plaid&#39;s culture. To support that, we need to scale our data systems while maintaining correct and complete data. We provide tooling and guidance to teams across engineering, product, and business and help them explore our data quickly and safely to get the data insights they need, which ultimately helps Plaid serve our customers more effectively.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Contribute to the long-term technical roadmap for data-driven and machine learning iteration at Plaid.</li>
<li>Lead key data infrastructure projects such as improving ML development golden paths, implementing offline streaming solutions for data freshness, building net-new ETL pipeline infrastructure, and evolving data warehouse or data lakehouse capabilities.</li>
<li>Work with stakeholders in other teams and functions to define technical roadmaps for key backend systems and abstractions across Plaid.</li>
<li>Debug, troubleshoot, and reduce operational burden for our Data Platform.</li>
<li>Grow the team via mentorship and leadership, reviewing technical documents and code changes.</li>
</ul>
<p><strong>Qualifications</strong></p>
<ul>
<li>5+ years of software engineering experience</li>
<li>Extensive hands-on software engineering experience, with a strong track record of delivering successful projects within the Data Infrastructure or Platform domain at similar or larger companies.</li>
<li>Deep understanding of one of: ML Infrastructure systems, including Feature Stores, Training Infrastructure, Serving Infrastructure, and Model Monitoring OR Data Infrastructure systems, including Data Warehouses, Data Lakehouses, Apache Spark, Streaming Infrastructure, Workflow Orchestration.</li>
<li>Strong cross-functional collaboration, communication, and project management skills, with proven ability to coordinate effectively.</li>
<li>Proficiency in coding, testing, and system design, ensuring reliable and scalable solutions.</li>
<li>Demonstrated leadership abilities, including experience mentoring and guiding junior engineers.</li>
</ul>
<p><strong>Additional Information</strong></p>
<p>Our mission at Plaid is to unlock financial freedom for everyone. To support that mission, we seek to build a diverse team of driven individuals who care deeply about making the financial ecosystem more equitable.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$190,800-$286,800 per year</Salaryrange>
      <Skills>ML Infrastructure systems, Data Infrastructure systems, Apache Spark, Streaming Infrastructure, Workflow Orchestration, Feature Stores, Training Infrastructure, Serving Infrastructure, Model Monitoring, Data Warehouses, Data Lakehouses</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>Plaid</Employername>
      <Employerlogo>https://logos.yubhub.co/plaid.com.png</Employerlogo>
      <Employerdescription>Plaid builds tools and experiences that thousands of developers use to create their own products, connecting financial accounts to apps and services.</Employerdescription>
      <Employerwebsite>https://plaid.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/plaid/05b0ae3f-ec60-48d6-ae27-1bd89d928c47</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>586b9fef-509</externalid>
      <Title>Senior Software Engineer - Network Enablement (Applied ML)</Title>
      <Description><![CDATA[<p>We believe that the way people interact with their finances will drastically improve in the next few years. We&#39;re dedicated to empowering this transformation by building the tools and experiences that thousands of developers use to create their own products.</p>
<p>On this team, you will build and operate the ML infrastructure and product services that enable trust and intelligence across Plaid&#39;s network. You&#39;ll own feature engineering, offline training and batch scoring, online feature serving, and real-time inference so model outputs directly power partner-facing fraud &amp; trust products and bank intelligence features.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Embed model inference into Network Enablement product flows and decision logic (APIs, feature flags, backend flows).</li>
<li>Define and instrument product + ML success metrics (fraud reduction, retention lift, false positives, downstream impact).</li>
<li>Design and run experiments and rollout plans (backtesting, shadow scoring, A/B tests, feature-flagged releases) to validate product hypotheses.</li>
<li>Build and operate offline training pipelines and production batch scoring for bank intelligence products.</li>
<li>Ship and maintain online feature serving and low-latency model inference endpoints for real-time partner/bank scoring.</li>
<li>Implement model CI/CD, model/version registry, and safe rollout/rollback strategies.</li>
<li>Monitor model/data health: drift/regression detection, model-quality dashboards, alerts, and SLOs targeted to partner product needs.</li>
<li>Ensure offline and online parity, data lineage, and automated validation / data contracts to reduce regressions.</li>
<li>Optimize inference performance and cost for real-time scoring (batching, caching, runtime selection).</li>
<li>Ensure fairness, explainability and PII-aware handling for partner-facing ML features; maintain auditability for compliance.</li>
<li>Partner with platform and cross-functional teams to scale the ML/data foundation (graph features, sequence embeddings, unified pipelines).</li>
<li>Mentor engineers and document team standards for ML productization and operations.</li>
</ul>
<p><strong>Qualifications</strong></p>
<p>Must-haves:</p>
<ul>
<li>Strong software engineering skills including systems design, APIs, and building reliable backend services (Go or Python preferred).</li>
<li>Production experience with batch and streaming data pipelines and orchestration tools such as Airflow or Spark.</li>
<li>Experience building or operating real-time scoring and online feature-serving systems, including feature stores and low-latency model inference.</li>
<li>Experience integrating model outputs into product flows (APIs, feature flags) and measuring impact through experiments and product metrics.</li>
<li>Experience with model lifecycle and operations: model registries, CI/CD for models, reproducible training, offline &amp; online parity, monitoring and incident response.</li>
</ul>
<p>Nice to have:</p>
<ul>
<li>Experience in fraud, risk, or marketing intelligence domains.</li>
<li>Experience with feature-store products (Tecton / Chronon / Feast / internal) and unified pipelines.</li>
<li>Experience with graph frameworks, graph feature engineering, or sequence embeddings.</li>
<li>Experience optimizing inference at scale (Triton/ONNX/quantization, batching, caching).</li>
</ul>
<p><strong>Additional Information</strong></p>
<p>Our mission at Plaid is to unlock financial freedom for everyone. To support that mission, we seek to build a diverse team of driven individuals who care deeply about making the financial ecosystem more equitable.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$190,800-$286,800 per year</Salaryrange>
      <Skills>software engineering, systems design, APIs, backend services, Go, Python, batch and streaming data pipelines, orchestration tools, Airflow, Spark, real-time scoring, online feature-serving systems, feature stores, low-latency model inference, model outputs, product flows, experiments, product metrics, model lifecycle, operations, model registries, CI/CD, reproducible training, offline &amp; online parity, monitoring, incident response, fraud, risk, marketing intelligence, feature-store products, unified pipelines, graph frameworks, graph feature engineering, sequence embeddings, inference at scale, Triton, ONNX, quantization, batching, caching</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Plaid</Employername>
      <Employerlogo>https://logos.yubhub.co/plaid.com.png</Employerlogo>
      <Employerdescription>Plaid is a technology company that powers the tools millions of people rely on to live a healthier financial life. The company has a presence in multiple countries and works with thousands of companies.</Employerdescription>
      <Employerwebsite>https://plaid.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/plaid/43b1374d-5c5e-4b63-b710-a95e3cb76bbe</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>7d9b8590-1c7</externalid>
      <Title>Senior Software Engineer - AI Applications</Title>
      <Description><![CDATA[<p>We believe that the way people interact with their finances will drastically improve in the next few years. We&#39;re dedicated to empowering this transformation by building the tools and experiences that thousands of developers use to create their own products.</p>
<p>Plaid powers the tools millions of people rely on to live a healthier financial life. We work with thousands of companies like Venmo, SoFi, several of the Fortune 500, and many of the largest banks to make it easy for people to connect their financial accounts to the apps and services they want to use.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Build across the stack. Design, develop, and maintain scalable backend services and APIs, as well as intuitive, high-quality frontend applications that bring those systems to life.</li>
<li>Work with other AI engineers, software engineers and machine learning engineers to architect, design and implement GenAI-powered products and features</li>
<li>Collaborate across functions to understand user needs, propose and implement AI-powered solutions where they’re expected to have the highest impact</li>
<li>Design and execute rapid experiments to push the boundaries on potential business impact from emerging AI capabilities, with a focus on minimal viable testing approaches</li>
<li>Balance creative exploration of possibilities with rigorous evaluation of technical feasibility, product potential and business impact</li>
</ul>
<p><strong>Qualifications</strong></p>
<ul>
<li>Experience building backend services and working with microservices or service-oriented architectures</li>
<li>Strong working knowledge of HTML, CSS, JavaScript, and modern frontend frameworks or libraries, with comfort building user-facing experiences</li>
<li>Strong software engineering fundamentals, including system design and API development</li>
<li>Hands-on experience building and shipping LLM-powered products, iterating with real user feedback</li>
<li>Practical experience with prompt engineering, fine-tuning, RAG, semantic search (vector databases and embeddings), agent orchestration frameworks, and evaluation/monitoring of open-ended tasks</li>
<li>Experience building GenAI-powered product experiences, including streaming/SSE and common UX patterns</li>
<li>Strong debugging and production monitoring experience</li>
<li>Ability to deeply understand customer needs through user research and rapid experimentation; comfortable operating as a technical PM when needed</li>
<li>Ability to balance divergent exploration with pragmatic execution, especially in 0 to 1 environments</li>
<li>Deep curiosity and passion for building GenAI applications</li>
</ul>
<p><strong>Nice to Have</strong></p>
<ul>
<li>Experience training and deploying ML models in production, including fine-tuning LLMs for domain-specific use cases</li>
<li>Comfortable operating in privacy- and PII-sensitive environments, with experience applying appropriate compliance and data protection controls</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$209,880-$315,480 per year</Salaryrange>
      <Skills>backend services, microservices, service-oriented architectures, HTML, CSS, JavaScript, modern frontend frameworks, LLM-powered products, prompt engineering, fine-tuning, RAG, semantic search, agent orchestration frameworks, evaluation/monitoring of open-ended tasks, GenAI-powered product experiences, streaming/SSE, common UX patterns, debugging, production monitoring</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>Plaid</Employername>
      <Employerlogo>https://logos.yubhub.co/plaid.com.png</Employerlogo>
      <Employerdescription>Plaid is a financial technology company that provides tools and experiences for developers to create their own products. It has a network covering 12,000 financial institutions across the US, Canada, UK and Europe.</Employerdescription>
      <Employerwebsite>https://plaid.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/plaid/0afb2b7b-7e54-40e4-a8f6-642ac1df00f6</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>7b750523-8ff</externalid>
      <Title>Staff Software Engineer, Data Engineering</Title>
      <Description><![CDATA[<p>We are seeking a Staff Software Engineer to lead the technical strategy and implementation of our enterprise data architecture, governance foundations, and analytics enablement tooling.</p>
<p>In this role, you will be the primary engineering counterpart to the Senior Product Manager for Data Enablement &amp; Governance, jointly shaping the roadmap for enterprise analytics, shared definitions, and the tools that help Omada answer questions faster and more reliably.</p>
<p>You will design and evolve core data products, define patterns and standards used across the company, and drive the technical execution of initiatives that ensure our metrics, reports, and data products are scalable, governed, and trustworthy.</p>
<p>This is a high-impact, cross-functional Staff role working across Data Engineering, Data Science, Analytics, Product, IT, and business leaders.</p>
<p><strong>Key Responsibilities:</strong></p>
<p><strong>Enterprise Data Architecture</strong></p>
<ul>
<li>Own the vision and technical roadmap for Omada&#39;s enterprise data architecture, spanning ingestion, storage, modeling, and serving layers for analytics and applied statistics use cases.</li>
<li>Design, implement, and evolve scalable, secure, and cost-efficient data solutions (datalakes, warehouses, marts, semantic layers) that support governed, cross-functional analytics and self-service.</li>
<li>Define and socialize architectural patterns, data contracts, and integration standards used by data and product teams across the organization.</li>
<li>Anticipate future needs (e.g., new product lines, new modalities, AI/ML workloads) and drive proactive architectural changes rather than reacting to incidents or point-in-time requests.</li>
</ul>
<p><strong>Data Modeling, Quality, and Governance Foundations</strong></p>
<ul>
<li>Lead the design of logical and physical data models to support enterprise metrics, dashboards, and ad hoc analytics, with a focus on reusability and clear ownership.</li>
<li>Implement robust data quality, validation, and monitoring frameworks that underpin trusted “single source of truth” definitions for core concepts (e.g., active member, MAU, GLP-1 member).</li>
<li>Partner with the Senior Product Manager, Data Enablement &amp; Governance to translate governance decisions (definitions, ownership, change-management processes) into concrete technical implementations in the data platform.</li>
<li>Set standards and review mechanisms to ensure new pipelines, marts, and reports align with enterprise definitions and governance policies.</li>
<li>Continuously improve performance, scalability, and cost-efficiency of data workflows and storage; lead deep dives and remediation for complex production issues.</li>
</ul>
<p><strong>Enterprise Data Products Lifecycle</strong></p>
<ul>
<li>In close partnership with the Senior PM, define and deliver core, reusable data products (e.g., engagement, clinical, financial, client, care delivery datasets) that power dashboards, reporting, and self-service analytics.</li>
<li>Co-architect and implement technical foundations for AI-assisted analytics tools, governed semantic layers, and reporting applications that make analysts and business users more efficient.</li>
<li>Partner with Product and Engineering teams owning tools like Amplitude, Tableau, and internal reporting tools to ensure consistent instrumentation, mapping to enterprise definitions, and scalable access patterns.</li>
<li>Translate business and product requirements into resilient schemas, data services, and interfaces that are usable, maintainable, and auditable.</li>
<li>Ensure production data delivery meets defined SLAs and supports downstream BI, reporting apps, and applied statistics workloads.</li>
<li>Play a key role in cross-functional forums (e.g., Data Governance Committee, analytics communities) as the technical voice for feasibility, risk, and long-term platform health.</li>
</ul>
<p><strong>Technical Leadership, Mentorship, and Culture</strong></p>
<ul>
<li>Lead large, multi-team technical initiatives, from design to implementation and rollout, setting a high bar for design docs, reviews, and execution quality.</li>
<li>Mentor senior and mid-level engineers, elevating the team’s skills in data modeling, pipeline design, governance, and platform thinking.</li>
<li>Help shape playbooks for how product squads and spokes engage with central data teams on new metrics, data products, and applied stats projects.</li>
<li>Partner closely with Analytics, Data Science, Product, and business leaders to ensure data architecture and governance decisions are aligned with company OKRs and measurable business value.</li>
<li>Proactively identify complexity, duplication, and fragility in existing systems; drive simplification and standardization with sustainable solutions.</li>
<li>Model Omada’s values in day-to-day work, fostering a culture of trust, context-seeking, bold thinking, and high-impact delivery.</li>
</ul>
<p><strong>About You:</strong></p>
<ul>
<li>8+ years of experience building, maintaining, and orchestrating scalable data platforms and high-quality production pipelines, including significant experience in analytics or warehousing environments.</li>
<li>Demonstrated Staff-level impact: leading cross-team technical initiatives, making architectural decisions that shaped a multi-year roadmap, and influencing stakeholders beyond your immediate team.</li>
<li>Deep experience with cloud data ecosystems (e.g., AWS) and modern data warehouses (e.g., Redshift, Snowflake, BigQuery), including MPP query optimization.</li>
<li>Strong background in data modeling for OLTP and OLAP, and designing reusable data products for BI, reporting, and advanced analytics.</li>
<li>Hands-on experience implementing data quality, observability, and governance frameworks, ideally in a regulated or PHI/PII-sensitive environment.</li>
<li>Experience partnering with Product Management and Analytics to define and deliver platform capabilities, not just point solutions.</li>
</ul>
<p><strong>Technical Skills:</strong></p>
<ul>
<li>Strong proficiency in SQL (analytical and performance-tuned) and experience with relational and MPP databases.</li>
<li>Proficiency in at least one modern programming language used in data engineering (e.g., Python, Java, Scala) and comfort applying software engineering best practices (testing, CI/CD, code review).</li>
<li>Experience with workflow orchestration and data integration tools (e.g., Airflow) and event-driven or streaming patterns where appropriate.</li>
<li>Familiarity with BI and analytics tools (e.g., Tableau, Amplitude, or similar) and how they integrate with governed data layers.</li>
<li>Experience with data governance concepts (ownership, lineage, definitions, access controls) and their technical implementation in a modern data stack.</li>
<li>Familiarity with AI tools for development.</li>
</ul>
<p><strong>Communication &amp; Working Style:</strong></p>
<ul>
<li>Excellent communication and collaboration skills, with the ability to convey complex technical concepts to non-technical stakeholders.</li>
<li>Highly self-directed and comfortable operating in ambiguous, cross-functional problem spaces, creating clarity and direction where none exists.</li>
<li>Strong sense of ownership and bias for impact; you care about outcomes for members, customers, and internal users, not just elegant systems.</li>
</ul>
<p><strong>Benefits:</strong></p>
<ul>
<li>Competitive salary with generous annual cash bonus</li>
<li>Equity grants</li>
<li>Remote first work from home culture</li>
<li>Flexible Time Off to help you recharge</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>SQL, Cloud data ecosystems, Modern data warehouses, MPP query optimization, Data modeling, Data quality, Data governance, Workflow orchestration, Data integration, Event-driven or streaming patterns, BI and analytics tools, AI tools for development</Skills>
      <Category>Engineering</Category>
      <Industry>Healthcare</Industry>
      <Employername>Omada Health</Employername>
      <Employerlogo>https://logos.yubhub.co/omadahealth.com.png</Employerlogo>
      <Employerdescription>Omada Health is a healthcare technology company that provides digital therapeutics for chronic disease management.</Employerdescription>
      <Employerwebsite>https://www.omadahealth.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/omadahealth/jobs/7753330</Applyto>
      <Location>Remote, USA</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>3048ccd4-7de</externalid>
      <Title>Data Analyst</Title>
      <Description><![CDATA[<p>We are seeking a Data Analyst to join our growing data team. As a Data Analyst at LayerZero, you will be at the forefront of shaping a rich data foundation for a company making a real impact in the web3 space. You will work closely with teams and leaders to uncover insights, drive decision-making, and fuel our next-generation products and services.</p>
<p>The successful candidate will dive headfirst into the world of crypto data, exploring on-chain wallets and contracts, block and transaction data, insights from in-house systems, and third-party intelligence. Your mission will be to combine these diverse datasets into rich, actionable data products for a broad group of stakeholders.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Leveraging and expanding our ever-growing Kimball dimensional model.</li>
<li>Writing SQL to create and expand insights in our in-house reporting solutions.</li>
<li>Collaborating with stakeholders across the organization to conduct ad-hoc explorations and analytics.</li>
<li>Being a key owner of data quality, building out insights that serve the data team itself.</li>
<li>Composing pipelines by writing SQL code to clean, combine, refine, and aggregate data into the insights the organization needs.</li>
<li>Collaborating on new datasets to ingest into our Snowflake data warehouse, working closely with data engineers on your team.</li>
<li>Not being afraid to push code that supports tens of billions of dollars in daily transaction volume.</li>
</ul>
<p>We are looking for someone with previous data analyst experience, likely with a bachelor&#39;s degree in Computer Science, Statistics, Mathematics, Physics or related field, but we also consider and highly value equivalent practical experience.</p>
<p>Required skills:</p>
<ul>
<li>Strong SQL knowledge and experience.</li>
<li>A proven track record in data modeling, statistics, and analytics.</li>
<li>Experience working with a broad range of stakeholders.</li>
<li>Strong convictions, weakly held.</li>
</ul>
<p>Nice to have:</p>
<ul>
<li>Experience with general programming.</li>
<li>Experience with Snowflake.</li>
<li>Experience building DAG-based data pipelines.</li>
<li>Experience with streaming real-time data pipelines.</li>
<li>Previous experience with blockchain technologies, smart contracts, and decentralized finance.</li>
<li>Experience with Kimball dimensional modeling.</li>
<li>Experience working on mid-to-large-scale data stacks.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>SQL, data modeling, statistics, analytics, Snowflake, Kimball dimensional modeling, general programming, DAG-based data pipelines, streaming real-time data pipelines, blockchain technologies, smart contracts, decentralized finance</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>LayerZero</Employername>
      <Employerlogo>https://logos.yubhub.co/layerzero.com.png</Employerlogo>
      <Employerdescription>LayerZero is a company founded in 2021, creating a community of cross-chain developers.</Employerdescription>
      <Employerwebsite>https://layerzero.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/layerzerolabs/jobs/5787956004</Applyto>
      <Location>Vancouver, BC</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>5579e8fb-227</externalid>
      <Title>Senior AI Engineer</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Senior AI Engineer who is obsessed with building AI systems that actually work in production: reliable, observable, cost-efficient, and genuinely useful. This is not a research role. You will ship AI-powered features that process real financial data for real businesses.</p>
<p><strong>LLM &amp; AI Pipeline Engineering</strong></p>
<ul>
<li>Design, build, and maintain production-grade LLM integration pipelines, including retrieval-augmented generation (RAG), prompt engineering, output parsing, and chain orchestration.</li>
<li>Develop and operate AI features within Jeeves&#39;s core financial products: spend categorization, document extraction, anomaly detection, financial Q&amp;A, and automated reconciliation.</li>
<li>Implement structured output validation, fallback handling, and confidence scoring to ensure AI decisions meet reliability standards for financial use cases.</li>
<li>Evaluate and integrate AI frameworks and tools (LangChain, LlamaIndex, OpenAI API, Anthropic API, HuggingFace, vector databases) and advocate for the right tool for the job.</li>
<li>Establish prompt versioning and evaluation practices to ensure AI outputs remain accurate and consistent as models and data evolve.</li>
</ul>
<p><strong>Retrieval &amp; Vector Search</strong></p>
<ul>
<li>Design and maintain vector search pipelines using databases such as Pinecone, Weaviate, or pgvector to power semantic search and RAG-based features.</li>
<li>Build document ingestion and chunking pipelines for Jeeves&#39;s financial data, processing invoices, receipts, policy documents, and transaction records.</li>
<li>Optimize retrieval quality through embedding model selection, chunk strategy, metadata filtering, and re-ranking techniques.</li>
</ul>
<p><strong>ML Model Serving &amp; Operations</strong></p>
<ul>
<li>Collaborate with data scientists to take trained ML models from experimental notebooks to production serving infrastructure.</li>
<li>Build and maintain model serving endpoints with appropriate latency SLOs, input validation, and output monitoring.</li>
<li>Implement model performance monitoring and data drift detection to ensure production models remain accurate over time.</li>
<li>Support model retraining workflows by designing clean data pipelines and feature engineering that can be continuously updated.</li>
</ul>
<p><strong>Backend Integration &amp; Reliability</strong></p>
<ul>
<li>Integrate AI services cleanly with Jeeves&#39;s backend microservices, designing clear API contracts, circuit breakers, and graceful degradation patterns.</li>
<li>Write high-quality, testable backend code in Python or Go/Node.js to power AI-integrated features.</li>
<li>Instrument AI components with structured logging, distributed tracing, latency dashboards, and alerting to ensure operational visibility.</li>
<li>Build human-in-the-loop review workflows for AI decisions that require oversight, particularly for high-value financial actions.</li>
</ul>
<p><strong>Collaboration &amp; Growth</strong></p>
<ul>
<li>Partner with Product, Backend Engineering, and Data Science to define the AI roadmap and translate requirements into reliable systems.</li>
<li>Contribute to a culture of quality by writing design docs, reviewing peers&#39; AI system designs, and sharing learnings openly.</li>
<li>Help grow the AI engineering practice at Jeeves by establishing patterns, tooling, and best practices that the broader team can build on.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>LLM pipeline engineering, RAG architecture, ML system operation, Python programming, AI orchestration framework, ML model serving infrastructure, Observability tooling, Fintech experience, Prompt evaluation frameworks, ML lifecycle management tools, Real-time data streaming</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>Jeeves</Employername>
      <Employerlogo>https://logos.yubhub.co/jeeves.com.png</Employerlogo>
      <Employerdescription>Jeeves is a financial operating system built for global businesses that provides corporate cards, cross-border payments, and spend management software within one unified platform, serving over 5,000 clients.</Employerdescription>
      <Employerwebsite>https://www.jeeves.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/tryjeeves/2f00206f-6091-4eed-8b5f-1325afdbfe30</Applyto>
      <Location>Brazil</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>2a88ee59-dc6</externalid>
      <Title>Full Stack Engineer (Serverless)</Title>
      <Description><![CDATA[<p>We&#39;re building the fastest and most scalable infrastructure for AI inference. As a Full Stack Engineer on Serverless, you will build the core product across frontend and backend that powers our Serverless platform. This is a deeply product-focused role where you will work side-by-side with Product and Infrastructure to design and ship reusable, scalable systems that enterprise customers rely on in production every day.</p>
<p>You will be a foundational technical owner of our Serverless product as it scales to thousands of enterprise customers, with real responsibility, autonomy, and impact. This is a chance to help build a new product vertical from the ground up inside a company that is already scaling at rocket-ship speed.</p>
<p>Your responsibilities will include:</p>
<ul>
<li>Building and maintaining core Serverless UI features (dashboards, logs, observability, configuration, usage)</li>
<li>Designing and implementing backend APIs that power the Serverless product experience</li>
<li>Improving performance, reliability, and scalability of customer-facing systems</li>
<li>Working closely with Infrastructure to ensure product features align with platform capabilities</li>
<li>Owning features end-to-end, from design through production and iteration</li>
</ul>
<p>We&#39;re looking for:</p>
<ul>
<li>Strong experience working across both frontend and backend</li>
<li>Proficiency with TypeScript, Python, Postgres, and Next.js</li>
<li>Experience owning features end-to-end in production systems</li>
<li>Ability to context-switch between UI, backend, and performance work</li>
<li>A product-minded engineer who values clean abstractions and long-term maintainability</li>
<li>Comfort working in a fast-moving, low-process environment</li>
</ul>
<p>Nice to have:</p>
<ul>
<li>Experience building developer platforms or infrastructure-adjacent products</li>
<li>Familiarity with observability tooling (logging, metrics, tracing) in production environments</li>
<li>Background in distributed systems, container orchestration, or cloud-native architectures</li>
<li>Experience with real-time systems, streaming logs, or high-throughput data pipelines</li>
<li>Exposure to technologies such as Kubernetes, Prometheus, Datadog, gRPC, or similar systems</li>
<li>An entrepreneurial mindset and strong ownership mentality</li>
</ul>
<p>We offer interesting and challenging work, competitive salary and equity, plenty of learning and growth opportunities, visa sponsorship and relocation assistance, health, dental, and vision insurance, and regular team events and offsites.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$150,000 - $230,000 + equity + comprehensive benefits package</Salaryrange>
      <Skills>TypeScript, Python, Postgres, Next.js, serverless, backend APIs, frontend development, observability tooling, distributed systems, container orchestration, cloud-native architectures, real-time systems, streaming logs, high-throughput data pipelines, Kubernetes, Prometheus, Datadog, gRPC</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Fal</Employername>
      <Employerlogo>https://logos.yubhub.co/fal.com.png</Employerlogo>
      <Employerdescription>Fal builds infrastructure for AI inference and has scaled to handle tens of millions of requests per day.</Employerdescription>
      <Employerwebsite>https://www.fal.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/fal/jobs/4112697009</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>9cdc0a4d-95f</externalid>
      <Title>Staff Software Engineer, Stream Compute</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Staff Software Engineer to join our Stream Compute team at Stripe. As a key member of this team, you will help define and deliver the next generation of Stripe&#39;s Flink-first stream compute infrastructure. This is a unique opportunity to work on some of the hardest problems in operating Flink in production, such as state management, exactly-once processing, performance isolation, and automated recovery.</p>
<p>Your primary responsibilities will include designing, building, and operating stream compute infrastructure with Apache Flink at the center, partnering with product and platform teams across Stripe to understand requirements, unblocking Flink adoption, and improving how stream processing infrastructure is used end-to-end. You will also define and implement operational best practices to improve resilience and reliability at scale, drive fleet-level automation and standardization, and lead initiatives that raise the bar on Flink availability and state durability.</p>
<p>To succeed in this role, you should have experience as a technical lead for teams working on distributed systems, including scaling them in fast-moving environments. You should also have hands-on experience with big data technologies such as Flink, Spark, Kafka, Pulsar, or Pinot, and experience developing, maintaining, and debugging distributed systems built with open-source tools. Additionally, you should have strong software engineering skills, a passion for big data distributed systems, and the ability to write high-quality code in languages such as Go, Java, or Scala.</p>
<p>If you&#39;re interested in joining our team and contributing to the development of our stream compute infrastructure, please don&#39;t hesitate to apply.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Apache Flink, Kafka, Temporal, AWS services, Distributed systems, Big data technologies, Software engineering, Go, Java, Scala, Streaming infrastructure, Real-time processing frameworks, Control planes, Open source contributions</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Stripe</Employername>
      <Employerlogo>https://logos.yubhub.co/stripe.com.png</Employerlogo>
      <Employerdescription>Stripe is a financial infrastructure platform for businesses, used by millions of companies worldwide.</Employerdescription>
      <Employerwebsite>https://stripe.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/stripe/jobs/7767063</Applyto>
      <Location>San Francisco, Seattle, New York, Toronto</Location>
      <Country></Country>
      <Postedate>2026-03-31</Postedate>
    </job>
    <job>
      <externalid>8b80757a-1cf</externalid>
      <Title>AV Builds and Operations</Title>
      <Description><![CDATA[<p>Join the Audio Visual team at Stripe, responsible for delivering best-in-class collaboration experiences. As an AV Builds and Operations professional, you will research and develop new AV experiences and standards, produce and operate events, and manage projects ranging from office builds to technology rollouts.</p>
<p>Responsibilities:</p>
<ul>
<li>Research and develop AV technology standards across our offices</li>
<li>Serve as the highest point of escalation internally for AV issues and incidents of varying complexity and criticality</li>
<li>Maintain best-in-class in-office collaboration support through cross-functional communication and partnership</li>
<li>Contribute directly to the upleveling and maturation of our services and processes, keeping scalability, efficiency, and organization in mind</li>
<li>Produce and operate our highest-tiered internal events, such as our Company All Hands, and provide support when needed</li>
<li>Ensure the AV experience at Stripe is equitable and consistent across the globe by leveraging remote-managed technologies</li>
<li>Own and manage the design, procurement, installation, and sign-off of new office builds and retrofits</li>
<li>Develop run books and document processes to ensure repeat success for operational support teams and fellow Stripes</li>
</ul>
<p>Requirements:</p>
<ul>
<li>5+ years of relevant experience in an enterprise environment scaling build and deployment processes</li>
<li>Prior experience delivering, developing, and writing complex technical solutions</li>
<li>High-level knowledge of AV hardware &amp; software infrastructures</li>
<li>Ability to collaborate consistently and effectively within a global team</li>
<li>Strong understanding of streaming and event production workflows and the necessary protocols</li>
<li>Ability to design and support high-complexity production spaces as well as standardized conference room systems</li>
<li>Experience with managing multiple vendors globally to ensure multiple overlapping projects are delivered on time</li>
<li>Strong problem-solving skills, a solution-oriented mindset, and the ability to drive issues to resolution</li>
<li>Experience being a catalyst for positive change in a global organization</li>
<li>Experience empowering the team and leading with empathy and integrity</li>
<li>Precise communication skills, both written and verbal</li>
</ul>
<p>Preferred Qualifications:</p>
<ul>
<li>CTS and/or CTS-D certification</li>
<li>Q-Sys Level 1+ certification</li>
<li>Dante levels 1–3 certification</li>
<li>Experience with QSC and Crestron programming</li>
<li>Project Management experience</li>
<li>AV managed services oversight in line with the ITIL 4 framework</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel></Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>AV hardware &amp; software infrastructures, Streaming and event production workflows, Complex technical solutions, Global team collaboration, Problem-solving skills, CTS and/or CTS-D certification, Q-Sys Level 1+ certification, Dante levels 1–3 certification, QSC and Crestron programming, Project Management</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Stripe</Employername>
      <Employerlogo>https://logos.yubhub.co/stripe.com.png</Employerlogo>
      <Employerdescription>Stripe is a financial infrastructure platform for businesses, used by millions of companies worldwide.</Employerdescription>
      <Employerwebsite>https://stripe.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/stripe/jobs/7675097</Applyto>
      <Location>NYC</Location>
      <Country></Country>
      <Postedate>2026-03-31</Postedate>
    </job>
    <job>
      <externalid>2c9b9b51-aad</externalid>
      <Title>AV Builds and Operations</Title>
      <Description><![CDATA[<p>We are seeking an experienced professional to join our Audio Visual team at Stripe. As a key member of this team, you will research and develop new AV experiences and standards, produce and operate events, and manage projects ranging from office builds to technology rollouts.</p>
<p>Responsibilities:</p>
<ul>
<li>Research and develop AV technology standards across our offices</li>
<li>Serve as the highest point of escalation internally for AV issues and incidents of varying complexity and criticality</li>
<li>Maintain best-in-class in-office collaboration support through cross-functional communication and partnership</li>
<li>Contribute directly to the upleveling and maturation of our services and processes, keeping scalability, efficiency, and organization in mind</li>
<li>Produce and operate our highest-tiered internal events, such as our Company All Hands, and provide support when needed</li>
<li>Ensure the AV experience at Stripe is equitable and consistent across the globe by leveraging remote-managed technologies</li>
<li>Own and manage the design, procurement, installation, and sign-off of new office builds and retrofits</li>
<li>Develop run books and document processes to ensure repeat success for operational support teams and fellow Stripes</li>
</ul>
<p>Requirements:</p>
<ul>
<li>5+ years of relevant experience in an enterprise environment scaling build and deployment processes</li>
<li>Prior experience delivering, developing, and writing complex technical solutions</li>
<li>High-level knowledge of AV hardware &amp; software infrastructures</li>
<li>Ability to collaborate consistently and effectively within a global team</li>
<li>Strong understanding of streaming and event production workflows and the necessary protocols</li>
<li>Ability to design and support high-complexity production spaces as well as standardized conference room systems</li>
<li>Experience with managing multiple vendors globally to ensure multiple overlapping projects are delivered on time</li>
<li>Strong problem-solving skills, a solution-oriented mindset, and the ability to drive issues to resolution</li>
<li>Experience being a catalyst for positive change in a global organization</li>
<li>Experience empowering the team and leading with empathy and integrity</li>
<li>Precise communication skills, both written and verbal</li>
</ul>
<p>Preferred Qualifications:</p>
<ul>
<li>CTS and/or CTS-D certification</li>
<li>Q-Sys Level 1+ certification</li>
<li>Dante levels 1–3 certification</li>
<li>Experience with QSC and Crestron programming</li>
<li>Project Management experience</li>
<li>AV managed services oversight in line with the ITIL 4 framework</li>
</ul>
<p>Please note that the preferred qualifications are a bonus, not a requirement.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>AV hardware &amp; software infrastructures, Streaming and event production workflows, Complex technical solutions, Project management, Global team collaboration, CTS and/or CTS-D certification, Q-Sys Level 1+ certification, Dante levels 1–3 certification, QSC and Crestron programming, AV managed services oversight in line with the ITIL 4 framework</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Stripe</Employername>
      <Employerlogo>https://logos.yubhub.co/stripe.com.png</Employerlogo>
      <Employerdescription>Stripe is a financial infrastructure platform for businesses, used by millions of companies worldwide.</Employerdescription>
      <Employerwebsite>https://stripe.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/stripe/jobs/7657130</Applyto>
      <Location>London</Location>
      <Country></Country>
      <Postedate>2026-03-31</Postedate>
    </job>
    <job>
      <externalid>56dc9a51-e66</externalid>
      <Title>Principal Consultant - Data Architecture</Title>
      <Description><![CDATA[<p><strong>Principal Consultant - Data Architecture</strong></p>
<p>You will be part of an entrepreneurial, high-growth environment with 300,000 employees. Our dynamic organization lets you work across functional business pillars, contributing your ideas, experience, and diverse thinking.</p>
<p><strong>About Your Role</strong></p>
<p>As a Principal Data Architecture Consultant, you will act as a senior technical leader in complex data and analytics engagements. You will shape and govern end-to-end enterprise data architectures, lead technical teams, and serve as a trusted technical advisor for clients and internal stakeholders.</p>
<p><strong>Your Role Will Include:</strong></p>
<ul>
<li>Define and govern target enterprise data, integration and analytics architectures across cloud and hybrid environments</li>
<li>Translate business objectives into scalable, secure, and compliant data solutions</li>
<li>Lead the design of end-to-end data solutions (ingestion, integration, storage, security, processing, analytics, AI enablement)</li>
<li>Guide delivery teams through implementation, rollout, and production readiness</li>
<li>Function as senior technical counterpart for client architects, IT leads, and engineering teams</li>
<li>Mentor data architects, system architects and engineers and contribute to best practices and reference architectures</li>
<li>Support pre-sales and solution design activities from a technical perspective</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>5–8+ years of experience in enterprise data architecture, system data integration, data engineering, or analytics</li>
<li>Proven experience leading enterprise data architecture workstreams or technical teams</li>
<li>Strong client-facing experience in complex enterprise environments</li>
</ul>
<p><strong>Core Data &amp; Analytics Technology Skills</strong></p>
<ul>
<li>Strong expertise in modern data architectures, including:
<ul>
<li>Data Mesh / Data Fabric / data lake / data warehouse architectures</li>
<li>Modern data architecture design principles</li>
<li>Batch and streaming data integration patterns</li>
<li>Data platform, DevOps, deployment, and security architectures</li>
<li>Analytics and AI enablement architectures</li>
</ul>
</li>
<li>Hands-on experience with cloud data platforms, e.g.:
<ul>
<li>Azure, AWS, or GCP</li>
<li>Databricks, Snowflake, BigQuery, Azure Synapse / Microsoft Fabric</li>
</ul>
</li>
<li>Strong SQL skills and experience with relational databases (e.g. Postgres, SQL Server, Oracle)</li>
<li>Experience with NoSQL databases (e.g. Cosmos DB, MongoDB, InfluxDB)</li>
<li>Solid understanding of API-based and event-driven architectures</li>
<li>Experience designing and governing enterprise data migration programmes, including mapping, transformation rules, data quality remediation etc.</li>
</ul>
<p><strong>Engineering &amp; Platform Foundations</strong></p>
<ul>
<li>Experience with data pipelines, orchestration, and automation</li>
<li>Familiarity with CI/CD concepts and production-grade deployments</li>
<li>Understanding of distributed systems; Docker / Kubernetes is a plus</li>
</ul>
<p><strong>Data Management &amp; Governance</strong></p>
<ul>
<li>Strong understanding of data management and governance principles, including:
<ul>
<li>Data quality, metadata, lineage, master data management</li>
<li>Data management software and tools</li>
<li>Security, access control, and compliance considerations</li>
</ul>
</li>
<li>Bachelor’s or Master’s degree in Computer Science, Engineering, Mathematics, or a related field or equivalent practical experience</li>
</ul>
<p><strong>Nice to Have</strong></p>
<ul>
<li>Exposure to advanced analytics, AI / ML or GenAI from an architectural perspective</li>
<li>Experience with streaming platforms (e.g. Kafka, Azure Event Hubs)</li>
<li>Hands-on experience with data governance or metadata tools</li>
<li>Cloud, data, or architecture certifications</li>
</ul>
<p><strong>Language &amp; Mobility</strong></p>
<ul>
<li>Very good English skills</li>
<li>Willingness to travel for project-related work</li>
</ul>
<p><strong>Benefits</strong></p>
<p>You will work with the most innovative technological solutions in the modern data ecosystem. In this role, you’ll see your own ideas transform into breakthrough results in the areas of Data &amp; Analytics Strategy, Data Management &amp; Governance, Data Platforms &amp; Engineering, and Analytics &amp; Data Science.</p>
<p><strong>About Infosys Consulting</strong></p>
<p>Be part of a globally renowned management consulting firm on the front-line of industry disruption and at the cutting edge of technology. We work with market leading brands across sectors. Our culture is inclusive and entrepreneurial. Being a mid-size consultancy within the scale of Infosys gives us the global reach to partner with our clients throughout their transformation journey.</p>
<p>Our core values, IC-LIFE, form a common code that helps us move forward. IC-LIFE stands for Inclusion, Equity and Diversity, Client, Leadership, Integrity, Fairness, and Excellence. To learn more about Infosys Consulting and our values, please visit our careers page.</p>
<p>Within Europe, we are recognized as one of the UK’s top firms by the Financial Times and Forbes for our client innovations, cultural diversity, and dedicated training and career paths. Infosys is on Germany’s top-employers list for 2023, and Management Consulting Magazine named us one of the Best Firms to Work For. Furthermore, Infosys has been recognized by the Top Employers Institute, a global certification company, for its exceptional standards in employee conditions across Europe for five years in a row.</p>
<p>We offer industry-leading compensation and benefits, along with top training and development opportunities, so that you can grow your career and achieve your personal ambitions. Curious to learn more? We’d love to hear from you. Apply today!</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>enterprise data architecture, system data integration, data engineering, analytics, modern data architectures, Data Mesh / Data Fabric / data lake / data warehouse architectures, modern data architecture design principles, batch and streaming data integration patterns, data platform, DevOps, deployment and security architectures, analytics and AI enablement architectures, cloud data platforms, Azure, AWS, GCP, Databricks, Snowflake, BigQuery, Azure Synapse / Microsoft Fabric, SQL, relational databases, Postgres, SQL Server, Oracle, NoSQL databases, Cosmos DB, MongoDB, InfluxDB, API-based and event-driven architectures, data migration programmes, data pipelines, orchestration, automation, CI/CD concepts, production-grade deployments, distributed systems, Docker, Kubernetes, data management and governance principles, data quality, metadata, lineage, master data management, data management software and tools, security, access control, compliance considerations, advanced analytics, AI / ML or GenAI, streaming platforms, Kafka, Azure Event Hubs, data governance or metadata tools, cloud, data, and architecture certifications</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Infosys Consulting - Europe</Employername>
      <Employerlogo>https://logos.yubhub.co/view.com.png</Employerlogo>
      <Employerdescription>Infosys Consulting - Europe is a globally renowned management consulting firm that works with market leading brands across sectors. It is a mid-size player with a supportive, entrepreneurial spirit.</Employerdescription>
      <Employerwebsite>https://jobs.workable.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.workable.com/view/hpBWjvvy8D6B1f818cHxZR/remote-principal-consultant---data-architecture-in-poland-at-infosys-consulting---europe</Applyto>
      <Location>Poland</Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
  </jobs>
</source>