<?xml version="1.0" encoding="UTF-8"?>
<source>
  <jobs>
    <job>
      <externalid>5163623e-08f</externalid>
      <Title>R&amp;D Specialist III</Title>
<Description><![CDATA[<p>As an R&amp;D Specialist III, you will work closely with the Genomics &amp; Sequencing team to deliver daily and weekly sequencing targets under defined timelines. Your primary responsibilities will include operating Illumina and other NGS instruments with high technical accuracy, executing automated and manual sequencing workflows, and troubleshooting instrument and process issues. You will work primarily in the laboratory (approximately 80% of the time) and participate in weekend sequencing run coverage. Additionally, you will follow all SOPs, quality standards, safety requirements, and documentation practices while contributing to workflow optimization and continuous improvement.</p>
<p>You will coordinate and prioritize workload within a large team, provide on-the-job support to contingent staff, and communicate run performance, KPIs, and operational updates to stakeholders. You will work closely with hub specialists and scientists as well as supporting functional spaces within the Precision Genomics program.</p>
<p>To be successful in this role, you will possess a Bachelor&#39;s degree and knowledge of next-generation sequencing (NGS) technologies, platforms, and associated workflows. You will also have a strong molecular biology foundation with the ability to troubleshoot workflows, interpret data, and apply sound scientific judgment. Additionally, you will have demonstrated good laboratory technique, including accurate pipetting, contamination control, and consistent execution of SOPs.</p>
<p>Preferred qualifications include a Master&#39;s degree in Biology, Chemistry, or a related field with 1+ year of relevant experience, or a Bachelor&#39;s degree with 4+ years of related experience in academia or industry. Experience operating NGS instruments (Illumina preferred) in research or production settings, high-throughput automation experience, and proficiency with digital lab systems such as LIMS, sample-tracking tools, and data analysis or visualization software are also desirable.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$66,400.00 to $99,600.00</Salaryrange>
      <Skills>next-generation sequencing (NGS) technologies, molecular biology, laboratory technique, pipetting, contamination control, SOPs, workflow optimization, continuous improvement, high-throughput automation, digital lab systems, LIMS, sample-tracking tools, data analysis or visualization software</Skills>
      <Category>Engineering</Category>
      <Industry>Manufacturing</Industry>
      <Employername>Crop Science</Employername>
      <Employerlogo>https://logos.yubhub.co/talent.bayer.com.png</Employerlogo>
      <Employerdescription>Bayer is a multinational pharmaceutical and life sciences company that develops and manufactures crop protection products, seeds, and biotechnology solutions.</Employerdescription>
      <Employerwebsite>https://talent.bayer.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://talent.bayer.com/careers/job/562949976917930</Applyto>
      <Location>Chesterfield</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>cf82e408-47b</externalid>
      <Title>Scaling Experiments for AI-designed Medicines</Title>
      <Description><![CDATA[<p>At Inceptive, you will help pioneer the next generation of AI-designed drugs, with the potential to positively impact billions of people, as part of a collaborative, interdisciplinary team.</p>
<p>Training on natural and experimentally-derived data is a core aspect of how our AI models learn to generate therapeutic molecules with exceptional properties. We invest deeply in building and scaling data sources to train and evaluate our models for maximal performance. At the same time, careful validation in orthogonal, translationally-relevant assays is crucial to orient our models.</p>
<p>We are seeking a senior scientific leader versed in high-throughput and translational biology to drive the continued growth and impact of our Palo Alto Lab. You will be a force multiplier for our scientists and engineers, ensuring a high standard of scientific rigor and output in a fast-paced, dynamic environment. You will also interface closely with our AI team to maximize the impact of internal data on Inceptive’s foundation models.</p>
<p><strong>Your Mission, should you choose to accept it</strong></p>
<ul>
<li>Lead development and scale-up of high-throughput assays across in vitro, cellular, and in vivo systems, leveraging multiplexed assays and laboratory automation</li>
<li>Define and execute a lab strategy aligned with Inceptive&#39;s therapeutic and platform priorities, while remaining flexible as modality and partnership needs evolve</li>
<li>Champion an interdisciplinary culture, encouraging curiosity, rigor, and collaboration across scientific boundaries</li>
<li>Manage, mentor, and develop a multidisciplinary team of scientists and engineers that rapidly generates high-impact biological data to improve and validate AI design models</li>
<li>Oversee the development of industry-standard validation assays to keep models and data generation aligned with downstream therapeutic application</li>
</ul>
<p><strong>Qualifications</strong></p>
<ul>
<li>PhD and 6+ years of post-PhD experience in industrial research applied to drug development</li>
<li>Experience managing and mentoring a data-driven lab of 10+ scientists</li>
<li>Experience using high-throughput and/or highly multiplexed assays to generate rich datasets from mammalian cells</li>
<li>Proven ability to set scientific direction while also executing operationally</li>
<li>Deep understanding of theory, techniques, and experimental design in molecular and cellular biology</li>
<li>Visualization, analysis, and statistics applied to complex biological datasets</li>
<li>Availability to work with team members across the US and Europe, with meetings starting at 7am PT</li>
<li>Readiness to travel several times a year for company retreats and business events</li>
<li>We value in-person collaboration and expect candidates to work at our lab location</li>
</ul>
<p><strong>Preferred skills</strong></p>
<ul>
<li>Translational experience with mRNA, oligonucleotides, or other genetic medicines</li>
<li>Expertise in immunology and/or cell therapy</li>
<li>Hands-on experience in RNA biology and biochemistry</li>
<li>Scientific programming in Python</li>
<li>Hands-on experience with modern data engineering workflows</li>
</ul>
<p><strong>Compensation:</strong> $245K – $305K + Bonus + Equity</p>
<p><strong>What we offer</strong></p>
<ul>
<li>A competitive compensation package</li>
<li>30 days paid vacation per year</li>
<li>Comprehensive health insurance for US-based Beginners</li>
<li>401K with company match for US-based Beginners and Direktversicherung for German Beginners</li>
<li>Quarterly company-wide retreats</li>
<li>Monthly wellness benefit</li>
<li>Budget for multiple visits per year to our offices in Berlin, Palo Alto, or Switzerland</li>
<li>Learning &amp; Development budget to attend conferences, take courses, or otherwise invest in your professional growth, as well as access to the Learning &amp; Development platform EdX and Hone</li>
<li>A buddy to help you get settled</li>
</ul>
<p>*Varies by country and does not apply to internships</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$245K – $305K + Bonus + Equity</Salaryrange>
      <Skills>high-throughput and translational biology, AI model development, data science, molecular and cellular biology, experimental design, translational experience with mRNA, oligonucleotides, or other genetic medicines, expertise in immunology and/or cell therapy, hands-on experience in RNA biology and biochemistry, scientific programming in Python, hands-on experience with modern data engineering workflows</Skills>
      <Category>Engineering</Category>
      <Industry>Biotechnology</Industry>
      <Employername>Inceptive</Employername>
      <Employerlogo>https://logos.yubhub.co/inceptive.com.png</Employerlogo>
      <Employerdescription>Inceptive is a biotechnology company developing AI-designed drugs.</Employerdescription>
      <Employerwebsite>https://inceptive.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/inceptive/jobs/5060348007</Applyto>
      <Location>Palo Alto, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>95c49f85-a98</externalid>
      <Title>Staff+ Software Engineer, Observability</Title>
      <Description><![CDATA[<p><strong>About the Role</strong></p>
<p>Anthropic is seeking talented and experienced Software Engineers to join our Observability team within the Infrastructure organization. The Observability team owns the monitoring and telemetry infrastructure that every engineer and researcher at Anthropic depends on: from metrics and logging pipelines to distributed tracing, error analytics, alerting, and the dashboards and query interfaces that make it all actionable.</p>
<p>As Anthropic scales its infrastructure across massive GPU, TPU, and Trainium clusters, the volume and complexity of operational data is growing by orders of magnitude. We’re building next-generation observability systems (high-throughput ingest pipelines, cost-efficient columnar storage, unified query layers across signals, and agentic diagnostic tools) to ensure that engineers can detect, diagnose, and resolve issues in minutes rather than hours, even as the systems they operate become exponentially more complex.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Design and build scalable telemetry ingest and storage pipelines for metrics, logs, traces, and error data across Anthropic’s multi-cluster infrastructure</li>
<li>Own and evolve core observability platforms, driving migrations and architectural improvements that improve reliability, reduce cost, and scale with organisational growth</li>
<li>Build instrumentation libraries, SDKs, and integrations that make it easy for engineering teams to emit high-quality telemetry from their services</li>
<li>Drive alerting and SLO infrastructure that enables teams to define, monitor, and respond to reliability targets with minimal noise</li>
<li>Reduce mean time to detection and resolution by building cross-signal correlation, unified query interfaces, and AI-assisted diagnostic tooling</li>
<li>Partner with Research, Inference, Product, and Infrastructure teams to ensure observability solutions meet the unique needs of each organisation</li>
</ul>
<p><strong>You May Be a Good Fit If You</strong></p>
<ul>
<li>Have 10+ years of relevant industry experience building and operating large-scale observability or monitoring infrastructure</li>
<li>Have deep experience with at least one observability signal area (metrics, logging, tracing, or error analytics) and familiarity with the others</li>
<li>Understand high-throughput data pipelines, columnar storage engines, and the tradeoffs involved in ingesting and querying telemetry data at scale</li>
<li>Have experience operating or building on top of observability platforms such as Prometheus, Grafana, ClickHouse, OpenTelemetry, or similar systems</li>
<li>Have strong proficiency in at least one of Python, Rust, or Go</li>
<li>Have excellent communication skills and enjoy partnering with internal teams to improve their operational visibility and incident response capabilities</li>
<li>Are excited about building foundational infrastructure and are comfortable working independently on ambiguous, high-impact technical challenges</li>
</ul>
<p><strong>Strong Candidates May Also Have</strong></p>
<ul>
<li>Experience operating metrics systems at very high cardinality (hundreds of millions of active time series or more)</li>
<li>Experience with log storage migrations or operating columnar databases (ClickHouse, BigQuery, or similar) for analytics workloads</li>
<li>Experience with OpenTelemetry instrumentation, collector pipelines, and tail-based sampling strategies</li>
<li>Experience building or operating alerting platforms, on-call tooling, or SLO frameworks at scale</li>
<li>Experience with Kubernetes-native monitoring, eBPF-based observability, or continuous profiling</li>
<li>Interest in applying AI/LLMs to operational workflows such as automated root cause analysis, anomaly detection, or intelligent alerting</li>
</ul>
<p><strong>Logistics</strong></p>
<ul>
<li>Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience</li>
<li>Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience</li>
<li>Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position</li>
<li>Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</li>
<li>Visa sponsorship: We do sponsor visas! However, we aren’t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</li>
</ul>
<p><strong>How we&#39;re different</strong></p>
<p>We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact: advancing our long-term goals of steerable, trustworthy AI, rather than working on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We’re an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.</p>
<p><strong>Come work with us!</strong></p>
<p>Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>£325,000-£390,000 GBP</Salaryrange>
      <Skills>observability, telemetry, metrics, logging, tracing, error analytics, alerting, SLO infrastructure, cross-signal correlation, unified query interfaces, AI-assisted diagnostic tooling, Python, Rust, Go, Prometheus, Grafana, ClickHouse, OpenTelemetry, high-throughput data pipelines, columnar storage engines, Kubernetes-native monitoring, eBPF-based observability, continuous profiling, AI/LLMs, automated root cause analysis, anomaly detection, intelligent alerting</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation that creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5102440008</Applyto>
      <Location>London, UK</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>b2637f59-e14</externalid>
      <Title>Full-Stack Software Engineer, Reinforcement Learning</Title>
<Description><![CDATA[<p>As a Full-Stack Software Engineer in RL, you&#39;ll build the platforms, tools, and interfaces that power environment creation, data collection, and training observability. The quality of Claude&#39;s next generation depends on the quality of the data we train it on, and the systems you build are what make that data possible. You&#39;ll own product surfaces end-to-end, from backend services and APIs to the web UIs that researchers, external vendors, and thousands of data labelers use every day.</p>
<p>You don&#39;t need a background in ML research. What matters is that you can take an ambiguous, high-stakes problem and ship a polished, reliable product against it, fast. This team moves very quickly. Claude writes a lot of the code we commit, which means the bottleneck isn&#39;t typing; it&#39;s judgment, taste, and the ability to react to what researchers need next.</p>
<p>You&#39;ll iterate on data collection strategies to distill the knowledge of thousands of human experts around the world into our models, and you&#39;ll do it in a loop that closes in hours and days, not quarters or months.</p>
<p>Anthropic&#39;s Reinforcement Learning organization leads the research and development that trains Claude to be capable, reliable, and safe. We&#39;ve contributed to every Claude model, with significant impact on the autonomy and coding capabilities of our most advanced models.</p>
<p>Our work spans teaching models to use computers effectively, advancing code generation through RL, pioneering fundamental RL research for large language models, and building the scalable training methodologies behind our frontier production models.</p>
<p>The RL org is organized around four goals: solving the science of long-horizon tasks and continual learning, scaling RL data and environments to be comprehensive and diverse, automating software engineering end-to-end, and training the frontier production model.</p>
<p>Our engineering teams build the environments, evaluation systems, data pipelines, and tooling that make all of this possible, from realistic agentic training environments and scalable code data generation to human data collection platforms and production training operations.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Build and extend web platforms for RL environment creation, management, and quality review, including environment configuration, versioning, and validation workflows</li>
<li>Develop vendor-facing interfaces and tooling that let external partners create, submit, and iterate on training environments with minimal friction</li>
<li>Design and implement platforms for human data collection at scale, including labeling workflows, quality assurance systems, and feedback mechanisms that surface reward signal integrity issues early</li>
<li>Build evaluation dashboards and observability UIs that give researchers real-time insight into environment quality, training run health, and reward hacking</li>
<li>Create backend services and APIs that connect environment authoring tools, data collection systems, and RL training infrastructure</li>
<li>Build and expand scalable code data generation pipelines, producing diverse programming tasks with robust reward signals across languages and difficulty levels</li>
<li>Develop onboarding automation and documentation tooling so new vendors and internal users ramp up in hours, not weeks</li>
<li>Partner closely with RL researchers, data operations, and vendor management to translate ambiguous requirements into well-scoped, well-designed products</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>Strong software engineering fundamentals and real full-stack range; you&#39;re comfortable owning a surface from database schema to frontend</li>
<li>Proficient in Python and a modern web stack (React, TypeScript, or similar)</li>
<li>Track record of shipping systems that solved a hard problem, not just shipped on time; e.g. you built the thing that made your team 10x faster, or the internal tool nobody thought was possible</li>
<li>Operate with high agency: you identify what needs to be done and drive it forward without waiting for a ticket</li>
<li>Found yourself wondering &quot;why isn&#39;t this moving faster?&quot; in previous roles, and then have done something about it</li>
<li>Care about UX and can build interfaces that are intuitive for both technical researchers and non-technical labelers</li>
<li>Communicate clearly with researchers, operations teams, and engineers, and can turn vague asks into well-scoped work</li>
<li>Thrive in a fast-moving environment where priorities shift, Claude is your pair programmer, and the next problem is often one nobody has solved before</li>
<li>Care about Anthropic&#39;s mission to build safe, beneficial AI and want your work to contribute directly to it</li>
</ul>
<p><strong>Nice to Have</strong></p>
<ul>
<li>Built data collection, labeling, or annotation platforms, ideally ones that had to scale across many vendors or many task types</li>
<li>Background building multi-tenant platforms with role-based access, audit trails, and vendor management workflows</li>
<li>Experience with cloud infrastructure (GCP or AWS), Docker, and CI/CD pipelines</li>
<li>Familiarity with LLM training, fine-tuning, or evaluation workflows</li>
<li>Experience with async Python (Trio, asyncio) or high-throughput API design</li>
<li>Background in dashboards, monitoring, or observability tooling</li>
<li>Experience working directly with external vendors or partners on technical integrations</li>
<li>A background that isn&#39;t a straight line, e.g. math or physics into SWE, competitive programming, research into engineering, or a side project that outgrew its scope</li>
</ul>
<p><strong>Representative Projects</strong></p>
<ul>
<li>Building a unified platform for human data collection that integrates labeling workflows, vendor management, and QA for complex agentic tasks</li>
<li>Developing vendor onboarding automation that handles Docker registry access, API token management, and environment validation</li>
<li>Creating evaluation and observability dashboards that catch reward hacks, measure environment difficulty, and give real-time feedback during production training</li>
<li>Building environment quality review workflows that let researchers browse, grade, and provide feedback on training environments</li>
<li>Developing automated environment quality pipelines that validate correctness and difficulty calibration before environments hit production training</li>
<li>Building internal tools for browsing and analyzing training run results, environment statistics, and data collection progress</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$300,000-$405,000 USD</Salaryrange>
      <Skills>Python, Modern web stack, React, TypeScript, Strong software engineering fundamentals, Full-stack range, Database schema, Frontend, Cloud infrastructure, Docker, CI/CD pipelines, LLM training, Fine-tuning, Evaluation workflows, Async Python, High-throughput API design, Dashboards, Monitoring, Observability tooling, Data collection, Labeling, Annotation platforms, Multi-tenant platforms, Role-based access, Audit trails, Vendor management workflows</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a company working on developing artificial intelligence systems. It has a quickly growing team of researchers, engineers, and business leaders.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5186067008</Applyto>
      <Location>San Francisco, CA | New York City, NY</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>dc17980d-461</externalid>
      <Title>Research Engineer, Interpretability</Title>
<Description><![CDATA[<p><strong>Job Title:</strong> Research Engineer, Interpretability<br /><strong>Location:</strong> San Francisco, CA<br /><strong>Department:</strong> AI Research &amp; Engineering</p>
<p><strong>Job Description</strong></p>
<p>When you see what modern language models are capable of, do you wonder, &quot;How do these things work? How can we trust them?&quot;</p>
<p>The Interpretability team at Anthropic is working to reverse-engineer how trained models work because we believe that a mechanistic understanding is the most robust way to make advanced systems safe.</p>
<p>Think of us as doing &quot;neuroscience&quot; of neural networks using &quot;microscopes&quot; we build, or reverse-engineering neural networks like binary programs.</p>
<p>More resources to learn about our work:</p>
<ul>
<li>Our research blog, covering advances including Monosemantic Features and Circuits</li>
<li>An Introduction to Interpretability from our research lead, Chris Olah</li>
<li>The Urgency of Interpretability from CEO Dario Amodei</li>
<li>Engineering Challenges Scaling Interpretability, directly relevant to this role</li>
<li>60 Minutes segment: around 8:07, see a demo of tooling our team built</li>
<li>New Yorker article: what it&#39;s like to work on one of AI&#39;s hardest open problems</li>
</ul>
<p>Even if you haven&#39;t worked on interpretability before, the infrastructure expertise is similar to what&#39;s needed across the lifecycle of a production language model:</p>
<ul>
<li><strong>Pretraining:</strong> Training dictionary learning models looks a lot like model pretraining: creating stable, performant training jobs for massively parameterized models across thousands of chips</li>
<li><strong>Inference:</strong> Interp runs a customized inference stack. Day-to-day analysis requires services that allow editing a model&#39;s internal activations mid-forward-pass, for example, adding a &quot;steering vector&quot;</li>
<li><strong>Performance:</strong> Like all LLM work, we push up against the limits of hardware and software. Rather than squeezing out the last 0.1%, we focus on finding bottlenecks, fixing them, and moving ahead, given our rapidly evolving research and safety mission</li>
</ul>
<p>The science keeps scaling, and it&#39;s now applied directly in safety audits on frontier models, with real deadlines. As our research has matured, engineering and infrastructure have become a bottleneck. Your work will have a direct impact on one of the most important open problems in AI.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Build and maintain the specialized inference and training infrastructure that powers interpretability research, including instrumented forward/backward passes, activation extraction, and steering vector application</li>
<li>Resolve scaling and efficiency bottlenecks through profiling, optimization, and close collaboration with peer infrastructure teams</li>
<li>Design tools, abstractions, and platforms that enable researchers to rapidly experiment without hitting engineering barriers</li>
<li>Help bring interpretability research into production safety audits, with real deadlines and high reliability expectations</li>
<li>Work across the stack, from model internals and accelerator-level optimization to user-facing research tooling</li>
</ul>
<p><strong>You May Be a Good Fit If You</strong></p>
<ul>
<li>Have 5-10+ years of experience building software</li>
<li>Are highly proficient in at least one programming language (e.g., Python, Rust, Go, Java) and productive with Python</li>
<li>Are extremely curious about unfamiliar domains and can quickly learn and put that knowledge to work, e.g. diving into new layers of the stack to find bottlenecks</li>
<li>Have a strong ability to prioritize the most impactful work and are comfortable operating with ambiguity and questioning assumptions</li>
<li>Prefer fast-moving collaborative projects to extensive solo efforts</li>
<li>Are curious about interpretability research and its role in AI safety (though no research experience is required!)</li>
<li>Care about the societal impacts and ethics of your work</li>
<li>Are comfortable working closely with researchers, translating research needs into engineering solutions</li>
</ul>
<p><strong>Strong Candidates May Also Have Experience With</strong></p>
<ul>
<li>Optimizing the performance of large-scale distributed systems</li>
<li>Language modeling fundamentals with transformers</li>
<li>High-performance LLM optimization: memory management, compute efficiency, parallelism strategies, inference throughput optimization</li>
<li>Working hands-on in a mainstream ML stack: PyTorch/CUDA on GPUs or JAX/XLA on TPUs</li>
<li>Collaborating closely with researchers and building tooling to support research teams, or directly performing research with complex engineering challenges</li>
</ul>
<p><strong>Representative Projects</strong></p>
<ul>
<li>Building Garcon, a tool that allows researchers to easily instrument LLMs to extract internal activations</li>
<li>Designing and optimizing a pipeline to efficiently collect petabytes of transformer activations and shuffle them</li>
<li>Profiling and optimizing ML training jobs, including multi-GPU parallelism and memory optimization</li>
<li>Building a steered inference system that applies targeted interventions to model internals at scale (conceptually similar to Golden Gate Claude but for safety research)</li>
</ul>
<p><strong>Role-Specific Location Policy</strong></p>
<p>This role is based in the San Francisco office; however, we are open to considering exceptional candidates for remote work on a case-by-case basis.</p>
<p>The annual compensation range for this role is listed below. For sales roles, the range provided is the role&#39;s On Target Earnings (&quot;OTE&quot;) range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.</p>
<p><strong>Annual Salary:</strong> $315,000-$560,000 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$315,000-$560,000 USD</Salaryrange>
      <Skills>Python, Rust, Go, Java, PyTorch, CUDA, JAX, XLA, High Performance LLM optimization, memory management, compute efficiency, parallelism strategies, inference throughput optimization, large-scale distributed systems, language modeling fundamentals, transformers, collaborating closely with researchers, building tooling to support research teams</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a company that creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/4980430008</Applyto>
      <Location>San Francisco, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>72ebb09d-b37</externalid>
      <Title>Staff+ Software Engineer, Observability</Title>
<Description><![CDATA[<p>We&#39;re seeking talented and experienced Software Engineers to join our Observability team within the Infrastructure organization. The Observability team owns the monitoring and telemetry infrastructure that every engineer and researcher at Anthropic depends on: from metrics and logging pipelines to distributed tracing, error analytics, alerting, and the dashboards and query interfaces that make it all actionable.</p>
<p>As Anthropic scales its infrastructure across massive GPU, TPU, and Trainium clusters, the volume and complexity of operational data are growing by orders of magnitude. We&#39;re building next-generation observability systems (high-throughput ingest pipelines, cost-efficient columnar storage, unified query layers across signals, and agentic diagnostic tools) to ensure that engineers can detect, diagnose, and resolve issues in minutes rather than hours, even as the systems they operate become exponentially more complex.</p>
<p>Responsibilities:</p>
<ul>
<li>Design and build scalable telemetry ingest and storage pipelines for metrics, logs, traces, and error data across Anthropic&#39;s multi-cluster infrastructure</li>
<li>Own and evolve core observability platforms, driving migrations and architectural improvements that increase reliability, reduce cost, and scale with organizational growth</li>
<li>Build instrumentation libraries, SDKs, and integrations that make it easy for engineering teams to emit high-quality telemetry from their services</li>
<li>Drive alerting and SLO infrastructure that enables teams to define, monitor, and respond to reliability targets with minimal noise</li>
<li>Reduce mean time to detection and resolution by building cross-signal correlation, unified query interfaces, and AI-assisted diagnostic tooling</li>
<li>Partner with Research, Inference, Product, and Infrastructure teams to ensure observability solutions meet the unique needs of each organization</li>
</ul>
<p>You May Be a Good Fit If You:</p>
<ul>
<li>Have 10+ years of relevant industry experience building and operating large-scale observability or monitoring infrastructure</li>
<li>Have deep experience with at least one observability signal area (metrics, logging, tracing, or error analytics) and familiarity with the others</li>
<li>Understand high-throughput data pipelines, columnar storage engines, and the tradeoffs involved in ingesting and querying telemetry data at scale</li>
<li>Have experience operating or building on top of observability platforms such as Prometheus, Grafana, ClickHouse, OpenTelemetry, or similar systems</li>
<li>Have strong proficiency in at least one of Python, Rust, or Go</li>
<li>Have excellent communication skills and enjoy partnering with internal teams to improve their operational visibility and incident response capabilities</li>
<li>Are excited about building foundational infrastructure and are comfortable working independently on ambiguous, high-impact technical challenges</li>
</ul>
<p>Strong Candidates May Also Have:</p>
<ul>
<li>Experience operating metrics systems at very high cardinality (hundreds of millions of active time series or more)</li>
<li>Experience with log storage migrations or operating columnar databases (ClickHouse, BigQuery, or similar) for analytics workloads</li>
<li>Experience with OpenTelemetry instrumentation, collector pipelines, and tail-based sampling strategies</li>
<li>Experience building or operating alerting platforms, on-call tooling, or SLO frameworks at scale</li>
<li>Experience with Kubernetes-native monitoring, eBPF-based observability, or continuous profiling</li>
<li>Interest in applying AI/LLMs to operational workflows such as automated root cause analysis, anomaly detection, or intelligent alerting</li>
</ul>
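The tail-based sampling strategy mentioned above can be sketched briefly: spans are buffered per trace, and the keep/drop decision is deferred until the trace is complete, so erroring or slow traces are always retained. A minimal illustration, assuming an invented `TailSampler` class and a 500 ms latency threshold (this is not the OpenTelemetry collector's actual API):

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Span:
    trace_id: str
    duration_ms: float
    error: bool = False

@dataclass
class TailSampler:
    latency_threshold_ms: float = 500.0
    buffers: Dict[str, List[Span]] = field(default_factory=dict)

    def record(self, span: Span) -> None:
        # Buffer every span until its trace is complete.
        self.buffers.setdefault(span.trace_id, []).append(span)

    def decide(self, trace_id: str) -> bool:
        # Keep the whole trace if any span errored or breached the
        # latency threshold; otherwise it may be sampled out.
        spans = self.buffers.pop(trace_id, [])
        return any(s.error or s.duration_ms > self.latency_threshold_ms
                   for s in spans)

sampler = TailSampler()
sampler.record(Span("t1", duration_ms=30.0))
sampler.record(Span("t1", duration_ms=800.0))  # slow span arrives late in the trace
sampler.record(Span("t2", duration_ms=20.0))

print(sampler.decide("t1"))  # True  - latency breach, trace is retained
print(sampler.decide("t2"))  # False - fast, error-free trace may be dropped
```

The key design point, in contrast with head-based sampling, is that no decision is made at span-ingest time; the cost is buffering memory proportional to in-flight traces.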
<p>The annual compensation range for this role is $405,000-$485,000 USD.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$405,000-$485,000 USD</Salaryrange>
      <Skills>observability, monitoring, telemetry, metrics, logging, tracing, error analytics, alerting, SLO infrastructure, cross-signal correlation, unified query interfaces, AI-assisted diagnostic tooling, Python, Rust, Go, Prometheus, Grafana, ClickHouse, OpenTelemetry, high-throughput data pipelines, columnar storage engines, operating system administration, cloud computing, containerization, DevOps</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation that creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5139910008</Applyto>
      <Location>San Francisco, CA | New York City, NY | Seattle, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>5b6f9322-a9a</externalid>
      <Title>Staff Engineer, Storage Engine</Title>
      <Description><![CDATA[<p>CoreWeave is seeking a Staff Engineer, Storage Engine to join their team. The successful candidate will design and implement distributed storage solutions to support scaling data-intensive AI workloads. They will contribute to the development of exabyte-scale, S3-compatible object storage and integrate dedicated storage clusters into diverse customer environments.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Designing and implementing distributed storage solutions to support scaling data-intensive AI workloads</li>
<li>Contributing to the development of exabyte-scale, S3-compatible object storage</li>
<li>Integrating dedicated storage clusters into diverse customer environments</li>
<li>Working with technologies such as RDMA, GPU Direct Storage, and distributed filesystems protocols such as NFS or FUSE to optimize storage performance and efficiency</li>
<li>Leading efforts to improve the reliability, durability, security, and observability of the storage stack</li>
<li>Collaborating with operations teams to monitor, troubleshoot, and improve storage systems in production environments</li>
<li>Setting the bar for developing metrics and dashboards to provide visibility into storage performance and health</li>
<li>Analyzing telemetry and system data to drive improvements in throughput, latency, and resilience</li>
<li>Working cross-functionally with platform, product, and infrastructure teams to deliver seamless storage capabilities across the stack</li>
<li>Sharing knowledge and mentoring other engineers on best practices in building distributed, high-performance systems</li>
</ul>
<p>Requirements include:</p>
<ul>
<li>Bachelor&#39;s, Master&#39;s, or PhD degree in Computer Science, Engineering, or a related field</li>
<li>8-10+ years of experience working in storage systems engineering or infrastructure</li>
<li>Strong hands-on experience with object storage or distributed filesystems in production environments</li>
<li>Experience with one or more storage protocols (e.g. S3, NFS) and file systems such as Ceph, DAOS, or similar</li>
<li>Proficiency in a systems programming language such as Go, C, or Rust</li>
<li>Proficiency leveraging AI tools to augment software development</li>
<li>Familiarity with storage observability tools and telemetry pipelines (e.g., ClickHouse, Prometheus, Grafana)</li>
<li>Experience working with cloud-native infrastructure, Kubernetes, and scalable system architectures</li>
</ul>
<p>The base salary range for this role is $188,000 to $275,000.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$188,000 to $275,000</Salaryrange>
      <Skills>distributed storage, object storage, S3-compatible object storage, RDMA, GPU Direct Storage, distributed filesystems protocols, NFS, FUSE, storage performance and efficiency, reliability, durability, security, observability, telemetry, system data, throughput, latency, resilience, cloud-native infrastructure, Kubernetes, scalable system architectures</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave is a cloud computing company that provides a platform for building and scaling AI applications.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4612047006</Applyto>
      <Location>Livingston, NJ / New York, NY / Sunnyvale, CA / Bellevue, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>40d32156-365</externalid>
      <Title>Reliability Lead, Common Services</Title>
<Description><![CDATA[<p>As Reliability Lead, Common Services, you will establish and lead the Reliability Engineering and production operations practice for the Common Services organization. You&#39;ll partner closely with engineering leaders and teams across Common Services to define how we build, release, monitor, and operate critical services, raising the bar on reliability, availability, and operational excellence across the board.</p>
<p>In this role, you will:</p>
<ul>
<li>Establish and lead the SRE / production engineering practice for the Common Services organization, including standards for reliability, incident management, and on-call, in partnership with the central Product Engineering organization.</li>
<li>Develop an Operational Excellence strategy that focuses not only on improving system performance but also on monitoring and reducing operational toil.</li>
<li>Partner with engineering and product teams to define SLOs, SLIs, and error budgets for critical Common Services, and ensure these become part of how teams plan and make tradeoffs.</li>
<li>Own and improve the incident management lifecycle for Common Services, including on-call rotations, escalation paths, incident tooling, post-incident reviews, and follow-through on corrective actions.</li>
<li>Drive the observability strategy (metrics, logs, traces, dashboards, alerts) for Common Services, ensuring we have actionable visibility into the health, performance, and capacity of key systems.</li>
<li>Collaborate with engineering leads to design and review architectures for reliability, scalability, resilience, and operability, including failure modes, redundancy, and graceful degradation.</li>
<li>Lead efforts to automate and harden operational workflows, including deployments, rollbacks, configuration management, change management, and routine maintenance tasks.</li>
<li>Build strong, trust-based relationships with partner teams and stakeholders, becoming a go-to leader for production readiness and operational risk within Common Services.</li>
<li>Hire, mentor, and develop SRE and production engineering talent, fostering a culture of continuous improvement, learning from incidents, and humane on-call.</li>
<li>Partner with other SRE and production engineering leaders across CoreWeave to align on global practices, tools, and reliability goals, representing the needs and constraints of Common Services.</li>
</ul>
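The SLO and error-budget machinery above reduces to simple arithmetic: an availability SLO implies a fixed budget of allowable bad minutes per window, and burn rate measures how quickly that budget is being spent. A worked example with hypothetical numbers (a 99.9% SLO over a 30-day window; not actual team targets):

```python
# Hypothetical numbers to make the SLO / error-budget arithmetic concrete.
slo = 0.999
window_minutes = 30 * 24 * 60                 # 30-day window = 43,200 minutes

error_budget = (1 - slo) * window_minutes     # allowed "bad" minutes in the window
downtime_so_far = 10.0                        # observed bad minutes so far
budget_remaining = error_budget - downtime_so_far
burn_rate = downtime_so_far / error_budget    # fraction of the budget spent

print(round(error_budget, 1))      # 43.2
print(round(budget_remaining, 1))  # 33.2
print(round(burn_rate, 3))         # 0.231
```

Teams typically alert on burn rate rather than raw downtime, so a fast-burning incident pages immediately while slow background erosion surfaces in planning reviews.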
<p>You will be responsible for defining the reliability strategy, processes, and standards for the Common Services portfolio and driving consistent, high-quality operational practices across multiple teams.</p>
<p>The base salary range for this role is $206,000 to $303,000.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$206,000 to $303,000</Salaryrange>
      <Skills>Site Reliability Engineering, Production Engineering, Linux-based production environments, Containers, Orchestration technologies, Observability stacks, Alerting systems, SLIs/SLOs, Error budgets, Incident management, On-call rotations, Escalation paths, Post-incident reviews, Corrective actions, Automation tooling, Infrastructure-as-code, CI/CD pipelines, GPU workloads, High-performance computing, Latency/throughput-sensitive systems, Multi-tenant environments, Multi-region environments, Regulated environments, Service ownership models, Mentoring, Managing senior engineers</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave is a cloud computing company that provides a platform for AI development and deployment.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4650165006</Applyto>
      <Location>New York, NY / Sunnyvale, CA / Bellevue, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>97212bdf-dd1</externalid>
      <Title>Research Engineer, Interpretability</Title>
      <Description><![CDATA[<p>Job Title: Research Engineer, Interpretability</p>
<p>About the Role:</p>
<p>When you see what modern language models are capable of, do you wonder, &quot;How do these things work? How can we trust them?&quot; The Interpretability team at Anthropic is working to reverse-engineer how trained models work because we believe that a mechanistic understanding is the most robust way to make advanced systems safe.</p>
<p>Think of us as doing &quot;neuroscience&quot; of neural networks using &quot;microscopes&quot; we build - or reverse-engineering neural networks like binary programs.</p>
<p>More resources to learn about our work:</p>
<ul>
<li>Our research blog - covering advances including Monosemantic Features and Circuits</li>
<li>An Introduction to Interpretability from our research lead, Chris Olah</li>
<li>The Urgency of Interpretability from CEO Dario Amodei</li>
<li>Engineering Challenges Scaling Interpretability - directly relevant to this role</li>
<li>60 Minutes segment - around 8:07, see a demo of tooling our team built</li>
<li>New Yorker article - what it&#39;s like to work on one of AI&#39;s hardest open problems</li>
</ul>
<p>Even if you haven&#39;t worked on interpretability before, the infrastructure expertise is similar to what&#39;s needed across the lifecycle of a production language model:</p>
<ul>
<li>Pretraining: Training dictionary learning models looks a lot like model pretraining - creating stable, performant training jobs for massively parameterized models across thousands of chips</li>
<li>Inference: Interp runs a customized inference stack. Day-to-day analysis requires services that allow editing a model&#39;s internal activations mid-forward-pass - for example, adding a &quot;steering vector&quot;</li>
<li>Performance: Like all LLM work, we push up against the limits of hardware and software. Rather than squeezing out the last 0.1%, we focus on finding bottlenecks, fixing them, and moving ahead, given our rapidly evolving research and safety mission</li>
</ul>
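The activation-editing idea can be made concrete with a small PyTorch sketch: a forward hook edits a layer's activations mid-forward-pass by adding a fixed vector. This is a toy illustration only; the two-layer model, the vector values, and the hook placement are invented for the example and are not Anthropic's actual inference stack.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy stand-in for a model: two linear layers with a ReLU between them.
model = nn.Sequential(nn.Linear(8, 8), nn.ReLU(), nn.Linear(8, 4))

# A hypothetical "steering vector" to add to the first layer's activations.
steering_vector = torch.ones(8)

def steer(module, inputs, output):
    # Returning a value from a forward hook replaces the layer's output,
    # i.e. the activation is edited mid-forward-pass.
    return output + steering_vector

handle = model[0].register_forward_hook(steer)
x = torch.zeros(1, 8)
steered = model(x)      # forward pass with the intervention applied
handle.remove()
baseline = model(x)     # same input, no intervention

# The intervention changes the output without touching any weights.
print((steered - baseline).abs().max().item())
```

At scale, the engineering challenge is doing this across sharded models on thousands of accelerators while streaming the extracted activations out for analysis, not the hook itself.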
<p>The science keeps scaling - and it&#39;s now applied directly in safety audits on frontier models, with real deadlines. As our research has matured, engineering and infrastructure have become a bottleneck. Your work will have a direct impact on one of the most important open problems in AI.</p>
<p>Responsibilities:</p>
<ul>
<li>Build and maintain the specialized inference and training infrastructure that powers interpretability research - including instrumented forward/backward passes, activation extraction, and steering vector application</li>
<li>Resolve scaling and efficiency bottlenecks through profiling, optimization, and close collaboration with peer infrastructure teams</li>
<li>Design tools, abstractions, and platforms that enable researchers to rapidly experiment without hitting engineering barriers</li>
<li>Help bring interpretability research into production safety audits - with real deadlines and high reliability expectations</li>
<li>Work across the stack - from model internals and accelerator-level optimization to user-facing research tooling</li>
</ul>
<p>You may be a good fit if you:</p>
<ul>
<li>Have 5-10+ years of experience building software</li>
<li>Are highly proficient in at least one programming language (e.g., Python, Rust, Go, Java) and productive with Python</li>
<li>Are extremely curious about unfamiliar domains; can quickly learn and put that knowledge to work, e.g. diving into new layers of the stack to find bottlenecks</li>
<li>Have a strong ability to prioritize the most impactful work and are comfortable operating with ambiguity and questioning assumptions</li>
<li>Prefer fast-moving collaborative projects to extensive solo efforts</li>
<li>Are curious about interpretability research and its role in AI safety (though no research experience is required!)</li>
<li>Care about the societal impacts and ethics of your work</li>
<li>Are comfortable working closely with researchers, translating research needs into engineering solutions</li>
</ul>
<p>Strong candidates may also have experience with:</p>
<ul>
<li>Optimizing the performance of large-scale distributed systems</li>
<li>Language modeling fundamentals with transformers</li>
<li>High-performance LLM optimization: memory management, compute efficiency, parallelism strategies, inference throughput optimization</li>
<li>Working hands-on in a mainstream ML stack - PyTorch/CUDA on GPUs or JAX/XLA on TPUs</li>
<li>Collaborating closely with researchers and building tooling to support research teams, or direct research experience with complex engineering challenges</li>
</ul>
<p>Representative Projects:</p>
<ul>
<li>Building Garcon, a tool that allows researchers to easily instrument LLMs to extract internal activations</li>
<li>Designing and optimizing a pipeline to efficiently collect petabytes of transformer activations and shuffle them</li>
<li>Profiling and optimizing ML training jobs, including multi-GPU parallelism and memory optimization</li>
<li>Building a steered inference system that applies targeted interventions to model internals at scale (conceptually similar to Golden Gate Claude but for safety research)</li>
</ul>
<p>Role Specific Location Policy:</p>
<ul>
<li>This role is based in the San Francisco office; however, we are open to considering exceptional candidates for remote work on a case-by-case basis.</li>
</ul>
<p>The annual compensation range for this role is listed below.</p>
<p>For sales roles, the range provided is the role&#39;s On Target Earnings (&quot;OTE&quot;) range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.</p>
<p>Annual Salary: $315,000-$560,000 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$315,000-$560,000 USD</Salaryrange>
      <Skills>Python, Rust, Go, Java, PyTorch, CUDA, JAX, XLA, Transformers, High Performance LLM optimization, Memory management, Compute efficiency, Parallelism strategies, Inference throughput optimization, Optimizing the performance of large-scale distributed systems, Language modeling fundamentals, Collaborating closely with researchers and building tooling to support research teams</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a company that creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/4980430008</Applyto>
      <Location>San Francisco, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>2fe8215c-605</externalid>
      <Title>Senior Software Engineer, Storage Infrastructure</Title>
      <Description><![CDATA[<p>About Us</p>
<p>At Cloudflare, we&#39;re on a mission to help build a better Internet. Today the company runs one of the world&#39;s largest networks that powers millions of websites and other Internet properties for customers ranging from individual bloggers to SMBs to Fortune 500 companies.</p>
<p>Cloudflare protects and accelerates any Internet application online without adding hardware, installing software, or changing a line of code. Internet properties powered by Cloudflare all have web traffic routed through its intelligent global network, which gets smarter with every request. As a result, they see significant improvement in performance and a decrease in spam and other attacks.</p>
<p>Emerging Technologies &amp; Incubation (ETI)</p>
<p>ETI is where new and bold products are built and released within Cloudflare. Rather than being constrained by the structures which make Cloudflare a massively successful business, we are able to leverage them to deliver entirely new tools and products to our customers. Cloudflare&#39;s edge and network make it possible to solve problems at massive scale and efficiency which would be impossible for almost any other organization.</p>
<p>About the Team</p>
<p>ETI&#39;s Storage Infrastructure team is responsible for the core storage layer that underpins many of ETI&#39;s stateful services. Our scope ranges from managing the physical hardware to operating the distributed databases and storage systems built upon it. We run this infrastructure globally across Cloudflare&#39;s network, which presents unique and complex engineering puzzles: efficiently expanding storage capacity, optimizing rebuild operations, and coordinating work across failure domains to uphold durability.</p>
<p>While other service teams focus on product development, our mission is to ensure the underlying storage is reliable, performant, and scalable. You&#39;ll be joining a highly motivated team that is building the next generation of distributed storage services.</p>
<p>Responsibilities</p>
<p>In this role, you will help build and operate the next generation of globally distributed storage systems. You will own your code from inception to release, delivering solutions at all layers of the stack. On any given day, you might write a design document for a new provisioning system, model failure domain dependencies across edge locations, benchmark new storage hardware, build standardized observability and runbooks for distributed database clusters, or automate operational toil through purpose-built tooling and intelligent automation.</p>
<p>You can expect to interact with a variety of languages and technologies including Rust, Go, Saltstack, and Terraform.</p>
<p>Examples of desirable skills, knowledge, and experience</p>
<ul>
<li>Strong programming skills with languages like Rust, Go, or Python</li>
<li>A solid understanding of distributed systems concepts such as consistency, consensus, data replication, fault tolerance, and partition tolerance</li>
<li>Experience with distributed databases and storage systems</li>
<li>Experience with infrastructure configuration tooling and infrastructure as code</li>
<li>Familiarity with storage fundamentals: block devices, filesystems, SSD characteristics</li>
<li>Experience building and maintaining high-throughput, low-latency systems</li>
<li>Understanding of network fundamentals as they relate to distributed storage -- bandwidth constraints, latency tradeoffs, cross-datacenter replication</li>
<li>Strong written and verbal communication skills and ability to explain technical decisions clearly</li>
<li>Comfortable operating in fast-paced environments with tight deadlines and evolving priorities</li>
</ul>
<p>Benefits</p>
<p>Cloudflare offers a complete package of benefits and programs to support you and your family. Our benefits programs can help you pay health care expenses, support caregiving, build capital for the future and make life a little easier and fun!</p>
<p>The below is a description of our benefits for employees in the United States, and benefits may vary for employees based outside the U.S.</p>
<p>Health &amp; Welfare Benefits</p>
<ul>
<li>Medical/Rx Insurance</li>
<li>Dental Insurance</li>
<li>Vision Insurance</li>
<li>Flexible Spending Accounts</li>
<li>Commuter Spending Accounts</li>
<li>Fertility &amp; Family Forming Benefits</li>
<li>On-demand mental health support and Employee Assistance Program</li>
<li>Global Travel Medical Insurance</li>
</ul>
<p>Financial Benefits</p>
<ul>
<li>Short and Long Term Disability Insurance</li>
<li>Life &amp; Accident Insurance</li>
<li>401(k) Retirement Savings Plan</li>
<li>Employee Stock Participation Plan</li>
</ul>
<p>Time Off</p>
<ul>
<li>Flexible paid time off covering vacation and sick leave</li>
<li>Leave programs, including parental, pregnancy health, medical, and bereavement leave</li>
</ul>
<p>What Makes Cloudflare Special?</p>
<p>We&#39;re not just a highly ambitious, large-scale technology company. We&#39;re a highly ambitious, large-scale technology company with a soul. Fundamental to our mission to help build a better Internet is protecting the free and open Internet.</p>
<p>Project Galileo</p>
<p>Since 2014, we&#39;ve equipped more than 2,400 journalism and civil society organizations in 111 countries with powerful tools to defend themselves against attacks that would otherwise censor their work, providing technology already used by Cloudflare&#39;s enterprise customers at no cost.</p>
<p>Athenian Project</p>
<p>In 2017, we created the Athenian Project to ensure that state and local governments have the highest level of protection and reliability for free, so that their constituents have access to election information and voter registration. Since the project launched, we&#39;ve provided services to more than 425 local government election websites in 33 states.</p>
<p>1.1.1.1</p>
<p>We released 1.1.1.1 to help fix the foundation of the Internet by building a faster, more secure, and privacy-centric public DNS resolver. It is available publicly for everyone to use and is the first consumer-focused service Cloudflare has ever released. Here&#39;s the deal: we never, ever store client IP addresses. We will continue to abide by our privacy commitment and ensure that no user data is sold to advertisers or used to target consumers.</p>
<p>Sound like something you&#39;d like to be a part of? We&#39;d love to hear from you!</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Rust, Go, Python, Distributed systems, Consistency, Consensus, Data replication, Fault tolerance, Partition tolerance, Distributed databases, Storage systems, Infrastructure configuration tooling, Infrastructure as code, Storage fundamentals, Block devices, Filesystems, SSD characteristics, High-throughput systems, Low-latency systems, Network fundamentals, Bandwidth constraints, Latency tradeoffs, Cross-datacenter replication</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Cloudflare</Employername>
      <Employerlogo>https://logos.yubhub.co/cloudflare.com.png</Employerlogo>
      <Employerdescription>Cloudflare is a technology company that helps build a better Internet. It runs one of the world&apos;s largest networks that powers millions of websites and other Internet properties.</Employerdescription>
      <Employerwebsite>https://www.cloudflare.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/cloudflare/jobs/7629805</Applyto>
      <Location>Hybrid</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>18ae1499-b22</externalid>
      <Title>Research Engineer, Discovery</Title>
<Description><![CDATA[<p>As a Research Engineer on our team, you will work end-to-end across the whole model stack, identifying and addressing key infrastructure blockers on the path to scientific AGI. Strong candidates should be familiar with elements of language model training, evaluation, and inference, and eager to dive in and quickly get up to speed in areas where they are not yet experts.</p>
<p>Responsibilities:</p>
<ul>
<li>Design and implement large-scale infrastructure systems to support AI scientist training, evaluation, and deployment across distributed environments</li>
<li>Identify and resolve infrastructure bottlenecks impeding progress toward scientific capabilities</li>
<li>Develop robust and reliable evaluation frameworks for measuring progress towards scientific AGI</li>
<li>Build scalable and performant VM/sandboxing/container architectures to safely execute long-horizon AI tasks and scientific workflows</li>
<li>Collaborate to translate experimental requirements into production-ready infrastructure</li>
<li>Develop large scale data pipelines to handle advanced language model training requirements</li>
<li>Optimize large scale training and inference pipelines for stable and efficient reinforcement learning</li>
</ul>
<p>You may be a good fit if you:</p>
<ul>
<li>Have 6+ years of highly-relevant experience in infrastructure engineering with demonstrated expertise in large-scale distributed systems</li>
<li>Are a strong communicator and enjoy working collaboratively</li>
<li>Possess deep knowledge of performance optimization techniques and system architectures for high-throughput ML workloads</li>
<li>Have experience with containerization technologies (Docker, Kubernetes) and orchestration at scale</li>
<li>Have a proven track record of building large-scale data pipelines and distributed storage systems</li>
<li>Excel at diagnosing and resolving complex infrastructure challenges in production environments</li>
<li>Can work effectively across the full ML stack from data pipelines to performance optimization</li>
<li>Have experience collaborating with other researchers to scale experimental ideas</li>
<li>Thrive in fast-paced environments and can rapidly iterate from experimentation to production</li>
</ul>
<p>Strong candidates may also have:</p>
<ul>
<li>Experience with language model training infrastructure and distributed ML frameworks (PyTorch, JAX, etc.)</li>
<li>Background in building infrastructure for AI research labs or large-scale ML organizations</li>
<li>Knowledge of GPU/TPU architectures and language model inference optimization</li>
<li>Experience with cloud platforms (AWS, GCP) at enterprise scale</li>
<li>Familiarity with VM and container orchestration</li>
<li>Experience with workflow orchestration tools and experiment management systems</li>
<li>A history of working with large-scale reinforcement learning</li>
<li>Comfort with large-scale data pipelines (Beam, Spark, Dask, …)</li>
</ul>
<p>The annual compensation range for this role is $350,000-$850,000 USD.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$350,000-$850,000 USD</Salaryrange>
      <Skills>large-scale distributed systems, containerization technologies (Docker, Kubernetes), performance optimization techniques, system architectures for high-throughput ML workloads, data pipelines, distributed storage systems, ML frameworks (PyTorch, JAX, etc.), GPU/TPU architectures, cloud platforms (AWS, GCP), VM and container orchestration, workflow orchestration tools, experiment management systems, reinforcement learning, large scale data pipelines (Beam, Spark, Dask, …)</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation that creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/4669581008</Applyto>
      <Location>San Francisco, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>76fd624c-e23</externalid>
      <Title>Full-Stack Software Engineer, Reinforcement Learning</Title>
      <Description><![CDATA[<p>As a Full-Stack Software Engineer in RL, you&#39;ll build the platforms, tools, and interfaces that power environment creation, data collection, and training observability. The quality of Claude&#39;s next generation depends on the quality of the data we train it on, and the systems you build are what make that data possible. You&#39;ll own product surfaces end-to-end, from backend services and APIs to the web UIs that researchers, external vendors, and thousands of data labelers use every day. You don&#39;t need a background in ML research. What matters is that you can take an ambiguous, high-stakes problem and ship a polished, reliable product against it, fast.</p>
<p>This team moves very quickly. Claude writes a lot of the code we commit, which means the bottleneck isn&#39;t typing; it&#39;s judgment, taste, and the ability to react to what researchers need next. You&#39;ll iterate on data collection strategies to distill the knowledge of thousands of human experts around the world into our models, and you&#39;ll do it in a loop that closes in hours and days, not quarters or months.</p>
<p>Our work spans teaching models to use computers effectively, advancing code generation through RL, pioneering fundamental RL research for large language models, and building the scalable training methodologies behind our frontier production models. The RL org is organized around four goals: solving the science of long-horizon tasks and continual learning, scaling RL data and environments to be comprehensive and diverse, automating software engineering end-to-end, and training the frontier production model.</p>
<p>Our engineering teams build the environments, evaluation systems, data pipelines, and tooling that make all of this possible, from realistic agentic training environments and scalable code data generation to human data collection platforms and production training operations.</p>
<p>Responsibilities:</p>
<ul>
<li>Build and extend web platforms for RL environment creation, management, and quality review, including environment configuration, versioning, and validation workflows</li>
<li>Develop vendor-facing interfaces and tooling that let external partners create, submit, and iterate on training environments with minimal friction</li>
<li>Design and implement platforms for human data collection at scale, including labeling workflows, quality assurance systems, and feedback mechanisms that surface reward signal integrity issues early</li>
<li>Build evaluation dashboards and observability UIs that give researchers real-time insight into environment quality, training run health, and reward hacking</li>
<li>Create backend services and APIs that connect environment authoring tools, data collection systems, and RL training infrastructure</li>
<li>Build and expand scalable code data generation pipelines, producing diverse programming tasks with robust reward signals across languages and difficulty levels</li>
<li>Develop onboarding automation and documentation tooling so new vendors and internal users ramp up in hours, not weeks</li>
<li>Partner closely with RL researchers, data operations, and vendor management to translate ambiguous requirements into well-scoped, well-designed products</li>
</ul>
<p>You May Be a Good Fit If You:</p>
<ul>
<li>Have strong software engineering fundamentals and real full-stack range: you&#39;re comfortable owning a surface from database schema to frontend</li>
<li>Are proficient in Python and a modern web stack (React, TypeScript, or similar)</li>
<li>Have a track record of shipping systems that solved a hard problem, not just shipped on time; e.g., you built the thing that made your team 10x faster, or the internal tool nobody thought was possible</li>
<li>Operate with high agency: you identify what needs to be done and drive it forward without waiting for a ticket</li>
<li>Have found yourself wondering &quot;why isn&#39;t this moving faster?&quot; in previous roles, and then done something about it</li>
<li>Care about UX and can build interfaces that are intuitive for both technical researchers and non-technical labelers</li>
<li>Communicate clearly with researchers, operations teams, and engineers, and can turn vague asks into well-scoped work</li>
<li>Thrive in a fast-moving environment where priorities shift, Claude is your pair programmer, and the next problem is often one nobody has solved before</li>
<li>Care about Anthropic&#39;s mission to build safe, beneficial AI and want your work to contribute directly to it</li>
</ul>
<p>Strong Candidates May Also Have:</p>
<ul>
<li>Built data collection, labeling, or annotation platforms, ideally ones that had to scale across many vendors or many task types</li>
<li>Background building multi-tenant platforms with role-based access, audit trails, and vendor management workflows</li>
<li>Experience with cloud infrastructure (GCP or AWS), Docker, and CI/CD pipelines</li>
<li>Familiarity with LLM training, fine-tuning, or evaluation workflows</li>
<li>Experience with async Python (Trio, asyncio) or high-throughput API design</li>
<li>Background in dashboards, monitoring, or observability tooling</li>
<li>Experience working directly with external vendors or partners on technical integrations</li>
<li>A background that isn&#39;t a straight line; e.g., math or physics into SWE, competitive programming, research into engineering, or a side project that outgrew its scope</li>
</ul>
<p>Representative Projects:</p>
<ul>
<li>Building a unified platform for human data collection that integrates labeling workflows, vendor management, and QA for complex agentic tasks</li>
<li>Developing vendor onboarding automation that handles Docker registry access, API token management, and environment validation</li>
<li>Creating evaluation and observability dashboards that catch reward hacks, measure environment difficulty, and give real-time feedback during production training</li>
<li>Building environment quality review workflows that let researchers browse, grade, and provide feedback on training environments</li>
<li>Developing automated environment quality pipelines that validate correctness and difficulty calibration before environments hit production training</li>
<li>Building internal tools for browsing and analyzing training run results, environment statistics, and data collection progress</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$300,000-$405,000 USD</Salaryrange>
      <Skills>Python, Modern web stack, React, TypeScript, Cloud infrastructure, Docker, CI/CD pipelines, LLM training, Fine-tuning, Evaluation workflows, Async Python, High-throughput API design, Dashboards, Monitoring, Observability tooling, Data collection, Labeling, Annotation, Multi-tenant platforms, Role-based access, Audit trails, Vendor management workflows</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a company that creates reliable, interpretable, and steerable AI systems. It has a quickly growing team of researchers, engineers, policy experts, and business leaders.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5186067008</Applyto>
      <Location>San Francisco, CA | New York City, NY</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>42187d42-78e</externalid>
      <Title>Staff Engineer (Backend, DevOps, Infrastructure)</Title>
      <Description><![CDATA[<p>About Zuma</p>
<p>Zuma is pioneering the future of agentic AI, and our focus is to transform the rental market experience for consumers and property managers alike. Our innovative platform is engineered from the ground up to boost operational efficiency and enhance support capabilities for property management businesses across the US and Canada, a ~$200B market.</p>
<p>Off the back of our Series-A in early 2024, Zuma is scaling rapidly. Achieving our vision requires a team of passionate, innovative individuals eager to leverage technology to redefine customer-business interactions. We&#39;re on the hunt for exceptional talent ready to join our mission and contribute to building a groundbreaking technology that reshapes how businesses engage with customers.</p>
<p>As a Staff Engineer, you will:</p>
<p>Help define how humans collaborate with intelligent systems in one of the largest and most underserved industries in the world: property management. You’ll shape the technical foundation of a platform that is not just supporting human workflows, but executing them autonomously through AI agents. This is a rare opportunity to influence how an entire industry evolves, building tools that transform repetitive operational tasks into seamless, intelligent experiences.</p>
<p>Your work will directly contribute to how trust is built between humans and machines, how operations scale without added headcount, and how residents and staff experience a new, AI-powered standard of service. We&#39;re not just building software; we&#39;re designing AI that people want to work with: delightful, trustworthy, and deeply effective.</p>
<p>Join us to help lead the AI revolution in multifamily, drive meaningful real-world impact, and be part of reimagining what work can feel like when done side-by-side with intelligent agents.</p>
<p>You will be a cornerstone of our engineering organization, reporting to the VPE. This is a pivotal role where you&#39;ll lead critical system rewrites, architect scalable foundations for our AI platform, and establish the technical standards that will shape our engineering culture for years to come.</p>
<p>You&#39;ll work at the intersection of cutting-edge LLM technology and practical business applications, creating sophisticated systems that power our AI leasing agent while building self-serve experiences that enable rapid customer onboarding.</p>
<p>As our first US-based engineer, you&#39;ll bridge the gap between our product vision and technical implementation. This role offers a rare opportunity to directly influence how we architect the next generation of our platform.</p>
<p>You&#39;ll tackle projects like rebuilding our onboarding/configuration system to be self-serve, creating robust analytics infrastructure to measure AI performance, and reimagining our integration framework to connect seamlessly with customer systems.</p>
<p>Your work will significantly reduce manual engineering overhead while enabling rapid scaling of our customer base.</p>
<p>We&#39;re looking for a Staff Engineer to help us bring that future to life. This is not just another dev role. You&#39;ll be hands-on shaping the technical DNA of Zuma. You&#39;ll architect critical systems, tame legacy code, build net-new AI-powered experiences, and lay down the patterns future engineers will inherit.</p>
<p>If you&#39;re obsessed with building real products people use, especially products powered by LLMs, this might be your playground.</p>
<p><strong>Why This Could Be Your Dream Role</strong></p>
<ul>
<li>You&#39;ll work directly with cutting-edge LLM technology in a real-world application</li>
<li>You want to work at a company where customers feel your impact every day</li>
<li>You&#39;ll architect AI-powered systems that are transforming the real estate industry</li>
<li>You&#39;ll have autonomy to design and implement innovative technical solutions</li>
<li>Your work will directly impact thousands of apartment communities and millions of renters</li>
<li>You&#39;ll receive significant equity in a venture-backed company with strong traction</li>
<li>As we scale, your role and influence will grow with the company</li>
</ul>
<p><strong>Why You Might Want to Think Twice</strong></p>
<ul>
<li>This is a demanding role that will often require extended hours and deep commitment</li>
<li>As a founding team member, you&#39;ll need to wear multiple hats and step outside your comfort zone</li>
<li>You&#39;ll need to make thoughtful tradeoffs between innovation and immediate needs</li>
<li>You&#39;ll interact directly with customers to understand their needs and occasionally travel to their offices</li>
<li>We&#39;re a startup: priorities can shift rapidly as we respond to market opportunities and customer needs</li>
<li>If you&#39;re not comfortable getting your hands dirty with legacy code or speaking directly with customers, this isn&#39;t the job for you</li>
</ul>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Lead critical system rewrites to transform our architecture into a highly scalable, resilient foundation</li>
<li>Own the design and performance optimization of our data storage systems, ensuring they scale with customer and AI demands</li>
<li>Build and evolve our deployment pipelines, enabling reliable, automated releases for AI-first products</li>
<li>Set up and manage modern cloud infrastructure from scratch, leveraging Infrastructure as Code (IaC) to ensure consistency, security, and scalability</li>
<li>Establish engineering best practices, including observability, incident response processes, and system hardening for an AI-first platform</li>
<li>Drive robust analytics and monitoring to track performance, reliability, and the effectiveness of our AI solutions</li>
<li>Mentor engineers and elevate the team&#39;s capabilities across infrastructure, scalability, and AI product development</li>
</ul>
<p><strong>Your Experience Looks Like</strong></p>
<ul>
<li>Bachelor’s or Master’s degree in Computer Science, Engineering, or a related technical field</li>
<li>5+ years of experience building production-grade software systems, with a focus on scalability, performance, and reliability</li>
<li>Proven expertise in backend development with Node.js, including API design, system architecture, and cloud-based services</li>
<li>Experience with cloud infrastructure (AWS, GCP, or similar) and deploying production systems using Infrastructure as Code (e.g., Terraform, Pulumi)</li>
<li>Hands-on experience with database design, performance tuning, and scaling high-throughput data systems</li>
<li>Familiarity with building and maintaining CI/CD pipelines, automated testing, and modern DevOps practices</li>
<li>Strong communication skills and ability to work effectively in a distributed, fast-paced environment</li>
<li>Comfortable operating in early-stage, high-ownership environments with evolving requirements</li>
<li>Bonus: Experience with React and TypeScript on the frontend, though this role leans backend/infrastructure</li>
<li>Bonus: Exposure to LLM-based systems, AI infrastructure, or agentic AI workflows</li>
</ul>
<p><strong>Guiding Principles</strong></p>
<ul>
<li>Customer‑First Outcomes</li>
</ul>
<p>Every commit should trace back to resident or operator value. Whether it’s a new feature, infra investment, or AI capability, if it doesn’t solve a real problem, it doesn’t ship.</p>
<ul>
<li>Bias for Simplicity</li>
</ul>
<p>We favor composable primitives over clever abstractions. Open standards, clean APIs, and clear contracts win over custom complexity, even if the custom version is cooler.</p>
<ul>
<li>Quality Is a Gate, Not an After‑Thought</li>
</ul>
<p>Quality is built-in from day one. Our definition of done includes: test coverage, performance checks, basic observability, and internal docs. Shipping fast doesn’t mean skipping craftsmanship.</p>
<ul>
<li>Data‑Driven Choices</li>
</ul>
<p>We use data to guide, not paralyze, our decision-making. We track leading indicators (cycle time, defect rate, NPS) and lagging signals (retention, revenue impact). We keep instrumentation lightweight but meaningful: signal over spreadsheets.</p>
<ul>
<li>Transparency &amp; Written Culture</li>
</ul>
<p>Good ideas don’t expire in Zoom. We operate in public.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Node.js, API design, system architecture, cloud-based services, cloud infrastructure, Infrastructure as Code, database design, performance tuning, scaling high-throughput data systems, CI/CD pipelines, automated testing, modern DevOps practices, React, TypeScript, LLM-based systems, AI infrastructure, agentic AI workflows</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Zuma</Employername>
      <Employerlogo>https://logos.yubhub.co/zuma.com.png</Employerlogo>
      <Employerdescription>Zuma is a technology company that provides a platform for property management.</Employerdescription>
      <Employerwebsite>https://www.zuma.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/getzuma/800b8d69-b1e0-4524-a0a7-a5cec8b337b5</Applyto>
      <Location>San Francisco Bay Area</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>c20d7221-4b5</externalid>
      <Title>Support Engineer</Title>
      <Description><![CDATA[<p>As a Support Engineer at Zuma, you&#39;ll be a bridge between our customers, engineering team, and product vision. You&#39;ll ensure new customers onboard smoothly, integrations run reliably, and support operations scale as we grow. This is a hands-on role for someone who loves problem-solving, can dive into APIs and databases, and takes pride in clear documentation and communication.</p>
<p>You&#39;ll help property managers succeed with our AI platform while also driving continuous improvements in our internal tools and processes.</p>
<p>Responsibilities:</p>
<ul>
<li>Lead critical system rewrites to transform our architecture into a highly scalable, resilient foundation</li>
<li>Own the design and performance optimization of our data storage systems, ensuring they scale with customer and AI demands</li>
<li>Build and evolve our deployment pipelines, enabling reliable, automated releases for AI-first products</li>
<li>Set up and manage modern cloud infrastructure from scratch, leveraging Infrastructure as Code (IaC) to ensure consistency, security, and scalability</li>
<li>Establish engineering best practices, including observability, incident response processes, and system hardening for an AI-first platform</li>
<li>Drive robust analytics and monitoring to track performance, reliability, and the effectiveness of our AI solutions</li>
<li>Mentor engineers and elevate the team&#39;s capabilities across infrastructure, scalability, and AI product development</li>
</ul>
<p>Your Experience Looks Like:</p>
<ul>
<li>Bachelor’s or Master’s degree in Computer Science, Engineering, or a related technical field</li>
<li>3+ years of experience building production-grade software systems, with a focus on scalability, performance, and reliability</li>
<li>Proven expertise in backend development with Node.js, including API design, system architecture, and cloud-based services</li>
<li>Experience with cloud infrastructure (AWS, GCP, or similar) and deploying production systems using Infrastructure as Code (e.g., Terraform, Pulumi)</li>
<li>Hands-on experience with database design, performance tuning, and scaling high-throughput data systems</li>
<li>Familiarity with building and maintaining CI/CD pipelines, automated testing, and modern DevOps practices</li>
<li>Strong communication skills and ability to work effectively in a distributed, fast-paced environment</li>
<li>Comfortable operating in early-stage, high-ownership environments with evolving requirements</li>
<li>Bonus: Experience with React and TypeScript on the frontend, though this role leans backend/infrastructure</li>
<li>Bonus: Exposure to LLM-based systems, AI infrastructure, or agentic AI workflows</li>
</ul>
<p>Guiding Principles:</p>
<ul>
<li>Customer‑First Outcomes</li>
<li>Bias for Simplicity</li>
<li>Quality Is a Gate, Not an After‑Thought</li>
<li>Data‑Driven Choices</li>
<li>Transparency &amp; Written Culture</li>
</ul>
<p>Other Benefits:</p>
<ul>
<li>Great health insurance, dental, and vision</li>
<li>Gym and workspace stipends</li>
<li>Computer and workspace enhancements</li>
<li>Unlimited PTO</li>
<li>Opportunity to play a critical role in building the foundations of the company and Engineering culture</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Node.js, API design, system architecture, cloud-based services, cloud infrastructure, Infrastructure as Code, database design, performance tuning, scaling high-throughput data systems, CI/CD pipelines, automated testing, modern DevOps practices, React, TypeScript, LLM-based systems, AI infrastructure, agentic AI workflows</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Zuma</Employername>
      <Employerlogo>https://logos.yubhub.co/zuma.com.png</Employerlogo>
      <Employerdescription>Zuma is a technology company that provides a platform for property management businesses across the US and Canada, a ~$200B market.</Employerdescription>
      <Employerwebsite>https://www.zuma.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/getzuma/da4d2130-954e-4b29-a9ef-3926b9bedba6</Applyto>
      <Location>US and Canada</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>c06968e6-9e2</externalid>
      <Title>Build Strategy &amp; Throughput Engineer</Title>
      <Description><![CDATA[<p>The Build Strategy &amp; Throughput engineer supports defining how ships are built in a way that directly informs how the shipyard itself is designed. This role sits at the intersection of product definition, build strategy, and facility architecture, ensuring that buildings, cranes, transport routes, and production flow are optimized for throughput, repeatability, and scalability.</p>
<p>Core Responsibilities:</p>
<ul>
<li>Define block-based build strategies explicitly compatible with facility geometry and crane systems</li>
<li>Establish block size, weight, and envelope targets that optimize throughput and flow</li>
<li>Translate vessel class, annual build rate, and steel tonnage into facility capacity requirements</li>
<li>Define pre-outfitting rules that maximize upstream labor and minimize shipboard work</li>
<li>Identify and eliminate facility-driven throughput bottlenecks early in design</li>
<li>Ensure material flow supports parallel workstreams and predictable sequencing</li>
<li>Integrate build strategy across facilities, heavy lift, manufacturing engineering, and design teams</li>
</ul>
<p>Required Experience &amp; Skills:</p>
<ul>
<li>Experience in block-based ship construction or heavy modular fabrication environments</li>
<li>Strong understanding of crane-driven production systems and facility constraints</li>
<li>Demonstrated throughput and production-flow mindset</li>
<li>Ability to work early with incomplete information to shape irreversible facility decisions</li>
<li>Comfort operating at the interface of design, facilities, and operations</li>
</ul>
<p>Preferred Background:</p>
<ul>
<li>Experience influencing shipyard layouts, crane strategies, or major facility decisions</li>
<li>Background in shipbuilding and facility layout design</li>
<li>Experience in 3D modeling workflows</li>
</ul>
<p>Benefits:</p>
<ul>
<li>Medical Insurance: Comprehensive health insurance plans covering a range of services</li>
<li>Dental and Vision Insurance: Coverage for routine dental check-ups, orthodontics, and vision care</li>
<li>Saronic pays 100% of the premium for employees and 80% for dependents</li>
<li>Time Off: Generous PTO and Holidays</li>
<li>Parental Leave: Paid maternity and paternity leave to support new parents</li>
<li>Competitive Salary: Industry-standard salaries with opportunities for performance-based bonuses</li>
<li>Retirement Plan: 401(k) plan</li>
<li>Stock Options: Equity options to give employees a stake in the company’s success</li>
<li>Life and Disability Insurance: Basic life insurance and short- and long-term disability coverage</li>
<li>Additional Perks: Free lunch benefit and unlimited free drinks and snacks in the office</li>
</ul>
<p>Additional Information: This role requires access to export-controlled information or items that require “U.S. Person” status.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>block-based ship construction, heavy modular fabrication environments, crane-driven production systems, facility constraints, throughput and production-flow mindset, 3D modeling work flows, shipyard layouts, crane strategies, major facility decisions, shipbuilding and facility layout design</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Saronic Technologies</Employername>
      <Employerlogo>https://logos.yubhub.co/saronictechnologies.com.png</Employerlogo>
      <Employerdescription>Saronic Technologies develops state-of-the-art solutions that enhance maritime operations through autonomous and intelligent platforms.</Employerdescription>
      <Employerwebsite>https://www.saronictechnologies.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/saronic/3bbad9f7-e50d-428d-8ea4-adbb05c878f5</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>cbdf504b-0d3</externalid>
      <Title>Shipyard Optimization Manager</Title>
      <Description><![CDATA[<p>We are seeking a Shipyard Optimization Manager to serve as a technical authority for shipyard build strategy, facility architecture, manufacturing engineering, operational flow, and automation integration.</p>
<p>This is a senior, architect-level SME role responsible for supporting the definition of how ships are built and ensuring that facilities, equipment, automation, and operations are structurally aligned to a throughput-driven production system.</p>
<p>Key responsibilities include defining shipyard build strategies by vessel class, reviewing facility layouts and spatial design, integrating manufacturing engineering principles into facility and process design, and advising on major equipment and crane selection strategies.</p>
<p>The ideal candidate will have 10+ years of experience across shipyard facilities, manufacturing engineering, and operations, with demonstrated experience designing or transforming shipyard facilities.</p>
<p>Benefits include comprehensive health insurance, generous PTO and holidays, paid parental leave, competitive salary, and equity options.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>shipyard facilities, manufacturing engineering, operations, facility architecture, automation integration, greenfield shipyard or major brownfield re-architecture projects, high-throughput global shipyards or equivalent heavy industry, automated panel lines and block assembly halls</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Saronic Technologies</Employername>
      <Employerlogo>https://logos.yubhub.co/saronictechnologies.com.png</Employerlogo>
      <Employerdescription>Saronic Technologies develops state-of-the-art solutions for maritime operations through autonomous and intelligent platforms.</Employerdescription>
      <Employerwebsite>https://www.saronictechnologies.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/saronic/a7e73fb3-bf27-4c44-ad4b-990e7644bf9a</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>2a88ee59-dc6</externalid>
      <Title>Full Stack Engineer (Serverless)</Title>
      <Description><![CDATA[<p>We&#39;re building the fastest and most scalable infrastructure for AI inference. As a Full Stack Engineer on Serverless, you will build the core product across frontend and backend that powers our Serverless platform. This is a deeply product-focused role where you will work side-by-side with Product and Infrastructure to design and ship reusable, scalable systems that enterprise customers rely on in production every day.</p>
<p>You will be a foundational technical owner of our Serverless product as it scales to thousands of enterprise customers, with real responsibility, autonomy, and impact. This is a chance to help build a new product vertical from the ground up inside a company that is already scaling at rocket-ship speed.</p>
<p>Your responsibilities will include:</p>
<ul>
<li>Building and maintaining core Serverless UI features (dashboards, logs, observability, configuration, usage)</li>
<li>Designing and implementing backend APIs that power the Serverless product experience</li>
<li>Improving performance, reliability, and scalability of customer-facing systems</li>
<li>Working closely with Infrastructure to ensure product features align with platform capabilities</li>
<li>Owning features end-to-end, from design through production and iteration</li>
</ul>
<p>We&#39;re looking for:</p>
<ul>
<li>Strong experience working across both frontend and backend</li>
<li>Proficiency with TypeScript, Python, Postgres, and Next.js</li>
<li>Experience owning features end-to-end in production systems</li>
<li>Ability to context switch between UI, backend, and performance work</li>
<li>A product-minded approach that values clean abstractions and long-term maintainability</li>
<li>Comfort working in a fast-moving, low-process environment</li>
</ul>
<p>Nice to have:</p>
<ul>
<li>Experience building developer platforms or infrastructure-adjacent products</li>
<li>Familiarity with observability tooling (logging, metrics, tracing) in production environments</li>
<li>Background in distributed systems, container orchestration, or cloud-native architectures</li>
<li>Experience with real-time systems, streaming logs, or high-throughput data pipelines</li>
<li>Exposure to technologies such as Kubernetes, Prometheus, Datadog, gRPC, or similar systems</li>
<li>An entrepreneurial mindset and strong ownership mentality</li>
</ul>
<p>We offer interesting and challenging work, competitive salary and equity, plenty of learning and growth opportunities, visa sponsorship and relocation assistance, health, dental, and vision insurance, and regular team events and offsites.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$150,000 - $230,000 + equity + comprehensive benefits package</Salaryrange>
      <Skills>TypeScript, Python, Postgres, Next.js, serverless, backend APIs, frontend development, observability tooling, distributed systems, container orchestration, cloud-native architectures, real-time systems, streaming logs, high-throughput data pipelines, Kubernetes, Prometheus, Datadog, gRPC</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Fal</Employername>
      <Employerlogo>https://logos.yubhub.co/fal.com.png</Employerlogo>
      <Employerdescription>Fal builds infrastructure for AI inference and has scaled to handle tens of millions of requests per day.</Employerdescription>
      <Employerwebsite>https://www.fal.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/fal/jobs/4112697009</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>3999ca5d-6fc</externalid>
      <Title>Engineering Manager, Privy</Title>
<Description><![CDATA[<p><strong>About Privy</strong></p>
<p>Privy is a developer tooling company that empowers users to take control of their online presence. We&#39;re looking for an experienced Engineering Manager to lead and grow a team of Infrastructure engineers.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Lead and grow a high-performing team of Infrastructure engineers</li>
<li>Drive the future vision of infrastructure alongside talented infrastructure engineers</li>
<li>Hold the team accountable to excellence in quality, throughput, and performance</li>
<li>Ensure the team is working on the right scope of work and projects, align decisions with business impact</li>
<li>Fill gaps as a player-coach; review PRs, write and review design docs, investigate incidents</li>
<li>Coach engineers towards growth and their career goals</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Competitive salary and benefits package</li>
<li>Opportunity to work with a talented team of engineers</li>
<li>Collaborative and dynamic work environment</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>Deep ownership and a high-level perspective on driving overall business impact</li>
<li>Performance-oriented mindset, with a high bar for quality and excellence</li>
<li>Technical excellence to be able to independently evaluate quality and technical feedback</li>
<li>High emotional maturity, insightfulness, and care</li>
<li>Strong past experience as a manager and leader</li>
</ul>
<p><strong>Preferred Qualifications</strong></p>
<ul>
<li>Experience designing and operating systems supporting hundreds of millions of users</li>
<li>Experience with secure enclave platforms, like AWS Nitro Enclaves</li>
<li>Experience with observability, incident response, capacity planning, performance tuning, and infrastructure automation (IaC, CI/CD for infra)</li>
<li>Background in building low-latency, high-throughput systems for trading or payment processing</li>
<li>Experience with any blend of public cloud/BYOC architectures</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>secure enclave platforms, observability, incident response, capacity planning, performance tuning, infrastructure automation, IaC, CI/CD for infra, designing and operating systems, low-latency, high-throughput systems, public cloud/BYOC architectures</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Privy</Employername>
      <Employerlogo>https://logos.yubhub.co/privy.com.png</Employerlogo>
      <Employerdescription>Privy builds simple, flexible developer tooling that enables users to take control of their online presence.</Employerdescription>
      <Employerwebsite>https://privy.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/stripe/jobs/7729216</Applyto>
      <Location>NYC-Privy</Location>
      <Country></Country>
      <Postedate>2026-03-31</Postedate>
    </job>
    <job>
      <externalid>7309ff6b-6c7</externalid>
      <Title>FBS IT Vendor Specialist</Title>
<Description><![CDATA[<p>FBS – Farmer Business Services is part of Farmers’ operations. We take a global approach to identifying, recruiting, hiring, and retaining top talent. Our teams are equipped to thrive in today’s competitive marketplace.</p>
<p>We believe that the foundation of every successful business lies in having the right people with the right skills. That is where we come in—helping Farmers build a winning team that delivers consistent and sustainable results.</p>
<p>This role develops, manages, and maintains Client Group SLAs with our IT vendors and assembles the MSLA components into appropriate customer-facing services. It also produces regular reports to assess actual performance and availability of IT services against SLAs and reviews performance with key contacts.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Develops and maintains customer-facing SLAs for IT services, working with various entities (Finance, Supplier, Business) as required</li>
<li>Understands charging principles and explains detailed charges/costs to the business</li>
<li>Understands and implements business throughput measures</li>
<li>Produces reports to assess actual performance and availability of services against SLAs, and reviews results with key contacts</li>
<li>Ensures the local SLA is kept up to date to meet changing requirements</li>
<li>Conducts surveys, analyses customer satisfaction results, and initiates appropriate improvement programs</li>
<li>Understands any business variation in operational demand and engages appropriate suppliers to meet business needs</li>
<li>Engages with Program Managers to understand the customer impact of new developments and makes changes to SLAs as appropriate</li>
<li>Produces the Service Agreements between legal entities, working with Finance, Legal, and suppliers</li>
<li>Mentors/coaches lower-level staff</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>4-6 years of experience in a similar role</li>
<li>BS in Computer Science or similar</li>
<li>Previous experience in Finance, Insurance, Healthcare, or other regulated industries (a plus)</li>
<li>Full English fluency</li>
<li>Experience in an IT vendor role area</li>
<li>Experience with internal expense report systems</li>
</ul>
<p><strong>Soft Skills</strong></p>
<ul>
<li>Planning and attention to detail</li>
<li>Negotiation &amp; communication</li>
<li>Influence and driving conversation</li>
</ul>
<p><strong>Technical Experience</strong></p>
<ul>
<li>Quickbase (desirable)</li>
<li>Power Apps (desirable)</li>
<li>MS Office</li>
<li>ServiceNow (desirable)</li>
<li>Power BI (desirable)</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>A competitive salary and performance-based bonuses</li>
<li>Comprehensive benefits package</li>
<li>Flexible work arrangements (remote and/or office-based)</li>
<li>A dynamic and inclusive work culture within a globally renowned group</li>
<li>Private health insurance</li>
<li>Paid time off</li>
<li>Training &amp; development opportunities in partnership with renowned companies</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>IT Vendor, SLA Management, Charging Principles, Customer Satisfaction, Business Throughput Measures, MS Office, Quickbase, Power Apps, Service Now, Power BI, Negotiation, Communication, Influence, Driving Conversation</Skills>
      <Category>IT</Category>
      <Industry>Finance</Industry>
      <Employername>Capgemini</Employername>
      <Employerlogo>https://logos.yubhub.co/view.com.png</Employerlogo>
      <Employerdescription>Capgemini is a global technology consulting and professional services company with nearly 350,000 employees across over 50 countries.</Employerdescription>
      <Employerwebsite>https://jobs.workable.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.workable.com/view/765vKq4Q4BiSPBjY9pQHxz/hybrid-fbs-it-vendor-specialist-in-bogot%C3%A1-at-capgemini</Applyto>
      <Location>Bogotá, Bogota, Colombia</Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
    <job>
      <externalid>7bbea21e-cf5</externalid>
      <Title>Sr. Manager, Supply Chain Risk Mitigation &amp; Supply Operations</Title>
<Description><![CDATA[<p>The purpose of this role is to proactively mitigate supply chain risks that could potentially impact Ford Motor Company&#39;s manufacturing operations. You will lead a team of highly motivated individuals dedicated to resolving a caseload of operationally and financially distressed production suppliers. This position demands a strategic thinker who can develop and deliver robust risk mitigation strategies aimed at protecting Ford&#39;s Vehicle and Powertrain production plants. You will be instrumental in executing interim and permanent corrective actions with suppliers to ensure a resilient supply signal to Ford Motor Company, ensuring alignment with Ford&#39;s Supply Chain Organisation&#39;s strategic business objectives. Your ability to communicate effectively across all levels of the organisation, both within the function and with other functions and suppliers, will be paramount to your success.</p>
<p><strong>Responsibilities</strong></p>
<p><strong>Strategic Leadership &amp; Team Management</strong></p>
<ul>
<li>Lead, mentor, and develop a high-performing team of Supply Operations Specialists, fostering a culture of accountability and continuous improvement</li>
<li>Support the overall growth and development of the Supply Operations Team (SOT) function, consistent with the ONE Ford initiative and Ford OS Leadership Behaviours</li>
<li>Establish and monitor key performance indicators (KPIs) for the SOT, ensuring effective alignment with organisational goals and holding team members accountable for meeting targets</li>
<li>Provide high-level, concise reporting to senior management regarding supply chain performance, challenges, and recommended actions</li>
<li>Provide coaching and counselling to direct reports (3-4) to develop the next generation of leaders, fostering key talent and managing succession plans</li>
<li>Establish appropriate staffing levels in accordance with the global headcount model and embrace diversity within all aspects of the business</li>
</ul>
<p><strong>Risk Mitigation &amp; Crisis Management</strong></p>
<ul>
<li>Oversee the resolution of highly complex supplier issues that pose an imminent threat to production</li>
<li>Direct the team responsible for engaging suppliers classified as &quot;no confidence&quot; through robust reviews (&quot;Look-See&quot; / Assessment) and those classified as an &quot;imminent risk to disrupt&quot; Ford Manufacturing by developing comprehensive stabilisation plans (&quot;Way Forward&quot;)</li>
<li>Collaborate cross-functionally within Ford to ensure alignment of resources around the critical goal of &quot;No Lost Units&quot; due to supply disruptions</li>
<li>Develop and implement proactive crisis management plans to respond effectively to supplier-related disruptions or emergencies, ensuring vehicle/powertrain production remains uninterrupted</li>
<li>Lead and support the execution of comprehensive risk mitigation plans, ensuring supply risk is effectively minimised</li>
</ul>
<p><strong>Operational Excellence &amp; Collaboration</strong></p>
<ul>
<li>Ensure seamless &quot;Supply Signals&quot; pass through the Control Tower, reviewing and conveying operational challenges back to the Control Tower</li>
<li>Direct teams to collaborate with supplier partners to drive innovation and process improvements that positively affect supplier performance and help deliver &quot;No Lost Units&quot;</li>
<li>Work closely with our supply base to guarantee uninterrupted supply for North American Vehicle and Powertrain production</li>
<li>Direct the team with respect to the Business Transfer process, assigning roles and responsibilities to SOT Senior Operations Specialists and Operations Specialists</li>
<li>Build strong relationships with other functions, joint venture partners, and external suppliers to foster a collaborative environment</li>
</ul>
<p><strong>Compliance &amp; Advocacy</strong></p>
<ul>
<li>Ensure the team&#39;s compliance with company policies, best practices, and audit requirements</li>
<li>Advocate and champion the use of standardised work processes and methodologies across the function</li>
</ul>
<p><strong>Minimum Qualifications</strong></p>
<ul>
<li>Bachelor’s degree in Industrial Engineering, Mechanical Engineering, Supply Chain, Business Administration, or a related field</li>
<li>8+ years of working experience in Manufacturing Operations, with Supply Chain experience strongly preferred</li>
<li>3+ years of Managerial / People Leader experience</li>
<li>Ability to speak and write fluently in English to effectively communicate with global teams</li>
<li>A strong understanding of manufacturing processes, throughput, and program management</li>
<li>Ability to prepare and present material effectively to Senior Leadership</li>
<li>High level of analytical ability for unusual, difficult, or complex problems</li>
<li>Ability to lead problem-solving activities and manage resolution targets</li>
<li>Ability to handle multiple time-sensitive and urgent matters, anticipating business needs and comfortably dealing with ambiguity</li>
<li>Ability to assess complex supply chain issues, lead cross-functional teams, and develop creative solutions to minimise vehicle production losses</li>
<li>Proficient computer skills, including MS Suite/Excel</li>
<li>Ability to travel up to 75% to supplier plant locations for onsite support of operationally distressed suppliers; must be available on weekends as required</li>
<li>Ability to be flexible and, at times, work extended and non-core hours to ensure issues are resolved</li>
<li>Self-motivated, independent, and resourceful; able to anticipate business needs and comfortable dealing with ambiguity</li>
</ul>
<p><strong>Preferred Qualifications</strong></p>
<ul>
<li>Understanding of Safe Launch, APQP, PPAP, and Industrialisation processes</li>
<li>Familiarity with Ford Systems, including CMMS, VPP, OTG, and WERS</li>
</ul>
<p><strong>Job Info</strong></p>
<ul>
<li>Job Identification: 59631</li>
<li>Job Category: Supply Chain</li>
<li>Posting Date: 03/05/2026, 07:29 PM</li>
<li>Apply Before: 03/20/2026, 08:00 PM</li>
<li>Job Schedule: Full time</li>
<li>Locations: Henry Ford No. 100, Piso 5, Naucalpan de Juárez, MEX, 53126, MX (Hybrid)</li>
<li>Remote: No</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Supply Chain Risk Mitigation, Supply Operations, Manufacturing Operations, Supply Chain Experience, Managerial Experience, People Leadership, English Fluency, Manufacturing Processes, Throughput, Program Management, Analytical Ability, Problem-Solving, Time Management, Supply Chain Issues, Cross-Functional Teams, Creative Solutions, Computer Skills, MS Suite, Excel, Safe Launch, APQP, PPAP, Industrialisation, Ford Systems Knowledge, CMMS, VPP, OTG, WERS</Skills>
      <Category>Supply Chain</Category>
      <Industry>Automotive</Industry>
      <Employername>Ford Motor Company</Employername>
      <Employerlogo></Employerlogo>
      <Employerdescription>Ford Motor Company is a multinational automaker that designs, manufactures, and markets vehicles and automotive-related products.</Employerdescription>
      <Employerwebsite>https://efds.fa.em5.oraclecloud.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://efds.fa.em5.oraclecloud.com/hcmUI/CandidateExperience/en/sites/CX_1/job/59631</Applyto>
      <Location>Naucalpan de Juarez</Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
    <job>
      <externalid>cd8c3fa1-00e</externalid>
      <Title>Senior Applied Scientist</Title>
<Description><![CDATA[<p>Microsoft is aggressively advancing its position in the digital advertising market by building a state-of-the-art online advertising platform and services. Our team is at the forefront of innovation, leveraging cutting-edge AIGC and LLM technologies to empower advertisers to grow demand and to enhance user experience, including visual content optimization and generation. We are seeking passionate scientists and engineers to join this world-class team, solve challenging problems, and deliver products that provide value to hundreds of millions of users and advertisers, driving direct, measurable impact on our global business.</p>
<p><strong>Responsibilities:</strong></p>
<ul>
<li>Pioneer research and stay up to date with the latest advancements in AIGC and VLM, applying SOTA technologies and frameworks to enhance the quality and performance of ad content generation</li>
<li>Optimize and scale high-performance, large-volume systems to reliably handle massive datasets and ensure high throughput</li>
<li>Lead the data collection, model training, evaluation, and deployment of advanced image processing and generation algorithms</li>
<li>Analyse system performance and identify opportunities based on data analysis and online experiments</li>
<li>Collaborate effectively with cross-functional teams (e.g., product management, engineering, research) to deliver high-quality, end-to-end solutions</li>
</ul>
<p><strong>Qualifications:</strong></p>
<ul>
<li>Bachelor’s Degree in Statistics, Econometrics, Computer Science, Electrical or Computer Engineering, or related field AND 4+ years related experience (e.g., statistics, predictive analytics, research) OR Master’s Degree in a related field AND 3+ years related experience OR Doctorate in a related field AND 1+ year(s) related experience OR equivalent experience</li>
<li>4+ years of industry experience with a focus on computer vision, image/video generation and editing</li>
<li>Demonstrated leadership or significant contributions to the design, data, and training of advanced AIGC models (e.g., diffusion/AR models and distillation, VAEs, etc.)</li>
<li>Excellent design and problem-solving skills, with a proven ability to translate ambiguous problems into clear, implementable solutions</li>
<li>Proactive communication skills, with the ability to collaborate effectively across algorithm, product, and engineering teams</li>
</ul>
<p><strong>Preferred Qualifications:</strong></p>
<ul>
<li>Master’s Degree in Statistics, Econometrics, Computer Science, Electrical or Computer Engineering, or related field AND 6+ years related experience (e.g., statistics, predictive analytics, research) OR Doctorate in a related field AND 3+ years related experience OR equivalent experience</li>
<li>5+ years of industry experience with a focus on computer vision, image/video generation and editing</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>AIGC, LLM, computer vision, image/video generation and editing, statistics, predictive analytics, research, data collection, model training, evaluation, deployment, advanced image processing and generation algorithms, data analysis, online experiments, cross-functional teams, diffusion/AR models and distillation, VAEs, SOTA technologies and frameworks, high-performance, large-volume systems, massive datasets, high throughput</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Microsoft</Employername>
      <Employerlogo>https://logos.yubhub.co/microsoft.ai.png</Employerlogo>
      <Employerdescription>Microsoft is a multinational technology company that develops, manufactures, licenses, and supports a wide range of software products, services, and devices.</Employerdescription>
      <Employerwebsite>https://microsoft.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://microsoft.ai/job/senior-applied-scientist-30/</Applyto>
      <Location>Beijing</Location>
      <Country></Country>
      <Postedate>2026-03-08</Postedate>
    </job>
    <job>
      <externalid>b6d3e1a4-190</externalid>
      <Title>Software Engineer</Title>
      <Description><![CDATA[<p><strong>Software Engineer</strong></p>
<p>Replit is seeking a skilled Software Engineer to join our team. As a Software Engineer at Replit, you will design and develop collaborative software applications that enable humans and AI agents to work together on shared shells, filesystems, and state.</p>
<p><strong>Responsibilities:</strong></p>
<ul>
<li>Design a collaborative &#39;Multiplayer Computer&#39; that lets humans and AI agents work together on shared shells, filesystems, and state—conflict-free and in real time</li>
<li>Build high-throughput backend applications and services</li>
<li>Create tooling that helps AI systems minimize mistakes through static analysis and deterministic techniques</li>
<li>Develop infrastructure (frontend &amp; backend) that empowers product engineers to rapidly ship delightful user experiences</li>
<li>Support sophisticated user interfaces, including terminals, code editors, window-management systems, and innovative experiences that require both creativity and algorithmic skill</li>
</ul>
<p><strong>Requirements:</strong></p>
<ul>
<li>Bachelor&#39;s degree in Computer Science or equivalent and 4 years of experience as a Software Engineer</li>
<li>Must also possess 2 years of experience building frontend; 2 years of experience building backend; and 2 years of experience in at least one of the following: developing rich and complex browser-based applications, building high-throughput and novel backend services, creating frontend infrastructure used by 100+ engineers, or shipping your own products and engaging directly with users</li>
<li>2 years of experience with TypeScript; must be able to successfully complete competency-based interviews</li>
</ul>
<p><strong>Benefits:</strong></p>
<ul>
<li>Competitive Salary &amp; Equity</li>
<li>401(k) Program with a 4% match</li>
<li>Health, Dental, Vision and Life Insurance</li>
<li>Short Term and Long Term Disability</li>
<li>Paid Parental, Medical, Caregiver Leave</li>
<li>Commuter Benefits</li>
<li>Monthly Wellness Stipend</li>
<li>Autonomous Work Environment</li>
<li>In Office Set-Up Reimbursement</li>
<li>Flexible Time Off (FTO) + Holidays</li>
<li>Quarterly Team Gatherings</li>
<li>In Office Amenities</li>
</ul>
<p><strong>Want to learn more about what we are up to?</strong></p>
<ul>
<li>Meet the Replit Agent</li>
<li>Replit: Make an app for that</li>
<li>Replit Blog</li>
<li>Amjad TED Talk</li>
</ul>
<p><strong>Interviewing + Culture at Replit</strong></p>
<ul>
<li>Operating Principles</li>
<li>Reasons not to work at Replit</li>
</ul>
<p>To achieve our mission of making programming more accessible around the world, we need our team to be representative of the world. We welcome your unique perspective and experiences in shaping this product. We encourage people from all kinds of backgrounds to apply, including and especially candidates from underrepresented and non-traditional backgrounds.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$187,574 - $235,000</Salaryrange>
      <Skills>Typescript, Backend development, Frontend development, Rich and complex browser-based applications, High-throughput and novel backend services, Frontend infrastructure, Product development, Static analysis, Deterministic techniques, Collaborative software development, AI systems, User experience design</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Replit</Employername>
      <Employerlogo>https://logos.yubhub.co/replit.com.png</Employerlogo>
      <Employerdescription>Replit is an agentic software creation platform that enables anyone to build applications using natural language. With millions of users worldwide, Replit is a significant player in the software development industry.</Employerdescription>
      <Employerwebsite>https://jobs.ashbyhq.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/replit/d5c36729-3c19-4d62-a72f-d8bdb6513f0e</Applyto>
      <Location>Foster City, CA</Location>
      <Country></Country>
      <Postedate>2026-03-07</Postedate>
    </job>
    <job>
      <externalid>8c164f95-f8d</externalid>
      <Title>Senior Infrastructure Engineer</Title>
      <Description><![CDATA[<p>Join our Infrastructure Engineering team and help ensure the reliability, scalability, and performance of Replit&#39;s infrastructure that serves millions of developers worldwide. As a Senior Infrastructure Engineer, you will bridge the gap between development and operations, implementing automation and establishing best practices that enable our platform to scale efficiently while maintaining high availability.</p>
<p>We are seeking Senior Infrastructure Engineers who are passionate about building and maintaining resilient systems at scale. Your mission will be to proactively find and analyse reliability problems across our stack, then design and implement software and systems to address them. You will build robust monitoring solutions, automate operational tasks, and continuously improve our infrastructure&#39;s reliability.</p>
<p><strong>You Will:</strong></p>
<ul>
<li>Drive Automation and Infrastructure as Code: Build and improve automation to eliminate toil and operational work. Maintain CI/CD pipelines and infrastructure automation using tools like Terraform or Pulumi. Create self-healing systems that can automatically respond to common failure scenarios.</li>
<li>Optimise Performance and Infrastructure: Collaborate with core infrastructure and product teams to performance tune and optimise our cloud deployments (Kubernetes, Docker, GCP). Identify and resolve performance bottlenecks and implement capacity planning strategies.</li>
<li>Elevate Developer Experience: Design and implement improvements to our build, test, and deployment systems to make software delivery faster, safer, and more reliable for all engineers.</li>
<li>Drive Cross-Team Improvements: Partner with service owners across Replit to understand their pain points, and collaborate on implementing build/test/deploy enhancements within their specific services.</li>
<li>Build Shared Tooling: Create and maintain centralized tooling and automation that improves the engineering lifecycle, from local development to production monitoring.</li>
<li>Debug and Harden Systems: Dive deep into debugging difficult technical problems, making our systems and products more robust, operable, and easier to diagnose.</li>
<li>Collaborate on Design Reviews: Participate in feature and system design reviews, contributing expertise on security, scale, and operational considerations.</li>
<li>Build and Integrate: Write high-quality, well-tested code to meet the needs of your customers, including building pipelines to integrate with 3rd party vendors.</li>
</ul>
<p><strong>Required Skills and Experience:</strong></p>
<ul>
<li>4+ years of experience in Site Reliability Engineering or similar roles (DevOps, Systems Engineering, Infrastructure Engineering).</li>
<li>Strong programming skills in languages like Python or Go.</li>
<li>You write high-quality, well-tested code.</li>
<li>Solid understanding of distributed systems. You&#39;ve built, scaled, and maintained production services and understand service-oriented architecture.</li>
<li>Experience with container orchestration platforms (Kubernetes) and cloud-native technologies.</li>
<li>Experience implementing and maintaining monitoring/observability solutions, with strong skills in debugging and performance tuning.</li>
<li>Strong incident management skills with experience participating in incident response and demonstrated critical thinking under pressure.</li>
<li>Experience with infrastructure as code (e.g., Terraform) and configuration management tools.</li>
<li>Excellent written and verbal communication skills, with an ability to explain technical concepts clearly.</li>
<li>A willingness to dive into understanding, debugging, and improving any layer of the stack.</li>
<li>You&#39;re passionate about making software creation accessible and empowering the next generation of builders.</li>
</ul>
<p><strong>Bonus Points:</strong></p>
<ul>
<li>Experience with Google Cloud Platform (GCP) services and tools.</li>
<li>Knowledge of modern observability platforms (Prometheus, Grafana, Datadog, etc.).</li>
<li>Experience building reliable systems capable of handling high throughput and low latency.</li>
<li>Experience with Go and Terraform.</li>
<li>Familiarity with working in rapid-growth environments.</li>
</ul>
<p><em>This is a full-time role that can be held from our Foster City, CA office. The role has an in-office requirement of Monday, Wednesday, and Friday.</em></p>
<p><strong>Full-Time Employee Benefits Include:</strong></p>
<ul>
<li>Competitive Salary &amp; Equity</li>
<li>401(k) Program with a 4% match</li>
<li>Health, Dental, Vision and Life Insurance</li>
<li>Short Term and Long Term Disability</li>
<li>Paid Parental, Medical, Caregiver Leave</li>
<li>Commuter Benefits</li>
<li>Monthly Wellness Stipend</li>
<li>Autonomous Work Environment</li>
<li>In Office Set-Up Reimbursement</li>
<li>Flexible Time Off (FTO) + Holidays</li>
<li>Quarterly Team Gatherings</li>
<li>In Office Amenities</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$190K - $240K</Salaryrange>
      <Skills>Site Reliability Engineering, DevOps, Systems Engineering, Infrastructure Engineering, Python, Go, Terraform, Kubernetes, Docker, Google Cloud Platform (GCP), Monitoring and observability (Prometheus, Grafana, Datadog), Debugging and performance tuning, Incident management, Infrastructure as code, Configuration management tools, High-throughput, low-latency systems, Rapid-growth environments</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Replit</Employername>
      <Employerlogo>https://logos.yubhub.co/replit.com.png</Employerlogo>
      <Employerdescription>Replit is a software creation platform that enables anyone to build applications using natural language. With millions of users worldwide, Replit is a leading platform in the software development industry.</Employerdescription>
      <Employerwebsite>https://jobs.ashbyhq.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/replit/16c85abc-763c-4f36-ab67-64f416343384</Applyto>
      <Location>Foster City, CA</Location>
      <Country></Country>
      <Postedate>2026-03-07</Postedate>
    </job>
    <job>
      <externalid>f6802a94-1d7</externalid>
      <Title>AI Deployment Engineer, Gov</Title>
      <Description><![CDATA[
<p><strong>AI Deployment Engineer, Gov</strong></p>
<p><strong>Location</strong></p>
<p>Washington, DC</p>
<p><strong>Employment Type</strong></p>
<p>Full time</p>
<p><strong>Location Type</strong></p>
<p>Hybrid</p>
<p><strong>Department</strong></p>
<p>OpenAI for Gov</p>
<p><strong>Compensation</strong></p>
<ul>
<li>Remote – Zone A: $137K – $250.2K • Offers Equity</li>
</ul>
<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>
<ul>
<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>
<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>
<li>401(k) retirement plan with employer match</li>
<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>
<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>
<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>
<li>Mental health and wellness support</li>
<li>Employer-paid basic life and disability coverage</li>
<li>Annual learning and development stipend to fuel your professional growth</li>
<li>Daily meals in our offices, and meal delivery credits as eligible</li>
<li>Relocation support for eligible employees</li>
<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>
</ul>
<p>More details about our benefits are available to candidates during the hiring process.</p>
<p>This role is at-will and OpenAI reserves the right to modify base pay and other compensation components at any time based on individual performance, team or company results, or market conditions.</p>
<p><strong>About the team</strong></p>
<p>The AI Deployment Engineering team is responsible for ensuring the safe and effective deployment of Generative AI applications. We act as a trusted advisor and thought partner for our customers, working to build an effective backlog of GenAI use cases for their industry and drive them to production through strong technical guidance. As the founding AI Deployment Engineer in the Public Sector segment, you’ll help government agencies transform their organization through solutions such as automated content generation, contextual search, and novel applications that make use of our newest, most exciting models and technology.</p>
<p><strong>About the Role</strong></p>
<p>We are looking for a solutions-oriented technical leader to partner with our public sector customers and ensure they achieve tangible value with GenAI. You will pair with government agencies (federal, state, and local), policymakers, and other public institutions to establish a GenAI strategy and identify the highest value applications. You’ll then partner with their technical teams, subject matter experts, systems integrators, and implementation partners to move from prototype through production. You’ll take a holistic view of their needs and design an architecture using the OpenAI API and other services to maximize customer value. You will collaborate closely with Sales, Solutions Engineering, Global Affairs, Applied Research, and Product teams.</p>
<p>This role is based in Washington, DC. We offer relocation support to new employees.</p>
<p><strong>In this role, you will:</strong></p>
<ul>
<li>Deeply embed with our most sophisticated public sector customers as the technical lead, serving as their technical thought partner to ideate and build novel applications on our API.</li>
<li>Work with senior customer stakeholders to identify the best applications of GenAI in their industry and to build/qualify a comprehensive backlog to support their AI roadmap.</li>
<li>Intervene directly to accelerate customer time to value through building hands-on prototypes and/or by delivering impactful strategic guidance, often in collaboration with systems integrators and implementation partners.</li>
<li>Forge and manage relationships with our customers’ and implementation partners’ leadership and stakeholders to ensure the successful deployment and scale of their applications.</li>
<li>Contribute to our open-source developer and enterprise resources.</li>
<li>Scale the AI Deployment Engineering function through sharing knowledge, codifying best practices, and publishing notebooks to our internal and external repositories.</li>
<li>Validate, synthesize, and deliver high-signal feedback to the Product, Engineering, and Research teams.</li>
</ul>
<p><strong>You’ll thrive in this role if you:</strong></p>
<ul>
<li>Bring 7+ years of technical consulting (or equivalent) experience with public sector customers (U.S. federal preferred), bridging technical teams and senior stakeholders.</li>
<li>Hold an active TS/SCI clearance.</li>
<li>Have successfully led GenAI or traditional ML implementations for government agencies in close collaboration with systems integrators and implementation partners.</li>
<li>Understand network and cloud architecture, including experience with on-premise deployments.</li>
<li>Are an effective and polished communicator who can translate business and technical topics to all audiences.</li>
<li>Have industry experience in programming languages like Python or JavaScript.</li>
<li>Own problems end-to-end and are willing to pick up whatever knowledge you&#39;re missing to get the job done.</li>
<li>Have a humble attitude, an eagerness to help your colleagues, and a desire to do whatever it takes to make the team succeed.</li>
<li>Are an effective, high-throughput operator who can drive multiple concurrent projects and prioritize ruthlessly.</li>
</ul>
<p><strong>About OpenAI</strong></p>
<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$137K – $250.2K</Salaryrange>
      <Skills>Technical consulting, Public sector customers, GenAI, Traditional ML, Network and cloud architecture, On-premise deployments, Python, Javascript, TS/SCI clearance, Effective and polished communicator, Humble attitude, Eagerness to help colleagues, Desire to make team succeed, High throughput operator, Ability to drive multiple concurrent projects</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>OpenAI</Employername>
      <Employerlogo>https://logos.yubhub.co/openai.com.png</Employerlogo>
      <Employerdescription>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. It is a private company.</Employerdescription>
      <Employerwebsite>https://jobs.ashbyhq.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/openai/be7b1bf5-37ab-40f7-9ec1-e9732244f12a</Applyto>
      <Location>Washington, DC</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>8492e28a-8d9</externalid>
      <Title>AI Deployment Engineer</Title>
      <Description><![CDATA[<p><strong>Location</strong></p>
<p>Sydney, Australia</p>
<p><strong>Employment Type</strong></p>
<p>Full time</p>
<p><strong>Location Type</strong></p>
<p>Hybrid</p>
<p><strong>Department</strong></p>
<p><strong>About the Team</strong></p>
<p>The AI Deployment Engineering team is responsible for ensuring the safe and effective deployment of Generative AI applications for developers and enterprises. We act as a trusted advisor and thought partner for our customers, working to build an effective backlog of GenAI use cases for their industry and drive them to production through strong technical guidance. As an AI Deployment Engineer, you’ll help the largest companies transform their business through solutions such as customer service, automated content generation, and novel applications that make use of our newest, most exciting models.</p>
<p><strong>About the Role</strong></p>
<p>We are looking for a driven solutions leader with a product mindset to partner with our customers and ensure they achieve tangible business value with GenAI. You will pair with senior customer leaders to establish a GenAI strategy and identify the highest value applications. You’ll then partner with their technical teams to move from prototype through production. You’ll take a holistic view of their needs and design an enterprise architecture using ChatGPT, OpenAI API, and other services to maximize customer value. You will collaborate closely with Sales, Solutions Engineering, Applied Research, and Product teams, and you will report to the Head of Technical Success, APAC.</p>
<p>This role is based in Sydney, Australia. We use a hybrid work model of 3 days in the office per week and offer relocation assistance to new employees.</p>
<p><strong>In this role, you will:</strong></p>
<ul>
<li>Deeply embed with our most strategic platform customers as the technical lead, serving as their technical thought partner to ideate and build novel applications on our API.</li>
<li>Work with senior customer stakeholders to identify the best applications of GenAI in their industry and to build/qualify a comprehensive backlog to support their AI roadmap.</li>
<li>Intervene directly to accelerate customer time to value through building hands-on prototypes and/or by delivering impactful strategic guidance.</li>
<li>Forge and manage relationships with our customers’ leadership and stakeholders to ensure the successful deployment and scale of their applications.</li>
<li>Contribute to our open-source developer and enterprise resources.</li>
<li>Scale the Solutions Architect function through sharing knowledge, codifying best practices, and publishing notebooks to our internal and external repositories.</li>
<li>Validate, synthesize, and deliver high-signal feedback to the Product and Research teams.</li>
</ul>
<p><strong>You’ll thrive in this role if you:</strong></p>
<ul>
<li>Have 6+ years of technical consulting (or equivalent) experience, bridging technical teams and senior business stakeholders.</li>
<li>Are an effective and polished communicator who can translate business and technical topics to all audiences.</li>
<li>Have led complex implementations of Generative AI/traditional ML solutions and have knowledge of network/cloud architecture.</li>
<li>Have industry experience in programming languages like Python or Javascript.</li>
<li>Own problems end-to-end and are willing to pick up whatever knowledge you&#39;re missing to get the job done.</li>
<li>Have a humble attitude, an eagerness to help your colleagues, and a desire to do whatever it takes to make the team succeed.</li>
<li>Are an effective, high throughput operator who can drive multiple concurrent projects and prioritize ruthlessly.</li>
</ul>
<p><strong>About OpenAI</strong></p>
<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Generative AI, ChatGPT, OpenAI API, Python, Javascript, Network/cloud architecture, Technical consulting, Polished communication, Problem-solving, Humble attitude, Eagerness to help colleagues, Desire to make team succeed, High throughput operator</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>OpenAI</Employername>
      <Employerlogo>https://logos.yubhub.co/openai.com.png</Employerlogo>
      <Employerdescription>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products.</Employerdescription>
      <Employerwebsite>https://jobs.ashbyhq.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/openai/83730f3e-3a96-476e-afb0-15f1f045ab03</Applyto>
      <Location>Sydney, Australia</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>df8265dc-c31</externalid>
      <Title>System Software Engineer, Consumer Products</Title>
      <Description><![CDATA[<p><strong>System Software Engineer, Consumer Products</strong></p>
<p><strong>Location</strong></p>
<p>San Francisco</p>
<p><strong>Employment Type</strong></p>
<p>Full time</p>
<p><strong>Department</strong></p>
<p>Consumer Products</p>
<p><strong>Compensation</strong></p>
<ul>
<li>$293K – $325K • Offers Equity</li>
</ul>
<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>
<ul>
<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>
<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>
<li>401(k) retirement plan with employer match</li>
<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>
<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>
<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>
<li>Mental health and wellness support</li>
<li>Employer-paid basic life and disability coverage</li>
<li>Annual learning and development stipend to fuel your professional growth</li>
<li>Daily meals in our offices, and meal delivery credits as eligible</li>
<li>Relocation support for eligible employees</li>
<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>
</ul>
<p>More details about our benefits are available to candidates during the hiring process.</p>
<p>This role is at-will and OpenAI reserves the right to modify base pay and other compensation components at any time based on individual performance, team or company results, or market conditions.</p>
<p>Location: San Francisco, CA (Hybrid: 4 days onsite/week). Relocation assistance available.</p>
<p><strong>About the Team:</strong></p>
<p>We build foundational platform software that enables reliable, secure, and performant products. The team works across system layers and partners closely with adjacent engineering groups to deliver robust capabilities from concept through launch.</p>
<p><strong>About the Role:</strong></p>
<p>We’re seeking a Systems Software Engineer to design, implement, and debug core platform components and the pipelines that build and update system images. You’ll work across operating system layers, focusing on performance, security, and deep system debugging to ship production‑grade systems.</p>
<p><strong>In this role, you will:</strong></p>
<ul>
<li>Design, implement, and debug system‑level components and services across kernel and user space.</li>
<li>Configure and maintain OS platform services (init, services, networking, security policies) and related tooling.</li>
<li>Build and operate image and update pipelines, ensuring reliability, reproducibility, and rollback safety.</li>
<li>Instrument and analyze performance using profiling and tracing; optimize CPU, memory, I/O, and power usage.</li>
<li>Own platform observability and reliability: logging, crash capture, watchdogs, and diagnostics.</li>
<li>Collaborate with cross‑functional teams to define interfaces and deliver end‑to‑end features.</li>
<li>Establish strong engineering practices: code review, CI, reproducible builds, and release management.</li>
<li>Partner with external suppliers to support builds and deployments.</li>
</ul>
<p><strong>You might thrive in this role if you:</strong></p>
<ul>
<li>Have shipped production systems software on modern operating systems.</li>
<li>Are proficient in C/C++ and a scripting language, and comfortable with OS internals (concurrency, memory management, filesystems, networking, power management).</li>
<li>Bring strong systems debugging skills using debuggers, tracers, profilers, and logs across kernel/user‑space boundaries.</li>
<li>Understand configuration of platform services and interfaces, and can translate requirements into stable, well‑documented APIs.</li>
<li>Are fluent in user‑space foundations (service management, IPC, networking, packaging, automation).</li>
<li>Have experience building platform images and designing update mechanisms for reliability and security.</li>
</ul>
<p><strong>Preferred Qualifications:</strong></p>
<ul>
<li>Exposure to platform security (secure boot, sandboxing, mandatory access controls, attestation).</li>
<li>Experience with graphics/media, hardware acceleration, or high‑throughput data paths.</li>
<li>Familiarity with connectivity stacks and network configuration.</li>
<li>Experience with observability and diagnostics in distributed or resource‑constrained environments.</li>
<li>Work on open‑source platforms or contributions to systems projects.</li>
</ul>
<p><strong>About OpenAI</strong></p>
<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$293K – $325K</Salaryrange>
      <Skills>C/C++, Scripting languages, OS internals, Debuggers, tracers, and profilers, Log analysis, Platform services, Networking, Security policies, Image and update pipelines, Reliability and reproducibility, Rollback safety, Performance analysis (CPU, memory, I/O, power), Platform observability, Logging, Crash capture, Watchdogs, Diagnostics, Code review, CI, Reproducible builds, Release management, Platform security (secure boot, sandboxing, mandatory access controls, attestation), Graphics/media, Hardware acceleration, High-throughput data paths, Connectivity stacks, Network configuration, Observability in distributed or resource-constrained environments, Open-source and systems project contributions</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>OpenAI</Employername>
      <Employerlogo>https://logos.yubhub.co/openai.com.png</Employerlogo>
      <Employerdescription>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products.</Employerdescription>
      <Employerwebsite>https://jobs.ashbyhq.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/openai/20f525b7-f958-4c95-a055-f914ab3adb95</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>d35418d6-8fb</externalid>
      <Title>Principal Software Engineer</Title>
      <Description><![CDATA[<p><strong>Summary</strong></p>
<p>Microsoft AI is looking for a talented Principal Software Engineer for its Vancouver office. This role sits at the heart of strategic decision-making, turning market data into actionable insights for a company that&#39;s revolutionising AI technology. You&#39;ll work directly with leadership to shape the company&#39;s direction in the AI market.</p>
<p><strong>About the Role</strong></p>
<p>As a Principal Software Engineer, you will be responsible for designing and developing scalable backend systems, making key architectural decisions, and collaborating with multiple teams to deliver impactful solutions. You will contribute to shopping features in Copilot, Bing, and Edge browser, working closely with Product Management and Design teams. Your primary focus will be on building robust, long-lasting solutions that meet the needs of our customers.</p>
<p><strong>Accountabilities</strong></p>
<ul>
<li>Partners with appropriate stakeholders to determine user requirements for a set of scenarios.</li>
<li>Leads identification of dependencies and the development of design documents for a product, application, service, or platform.</li>
</ul>
<p><strong>The Candidate we&#39;re looking for</strong></p>
<p><strong>Experience:</strong></p>
<ul>
<li>6+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python.</li>
</ul>
<p><strong>Technical skills:</strong></p>
<ul>
<li>Proficiency in designing and scaling high-throughput distributed systems and robust data pipelines.</li>
</ul>
<p><strong>Personal attributes:</strong></p>
<ul>
<li>Customer focused, strategic, drives for results, is self-motivated, and has a propensity for action.</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Competitive salary range of CAD $142,400 – CAD $257,500 per year.</li>
<li>Opportunity to work on cutting-edge AI technology.</li>
<li>Collaborative and dynamic work environment.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>CAD $142,400 – CAD $257,500 per year</Salaryrange>
      <Skills>C, C++, C#, Java, JavaScript, Python, designing and scaling high-throughput distributed systems, robust data pipelines</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Microsoft AI</Employername>
      <Employerlogo>https://logos.yubhub.co/microsoft.ai.png</Employerlogo>
      <Employerdescription>Microsoft continues to push the boundaries of AI, with a vision to build systems that have true artificial intelligence across agents, applications, services, and infrastructure.</Employerdescription>
      <Employerwebsite>https://microsoft.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://microsoft.ai/job/principal-software-engineer-22/</Applyto>
      <Location>Vancouver</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>f34af95b-456</externalid>
      <Title>UVM Verification Engineer, Senior Staff</Title>
      <Description><![CDATA[
<p><strong>What you&#39;ll do</strong></p>
<p>You will join the Synopsys IP Group, a highly collaborative and innovative team focused on developing leading-edge interface IP solutions for memory technologies.</p>
<ul>
<li>Developing detailed verification testplans and comprehensive functional coverage models for complex memory interface IP.</li>
<li>Implementing scalable UVM testbench infrastructure and designing robust test cases to verify training firmware functionality on RTL PHY models.</li>
</ul>
<p><strong>What you need</strong></p>
<ul>
<li>Proficiency in SystemVerilog and UVM, with hands-on experience using simulation and waveform debugging tools.</li>
<li>Strong background in developing verification solutions focused on productivity, performance, and throughput.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>SystemVerilog, UVM, simulation, waveform debugging, verification solutions, productivity, performance, throughput</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Synopsys</Employername>
      <Employerlogo>https://logos.yubhub.co/careers.synopsys.com.png</Employerlogo>
      <Employerdescription>Our Hardware Engineers at Synopsys are responsible for designing and developing cutting-edge semiconductor solutions. They work on intricate tasks such as chip architecture, circuit design, and verification to ensure the efficiency and reliability of semiconductor products.</Employerdescription>
      <Employerwebsite>https://careers.synopsys.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://careers.synopsys.com/job/nepean/uvm-verification-engineer-senior-staff/44408/91168885696</Applyto>
      <Location>Nepean, Ontario, Canada</Location>
      <Country></Country>
      <Postedate>2026-02-04</Postedate>
    </job>
    <job>
      <externalid>714bb462-ac3</externalid>
      <Title>Software Engineer II</Title>
      <Description><![CDATA[<p>We are seeking developers who want to contribute innovative solutions to our live service platform for one of the most creative companies in technology. You&#39;ll have the opportunity to work on scalable systems that handle massive data volumes while enabling real-time insights that drive business decisions across EA&#39;s global operations.</p>
<p><strong>What you&#39;ll do</strong></p>
<ul>
<li>You will work with cross-functional teams including Content Management &amp; Delivery, Messaging, Segmentation, Recommendation, and Experimentation to streamline the live services workflow.</li>
<li>You will evaluate where and how EA&#39;s live service solutions, studio tech stacks, and vendor solutions can work together and help to achieve both engineering and business goals in an efficient and cost-effective manner.</li>
</ul>
<p><strong>What you need</strong></p>
<ul>
<li>Bachelor&#39;s or Master&#39;s degree in Computer Science or a related field, or equivalent relevant experience.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>software design, algorithms, data structures, Java, Python, HTML, CSS, JavaScript, React, columnar databases, relational databases, document databases, high-throughput low-latency live services, personalization platforms, data pipelines, Kafka, analytics systems, high-traffic 24/7 services, multi-cloud architectures, AWS</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Electronic Arts</Employername>
      <Employerlogo>https://logos.yubhub.co/jobs.ea.com.png</Employerlogo>
      <Employerdescription>Electronic Arts creates next-level entertainment experiences that inspire players and fans around the world. Here, everyone is part of the story and part of a community that connects across the globe. EA is a place where creativity thrives, new perspectives are invited, and ideas matter.</Employerdescription>
      <Employerwebsite>https://jobs.ea.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ea.com/en_US/careers/JobDetail/Software-Engineer-II/210752</Applyto>
      <Location>Hyderabad</Location>
      <Country></Country>
      <Postedate>2026-01-13</Postedate>
    </job>
    <job>
      <externalid>24a714b6-547</externalid>
      <Title>Software Engineer II</Title>
      <Description><![CDATA[<p>We are seeking developers who want to contribute innovative solutions to our live service platform for one of the most creative companies in technology. You&#39;ll have the opportunity to work on scalable systems that handle massive data volumes while enabling real-time insights that drive business decisions across EA&#39;s global operations.</p>
<p><strong>What you&#39;ll do</strong></p>
<ul>
<li>You will work with cross-functional teams including Content Management &amp; Delivery, Messaging, Segmentation, Recommendation, and Experimentation to streamline the live services workflow.</li>
<li>You will evaluate where and how EA&#39;s live service solutions, studio tech stacks, and vendor solutions can work together and help to achieve both engineering and business goals in an efficient and cost-effective manner.</li>
</ul>
<p><strong>What you need</strong></p>
<ul>
<li>Bachelor&#39;s or Master&#39;s degree in Computer Science or a related field, or equivalent relevant experience.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>software design, algorithms, data structures, Java, Python, HTML, CSS, JavaScript, React, columnar databases, relational databases, document databases, high-throughput low-latency live services, personalization platforms, data pipelines, Kafka, analytics systems, high-traffic 24/7 services, multi-cloud architectures, AWS</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Electronic Arts</Employername>
      <Employerlogo>https://logos.yubhub.co/jobs.ea.com.png</Employerlogo>
      <Employerdescription>Electronic Arts creates next-level entertainment experiences that inspire players and fans around the world. Here, everyone is part of the story and part of a community that connects across the globe. EA is a place where creativity thrives, new perspectives are invited, and ideas matter.</Employerdescription>
      <Employerwebsite>https://jobs.ea.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ea.com/en_US/careers/JobDetail/Software-Engineer-II/210748</Applyto>
      <Location>Hyderabad</Location>
      <Country></Country>
      <Postedate>2026-01-13</Postedate>
    </job>
  </jobs>
</source>