{"version":"0.1","company":{"name":"YubHub","url":"https://yubhub.co","jobsUrl":"https://yubhub.co/jobs/skill/cohere"},"x-facet":{"type":"skill","slug":"cohere","display":"Cohere","count":10},"x-feed-size-limit":100,"x-feed-sort":"enriched_at desc","x-feed-notice":"This feed contains at most 100 jobs (the most recently enriched). For the full corpus, use the paginated /stats/by-facet endpoint or /search.","x-generator":"yubhub-xml-generator","x-rights":"Free to redistribute with attribution: \"Data by YubHub (https://yubhub.co)\"","x-schema":"Each entry in `jobs` follows https://schema.org/JobPosting. YubHub-native raw fields carry `x-` prefix.","jobs":[{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_ac45e205-e7d"},"title":"Engineering Manager, Inference Routing and Performance","description":"<p><strong>About the role\\nEvery request that hits Claude , from claude.ai, the API, our cloud partners, or internal research , passes through a routing decision. Not a generic load balancer round-robin, but a decision that accounts for what&#39;s already cached where, which accelerator the request runs best on, and what else is in flight across the fleet.\\n\\nGet it right and you extract meaningfully more throughput from the same hardware. Get it wrong and you burn capacity, miss latency SLOs, or shed load that shouldn&#39;t have been shed.\\n\\nThe Inference Routing team owns this layer. We build the cluster-level routing and coordination plane for Anthropic&#39;s inference fleet , the system that sits between the API surface and the inference engines themselves, making fleet-wide efficiency decisions in real time.\\n\\nAs Anthropic moves from &quot;many independent inference replicas&quot; toward &quot;a single warehouse-scale computer running a coordinated program,&quot; Dystro is the coordination layer. 
This is a deeply technical team.\\n\\nThe engineers here design custom load-balancing algorithms, build quantitative models of system performance, debug latency spikes that cross kernel, network, and framework boundaries, and reason carefully about cache placement across thousands of accelerators.\\n\\nThey work shoulder-to-shoulder with teams that write kernels and ML framework internals.\\n\\nThe EM for this team doesn&#39;t need to write kernels, but they do need the systems depth to make architectural calls, evaluate deeply technical candidates, and spot when a proposed optimization will have second-order effects on the fleet.\\n\\nYou&#39;ll inherit a strong team of distributed-systems engineers, and you&#39;ll be accountable for two things that pull in different directions: shipping system-level performance improvements that measurably increase fleet throughput and efficiency, and running the team operationally so that deploys are safe, incidents are rare, and the teams who depend on Dystro can plan around you with confidence.\\n\\nThe job is holding both.\\n\\n## Representative work:\\nThings the Inference Routing EM actually spends time on:\\n- Deciding whether a proposed routing algorithm change is worth the deploy risk, given the modeled throughput gain and the blast radius if it regresses\\n- Sequencing a quarter where KV-cache offload, a new coordination protocol, and two model launches all compete for the same engineers\\n- Working through a persistent tail-latency regression with the team, walking down from fleet-level metrics to per-replica behavior to a root cause in the networking stack\\n- Building the case (with numbers) to peer teams for why a cross-team protocol change unlocks the next efficiency win\\n- Running the post-incident review after a cache-eviction bug caused a capacity event, and turning it into process changes that stick\\n- Interviewing a candidate who has built schedulers at supercomputing scale, and deciding whether they&#39;d 
be additive to a team that already goes deep\\n\\n## What you&#39;ll do:\\nDrive system-level performance\\n- Own the technical roadmap for cluster-level inference efficiency: routing decisions, cache placement and eviction, cross-replica coordination, and the protocols that keep routing and inference engines in sync\\n- Partner with the inference engine, kernels, and performance teams to identify fleet-level throughput and latency wins, then turn those into shipped improvements with measurable results\\n- Build the team&#39;s habit of quantitative performance modeling: claim a win only when you can measure it, and know before you ship what the expected effect is\\n\\nDeliver reliably and operate cleanly\\n- Set technical strategy for how routing evolves across heterogeneous hardware (GPUs, TPUs, Trainium) and across all our serving surfaces\\n- Run the team&#39;s operational backbone (on-call rotation, incident response, postmortem review, deploy safety) so the team can ship aggressively without the system becoming fragile\\n- Create clarity at a seam: Inference Routing sits between the API surface, the inference engines, and the cloud deployment teams. You&#39;ll make sure commitments are realistic, dependencies are understood, and nobody is surprised\\n\\nBuild and grow the team\\n- Develop and retain a strong existing team, and hire against the bar described above: people who can go to the OS and framework level when the problem demands it, and who care about production reliability\\n- Coach engineers through a roadmap where priorities shift with model launches, new hardware, and scaling demands. We pair a lot here; you&#39;ll help make that collaboration pattern productive\\n- Pick up slack when it matters. 
This is a small team in a critical path; sometimes the EM is the one unblocking a stuck deploy or synthesizing a design debate\\n\\n## You may be a good fit if you:\\n- Have 5+ years of engineering management experience, ideally with at least part of that leading teams on critical-path production infrastructure at scale\\n- Have a deep systems background: load balancing, scheduling, cache-coherent distributed state, high-performance networking, or similar. You need enough depth to make architectural calls about routing and efficiency, and to evaluate candidates who go to the kernel and framework level\\n- Have shipped performance improvements in large-scale systems and can explain, with numbers, what the impact was\\n- Have run production infrastructure with real operational stakes: on-call, incident response, capacity events, deploy discipline\\n- Are results-oriented with a bias toward impact, and comfortable working in a space where throughput, latency, stability, and feature velocity all pull in different directions\\n- Build strong relationships across team boundaries; this is a seam role, and much of the job is making sure other teams can rely on yours\\n- Are curious about machine learning systems. 
You don&#39;t need an ML research background, but you should want to learn how transformer inference actually works and how that shapes the systems problems\\n\\nStrong candidates may also have:\\n- Experience with LLM inference serving: KV caching, continuous batching, request scheduling, prefill/decode disaggregation\\n- Background in cluster schedulers, load balancers, service meshes, or coordination planes at scale\\n- Familiarity with heterogeneous accelerator fleets (GPU/TPU/Trainium) and how hardware differences affect workload placement\\n- Experience with GPU/accelerator programming, ML framework internals, or OS-level performance debugging, enough to follow and evaluate the technical work, not necessarily to do it daily\\n- Led teams at supercomputing or hyperscaler infrastructure scale\\n- Led teams through rapid-growth periods where hiring and onboarding competed with roadmap delivery\\n\\nThe annual compensation range for this role is listed below. For sales roles, the range provided is the role’s On Target Earnings (&quot;OTE&quot;) range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.\\nAnnual Salary: $405,000-$485,000 USD</strong></p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_ac45e205-e7d","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com/","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5155391008","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$405,000-$485,000 USD","x-skills-required":["engineering management","distributed systems","load balancing","scheduling","cache-coherent distributed state","high-performance networking","machine learning 
systems"],"x-skills-preferred":["LLM inference serving","cluster schedulers","load balancers","service meshes","coordination planes","heterogeneous accelerator fleets","GPU/TPU/Trainium","GPU/accelerator programming","ML framework internals","OS-level performance debugging"],"datePosted":"2026-04-18T15:56:48.587Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA | New York City, NY"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"engineering management, distributed systems, load balancing, scheduling, cache-coherent distributed state, high-performance networking, machine learning systems, LLM inference serving, cluster schedulers, load balancers, service meshes, coordination planes, heterogeneous accelerator fleets, GPU/TPU/Trainium, GPU/accelerator programming, ML framework internals, OS-level performance debugging","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":405000,"maxValue":485000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_468e1052-df7"},"title":"Staff AI & Agents Growth Product Manager","description":"<p>CoreWeave is seeking a Staff AI &amp; Agents Growth Product Manager to join our CIO Strategy &amp; Transformation organization. As a key member of our team, you will own the strategy, roadmap, and growth of AI agents that power CoreWeave&#39;s internal teams and workflows. On a day-to-day basis, you will identify high-impact use cases, design and launch agents, and continuously iterate based on data and user feedback.</p>\n<p>You will define how agents are sourced, built, evaluated, and scaled,whether developed in-house or with external vendors. 
This role requires close collaboration with business leaders, engineering, and IT to deliver agents that integrate deeply with enterprise systems and drive measurable business outcomes.</p>\n<p>Key responsibilities include:</p>\n<ul>\n<li>Identifying high-impact use cases and designing and launching AI agents</li>\n<li>Defining and owning product roadmaps with measurable outcomes</li>\n<li>Collaborating with business leaders, engineering, and IT to deliver agents that integrate deeply with enterprise systems</li>\n<li>Continuously iterating based on data and user feedback</li>\n</ul>\n<p>Requirements include:</p>\n<ul>\n<li>5-8 years of product management or closely related experience, including hands-on work with AI-, ML-, or automation-powered products</li>\n<li>Proven experience taking AI or data-driven products from concept to launch and driving adoption across multiple teams or functions</li>\n<li>Working knowledge of GenAI concepts (LLMs, embeddings, retrieval, prompt design, evaluation) and experience with AI platforms and APIs such as OpenAI, Gemini, Cohere, Glean, Perplexity, or similar</li>\n<li>Experience defining and owning product roadmaps with measurable outcomes (e.g., time saved, risk reduced, decision quality improved)</li>\n<li>Demonstrated ability to design and run experimentation frameworks, pilots, and A/B tests, using data to inform scale, iteration, or retirement decisions</li>\n</ul>\n<p>Preferred qualifications include:</p>\n<ul>\n<li>Experience building or scaling internal copilots or AI agents in an enterprise environment</li>\n<li>Exposure to AI compliance, security, governance, and human-in-the-loop supervision models</li>\n</ul>\n<p>If you&#39;re a curious and experienced product manager looking to join a dynamic team, please apply!</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_468e1052-df7","directApply":true,"hiringOrganization":{"@type":"Organization","name":"CoreWeave","sameAs":"https://www.coreweave.com","logo":"https://logos.yubhub.co/coreweave.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/coreweave/jobs/4638816006","x-work-arrangement":"hybrid","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"$161,000 to $237,000","x-skills-required":["product management","AI","ML","automation","GenAI","OpenAI","Gemini","Cohere","Glean","Perplexity"],"x-skills-preferred":["internal copilots","AI agents","AI compliance","security","governance","human-in-the-loop supervision"],"datePosted":"2026-04-18T15:52:28.947Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Livingston, NJ / New York, NY / Sunnyvale, CA / Bellevue, WA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"product management, AI, ML, automation, GenAI, OpenAI, Gemini, Cohere, Glean, Perplexity, internal copilots, AI agents, AI compliance, security, governance, human-in-the-loop supervision","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":161000,"maxValue":237000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_1f33d5a1-6ed"},"title":"Model Behavior Tutor - Epistemic Rigor & Truthfulness","description":"<p>You will ensure Grok reasons carefully, resists motivated reasoning, and communicates uncertainty and evidence proportionately.</p>\n<p>Responsibilities: Assess model outputs for factual accuracy, logical coherence, fallacious reasoning, and hidden assumptions. Identify subtle ideological capture, statistical fallacies, and rhetorical sleights of hand. 
Write exemplary reasoning that models intellectual honesty, source evaluation, nuanced weighing of primary and secondary sources, and scoping of confidence. Construct adversarial examples and red-team prompts to expose remaining epistemic weaknesses. Contribute to the definition and scaling of constitutional principles for truth-seeking behavior.</p>\n<p>Basic Qualifications: Published analytical work and academic training in a high-rigor field. Strong forecasting track record (e.g., Metaculus, Good Judgment), rigorous analysis, or public updating on errors. Deep knowledge in at least three of: philosophy of science, cognitive psychology, statistics, logic, linguistics, history, economics, or related disciplines. Ability to steel-man opposing views and separate settled knowledge from speculation. Habitual reliance on primary sources and base rates.</p>\n<p>Preferred Skills and Experience: Experience in intelligence analysis, investigative journalism, or academic peer review.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_1f33d5a1-6ed","directApply":true,"hiringOrganization":{"@type":"Organization","name":"xAI","sameAs":"https://www.xai.com/","logo":"https://logos.yubhub.co/xai.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/xai/jobs/5017518007","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time|part-time|contract","x-salary-range":"$40/hour - $70/hour","x-skills-required":["factual accuracy","logical coherence","fallacious reasoning","ideological capture","statistical fallacies","rhetorical sleights of hand","intellectual honesty","source evaluation","nuanced weighing of primary and secondary sources","scoping of confidence","adversarial examples","red-team prompts","epistemic weaknesses","definition and scaling of constitutional principles for truth-seeking 
behavior"],"x-skills-preferred":["intelligence analysis","investigative journalism","academic peer review"],"datePosted":"2026-04-18T15:48:46.461Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Remote"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"factual accuracy, logical coherence, fallacious reasoning, ideological capture, statistical fallacies, rhetorical sleights of hand, intellectual honesty, source evaluation, nuanced weighing of primary and secondary sources, scoping of confidence, adversarial examples, red-team prompts, epistemic weaknesses, definition and scaling of constitutional principles for truth-seeking behavior, intelligence analysis, investigative journalism, academic peer review"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_9cd0420a-99d"},"title":"Network Engineer, Capacity and Efficiency","description":"<p><strong>About the Role</strong></p>\n<p>We&#39;re looking for a network engineer who thinks in metrics first. You will use deep networking knowledge and rigorous measurement to figure out where and how bandwidth, latency, and dollars are being used, find optimization opportunities and land them.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Build the network observability stack. Design and deploy telemetry pipelines , sFlow/IPFIX, gNMI streaming, eBPF host probes , that turn packet counters into per-flow, per-tenant, per-workload cost and utilization data. Own the SLIs for backbone and DCN fabric health.</li>\n<li>Hunt for efficiency. Analyze inter-region traffic patterns, identify hot links and stranded capacity, and quantify the dollar impact. Build the models that tell us whether we should buy more capacity, or move the workload.</li>\n<li>Own QoS and traffic engineering. 
Design and operate traffic classification, marking, and shaping across the backbone. Make sure bulk checkpoint transfers don’t starve latency-sensitive inference, and that we’re not paying premium cross-region rates for traffic that could take the cheap path.</li>\n<li>Drive cost attribution. Tie network spend (egress, interconnect ports, transit, optical leases) back to the teams and workloads that generate it. Make network cost a first-class input to capacity planning and workload placement decisions.</li>\n<li>Influence decisions you don&#39;t own. A large fraction of this role is convincing other teams to act on what your data shows: making the case to research that a traffic pattern needs to change, to finance that an interconnect tranche is worth buying, to Systems Networking that a QoS policy needs rewriting.</li>\n</ul>\n<p><strong>Requirements</strong></p>\n<ul>\n<li>Have 5+ years operating large-scale production networks: data center fabrics (spine-leaf, Clos), backbone/WAN, or hyperscaler-adjacent environments.</li>\n<li>Are genuinely fluent across the stack: BGP (including policy and communities), ECMP, VXLAN/EVPN or equivalent overlays, QoS (DSCP, queuing, shaping), and L1/optical basics (DWDM, coherent, LAGs).</li>\n<li>Know at least one major CSP’s networking model deeply, whether AWS (VPC, TGW, Direct Connect, Gateway Load Balancer) or GCP (Shared VPC, Interconnect, Cloud Router, Network Connectivity Center), and understand how their overlays interact with physical underlays.</li>\n<li>Have built or operated network telemetry at scale: streaming telemetry (gNMI/OpenConfig), flow export (sFlow, IPFIX, NetFlow), or eBPF-based host-side instrumentation. You can reason about sampling, cardinality, and storage tradeoffs.</li>\n<li>Comfortable writing Python or Go to build tooling you’ll ship to production: telemetry pipelines, infrastructure-as-code, and config management and automation for network devices.</li>\n<li>Think quantitatively by default. 
You reach for a notebook or a Grafana query before you reach for an opinion, and you can turn messy counter data into a defensible cost model.</li>\n<li>Communicate crisply. You can explain to a finance partner why a 10% egress reduction matters, and to a network engineer why a specific ECMP imbalance is costing real money.</li>\n</ul>\n<p><strong>Nice to Have</strong></p>\n<ul>\n<li>SRE experience for large-scale network infrastructure: designing for reliability, defining SLOs/SLIs for network services, capacity planning with error budgets, and incident response for network-impacting outages at scale.</li>\n<li>Background on a cloud provider&#39;s networking team or a cloud networking product team: building or operating the interconnect, backbone, or SDN control plane from the provider side, not just consuming it as a customer.</li>\n<li>Familiarity with AI/ML infrastructure traffic patterns like collective communication (all-reduce, all-gather), checkpoint/weight transfer, and inference serving, and how these stress networks differently from traditional workloads in terms of burst behavior, flow synchronization, and bandwidth symmetry.</li>\n<li>Experience with HPC fabrics like InfiniBand, RoCE v2, lossless Ethernet, or custom high-radix topologies, and an understanding of how job placement, congestion management, and adaptive routing interact at scale.</li>\n<li>Background in traffic engineering for large backbones and the operational judgment to know when TE is worth the complexity.</li>\n<li>Hands-on time with multi-cloud connectivity: cross-cloud peering, private interconnect products, and the billing models that come with them.</li>\n<li>Experience building cost/chargeback systems for shared infrastructure, or FinOps exposure in a large cloud environment.</li>\n</ul>\n<p><strong>Representative Projects</strong></p>\n<ul>\n<li>Build a per-flow cost attribution pipeline that traces every byte of cross-region egress back to the team and workload that generated 
it</li>\n<li>Design QoS policy for the private backbone that prevents bulk checkpoint transfers from starving inference traffic</li>\n<li>Model whether it&#39;s cheaper to buy an additional 1.6Tb interconnect tranche or to re-route traffic through existing capacity</li>\n<li>Instrument DCN fabric utilization with streaming telemetry and build the Grafana dashboards that become the team&#39;s source of truth for network observability</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_9cd0420a-99d","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://anthropic.com","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5177143008","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["network engineering","network observability","telemetry pipelines","sFlow/IPFIX","gNMI streaming","eBPF host probes","BGP","ECMP","VXLAN/EVPN","QoS","DSCP","queuing","shaping","L1/optical basics","DWDM","coherent","LAGs","AWS","GCP","cloud networking","infrastructure-as-code","config management","automation","Python","Go","quantitative analysis","cost modeling","communication"],"x-skills-preferred":["SRE","cloud provider's networking team","cloud networking product team","AI/ML infrastructure traffic patterns","HPC fabrics","traffic engineering","multi-cloud connectivity","cost/chargeback systems","FinOps"],"datePosted":"2026-04-18T15:42:29.482Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA | New York City, NY"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"network engineering, network observability, telemetry pipelines, sFlow/IPFIX, gNMI streaming, eBPF host probes, 
BGP, ECMP, VXLAN/EVPN, QoS, DSCP, queuing, shaping, L1/optical basics, DWDM, coherent, LAGs, AWS, GCP, cloud networking, infrastructure-as-code, config management, automation, Python, Go, quantitative analysis, cost modeling, communication, SRE, cloud provider's networking team, cloud networking product team, AI/ML infrastructure traffic patterns, HPC fabrics, traffic engineering, multi-cloud connectivity, cost/chargeback systems, FinOps"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_63af8568-789"},"title":"Engineering Manager, Inference Routing and Performance","description":"<p><strong>About the role\\nEvery request that hits Claude, whether from claude.ai, the API, our cloud partners, or internal research, passes through a routing decision. Not a generic load balancer round-robin, but a decision that accounts for what&#39;s already cached where, which accelerator the request runs best on, and what else is in flight across the fleet.\\n\\nGet it right and you extract meaningfully more throughput from the same hardware. Get it wrong and you burn capacity, miss latency SLOs, or shed load that shouldn&#39;t have been shed.\\n\\nThe Inference Routing team owns this layer. We build the cluster-level routing and coordination plane for Anthropic&#39;s inference fleet: the system that sits between the API surface and the inference engines themselves, making fleet-wide efficiency decisions in real time.\\n\\nAs Anthropic moves from &quot;many independent inference replicas&quot; toward &quot;a single warehouse-scale computer running a coordinated program,&quot; Dystro is the coordination layer. 
This is a deeply technical team.\\n\\nThe engineers here design custom load-balancing algorithms, build quantitative models of system performance, debug latency spikes that cross kernel, network, and framework boundaries, and reason carefully about cache placement across thousands of accelerators.\\n\\nThey work shoulder-to-shoulder with teams that write kernels and ML framework internals.\\n\\nThe EM for this team doesn&#39;t need to write kernels, but they do need the systems depth to make architectural calls, evaluate deeply technical candidates, and spot when a proposed optimization will have second-order effects on the fleet.\\n\\nYou&#39;ll inherit a strong team of distributed-systems engineers, and you&#39;ll be accountable for two things that pull in different directions: shipping system-level performance improvements that measurably increase fleet throughput and efficiency, and running the team operationally so that deploys are safe, incidents are rare, and the teams who depend on Dystro can plan around you with confidence.\\n\\nThe job is holding both.\\n\\n## Representative work:\\nThings the Inference Routing EM actually spends time on:\\n- Deciding whether a proposed routing algorithm change is worth the deploy risk, given the modeled throughput gain and the blast radius if it regresses\\n- Sequencing a quarter where KV-cache offload, a new coordination protocol, and two model launches all compete for the same engineers\\n- Working through a persistent tail-latency regression with the team, walking down from fleet-level metrics to per-replica behavior to a root cause in the networking stack\\n- Building the case (with numbers) to peer teams for why a cross-team protocol change unlocks the next efficiency win\\n- Running the post-incident review after a cache-eviction bug caused a capacity event, and turning it into process changes that stick\\n- Interviewing a candidate who has built schedulers at supercomputing scale, and deciding whether they&#39;d 
be additive to a team that already goes deep\\n\\n## What you&#39;ll do:\\nDrive system-level performance\\n- Own the technical roadmap for cluster-level inference efficiency: routing decisions, cache placement and eviction, cross-replica coordination, and the protocols that keep routing and inference engines in sync\\n- Partner with the inference engine, kernels, and performance teams to identify fleet-level throughput and latency wins, then turn those into shipped improvements with measurable results\\n- Build the team&#39;s habit of quantitative performance modeling: claim a win only when you can measure it, and know before you ship what the expected effect is\\n\\nDeliver reliably and operate cleanly\\n- Set technical strategy for how routing evolves across heterogeneous hardware (GPUs, TPUs, Trainium) and across all our serving surfaces\\n- Run the team&#39;s operational backbone (on-call rotation, incident response, postmortem review, deploy safety) so the team can ship aggressively without the system becoming fragile\\n- Create clarity at a seam: Inference Routing sits between the API surface, the inference engines, and the cloud deployment teams. You&#39;ll make sure commitments are realistic, dependencies are understood, and nobody is surprised\\n\\nBuild and grow the team\\n- Develop and retain a strong existing team, and hire against the bar described above: people who can go to the OS and framework level when the problem demands it, and who care about production reliability\\n- Coach engineers through a roadmap where priorities shift with model launches, new hardware, and scaling demands. We pair a lot here; you&#39;ll help make that collaboration pattern productive\\n- Pick up slack when it matters. 
This is a small team in a critical path; sometimes the EM is the one unblocking a stuck deploy or synthesizing a design debate\\n\\n## You may be a good fit if you:\\n- Have 5+ years of engineering management experience, ideally with at least part of that leading teams on critical-path production infrastructure at scale\\n- Have a deep systems background: load balancing, scheduling, cache-coherent distributed state, high-performance networking, or similar. You need enough depth to make architectural calls about routing and efficiency, and to evaluate candidates who go to the kernel and framework level\\n- Have shipped performance improvements in large-scale systems and can explain, with numbers, what the impact was\\n- Have run production infrastructure with real operational stakes: on-call, incident response, capacity events, deploy discipline\\n- Are results-oriented with a bias toward impact, and comfortable working in a space where throughput, latency, stability, and feature velocity all pull in different directions\\n- Build strong relationships across team boundaries; this is a seam role, and much of the job is making sure other teams can rely on yours\\n- Are curious about machine learning systems. 
You don&#39;t need an ML research background, but you should want to learn how transformer inference actually works and how that shapes the systems problems\\n\\nStrong candidates may also have:\\n- Experience with LLM inference serving: KV caching, continuous batching, request scheduling, prefill/decode disaggregation\\n- Background in cluster schedulers, load balancers, service meshes, or coordination planes at scale\\n- Familiarity with heterogeneous accelerator fleets (GPU/TPU/Trainium) and how hardware differences affect workload placement\\n- Experience with GPU/accelerator programming, ML framework internals, or OS-level performance debugging, enough to follow and evaluate the technical work, not necessarily to do it daily\\n- Led teams at supercomputing or hyperscaler infrastructure scale\\n- Led teams through rapid-growth periods where hiring and onboarding competed with roadmap delivery\\n\\nThe annual compensation range for this role is listed below. For sales roles, the range provided is the role’s On Target Earnings (&quot;OTE&quot;) range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.\\nAnnual Salary: $405,000-$485,000 USD</strong></p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_63af8568-789","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com/","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5155391008","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$405,000-$485,000 USD","x-skills-required":["engineering management","deep systems background","load balancing","scheduling","cache-coherent distributed state","high-performance networking"],"x-skills-preferred":["LLM 
inference serving","cluster schedulers","load balancers","service meshes","coordination planes","heterogeneous accelerator fleets","GPU/TPU/Trainium","GPU/accelerator programming","ML framework internals","OS-level performance debugging"],"datePosted":"2026-04-18T15:37:38.038Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA | New York City, NY"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"engineering management, deep systems background, load balancing, scheduling, cache-coherent distributed state, high-performance networking, LLM inference serving, cluster schedulers, load balancers, service meshes, coordination planes, heterogeneous accelerator fleets, GPU/TPU/Trainium, GPU/accelerator programming, ML framework internals, OS-level performance debugging","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":405000,"maxValue":485000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_f99e744e-649"},"title":"Senior Quantum Engineer","description":"<p>We are looking for a Senior Quantum Engineer with expertise in superconducting qubits to join our hardware R&amp;D team.</p>\n<p>This role is for someone who wants to work on technically difficult problems that directly shape a quantum hardware platform. You will not be optimising within a fixed roadmap, you will help define it.</p>\n<p>You will design, execute, and interpret experiments that push the performance and scalability of superconducting qubit systems. You will develop new gate schemes, explore advanced control protocols, and test architectural ideas that can influence platform-level decisions.</p>\n<p>We operate in direct competition with the best-funded and most established teams in the world. 
We are looking for someone who finds that motivating.</p>\n<p>You will have a high degree of autonomy and ownership while working in a collaborative environment. If you have a strong technical hypothesis, you will be expected to test it rigorously and defend it with data. Strong ideas move quickly through experimental validation.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Develop experimental programs on superconducting qubit devices.</li>\n</ul>\n<ul>\n<li>Develop and optimise high-fidelity quantum gates.</li>\n</ul>\n<ul>\n<li>Design and test novel control and coupling strategies.</li>\n</ul>\n<ul>\n<li>Identify fundamental performance bottlenecks and isolate their physical origin.</li>\n</ul>\n<ul>\n<li>Analyse data with scientific rigour to extract insight, not just metrics.</li>\n</ul>\n<ul>\n<li>Collaborate across device, fabrication, and control teams to translate ideas into hardware progress.</li>\n</ul>\n<p><strong>Requirements</strong></p>\n<ul>\n<li>PhD in Physics, Applied Physics, Electrical Engineering, or a related field.</li>\n</ul>\n<ul>\n<li>Extensive hands-on experience with superconducting qubits.</li>\n</ul>\n<ul>\n<li>Strong background in gate design, pulse engineering, and decoherence mechanisms.</li>\n</ul>\n<ul>\n<li>Demonstrated ability to independently lead complex experimental efforts from concept to validated result.</li>\n</ul>\n<ul>\n<li>Intellectual independence, technical courage, and the ability to defend ideas with evidence.</li>\n</ul>\n<ul>\n<li>Motivation to compete at the highest technical level in the field.</li>\n</ul>\n<p>This position is best suited for someone who wants visible impact, real ownership, and the opportunity to help shape the direction of a quantum hardware platform, not just contribute to a small piece of it.</p>\n<p><strong>Additional Information</strong></p>\n<p>As engineering leaders, we value diversity and are committed to building a culture of inclusion to attract and engage innovative 
thinkers. Our technology, meant to serve all of humanity, cannot succeed if those who built it do not mirror the diversity of the communities we serve. Applications from women, minorities, and other under-represented groups are encouraged.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_f99e744e-649","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Rigetti Computing","sameAs":"https://www.rigetti.com","logo":"https://logos.yubhub.co/rigetti.com.png"},"x-apply-url":"https://jobs.lever.co/rigetti/288d5644-744a-4989-b129-d742b0c10e1d","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["superconducting qubits","gate design","pulse engineering","decoherence mechanisms","experimental programming","high-fidelity quantum gates","novel control and coupling strategies","data analysis"],"x-skills-preferred":[],"datePosted":"2026-04-17T12:54:54.438Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"United States"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"superconducting qubits, gate design, pulse engineering, decoherence mechanisms, experimental programming, high-fidelity quantum gates, novel control and coupling strategies, data analysis"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_94e4ded2-2fd"},"title":"Quantum Engineer","description":"<p>As a Quantum Engineer at Rigetti Computing, you will contribute to the development of our gate-based, superconducting quantum computers. 
Your work will involve characterizing experimental devices, assessing their performance, identifying areas for improvement, and providing feedback to internal development teams.</p>\n<p>Key responsibilities include:</p>\n<ul>\n<li><p>Providing technical focus and leadership on a particular R&amp;D topic, such as gate calibration, qubit relaxation and/or dephasing, piloting new research ideas in small processors, integration of new ideas into large processors, or implementation of quantum error correction.</p>\n</li>\n<li><p>Collaborating across diverse teams of expert engineers to turn experimental concepts into robust technologies that work at large QPU scales.</p>\n</li>\n<li><p>Staying current with cutting-edge research, synthesizing results from the literature, and implementing state-of-the-art calibration, error diagnostic, and system optimization techniques to enhance device performance.</p>\n</li>\n</ul>\n<p>Required qualifications include:</p>\n<ul>\n<li><p>PhD in Physics or similar applied scientific field, or Master&#39;s in Physics with 2+ years of relevant industry experience.</p>\n</li>\n<li><p>Demonstrated abilities in experimental problem solving and experimental research.</p>\n</li>\n<li><p>Expertise in the practical implementation of coherent quantum effects and control.</p>\n</li>\n<li><p>Proficiency using Python in a scientific context.</p>\n</li>\n<li><p>Ability to thrive in a strongly collaborative environment, with a team-first mindset.</p>\n</li>\n</ul>\n<p>Nice to have qualifications include:</p>\n<ul>\n<li><p>Experience calibrating and characterizing single- and two-qubit quantum gates in superconducting quantum devices.</p>\n</li>\n<li><p>Experience utilizing quantum simulation frameworks, such as QuTiP.</p>\n</li>\n<li><p>Proven capability to execute and effectively communicate outcomes from innovative research and development initiatives.</p>\n</li>\n<li><p>Software development experience in a collaborative industry 
setting.</p>\n</li>\n<li><p>Previous technical role in an industrial or startup setting.</p>\n</li>\n</ul>\n<p>Additional information:</p>\n<ul>\n<li><p>As engineering leaders, we value diversity and are committed to building a culture of inclusion to attract and engage innovative thinkers.</p>\n</li>\n<li><p>Applications from women, minorities, and other under-represented groups are encouraged.</p>\n</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_94e4ded2-2fd","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Rigetti Computing","sameAs":"https://www.rigetti.com","logo":"https://logos.yubhub.co/rigetti.com.png"},"x-apply-url":"https://jobs.lever.co/rigetti/0f66c1c3-12be-4e60-a7e1-25e494ee2841","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Python","Experimental problem solving","Experimental research","Coherent quantum effects and control","Quantum simulation frameworks"],"x-skills-preferred":["Calibrating and characterizing single- and two-qubit quantum gates","Utilizing quantum simulation frameworks","Software development","Previous technical role in an industrial or startup setting"],"datePosted":"2026-04-17T12:54:13.460Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Mountain View"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, Experimental problem solving, Experimental research, Coherent quantum effects and control, Quantum simulation frameworks, Calibrating and characterizing single- and two-qubit quantum gates, Utilizing quantum simulation frameworks, Software development, Previous technical role in an industrial or startup 
setting"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_b4d3cb52-7c4"},"title":"Senior ASIC Verification Engineer, Coherent High Speed Interconnect","description":"<p>We are now looking for a Senior ASIC Verification Engineer for our Coherent High Speed Interconnect team. For two decades, NVIDIA has pioneered visual computing, the art and science of computer graphics. With our invention of the GPU - the engine of modern visual computing - the field has grown to encompass video games, movie production, product design, medical diagnosis, and scientific research.</p>\n<p>Today, we stand at the beginning of the next era, the AI computing era, ignited by a new computing model, GPU deep learning. This new model - where deep neural networks are trained to recognize patterns from meaningful amounts of data - has been shown to be deeply effective at solving the most sophisticated problems in everyday life.</p>\n<p>As a Senior ASIC Verification Engineer at NVIDIA, you will verify the design and implementation of our innovative high speed coherent interconnects for our mobile SoCs and GPUs. This position offers the opportunity to have real impact in a multifaceted, technology-focused company, with product lines ranging from consumer graphics to self-driving cars and the growing field of artificial intelligence.</p>\n<p><strong>Responsibilities:</strong></p>\n<ul>\n<li>In this position, you will be responsible for verification of high-speed coherent interconnect design, architecture and golden models.</li>\n<li>You will be responsible for verifying the micro-architecture using sophisticated verification methodologies.</li>\n<li>As a member of our verification team, you&#39;ll understand the design &amp; implementation, define the verification scope, develop the verification infrastructure (Testbenches, BFMs, Checkers, Monitors), complete test/coverage plans, and verify the correctness of the design. 
This role will collaborate with architects, designers, emulation, and silicon verification teams to accomplish your tasks.</li>\n</ul>\n<p><strong>Requirements:</strong></p>\n<ul>\n<li>Bachelor’s or Master’s Degree (or equivalent experience)</li>\n<li>3+ years of relevant verification experience</li>\n<li>Experience in architecting test bench environments for unit level verification</li>\n<li>Background in verification using random stimulus along with functional coverage and assertion-based verification methodologies</li>\n<li>Prior Design or Verification experience of Coherent high-speed interconnects</li>\n<li>Knowledge of industry standard interconnect protocols like PCIE, CXL, CHI will be useful</li>\n<li>Strong background developing TBs from scratch using SV and UVM methodology is desired</li>\n<li>C++ programming language experience, scripting ability and an expertise in System Verilog</li>\n<li>Exposure to design and verification tools (VCS or equivalent simulation tools, debug tools like Debussy, GDB)</li>\n<li>Strong debugging and analytical skills</li>\n<li>Strong communication and interpersonal skills are required. A history of mentoring junior engineers and interns is a huge plus.</li>\n</ul>\n<p>NVIDIA is widely considered to be one of the technology world’s most desirable employers! We have some of the most forward-thinking and dedicated people in the world working for us. 
If you&#39;re creative and autonomous, we want to hear from you.</p>\n<p>You will also be eligible for equity and benefits.</p>\n<p>Applications for this job will be accepted at least until March 13, 2026.</p>\n<p>This posting is for an existing vacancy.</p>\n<p>NVIDIA uses AI tools in its recruiting processes.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_b4d3cb52-7c4","directApply":true,"hiringOrganization":{"@type":"Organization","name":"NVIDIA","sameAs":"https://nvidia.wd5.myworkdayjobs.com","logo":"https://logos.yubhub.co/nvidia.com.png"},"x-apply-url":"https://nvidia.wd5.myworkdayjobs.com/en-US/NVIDIAExternalCareerSite/job/US-CA-Santa-Clara/Senior-ASIC-Verification-Engineer--Coherent-High-Speed-Interconnect_JR2010025","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Verification of high-speed coherent interconnect design, architecture and golden models","Micro-architecture using sophisticated verification methodologies","Testbenches, BFMs, Checkers, Monitors","System Verilog","C++ programming language","Design and verification tools (VCS or equivalent simulation tools, debug tools like Debussy, GDB)"],"x-skills-preferred":["Random stimulus along with functional coverage and assertion-based verification methodologies","Prior Design or Verification experience of Coherent high-speed interconnects","Knowledge of industry standard interconnect protocols like PCIE, CXL, CHI"],"datePosted":"2026-03-09T20:46:52.056Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"US, CA, Santa Clara | US, MA, Westford | US, TX, Austin | US, OR, Hillsboro"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Verification of high-speed coherent interconnect design, architecture and golden models, 
Micro-architecture using sophisticated verification methodologies, Testbenches, BFMs, Checkers, Monitors, System Verilog, C++ programming language, Design and verification tools (VCS or equivalent simulation tools, debug tools like Debussy, GDB), Random stimulus along with functional coverage and assertion-based verification methodologies, Prior Design or Verification experience of Coherent high-speed interconnects, Knowledge of industry standard interconnect protocols like PCIE, CXL, CHI"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_61448503-aa0"},"title":"Design Verification Engineer","description":"<p><strong>Job Posting</strong></p>\n<p><strong>Design Verification Engineer</strong></p>\n<p><strong>Location</strong></p>\n<p>San Francisco</p>\n<p><strong>Employment Type</strong></p>\n<p>Full time</p>\n<p><strong>Location Type</strong></p>\n<p>Hybrid</p>\n<p><strong>Department</strong></p>\n<p>Scaling</p>\n<p><strong>Compensation</strong></p>\n<ul>\n<li>$226K – $445K • Offers Equity</li>\n</ul>\n<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. 
In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>\n<ul>\n<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>\n</ul>\n<ul>\n<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>\n</ul>\n<ul>\n<li>401(k) retirement plan with employer match</li>\n</ul>\n<ul>\n<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>\n</ul>\n<ul>\n<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>\n</ul>\n<ul>\n<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>\n</ul>\n<ul>\n<li>Mental health and wellness support</li>\n</ul>\n<ul>\n<li>Employer-paid basic life and disability coverage</li>\n</ul>\n<ul>\n<li>Annual learning and development stipend to fuel your professional growth</li>\n</ul>\n<ul>\n<li>Daily meals in our offices, and meal delivery credits as eligible</li>\n</ul>\n<ul>\n<li>Relocation support for eligible employees</li>\n</ul>\n<ul>\n<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>\n</ul>\n<p>More details about our benefits are available to candidates during the hiring process.</p>\n<p>This role is at-will and OpenAI reserves the right to modify base pay and other compensation components at any time based on individual performance, team or company results, or market conditions.</p>\n<p><strong>About the Team:</strong></p>\n<p>OpenAI’s Hardware organization develops silicon and system-level 
solutions designed for the unique demands of advanced AI workloads. The team is responsible for building the next generation of AI-native silicon while working closely with software and research partners to co-design hardware tightly integrated with AI models. In addition to delivering production-grade silicon for OpenAI’s supercomputing infrastructure, the team also creates custom design tools and methodologies that accelerate innovation and enable hardware optimized specifically for AI.</p>\n<p><strong>About the Role</strong> OpenAI is developing custom silicon to power the next generation of frontier AI models. We’re looking for experienced Design Verification (DV) Engineers to ensure functional correctness and robust design for our cutting-edge ML accelerators. You will play a key role in verifying complex hardware systems—ranging from individual IP blocks to subsystems and full SoC—working closely with architecture, RTL, software, and systems teams to deliver reliable silicon at scale.</p>\n<p><strong>In this role you will:</strong></p>\n<ul>\n<li>Own the verification of one or more of: custom IP blocks, subsystems (compute, interconnect, memory, etc.), or full-chip SoC-level functionality.</li>\n</ul>\n<ul>\n<li>Define verification plans based on architecture and microarchitecture specs.</li>\n</ul>\n<ul>\n<li>Develop constrained-random, directed, and system-level testbenches using SystemVerilog/UVM or equivalent methodologies.</li>\n</ul>\n<ul>\n<li>Build and maintain stimulus generators, checkers, monitors, and scoreboards to ensure high coverage and correctness.</li>\n</ul>\n<ul>\n<li>Drive bug triage, root cause analysis, and work closely with design teams on resolution.</li>\n</ul>\n<ul>\n<li>Contribute to regression infrastructure, coverage analysis, and closure for both block- and top-level environments.</li>\n</ul>\n<p><strong>You might thrive in this role if you have:</strong></p>\n<ul>\n<li>BS/MS in EE/CE/CS or equivalent with 3+ years of experience 
in hardware verification.</li>\n</ul>\n<ul>\n<li>Proven success verifying complex IP or SoC designs in industry-standard flows</li>\n</ul>\n<ul>\n<li>Proficient in SystemVerilog, UVM, and common simulation and debug tools (e.g., VCS, Questa, Verdi).</li>\n</ul>\n<ul>\n<li>Strong knowledge of computer architecture concepts, memory and cache systems, coherency, interconnects, and/or ML compute primitives.</li>\n</ul>\n<ul>\n<li>Familiarity with performance modeling, formal verification, or emulation is a plus.</li>\n</ul>\n<ul>\n<li>Experience working in fast-paced, cross-disciplinary teams with a passion for building reliable hardware.</li>\n</ul>\n<p><em>To comply with U.S. export control laws and regulations, candidates for this role may need to meet certain legal status requirements as provided in those laws and regulations.</em></p>\n<p><strong>About OpenAI</strong></p>\n<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. 
AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_61448503-aa0","directApply":true,"hiringOrganization":{"@type":"Organization","name":"OpenAI","sameAs":"https://jobs.ashbyhq.com","logo":"https://logos.yubhub.co/openai.com.png"},"x-apply-url":"https://jobs.ashbyhq.com/openai/3a415c1d-4f66-4578-8eb3-8b15ef0ab52b","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$226K – $445K • Offers Equity","x-skills-required":["SystemVerilog","UVM","VCS","Questa","Verdi","BS/MS in EE/CE/CS or equivalent","3+ years of experience in hardware verification","Proven success verifying complex IP or SoC designs in industry-standard flows"],"x-skills-preferred":["Computer architecture concepts","Memory and cache systems","Coherency","Interconnects","ML compute primitives","Performance modeling","Formal verification","Emulation"],"datePosted":"2026-03-06T18:41:15.010Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"SystemVerilog, UVM, VCS, Questa, Verdi, BS/MS in EE/CE/CS or equivalent, 3+ years of experience in hardware verification, Proven success verifying complex IP or SoC designs in industry-standard flows, Computer architecture concepts, Memory and cache systems, Coherency, Interconnects, ML compute primitives, Performance modeling, Formal verification, 
Emulation","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":226000,"maxValue":445000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_9eea3717-af5"},"title":"Principal Solutions Engineer – AMBA VIP & System Design Verification Strategist","description":"<p>We are seeking a highly experienced and passionate verification expert who thrives at the intersection of technology leadership and customer engagement.</p>\n<p><strong>What you&#39;ll do</strong></p>\n<ul>\n<li>Leading end-to-end deployment and integration of Verification IP at strategic customer accounts, ensuring seamless adoption and success.</li>\n<li>Defining and implementing robust verification strategies for Arm-based SoCs, with a focus on interconnect and coherency protocol validation.</li>\n</ul>\n<p><strong>What you need</strong></p>\n<ul>\n<li>Deep expertise in UVM, SystemVerilog, and advanced protocol verification methodologies.</li>\n<li>Hands-on experience with Verification IPs (VIPs) and Transactors in both simulation and emulation environments.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_9eea3717-af5","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Synopsys","sameAs":"https://careers.synopsys.com","logo":"https://logos.yubhub.co/careers.synopsys.com.png"},"x-apply-url":"https://careers.synopsys.com/job/bengaluru/principal-solutions-engineer-amba-vip-and-system-design-verification-strategist/44408/90545855808","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"employee","x-salary-range":null,"x-skills-required":["UVM","SystemVerilog","Verification IPs (VIPs)"],"x-skills-preferred":["Arm-based architectures","interconnects and cache coherency 
protocols"],"datePosted":"2026-03-06T07:32:29.209Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Bengaluru"}},"occupationalCategory":"Engineering","industry":"Technology","skills":"UVM, SystemVerilog, Verification IPs (VIPs), Arm-based architectures, interconnects and cache coherency protocols"}]}