{"version":"0.1","company":{"name":"YubHub","url":"https://yubhub.co","jobsUrl":"https://yubhub.co/jobs/skill/continuous-batching"},"x-facet":{"type":"skill","slug":"continuous-batching","display":"Continuous Batching","count":1},"x-feed-size-limit":100,"x-feed-sort":"enriched_at desc","x-feed-notice":"This feed contains at most 100 jobs (the most recently enriched). For the full corpus, use the paginated /stats/by-facet endpoint or /search.","x-generator":"yubhub-xml-generator","x-rights":"Free to redistribute with attribution: \"Data by YubHub (https://yubhub.co)\"","x-schema":"Each entry in `jobs` follows https://schema.org/JobPosting. YubHub-native raw fields carry `x-` prefix.","jobs":[{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_e37be4c0-4be"},"title":"AI Inference Engineer","description":"<p>Perplexity is looking for an AI Inference Engineer to join their team. The successful candidate will be responsible for developing APIs for AI inference, benchmarking and addressing bottlenecks throughout the inference stack, improving the reliability and observability of systems, and exploring novel research and implementing LLM inference optimisations.</p>\n<p><strong>What you&#39;ll do</strong></p>\n<p>As an AI Inference Engineer at Perplexity, you will have the opportunity to work on large-scale deployment of machine learning models for real-time inference. 
</p>\n<ul>\n<li>Develop APIs for AI inference that will be used by both internal and external customers</li>\n<li>Benchmark and address bottlenecks throughout our inference stack</li>\n<li>Improve the reliability and observability of our systems and respond to system outages</li>\n<li>Explore novel research and implement LLM inference optimisations</li>\n</ul>\n<p><strong>What you need</strong></p>\n<p>To succeed in this role, you will need:</p>\n<ul>\n<li>Experience with ML systems and deep learning frameworks (e.g. PyTorch, TensorFlow, ONNX)</li>\n<li>Familiarity with common LLM architectures and inference optimisation techniques (e.g. 
continuous batching, quantisation)</li>\n<li>Understanding of GPU architectures or experience with GPU kernel programming using CUDA</li>\n</ul>","url":"https://yubhub.co/jobs/job_e37be4c0-4be","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Perplexity","sameAs":"https://www.perplexity.ai/","logo":"https://logos.yubhub.co/perplexity.ai.png"},"x-apply-url":"https://jobs.ashbyhq.com/perplexity/8a976851-9bef-4b07-8d36-567fa9540aef","x-work-arrangement":"onsite","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":"$220K – $405K","x-skills-required":["ML systems","deep learning frameworks","LLM architectures","inference optimisation techniques","GPU architectures","GPU kernel programming"],"x-skills-preferred":["continuous batching","quantisation","PyTorch","TensorFlow","ONNX"],"datePosted":"2026-03-04T12:24:24.046Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, New York City, Palo Alto"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"ML systems, deep learning frameworks, LLM architectures, inference optimisation techniques, GPU architectures, GPU kernel programming, continuous batching, quantisation, PyTorch, TensorFlow, ONNX","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":220000,"maxValue":405000,"unitText":"YEAR"}}}]}