{"version":"0.1","company":{"name":"YubHub","url":"https://yubhub.co","jobsUrl":"https://yubhub.co/jobs/skill/capacity-forecasting"},"x-facet":{"type":"skill","slug":"capacity-forecasting","display":"Capacity Forecasting","count":2},"x-feed-size-limit":100,"x-feed-sort":"enriched_at desc","x-feed-notice":"This feed contains at most 100 jobs (the most recently enriched). For the full corpus, use the paginated /stats/by-facet endpoint or /search.","x-generator":"yubhub-xml-generator","x-rights":"Free to redistribute with attribution: \"Data by YubHub (https://yubhub.co)\"","x-schema":"Each entry in `jobs` follows https://schema.org/JobPosting. YubHub-native raw fields carry `x-` prefix.","jobs":[{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_45fb1c5c-dbe"},"title":"Supply Chain Manager","description":"<p>We&#39;re looking for a dynamic Supply Chain Manager to join Biffa Polymers in Redcar, leading the end-to-end operation across procurement, warehousing, and logistics.</p>\n<p>You will provide operational leadership for the Supply Chain function, coordinating materials, warehousing, and fulfilment activities at Biffa Polymers Redcar to ensure production schedules, inventory levels, and customer requirements are consistently met. 
This role combines strategic oversight and operational accountability, aligning procurement, warehousing, inventory, and transport to deliver business objectives.</p>\n<p>Your core responsibilities will include:</p>\n<ul>\n<li>Leading end-to-end supply chain operations, owning feedstock procurement from internal Biffa sites (MRFs/PRFs) and external suppliers, ensuring quality, cost-effectiveness, and alignment with production and customer demand</li>\n<li>Partnering with Commercial teams to coordinate customer deliveries, ensuring accuracy across documentation, compliance, and scheduling</li>\n<li>Taking ownership of New Product Introduction (NPI) activities and customer trials, acting as the central point of contact to ensure operational readiness across materials, production, and fulfilment</li>\n<li>Collaborating cross-functionally to deliver operational plans, proactively managing service performance, risks, and issues to meet and exceed contractual commitments</li>\n<li>Monitoring and driving Supply Chain KPIs, using data and performance metrics to improve service levels and operational efficiency</li>\n<li>Championing continuous improvement initiatives, identifying opportunities for cost reduction, process standardisation, and enhanced service delivery</li>\n<li>Owning the non-conformance (NCR) process, leading root cause analysis and implementing corrective and preventative actions with internal teams and external partners</li>\n<li>Supporting strategic supply chain planning, including ERP system integrity, capacity forecasting, team development, and active participation in health and safety initiatives</li>\n</ul>\n<p>Our essential requirements include:</p>\n<ul>\n<li>A degree-level qualification in Supply Chain or a related field</li>\n<li>Advanced IT skills across Microsoft Office (Word, Excel, Outlook, PowerPoint, Access)</li>\n<li>A full, current UK driving licence</li>\n<li>Minimum 5 years&#39; experience in a materials planning, fulfilment, supply 
chain, or operations management role</li>\n<li>Professional supply chain qualification (desirable)</li>\n<li>NVQ Level 3 in Management (desirable)</li>\n<li>Previous experience within the recycling industry (desirable)</li>\n</ul>\n<p>And here&#39;s why you&#39;ll love it at Biffa:</p>\n<ul>\n<li>Competitive salary</li>\n<li>Ongoing career development, training and coaching – because if you don’t grow, we don’t grow</li>\n<li>Generous pension scheme</li>\n<li>Retail and leisure discounts</li>\n<li>Holiday and travel discounts</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_45fb1c5c-dbe","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Biffa Polymers","sameAs":"https://www.biffa.co.uk/","logo":"https://logos.yubhub.co/biffa.co.uk.png"},"x-apply-url":"https://apply.workable.com/j/B74C8E1DC6","x-work-arrangement":"onsite","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Supply Chain Management","Procurement","Warehousing","Logistics","Microsoft Office","ERP System Integrity","Capacity Forecasting","Team Development","Health and Safety"],"x-skills-preferred":[],"datePosted":"2026-04-24T14:11:36.261Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Redcar"}},"employmentType":"FULL_TIME","occupationalCategory":"Operations","industry":"Manufacturing","skills":"Supply Chain Management, Procurement, Warehousing, Logistics, Microsoft Office, ERP System Integrity, Capacity Forecasting, Team Development, Health and Safety"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_2c095439-13b"},"title":"Principal Software Engineer","description":"<p>Microsoft Advertising is seeking a Principal Software Engineer to join our Ads Engineering Platform team and advance the core 
capabilities of our ad-serving infrastructure: the engine that powers advertising across Bing Search, MSN, Microsoft Start, and shopping experiences in the Edge browser.</p>\n<p>Our serving stack operates at massive global scale, delivering millions of ad requests per second through a geo-distributed, low-latency system that combines large-scale GPU/CPU inference, real-time bidding, and intelligent ranking pipelines.</p>\n<p>This role focuses on advancing the performance, efficiency, and scalability of the next generation of model serving and inference platforms for Ads.</p>\n<p>As a senior technical leader, you’ll design and optimize high-performance serving systems and GPU inference frameworks that drive measurable latency improvements and cost efficiency across Microsoft’s ad ecosystem.</p>\n<p>You’ll work across the stack, from CUDA kernel tuning and NUMA-aware threading to large-scale distributed orchestration and model deployment for deep learning and LLM workloads.</p>\n<p>This is a rare opportunity to shape the architecture of one of the world’s most advanced, mission-critical online serving platforms, collaborating with world-class engineers to deliver innovation at Internet scale.</p>\n<p>Microsoft’s mission is to empower every person and every organization on the planet to achieve more.</p>\n<p>As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals.</p>\n<p>Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.</p>\n<p>Starting January 26, 2026, Microsoft AI (MAI) employees who live within a 50-mile commute of a designated Microsoft office in the U.S. 
or 25-mile commute of a non-U.S., country-specific location are expected to work from the office at least four days per week.</p>\n<p>This expectation is subject to local law and may vary by jurisdiction.</p>\n<p>Responsibilities:</p>\n<p>Design and lead the development of large-scale, distributed online serving systems, including GPU-accelerated and CPU-based ranking/inference pipelines, to process millions of ad requests per second with ultra-low latency, high throughput, and solid reliability.</p>\n<p>Architect and optimize end-to-end inference infrastructure, including model serving, batching/streaming, caching, scheduling, and resource orchestration across heterogeneous hardware (GPU, CPU, and memory tiers).</p>\n<p>Profile and optimize performance across the full stack, from CUDA kernels and GPU pipelines to CPU threads and OS-level scheduling, identifying bottlenecks, tuning latency tails, and improving cost efficiency through advanced profiling and instrumentation.</p>\n<p>Own live-site reliability as a DRI: design telemetry, alerting, and fault-tolerance mechanisms; drive rapid diagnosis and mitigation of performance regressions or outages in globally distributed systems.</p>\n<p>Collaborate and mentor across teams, driving architecture reviews, enforcing engineering excellence, promoting system-level optimization practices, and mentoring others in deep debugging, profiling, and performance engineering.</p>\n<p>Qualifications:</p>\n<p>Required Qualifications:</p>\n<p>Bachelor’s Degree in Computer Science or related technical field AND 6+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR equivalent experience.</p>\n<p>Preferred Qualifications:</p>\n<p>Master’s Degree in Computer Science or related technical field AND 8+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR Bachelor’s Degree in 
Computer Science or related technical field AND 12+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR equivalent experience.</p>\n<p>Industry experience in advertising or search engine backend systems, such as large-scale ad ranking, real-time bidding (RTB), or relevance-serving infrastructure.</p>\n<p>Hands-on experience with real-time data streaming systems (Kafka, Flink, Spark Streaming), feature-store integration, and multi-region deployment for low-latency, globally distributed services.</p>\n<p>Familiarity with LLM inference optimization: model sharding, tensor/kv-cache parallelism, paged attention, continuous batching, quantization (AWQ/FP8), and hybrid CPU–GPU orchestration.</p>\n<p>Demonstrated success operating large-scale systems with SLA-based capacity forecasting, autoscaling, and performance telemetry; proven leadership in cross-functional architecture initiatives and technical mentorship.</p>\n<p>Passion for performance engineering, observability, and deep systems debugging, with a solid drive to push the limits of serving infrastructure for the next generation of ads and AI models.</p>\n<p>Deep expertise in GPU inference frameworks such as NVIDIA Triton Inference Server, CUDA, and TensorRT, including hands-on experience implementing custom CUDA kernels, optimizing memory movement (H2D/D2H), overlapping compute and I/O, and maximizing GPU occupancy and kernel fusion for deep learning and LLM workloads.</p>\n<p>Solid understanding of model-serving trade-offs: batching vs. streaming, latency vs. 
throughput, quantization (FP16/BF16/INT8), dynamic batching, continuous model rollout, and adaptive inference scheduling across CPU/GPU tiers.</p>\n<p>Proven ability to profile and optimize GPU and system workloads, including tensor/memory alignment, compute–memory balancing, embedding table management, parameter servers, hierarchical caching, and vectorized inference for transformer/LLM architectures.</p>\n<p>Expertise in low-level system and OS internals, including multi-threading, process scheduling, NUMA-aware memory allocation, lock-free data structures, context switching, I/O stack tuning (NVMe, RDMA), kernel bypass (DPDK, io_uring), and CPU/GPU affinity optimization for large-scale serving pipelines.</p>\n<p>#MicrosoftAI Software Engineering IC5 – The typical base pay range for this role across the U.S. is USD $139,900 – $274,800 per year. There is a different range applicable to specific work locations, within the San Francisco Bay area and New York City metropolitan area, and the base pay range for this role in those locations is USD $188,000 – $304,200 per year.</p>\n<p>Certain roles may be eligible for benefits and other compensation.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_2c095439-13b","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Microsoft AI","sameAs":"https://microsoft.ai","logo":"https://logos.yubhub.co/microsoft.ai.png"},"x-apply-url":"https://microsoft.ai/job/principal-software-engineer-41/","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$139,900 - $274,800 per year","x-skills-required":["C","C++","C#","Java","JavaScript","Python","NVIDIA Triton Inference Server","CUDA","TensorRT","Kafka","Flink","Spark Streaming","GPU inference frameworks","LLM inference optimization","model sharding","tensor/kv-cache parallelism","paged 
attention","continuous batching","quantization","AWQ/FP8","hybrid CPU–GPU orchestration","SLA-based capacity forecasting","autoscaling","performance telemetry","cross-functional architecture initiatives","technical mentorship","performance engineering","observability","deep systems debugging","low-level system and OS internals","multi-threading","process scheduling","NUMA-aware memory allocation","lock-free data structures","context switching","I/O stack tuning","kernel bypass","CPU/GPU affinity optimization"],"x-skills-preferred":[],"datePosted":"2026-04-24T12:12:57.301Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Redmond"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"C, C++, C#, Java, JavaScript, Python, NVIDIA Triton Inference Server, CUDA, TensorRT, Kafka, Flink, Spark Streaming, GPU inference frameworks, LLM inference optimization, model sharding, tensor/kv-cache parallelism, paged attention, continuous batching, quantization, AWQ/FP8, hybrid CPU–GPU orchestration, SLA-based capacity forecasting, autoscaling, performance telemetry, cross-functional architecture initiatives, technical mentorship, performance engineering, observability, deep systems debugging, low-level system and OS internals, multi-threading, process scheduling, NUMA-aware memory allocation, lock-free data structures, context switching, I/O stack tuning, kernel bypass, CPU/GPU affinity optimization","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":139900,"maxValue":274800,"unitText":"YEAR"}}}]}