{"version":"0.1","company":{"name":"YubHub","url":"https://yubhub.co","jobsUrl":"https://yubhub.co/jobs/skill/service-orchestration"},"x-facet":{"type":"skill","slug":"service-orchestration","display":"Service Orchestration","count":7},"x-feed-size-limit":100,"x-feed-sort":"enriched_at desc","x-feed-notice":"This feed contains at most 100 jobs (the most recently enriched). For the full corpus, use the paginated /stats/by-facet endpoint or /search.","x-generator":"yubhub-xml-generator","x-rights":"Free to redistribute with attribution: \"Data by YubHub (https://yubhub.co)\"","x-schema":"Each entry in `jobs` follows https://schema.org/JobPosting. YubHub-native raw fields carry `x-` prefix.","jobs":[{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_c38cbb6f-4b7"},"title":"Staff Software Engineer, Inference","description":"<p>Job Title: Staff Software Engineer, Inference\\n\\nLocation: Dublin, IE\\n\\nDepartment: Software Engineering - Infrastructure\\n\\nJob Description:\\n\\nAbout Anthropic\\n\\nAnthropic&#39;s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole.\\n\\nAbout the role:\\n\\nOur Inference team is responsible for building and maintaining the critical systems that serve Claude to millions of users worldwide. We bring Claude to life by serving our models via the industry&#39;s largest compute-agnostic inference deployments. We are responsible for the entire stack from intelligent request routing to fleet-wide orchestration across diverse AI accelerators.\\n\\nThe team has a dual mandate: maximizing compute efficiency to serve our explosive customer growth, while enabling breakthrough research by giving our scientists the high-performance inference infrastructure they need to develop next-generation models. 
We tackle complex, distributed systems challenges across multiple accelerator families and emerging AI hardware running in multiple cloud platforms.\\n\\nAs a Staff Software Engineer on our Inference team, you will work end to end, identifying and addressing key infrastructure blockers to serve Claude to millions of users while enabling breakthrough AI research. Strong candidates should have familiarity with performance optimization, distributed systems, large-scale service orchestration, and intelligent request routing. Familiarity with LLM inference optimization, batching strategies, and multi-accelerator deployments is highly encouraged but not strictly necessary.\\n\\nStrong candidates may also have experience with:\\n\\n- High-performance, large-scale distributed systems\\n\\n- Implementing and deploying machine learning systems at scale\\n\\n- Load balancing, request routing, or traffic management systems\\n\\n- LLM inference optimization, batching, and caching strategies\\n\\n- Kubernetes and cloud infrastructure (AWS, GCP)\\n\\n- Python or Rust\\n\\nYou may be a good fit if you:\\n\\n- Have significant software engineering experience, particularly with distributed systems\\n\\n- Are results-oriented, with a bias towards flexibility and impact\\n\\n- Pick up slack, even if it goes outside your job description\\n\\n- Want to learn more about machine learning systems and infrastructure\\n\\n- Thrive in environments where technical excellence directly drives both business results and research breakthroughs\\n\\n- Care about the societal impacts of your work\\n\\nRepresentative projects across the org:\\n\\n- Designing intelligent routing algorithms that optimize request distribution across thousands of accelerators\\n\\n- Autoscaling our compute fleet to dynamically match supply with demand across production, research, and experimental workloads\\n\\n- Building production-grade deployment pipelines for releasing new models to millions of users\\n\\n- 
Integrating new AI accelerator platforms to maintain our hardware-agnostic competitive advantage\\n\\n- Contributing to new inference features (e.g., structured sampling, prompt caching)\\n\\n- Supporting inference for new model architectures\\n\\n- Analyzing observability data to tune performance based on real-world production workloads\\n\\n- Managing multi-region deployments and geographic routing for global customers\\n\\nDeadline to apply: None. Applications will be reviewed on a rolling basis.\\n\\nThe annual compensation range for this role is listed below.\\n\\nFor sales roles, the range provided is the role’s On Target Earnings (&quot;OTE&quot;) range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.\\n\\nAnnual Salary: €295.000-€355.000 EUR\\n\\nLogistics\\n\\nMinimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience\\n\\nRequired field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience\\n\\nMinimum years of experience: Years of experience required will correlate with the internal job level requirements for the position\\n\\nLocation-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.\\n\\nVisa sponsorship: We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.\\n\\nWe encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. 
Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you&#39;re interested in this work. We think AI systems like the ones we&#39;re building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.\\n\\nYour safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you&#39;re ever unsure about a communication, don&#39;t click any links; visit anthropic.com/careers directly for confirmed position openings.\\n\\nHow we&#39;re different\\n\\nWe believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact, advancing our long-term goals of steerable, trustworthy AI, rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We&#39;re an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.\\n\\nThe easiest way to understand our research directions is to read our recent research. 
This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI &amp; Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.\\n\\nCome work with us!\\n\\nAnthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues. Guidance on Candidates&#39; AI Usage: Learn about our policy for using AI in our application process</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_c38cbb6f-4b7","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com/","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5150472008","x-work-arrangement":"hybrid","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"€295.000-€355.000 EUR","x-skills-required":["performance optimization","distributed systems","large-scale service orchestration","intelligent request routing","LLM inference optimization","batching strategies","multi-accelerator deployments","Kubernetes","cloud infrastructure","Python","Rust"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:57:00.340Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Dublin, IE"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"performance optimization, distributed systems, large-scale service orchestration, intelligent request routing, LLM inference optimization, batching strategies, multi-accelerator deployments, Kubernetes, cloud infrastructure, Python, 
Rust"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_32c0c69a-037"},"title":"Staff Software Engineer, Inference","description":"<p><strong>About the role:</strong></p>\n<p>Our Inference team is responsible for building and maintaining the critical systems that serve Claude to millions of users worldwide. We bring Claude to life by serving our models via the industry&#39;s largest compute-agnostic inference deployments. We are responsible for the entire stack from intelligent request routing to fleet-wide orchestration across diverse AI accelerators.</p>\n<p>As a Staff Software Engineer on our Inference team, you will work end to end, identifying and addressing key infrastructure blockers to serve Claude to millions of users while enabling breakthrough AI research. Strong candidates should have familiarity with performance optimization, distributed systems, large-scale service orchestration, and intelligent request routing. 
Familiarity with LLM inference optimization, batching strategies, and multi-accelerator deployments is highly encouraged but not strictly necessary.</p>\n<p><strong>Responsibilities:</strong></p>\n<ul>\n<li>Work end to end on identifying and addressing key infrastructure blockers to serve Claude to millions of users while enabling breakthrough AI research</li>\n<li>Collaborate with the team to design and implement solutions to complex problems</li>\n<li>Develop and maintain large-scale distributed systems</li>\n<li>Implement and deploy machine learning systems at scale</li>\n<li>Build load balancing, request routing, or traffic management systems</li>\n<li>Apply LLM inference optimization, batching, and caching strategies</li>\n<li>Work with Kubernetes and cloud infrastructure (AWS, GCP)</li>\n<li>Develop in Python or Rust</li>\n</ul>\n<p><strong>Requirements:</strong></p>\n<ul>\n<li>Significant software engineering experience, particularly with distributed systems</li>\n<li>A results-oriented mindset, with a bias towards flexibility and impact</li>\n<li>Willingness to pick up slack, even if it goes outside your job description</li>\n<li>A desire to learn more about machine learning systems and infrastructure</li>\n<li>The ability to thrive in environments where technical excellence directly drives both business results and research breakthroughs</li>\n<li>Concern for the societal impacts of your work</li>\n</ul>\n<p><strong>Benefits:</strong></p>\n<ul>\n<li>Competitive compensation and benefits</li>\n<li>Optional equity donation matching</li>\n<li>Generous vacation and parental leave</li>\n<li>Flexible working hours</li>\n<li>Lovely office space in which to collaborate with colleagues</li>\n</ul>\n<p><strong>Application Instructions:</strong></p>\n<p>If you&#39;re interested in this role, please submit your application through our website. 
We look forward to hearing from you!</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_32c0c69a-037","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com/","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5150472008","x-work-arrangement":"hybrid","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"€295.000-€355.000 EUR","x-skills-required":["performance optimization","distributed systems","large-scale service orchestration","intelligent request routing","LLM inference optimization","batching strategies","multi-accelerator deployments","Kubernetes","cloud infrastructure","Python","Rust"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:56:14.384Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Dublin, IE"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"performance optimization, distributed systems, large-scale service orchestration, intelligent request routing, LLM inference optimization, batching strategies, multi-accelerator deployments, Kubernetes, cloud infrastructure, Python, Rust"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_e394b0fa-2ba"},"title":"Staff Software Engineer, Inference","description":"<p><strong>About the role</strong></p>\n<p>Our Inference team is responsible for building and maintaining the critical systems that serve Claude to millions of users worldwide. We bring Claude to life by serving our models via the industry&#39;s largest compute-agnostic inference deployments. 
We are responsible for the entire stack from intelligent request routing to fleet-wide orchestration across diverse AI accelerators.</p>\n<p>As a Staff Software Engineer on our Inference team, you will work end to end, identifying and addressing key infrastructure blockers to serve Claude to millions of users while enabling breakthrough AI research. Strong candidates should have familiarity with performance optimization, distributed systems, large-scale service orchestration, and intelligent request routing. Familiarity with LLM inference optimization, batching strategies, and multi-accelerator deployments is highly encouraged but not strictly necessary.</p>\n<p><strong>Strong candidates may also have experience with</strong></p>\n<ul>\n<li>High-performance, large-scale distributed systems</li>\n<li>Implementing and deploying machine learning systems at scale</li>\n<li>Load balancing, request routing, or traffic management systems</li>\n<li>LLM inference optimization, batching, and caching strategies</li>\n<li>Kubernetes and cloud infrastructure (AWS, GCP)</li>\n<li>Python or Rust</li>\n</ul>\n<p><strong>You may be a good fit if you</strong></p>\n<ul>\n<li>Have significant software engineering experience, particularly with distributed systems</li>\n<li>Are results-oriented, with a bias towards flexibility and impact</li>\n<li>Pick up slack, even if it goes outside your job description</li>\n<li>Want to learn more about machine learning systems and infrastructure</li>\n<li>Thrive in environments where technical excellence directly drives both business results and research breakthroughs</li>\n<li>Care about the societal impacts of your work</li>\n</ul>\n<p><strong>Representative projects across the org</strong></p>\n<ul>\n<li>Designing intelligent routing algorithms that optimize request distribution across thousands of accelerators</li>\n<li>Autoscaling our compute fleet to dynamically match supply with demand across production, research, and experimental 
workloads</li>\n<li>Building production-grade deployment pipelines for releasing new models to millions of users</li>\n<li>Integrating new AI accelerator platforms to maintain our hardware-agnostic competitive advantage</li>\n<li>Contributing to new inference features (e.g., structured sampling, prompt caching)</li>\n<li>Supporting inference for new model architectures</li>\n<li>Analyzing observability data to tune performance based on real-world production workloads</li>\n<li>Managing multi-region deployments and geographic routing for global customers</li>\n</ul>\n<p><strong>Deadline to apply</strong></p>\n<p>None. Applications will be reviewed on a rolling basis.</p>\n<p><strong>Annual compensation range</strong></p>\n<p>The annual compensation range for this role is £325,000-£390,000 GBP.</p>\n<p><strong>Logistics</strong></p>\n<ul>\n<li>Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience</li>\n<li>Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience</li>\n<li>Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position</li>\n<li>Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</li>\n<li>Visa sponsorship: We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</li>\n</ul>\n<p><strong>Why work with us?</strong></p>\n<p>We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. 
And we value impact, advancing our long-term goals of steerable, trustworthy AI, rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We&#39;re an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.</p>\n<p>The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI &amp; Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.</p>\n<p><strong>Come work with us!</strong></p>\n<p>Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_e394b0fa-2ba","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com/","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5097742008","x-work-arrangement":"hybrid","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"£325,000-£390,000 GBP","x-skills-required":["performance optimization","distributed systems","large-scale service orchestration","intelligent request routing","LLM inference optimization","batching strategies","multi-accelerator deployments","Kubernetes","cloud 
infrastructure","Python","Rust"],"x-skills-preferred":["high-performance distributed systems","machine learning systems","load balancing","request routing","traffic management","caching strategies"],"datePosted":"2026-04-18T15:50:52.588Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"London, UK"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"performance optimization, distributed systems, large-scale service orchestration, intelligent request routing, LLM inference optimization, batching strategies, multi-accelerator deployments, Kubernetes, cloud infrastructure, Python, Rust, high-performance distributed systems, machine learning systems, load balancing, request routing, traffic management, caching strategies","baseSalary":{"@type":"MonetaryAmount","currency":"GBP","value":{"@type":"QuantitativeValue","minValue":325000,"maxValue":390000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_8db20763-21b"},"title":"AI Product Owner - Operations","description":"<p>The Role</p>\n<p>Belong is building the Residential Operating System: a fully integrated, AI-powered platform that manages homes, coordinates thousands of real-world service moments, and creates authentic belonging experiences for homeowners and residents. The member journey is the product. But the Residential OS only delivers on that promise if the operational machinery running beneath it is intelligent, instrumented, and self-improving.</p>\n<p>Most companies say they are AI-first. At Belong, it means something specific: by the end of 2025, the majority of communications across sales, leasing, homecare, and concierge functions are AI-generated. Human Advisors and Concierges handle trust-critical moments. 
AI agents handle everything else: triage, scheduling, status updates, escalation routing, vendor coordination, documentation.</p>\n<p>The operations product surface is where that architecture lives or dies. As Product Owner, Operations, your job is to design, deploy, and relentlessly improve the AI-powered system that runs the homeowner and resident journey from inspection through occupancy. You are not writing requirements for a future that engineers will build someday. You are shipping agent-driven workflows today, measuring their quality and deflection rates next week, and iterating the week after.</p>\n<p>This role is for someone who understands that the frontier of operations is not better dashboards. It is autonomous systems that perform with the judgment of your best operator, at infinite scale, at the moment the member needs it.</p>\n<p>Responsibilities</p>\n<ul>\n<li><p>AI agent architecture across the operational journey. Every operational phase, from home preparation, move-in orchestration, homecare and maintenance, to Pro coordination and vendor scheduling, has a human workflow today and an AI-assisted target state. You will define that target state phase by phase: what the agent handles autonomously, what triggers human review, what escalates immediately.</p>\n</li>\n<li><p>The agent-human handoff model. The Member Journey Brief is explicit: humans are deployed at trust-critical moments. AI handles orchestration, speed, and precision behind the scenes. You are the person who defines exactly where that line sits, and who moves it systematically as agent quality improves.</p>\n</li>\n<li><p>LLM-powered communication workflows. Belong&#39;s target is 80% AI-generated communications across operational functions by Q3. 
You will own the product layer that makes this real for operations: the prompt architecture, context retrieval pipelines, output quality review systems, and the feedback loops that improve generation quality over time.</p>\n</li>\n<li><p>Foundation as the AI control panel. Foundation is where Belong&#39;s operational teams live. Every tool your squad ships into Foundation is either creating leverage for humans or replacing manual work with agent-driven automation. You will define the roadmap for Foundation&#39;s evolution from task management system to AI control panel: where agents surface for review, where exceptions queue for human action, where quality scores and deflection rates are visible in real time.</p>\n</li>\n<li><p>Operational instrumentation and model feedback. AI systems degrade without structured feedback. You will build the instrumentation that captures ground truth: CSAT signals, escalation rates, rework rates, SLA breach patterns, and member sentiment. You will design the feedback loops that push this signal back into model evaluation and prompt improvement.</p>\n</li>\n</ul>\n<p>The AI Stack You Will Work With</p>\n<ul>\n<li>LLM-based communication generation with context injection from CRM and operational state</li>\n<li>Agentic scheduling and coordination workflows (Homecare triage, Pro dispatch, vendor coordination)</li>\n<li>Automated escalation routing based on signal classification</li>\n<li>Quality scoring and anomaly detection on agent outputs</li>\n<li>Retrieval-augmented generation for Concierge and Homecare agent context</li>\n</ul>\n<p>What Success Looks Like</p>\n<ul>\n<li>90 days: Every operational phase has a documented AI target state with defined autonomous scope, human escalation thresholds, and instrumentation in place.</li>\n<li>6 months: AI-assisted workflows have measurably reduced manual communication volume across at least 2 operational functions with no CSAT degradation.</li>\n<li>Year 1: The majority of routine 
operational communications in your product surface are AI-generated. Human operators are handling exceptions, escalations, and trust-critical moments, nothing else.</li>\n</ul>\n<p>Example KPIs You Will Be Held To</p>\n<ul>\n<li>AI deflection rate vs. manual handling baseline, by operational function</li>\n<li>CSAT from homeowners and residents at each operational phase (the constraint: deflection gains cannot come at CSAT cost)</li>\n<li>SLA compliance rates for homecare and Pro services</li>\n<li>Time-to-list (inspection to live listing)</li>\n<li>Move-in readiness rate and failed move-in rate</li>\n<li>Human escalation rate as a quality signal on agent confidence calibration</li>\n</ul>\n<p>Who You Are</p>\n<ul>\n<li>AI systems thinker. You do not think about AI features. You think about AI systems: input context, output quality, fallback behavior, quality measurement, and continuous improvement loops.</li>\n<li>Operationally grounded. You have worked in environments where things break in the real world, with real vendors, real homes, real members, and you understand that an agent operating without the right context is more dangerous than no agent at all.</li>\n<li>Outcome obsessed. You hold deflection rate and CSAT simultaneously. You do not celebrate automation that degrades experience.</li>\n<li>Technically fluent. You can write a SQL query, read a vector similarity result, reason about retrieval quality, and understand the tradeoffs in a prompt engineering decision.</li>\n<li>Cross-functional driver. Operations, Homecare, Leasing, Vendor Ops, and Engineering all touch your surface. 
You run the rituals, translate across languages, and hold the delivery cadence.</li>\n</ul>\n<p>What You Bring</p>\n<ul>\n<li>3 to 5 years of product experience, with at least 1 to 2 years directly building or operating AI-powered products in a production environment</li>\n<li>Hands-on experience with LLM integrations, prompt engineering, RAG pipelines, or agentic workflow design</li>\n<li>Demonstrated ownership of operational tooling or service orchestration products in a marketplace, logistics, or operations-intensive environment</li>\n<li>Proficiency with data: SQL, funnel analysis, and the ability to detect when a metric is being gamed or misread</li>\n<li>Experience with AI evaluation frameworks and output quality measurement is a strong advantage</li>\n<li>Prior work in consumer real estate, hospitality, or residential services is a plus</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_8db20763-21b","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Belong","sameAs":"https://www.belong.com/","logo":"https://logos.yubhub.co/belong.com.png"},"x-apply-url":"https://jobs.lever.co/belong/12878464-3397-4603-91fd-a4645ee06afe","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["AI systems thinking","LLM-powered workflows","Agentic scheduling and coordination workflows","Automated escalation routing","Quality scoring and anomaly detection","Retrieval-augmented generation","SQL","Funnel analysis","Data analysis","Prompt engineering","RAG pipelines","Agentic workflow design","Operational tooling","Service orchestration","Consumer real estate","Hospitality","Residential 
services"],"x-skills-preferred":[],"datePosted":"2026-04-17T12:27:01.211Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Argentina"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"AI systems thinking, LLM-powered workflows, Agentic scheduling and coordination workflows, Automated escalation routing, Quality scoring and anomaly detection, Retrieval-augmented generation, SQL, Funnel analysis, Data analysis, Prompt engineering, RAG pipelines, Agentic workflow design, Operational tooling, Service orchestration, Consumer real estate, Hospitality, Residential services"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_a17f244f-262"},"title":"Associate Software Engineer (C#)","description":"<p><strong>About this role</strong></p>\n<p>In the Aladdin Product Group, Alternatives department, we are seeking a Full Stack Developer to grow and expand our Software Engineering team. You will help the growth of our private market&#39;s platform - eFront Invest.</p>\n<p>eFront, a part of BlackRock, is a leading software provider of end-to-end solutions for the Alternative Investment players. Fully integrated into our Aladdin eFront Engineering Team, you are exposed to both the technical and functional layers of our most innovative products, while acquiring outstanding abilities in the fast-growing Private Equity industry, and Alternative Investments in general.</p>\n<p><strong>What will you be doing?</strong></p>\n<ul>\n<li>You are building new features, from their conception up to their deployment in production.</li>\n<li>You handle aspects of a SaaS product, including production monitoring and incident resolution on the cloud platform. 
You integrate your work into the team&#39;s methodologies: continuous integration/continuous delivery, automated testing, and the definition of standard processes.</li>\n<li>You engage with different groups, full of hardworking, forward-thinking people with an outstanding innovation spirit.</li>\n</ul>\n<p><strong>Requirements</strong></p>\n<ul>\n<li>You have a Bachelor&#39;s or Master&#39;s degree in Engineering, Computer Science, Mathematics, or a related software engineering background.</li>\n<li>Proven experience in software development.</li>\n<li>Ability to autonomously dig into an existing codebase and understand its concepts.</li>\n<li>Curiosity about the functional side of the product; basic knowledge of the finance industry will be highly appreciated.</li>\n<li>Strong analytical and problem-solving skills; a proactive approach and the ability to balance multiple projects simultaneously.</li>\n<li>Proficient English, both written and spoken</li>\n</ul>\n<p><strong>Our benefits</strong></p>\n<p>To help you stay energized, engaged and inspired, we offer a wide range of employee benefits including: retirement investment and tools designed to help you in building a sound financial future; access to education reimbursement; comprehensive resources to support your physical health and emotional well-being; family support programs; and Flexible Time Off (FTO) so you can relax, recharge and be there for the people you care about.</p>\n<p><strong>Our hybrid work model</strong></p>\n<p>BlackRock’s hybrid work model is designed to enable a culture of collaboration and apprenticeship that enriches the experience of our employees, while supporting flexibility for all. Employees are currently required to work at least 4 days in the office per week, with the flexibility to work from home 1 day a week. 
Some business groups may require more time in the office due to their roles and responsibilities.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_a17f244f-262","directApply":true,"hiringOrganization":{"@type":"Organization","name":"BlackRock","sameAs":"https://jobs.workable.com","logo":"https://logos.yubhub.co/view.com.png"},"x-apply-url":"https://jobs.workable.com/view/h9psWf8DRkgYNDCoLaX62Y/associate-software-engineer-(c%23)-in-london-at-blackrock","x-work-arrangement":"hybrid","x-experience-level":"entry","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["C#","Visual Studio","VB .NET",".NET Framework",".NET Core","MS SQL Server","TypeScript","JavaScript","CSS","Html","Cloud based services","AWS","Azure","Cloud-Native distributed containerized microservice orchestration","Agile (Scrum)"],"x-skills-preferred":["Cloud-Native distributed containerized microservice orchestration","Agile (Scrum)"],"datePosted":"2026-03-09T16:40:54.514Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"London"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Finance","skills":"C#, Visual Studio, VB .NET, .NET Framework, .NET Core, MS SQL Server, TypeScript, JavaScript, CSS, Html, Cloud based services, AWS, Azure, Cloud-Native distributed containerized microservice orchestration, Agile (Scrum), Cloud-Native distributed containerized microservice orchestration, Agile (Scrum)"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_cefc0fee-e40"},"title":"Associate, Full-Stack Software Engineer (Python & React)","description":"<p>About this role</p>\n<p>This role sits within Preqin, a part of BlackRock. 
Preqin plays a key role in how we are revolutionizing private markets data and technology for clients globally, complementing our existing Aladdin technology platform to deliver investment solutions for the whole portfolio.</p>\n<p>As Associate, Engineer on the Orion data team, you will guide a high-performing engineering pod responsible for the technical and operational excellence of our data architecture. Your work ensures our data flows are robust, scalable, and aligned with business needs, forming the backbone of Orion’s data-driven products and insights. Your focus will be on delivering high-quality solutions for our stakeholders by leveraging strong data literacy, product awareness and clear communication. You will collaborate closely with product managers and data owners to ideate, design, and deliver new features, while actively shaping the direction of our technical strategy and data platform.</p>\n<p>Key responsibilities will include:</p>\n<ul>\n<li><p>Develop workflows that seamlessly combine AI/ML with human expertise to accelerate data collection and improve decision-making processes.</p>\n</li>\n<li><p>Prioritize work based on data-driven insights and outcome-based goals in collaboration with stakeholders.</p>\n</li>\n<li><p>Design and implement scalable, reliable data pipelines that ingest, process, and deliver high-quality data to downstream applications and analytics platforms.</p>\n</li>\n<li><p>Work closely with engineering teams across the business, ensuring the best technical solutions are adopted, and elevate development standards through knowledge sharing and best practices.</p>\n</li>\n<li><p>Collaborate across engineering, product, and data science teams to translate business requirements into technical solutions and ensure our data assets are organized and accessible.</p>\n</li>\n<li><p>Mentor and guide team members, fostering a culture of continuous improvement, innovation, and open communication.</p>\n</li>\n<li><p>Actively 
participate in technical discussions about new product directions, data modelling, and architectural decisions, ensuring our technology platform remains extensible.</p>\n</li>\n<li><p>Lead an engineering pod using strong leadership and influence skills.</p>\n</li>\n<li><p>Manage a team of junior and mid-level engineers, supporting their careers and growth.</p>\n</li>\n</ul>\n<p>What we are looking for:</p>\n<ul>\n<li><p>3+ years’ experience in software engineering.</p>\n</li>\n<li><p>Strong technical ability across the full stack: Python, FastAPI, React, and TypeScript are a plus.</p>\n</li>\n<li><p>Experience with databases like Postgres and Snowflake.</p>\n</li>\n<li><p>Experience working with cloud provider services – Azure or AWS (preferred) – and use of infrastructure as code.</p>\n</li>\n<li><p>A data-driven mindset to make development decisions based on robust analyses.</p>\n</li>\n<li><p>Ability to collaborate effectively with design, engineering, and data science teams to build our technical solutions.</p>\n</li>\n<li><p>You have driven technical solution design, taking the balance of engineering quality, testing, scalability and security into consideration.</p>\n</li>\n<li><p>A “let’s do it” and “challenge accepted” attitude when faced with less familiar or challenging tasks, with a willingness to learn new technologies and ways of working.</p>\n</li>\n<li><p>Excellent verbal and written communication and interpersonal skills, with the ability to influence at all organizational levels and bridge technical perspectives.</p>\n</li>\n<li><p>Proficiency in English required; additional languages and prior work experience at a global firm are desirable.</p>\n</li>\n<li><p>People management experience.</p>\n</li>\n<li><p>Experience with AI-related projects/products.</p>\n</li>\n<li><p>Knowledge of Infrastructure as Code (IaC) tools for provisioning cloud resources, CI/CD pipelines, and Cloud-Native distributed containerized microservice 
orchestration.</p>\n</li>\n</ul>\n<p>Our benefits</p>\n<p>To help you stay energized, engaged and inspired, we offer a wide range of employee benefits including: retirement investment and tools designed to help you in building a sound financial future; access to education reimbursement; comprehensive resources to support your physical health and emotional well-being; family support programs; and Flexible Time Off (FTO) so you can relax, recharge and be there for the people you care about.</p>\n<p>Our hybrid work model</p>\n<p>BlackRock’s hybrid work model is designed to enable a culture of collaboration and apprenticeship that enriches the experience of our employees, while supporting flexibility for all. Employees are currently required to work at least 4 days in the office per week, with the flexibility to work from home 1 day a week. Some business groups may require more time in the office due to their roles and responsibilities. We remain focused on increasing the impactful moments that arise when we work together in person – aligned with our commitment to performance and innovation. 
As a new joiner, you can count on this hybrid model to accelerate your learning and onboarding experience here at BlackRock.</p>","url":"https://yubhub.co/jobs/job_cefc0fee-e40","directApply":true,"hiringOrganization":{"@type":"Organization","name":"BlackRock","sameAs":"https://jobs.workable.com","logo":"https://logos.yubhub.co/view.com.png"},"x-apply-url":"https://jobs.workable.com/view/rF3NUqLpdmQwRkQEgaTaEb/associate%2C-full-stack-software-engineer-(python-%26amp%3B-react)-in-london-at-blackrock","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Python","FastAPI","React","Typescript","Postgres","Snowflake","Azure","AWS","Infrastructure as Code","CI/CD pipelines","Cloud-Native distributed containerized microservice orchestration"],"x-skills-preferred":["AI/ML","Data science","Cloud computing","DevOps","Agile development"],"datePosted":"2026-03-09T16:40:36.028Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"London"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Finance","skills":"Python, FastAPI, React, Typescript, Postgres, Snowflake, Azure, AWS, Infrastructure as Code, CI/CD pipelines, Cloud-Native distributed containerized microservice orchestration, AI/ML, Data science, Cloud computing, DevOps, Agile development"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_f95fe525-8fd"},"title":"Staff Software Engineer, Inference","description":"<p><strong>About the role</strong></p>\n<p>Our Inference team is responsible for building and maintaining the critical systems that serve Claude to millions of users worldwide. 
We bring Claude to life by serving our models via the industry&#39;s largest compute-agnostic inference deployments. We are responsible for the entire stack from intelligent request routing to fleet-wide orchestration across diverse AI accelerators. The team has a dual mandate: maximizing compute efficiency to serve our explosive customer growth, while enabling breakthrough research by giving our scientists the high-performance inference infrastructure they need to develop next-generation models. We tackle complex, distributed systems challenges across multiple accelerator families and emerging AI hardware running in multiple cloud platforms.</p>\n<p><strong>As a Staff Software Engineer on our Inference team, you will work end to end, identifying and addressing key infrastructure blockers to serve Claude to millions of users while enabling breakthrough AI research. Strong candidates should have familiarity with performance optimization, distributed systems, large-scale service orchestration, and intelligent request routing. 
Familiarity with LLM inference optimization, batching strategies, and multi-accelerator deployments is highly encouraged but not strictly necessary.</strong></p>\n<p><strong>Strong candidates may also have experience with</strong></p>\n<ul>\n<li>High-performance, large-scale distributed systems</li>\n<li>Implementing and deploying machine learning systems at scale</li>\n<li>Load balancing, request routing, or traffic management systems</li>\n<li>LLM inference optimization, batching, and caching strategies</li>\n<li>Kubernetes and cloud infrastructure (AWS, GCP)</li>\n<li>Python or Rust</li>\n</ul>\n<p><strong>You may be a good fit if you</strong></p>\n<ul>\n<li>Have significant software engineering experience, particularly with distributed systems</li>\n<li>Are results-oriented, with a bias towards flexibility and impact</li>\n<li>Pick up slack, even if it goes outside your job description</li>\n<li>Want to learn more about machine learning systems and infrastructure</li>\n<li>Thrive in environments where technical excellence directly drives both business results and research breakthroughs</li>\n<li>Care about the societal impacts of your work</li>\n</ul>\n<p><strong>Representative projects across the org</strong></p>\n<ul>\n<li>Designing intelligent routing algorithms that optimize request distribution across thousands of accelerators</li>\n<li>Autoscaling our compute fleet to dynamically match supply with demand across production, research, and experimental workloads</li>\n<li>Building production-grade deployment pipelines for releasing new models to millions of users</li>\n<li>Integrating new AI accelerator platforms to maintain our hardware-agnostic competitive advantage</li>\n<li>Contributing to new inference features (e.g., structured sampling, prompt caching)</li>\n<li>Supporting inference for new model architectures</li>\n<li>Analyzing observability data to tune performance based on real-world production workloads</li>\n<li>Managing multi-region deployments 
and geographic routing for global customers</li>\n</ul>\n<p><strong>Deadline to apply: None. Applications will be reviewed on a rolling basis.</strong></p>\n<p><strong>Logistics</strong></p>\n<ul>\n<li>Education requirements: We require at least a Bachelor&#39;s degree in a related field or equivalent experience.</li>\n<li>Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</li>\n<li>Visa sponsorship: We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</li>\n</ul>\n<p><strong>We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you&#39;re interested in this work.</strong></p>\n<p><strong>Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. 
If you&#39;re ever unsure about a communication, don&#39;t click any links—visit anthropic.com/careers directly for confirmed position openings.</strong></p>\n<p><strong>How we&#39;re different</strong></p>\n<p>We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view</p>","url":"https://yubhub.co/jobs/job_f95fe525-8fd","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://job-boards.greenhouse.io","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5097742008","x-work-arrangement":"hybrid","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"£325,000 - £390,000 GBP","x-skills-required":["performance optimization","distributed systems","large-scale service orchestration","intelligent request routing","LLM inference optimization","batching strategies","multi-accelerator deployments","Kubernetes","cloud infrastructure","Python","Rust"],"x-skills-preferred":["high-performance, large-scale distributed systems","implementing and deploying machine learning systems at scale","load balancing, request routing, or traffic management systems","caching strategies"],"datePosted":"2026-03-08T13:49:42.673Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"London, UK"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"performance optimization, distributed systems, large-scale service orchestration, intelligent request routing, LLM inference optimization, batching strategies, multi-accelerator 
deployments, Kubernetes, cloud infrastructure, Python, Rust, high-performance, large-scale distributed systems, implementing and deploying machine learning systems at scale, load balancing, request routing, or traffic management systems, caching strategies","baseSalary":{"@type":"MonetaryAmount","currency":"GBP","value":{"@type":"QuantitativeValue","minValue":325000,"maxValue":390000,"unitText":"YEAR"}}}]}