<?xml version="1.0" encoding="UTF-8"?>
<source>
  <jobs>
    <job>
      <externalid>57bec705-ffa</externalid>
      <Title>Staff Product Partnerships Manager - Mercury Bank</Title>
      <Description><![CDATA[<p>As we build a whole stack of financial tools for startups, underneath all of our products is a massive web of partners that users don&#39;t see, requiring significant product, regulatory, and operational investment. That&#39;s where the Product Partnerships team comes in. The Product Partnerships team maintains Mercury&#39;s external relationships and streamlines our collective workflows to keep customers, partners, and Mercury employees happy.</p>
<p>As Staff Product Partnerships Manager - Mercury Bank, you&#39;ll own and evolve Mercury&#39;s portfolio of strategic partners that underpin our core banking infrastructure and our path toward becoming a bank. This includes relationships with core banking providers, infrastructure partners, and other critical regulatory and ecosystem partners required to support a chartered bank model.</p>
<p>This is a highly cross-functional role that sits at the intersection of product, engineering, compliance, legal, operations, and executive leadership. You&#39;ll serve as the primary point of contact for these partners while internally championing their capabilities, constraints, and requirements. You&#39;ll be responsible for translating complex regulatory and technical partner considerations into clear internal workstreams and driving them through execution with alignment and rigor.</p>
<p>Beyond managing existing partnerships, you&#39;ll play a critical role in identifying, evaluating, and onboarding new partners that support Mercury&#39;s long-term banking strategy. This includes market mapping, due diligence, contract negotiation, and ongoing relationship management. You&#39;ll bring a thoughtful mix of regulatory fluency, product curiosity, operational depth, and relationship skills to help Mercury build durable, compliant, and scalable banking infrastructure.</p>
<p>Secure, reliable, thoughtful, and (perhaps) magical is how a user should describe banking on Mercury. Your job is to ensure that our core banking and regulatory partners can live up to this standard as we scale.</p>
<p>Responsibilities:</p>
<ul>
<li>Manage a portfolio of strategic core banking, card, and regulatory partnerships critical to Mercury&#39;s banking infrastructure</li>
<li>Be the driving force behind building and maintaining Mercury&#39;s core banking partner ecosystem in support of our charter ambitions</li>
<li>Serve as the internal expert on core banking systems, regulated financial institutions, and the broader banking infrastructure ecosystem</li>
<li>Work closely with Legal, Compliance, Risk, Product, Engineering, and Finance to negotiate contracts, manage partner performance, and support regulatory readiness</li>
<li>Lead new partner selection, due diligence, and onboarding for core banking and regulatory partnerships</li>
<li>Translate partner requirements, constraints, and regulatory considerations into actionable internal plans and timelines</li>
<li>Unblock internal teams wherever possible and advocate for Mercury&#39;s roadmap while balancing regulatory and partner expectations</li>
<li>Clearly communicate Mercury&#39;s compliance posture, regulatory obligations, and technical architecture to external partners</li>
</ul>
<p>Requirements:</p>
<ul>
<li>Have 8+ years of relevant experience in banking, fintech, or financial services, with deep familiarity working with regulated financial institutions</li>
<li>Have experience managing or working closely with core banking providers, issuing banks, or other regulated financial infrastructure partners</li>
<li>Be a strong partnership leader who enjoys owning complex, high-stakes relationships</li>
<li>Be an excellent communicator and highly organized project manager, comfortable operating across many stakeholders</li>
<li>Consistently exercise empathy, especially in highly regulated and constrained environments</li>
<li>Have a strong product sense and interest in how financial infrastructure enables customer outcomes</li>
<li>Be an effective negotiator with experience navigating complex commercial and regulatory discussions</li>
<li>Be technically inclined or comfortable interfacing with engineering and compliance teams on complex systems</li>
<li>Stay calm and focused while working on multiple critical initiatives in parallel</li>
<li>Exercise creativity while operating within regulatory and operational constraints</li>
<li>Think of customers first, always approaching problems from the customer perspective</li>
<li>Be able to simplify complex systems and regulatory requirements into clear, documented processes</li>
</ul>
<p>Total Rewards Package:</p>
<p>The total rewards package at Mercury includes base salary, equity (stock options), and benefits. Our salary and equity ranges are highly competitive within the SaaS and fintech industry and are updated regularly using the most reliable compensation survey data for our industry. New hire offers are made based on a candidate&#39;s experience, expertise, geographic location, and internal pay equity relative to peers.</p>
<p>Our target new hire base salary ranges for this role are:</p>
<ul>
<li>US employees in New York City, Seattle, Los Angeles, or San Francisco: $220,800 - $276,000</li>
<li>US employees outside of New York City, Seattle, Los Angeles, or San Francisco: $198,700 - $248,400</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$220,800 - $276,000 (US employees in New York City, Seattle, Los Angeles or San Francisco)</Salaryrange>
      <Skills>banking, fintech, financial services, regulated financial institutions, core banking providers, infrastructure partners, regulatory and ecosystem partners, product management, engineering, compliance, legal, operations, executive leadership, market mapping, due diligence, contract negotiation, relationship management, regulatory fluency, product curiosity, operational depth, relationship skills, financial infrastructure, customer outcomes, negotiation, complex commercial and regulatory discussions, technical architecture</Skills>
      <Category>Finance</Category>
      <Industry>Fintech</Industry>
      <Employername>Mercury</Employername>
      <Employerlogo>https://logos.yubhub.co/demo.mercury.com.png</Employerlogo>
      <Employerdescription>Mercury is a fintech company building a whole stack of financial tools for startups.</Employerdescription>
      <Employerwebsite>https://www.demo.mercury.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/mercury/jobs/5789886004</Applyto>
      <Location>San Francisco, CA, New York, NY, Portland, OR, or Remote within United States</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>290c3d28-4b2</externalid>
      <Title>Partner Solution Architect - ASEAN</Title>
      <Description><![CDATA[<p><strong>About Mistral AI</strong></p>
<p>At Mistral AI, we believe in the power of AI to simplify tasks, save time, and enhance learning and creativity. Our technology is designed to integrate seamlessly into daily working life.</p>
<p>We are a global company with teams distributed between France, USA, UK, Germany and Singapore. We are a diverse workforce that thrives in competitive environments and is committed to driving innovation.</p>
<p><strong>Why This Role Matters</strong></p>
<p>You will be the technical linchpin between Mistral and our strategic partners in ASEAN (Nvidia, Dell, Hyperscalers, Global System Integrators), translating our open-weight models and sovereign AI architecture into deployable, scalable solutions.</p>
<p>By designing joint architectures, influencing partner GTM motions, and earning a seat at the CIO/CTO table, you will accelerate Mistral’s technical credibility and deployment velocity across Asia Pacific.</p>
<p>This is a foundational role where you will define how open-weight AI is operationalized at scale in the region.</p>
<p><strong>What You Will Do</strong></p>
<p><strong>Partner Technical Leadership &amp; Architecture Design</strong></p>
<ul>
<li>Lead the technical design, deployment, and enablement of Mistral’s partner solutions, bridging our AI models with partner infrastructure (Nvidia, Dell, Hyperscalers, GSIs) to deliver scalable AI Labs, AI Factories, and sovereign AI architectures.</li>
<li>Serve as the trusted technical advisor to partner CTOs, CIOs, and engineering leaders, shaping joint architectures, guiding GPU/model deployment strategies, and accelerating GTM execution.</li>
<li>Design reference architectures and deployment patterns for partner-led implementations (e.g., multi-GPU inference clusters, AI Lab topologies, private AI clouds).</li>
<li>Innovate the Executive Briefing Center (EBC) function for technical leaders (CIOs, CTOs, CDOs), positioning Mistral as the default choice for enterprise AI.</li>
<li>Co-design sovereign AI reference architectures with Nvidia and Dell (H100, H200, GB200 platforms).</li>
</ul>
<p><strong>Co-Sell &amp; Revenue Enablement</strong></p>
<ul>
<li>Collaborate with Mistral’s partner and sales teams to progress deals, providing technical expertise to penetrate accounts and influence GTM pipeline.</li>
<li>Support partners in qualifying/disqualifying opportunities, ensuring Mistral solutions unlock maximum value for customers.</li>
<li>Deploy Mistral’s enterprise AI suite (models, fine-tuning, use-case building) in partner-led environments, tailoring solutions to customer requirements.</li>
</ul>
<p><strong>Trusted Advisor &amp; Lighthouse Implementations</strong></p>
<ul>
<li>Drive strategic partner-led opportunities through technical discovery, architecture design, and POC execution.</li>
<li>Lead lighthouse deployments that become referenceable case studies (e.g., Singtel AI Grid, Accenture AI Lab).</li>
<li>Establish a scalable partner enablement framework, training 100+ partner engineers across ASEAN.</li>
</ul>
<p><strong>Product Feedback &amp; Internal Collaboration</strong></p>
<ul>
<li>Coordinate with Mistral’s product and engineering teams to relay partner-specific requirements and feedback.</li>
<li>Align joint GTM and technical execution between Mistral Science, Partner Engineering, and partner field teams.</li>
</ul>
<p><strong>About You</strong></p>
<p><strong>Must-Have</strong></p>
<ul>
<li>10–15 years’ experience in partner-facing technical sales or solution architecture (e.g., Partner SA, Alliance Architect, Partner Technology Strategist).</li>
<li>Proven ability to engage C-suite and senior technical stakeholders (CTO, CIO, Chief Architect) in strategic architecture discussions.</li>
<li>Deep GenAI/LLM expertise: RAG, fine-tuning, prompt engineering, model evaluation, and deployment patterns.</li>
<li>Technical mastery of AI/ML infrastructure (GPU clusters, cloud platforms, model deployment frameworks).</li>
<li>Track record of co-designing and deploying joint solutions with ecosystem partners (Nvidia, Dell, AWS, Accenture, etc.).</li>
<li>Executive communication: ability to articulate science-driven value propositions to technical and business audiences.</li>
<li>Entrepreneurial mindset: operates autonomously in high-growth environments; creates playbooks rather than following them.</li>
<li>Fluent in English; confident working across diverse, cross-cultural teams in Asia.</li>
</ul>
<p><strong>Nice-to-Have</strong></p>
<ul>
<li>Experience with open-weight LLMs or open-source AI stacks (Mistral, Hugging Face, LangChain, vLLM, RAG frameworks).</li>
<li>Prior involvement in AI Lab, AI Factory, or Sovereign Cloud deployments.</li>
<li>Familiarity with data governance, model evaluation, and GPU sizing for large-scale inference.</li>
<li>Network across GSIs and infrastructure partners in Asia.</li>
<li>Exposure to multi-region partner programs or joint GTM initiatives in APJ.</li>
<li>Bonus languages: Korean, Japanese, or Mandarin for regional partner engagement.</li>
</ul>
<p><strong>What we offer</strong></p>
<ul>
<li>💰 Competitive cash salary and equity</li>
<li>🚑 Health insurance: best in class</li>
<li>🥎 Sport: $90 gym membership allowance</li>
<li>🥕 Food: $200 monthly allowance for meals (this may evolve as we grow)</li>
<li>🚴 Transportation: $120/month for public transport, or parking charges reimbursed</li>
<li>🏝️ PTO: 18 days per year</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>GenAI/LLM expertise, RAG, fine-tuning, prompt engineering, model evaluation, deployment patterns, AI/ML infrastructure, GPU clusters, cloud platforms, model deployment frameworks, co-designing/deploying joint solutions, ecosystem partners, Nvidia, Dell, AWS, Accenture, open-weight LLMs, open-source AI stacks, Mistral, Hugging Face, LangChain, vLLM, RAG frameworks, data governance, model evaluation, GPU sizing, large-scale inference, GSIs, infrastructure partners, multi-region partner programs, joint GTM initiatives, APJ, Korean, Japanese, Mandarin</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Mistral AI</Employername>
      <Employerlogo></Employerlogo>
      <Employerdescription>Mistral AI is an AI technology company that provides high-performance, optimized, open-source and cutting-edge models, products and solutions.</Employerdescription>
      <Employerwebsite>https://mistral.ai/careers</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/mistral/fe3542b5-4f99-4d62-af6a-fbdfd13bf0e4</Applyto>
      <Location>Singapore</Location>
      <Country></Country>
      <Postedate>2026-03-10</Postedate>
    </job>
    <job>
      <externalid>7b2b97d5-0a1</externalid>
      <Title>Software Engineer, Inference Deployment</Title>
      <Description><![CDATA[<p><strong>About Anthropic</strong></p>
<p>Anthropic&#39;s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</p>
<p><strong>About the Role</strong></p>
<p>Our mandate is to make inference deployment boring and unattended.</p>
<p>Anthropic serves Claude to millions of users across GPUs, TPUs, and Trainium — and every model update must reach production safely, quickly, and without disrupting service. We&#39;re building the systems that make inference deployment continuous and unattended.</p>
<p>As a Software Engineer on the Launch Engineering team, you&#39;ll design and build the deployment infrastructure that moves inference code from merge to production. This is a resource-constrained optimization problem at its core: validation and deployment consume the same accelerator chips that serve customer traffic — your deploys compete with live user requests for the same hardware. Every model brings different fleet sizes, startup times, and correctness requirements, so the system must adapt continuously. You&#39;ll build systems that navigate these constraints — orchestrating validation, scheduling deployments intelligently, and driving down cycle time from merge to production.</p>
<p>If you&#39;ve built deployment systems at scale and gravitate toward the hardest problems at the intersection of automation and resource management, this team will give you an outsized scope to work on them.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li><strong>Own deployment orchestration</strong> that continuously moves validated inference builds into production across GPU, TPU, and Trainium fleets, unattended under normal conditions</li>
<li><strong>Improve capacity-aware deployment scheduling</strong> to maximize deployment throughput against constrained accelerator budgets and variable fleet sizes</li>
<li><strong>Extend deployment observability</strong> — dashboards and tooling that answer &quot;what code is running in production,&quot; &quot;where is my commit,&quot; and &quot;what validation passed for this deploy&quot;</li>
<li><strong>Drive down cycle time</strong> from code merge to production with pipeline architectures that minimize serial dependencies and maximize parallelism</li>
<li><strong>Optimize fleet rollout strategies</strong> for large-scale deployments across thousands of GPU, TPU, and Trainium chips, minimizing disruption to serving capacity</li>
<li><strong>Evolve self-service model onboarding</strong> so that new models can be added to the continuous deployment pipeline without Launch Engineering involvement</li>
<li><strong>Partner across the Inference organization</strong> with teams owning validation, autoscaling, and model routing to integrate deployment automation with their systems</li>
</ul>
<p><strong>You May Be a Good Fit If You Have</strong></p>
<ul>
<li>5+ years of experience building deployment, release, or delivery infrastructure at scale</li>
<li>Strong software engineering skills with experience designing systems that manage complex state machines and multi-stage pipelines</li>
<li>Experience with deployment systems where resource constraints shape the design — whether that&#39;s fleet capacity, network bandwidth, hardware availability, or coordinated rollout windows</li>
<li>A track record of building automation that measurably improves deployment velocity and reliability</li>
<li>Proficiency with Kubernetes-based deployments, rolling update mechanics, and container orchestration</li>
<li>Comfort working across the stack — from backend services and databases to CLI tools and web UIs</li>
<li>Strong communication skills and the ability to work closely with oncall engineers, model teams, and infrastructure partners</li>
</ul>
<p><strong>Strong Candidates May Also Have</strong></p>
<ul>
<li>Experience with ML inference or training infrastructure deployment, particularly across multiple accelerator types (GPU, TPU, Trainium)</li>
<li>Background in capacity planning or resource-constrained scheduling (e.g., bin-packing, fleet management, job scheduling with hardware affinity)</li>
<li>Experience with progressive delivery in systems with long validation cycles: canary/soak testing, blue-green deployments, traffic shifting, automated rollback</li>
<li>Experience at companies with large-scale release engineering challenges (mobile release trains, monorepo deployments, multi-datacenter rollouts)</li>
<li>Experience with Python and/or Rust in production systems</li>
</ul>
<p><strong>Logistics</strong></p>
<p><strong>Education requirements:</strong> We require at least a Bachelor&#39;s degree in a related field or equivalent experience.</p>
<p><strong>Location-based hybrid policy:</strong> Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</p>
<p><strong>Visa sponsorship:</strong> We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</p>
<p><strong>We encourage you to apply even if you do not believe you meet every single qualification.</strong> Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you&#39;re interested in this work.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$320,000 - $485,000 USD</Salaryrange>
      <Skills>deployment, release, delivery, infrastructure, Kubernetes, container, orchestration, pipelines, state machines, multi-stage, pipelines, parallelism, optimization, resource management, automation, velocity, reliability, communication, collaboration, oncall, model teams, infrastructure partners, ML inference, training infrastructure, capacity planning, resource-constrained scheduling, bin-packing, fleet management, job scheduling, hardware affinity, progressive delivery, canary/soak testing, blue-green deployments, traffic shifting, automated rollback, mobile release trains, monorepo deployments, multi-datacenter rollouts, Python, Rust</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic&apos;s mission is to create reliable, interpretable, and steerable AI systems. The company is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</Employerdescription>
      <Employerwebsite>https://job-boards.greenhouse.io</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5111745008</Applyto>
      <Location>San Francisco, CA | New York City, NY | Seattle, WA</Location>
      <Country></Country>
      <Postedate>2026-03-08</Postedate>
    </job>
  </jobs>
</source>