{"version":"0.1","company":{"name":"YubHub","url":"https://yubhub.co","jobsUrl":"https://yubhub.co/jobs/skill/document-ai"},"x-facet":{"type":"skill","slug":"document-ai","display":"Document AI","count":2},"x-feed-size-limit":100,"x-feed-sort":"enriched_at desc","x-feed-notice":"This feed contains at most 100 jobs (the most recently enriched). For the full corpus, use the paginated /stats/by-facet endpoint or /search.","x-generator":"yubhub-xml-generator","x-rights":"Free to redistribute with attribution: \"Data by YubHub (https://yubhub.co)\"","x-schema":"Each entry in `jobs` follows https://schema.org/JobPosting. YubHub-native raw fields carry `x-` prefix.","jobs":[{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_58b03260-1e2"},"title":"AI Engineer, Product","description":"<p>About Mistral AI</p>\n<p>At Mistral AI, we believe in the power of AI to simplify tasks, save time, and enhance learning and creativity. Our technology is designed to integrate seamlessly into daily working life.</p>\n<p>We are a global company with teams distributed between France, USA, UK, Germany, and Singapore. Our diverse workforce thrives in competitive environments and is committed to driving innovation.</p>\n<p>Embedded directly in a product team such as search, chat, documents, or audio, you&#39;ll improve AI-powered features through rigorous evaluation, prompt and orchestration design, and rapid experimentation. 
You&#39;ll own your domain&#39;s AI quality end-to-end: define what &quot;good&quot; looks like, measure it, run experiments, and ship what works.</p>\n<p>Responsibilities</p>\n<p>• Design and run evaluations for your product area: reference tests, heuristics, model-graded checks tailored to search relevance, chat quality, document understanding, or audio performance.</p>\n<p>• Define and track metrics that matter: task success, helpfulness, hallucination proxies, safety flags, latency, cost.</p>\n<p>• Own prompt and orchestration design: write, test, and iterate on prompts and system prompts as a core part of your work.</p>\n<p>• Run A/B tests on prompts, models, and configurations; analyze results; make rollout or rollback decisions from data.</p>\n<p>• Set up observability for LLM calls: structured logging, tracing, dashboards, alerts.</p>\n<p>• Operate model releases: canary and shadow traffic, sign-offs, SLO-based rollback criteria, regression detection.</p>\n<p>• Improve core behaviors in your product area, whether that&#39;s memory policies, intent classification, routing, tool-call reliability, or retrieval quality.</p>\n<p>• Create templates and documentation so other teams can author evals and ship safely.</p>\n<p>• Partner with Science to diagnose regressions and lead post-mortems.</p>\n<p>About you</p>\n<p>• 3-4 years of experience; backgrounds that fit well include ML engineers moving closer to product, or software engineers with real AI/ML production experience.</p>\n<p>• Strong TypeScript or Python skills - we have both tracks depending on team fit.</p>\n<p>• Production LLM experience: prompts, tool/function calling, system prompts.</p>\n<p>• Hands-on with evals and A/B testing; you can design metrics, not just run them.</p>\n<p>• Comfortable implementing directly in product code, not only notebooks.</p>\n<p>• Observability experience: logging, tracing, dashboards, alerting.</p>\n<p>• Product mindset: form hypotheses, run experiments, interpret 
results, ship.</p>\n<p>• Clear communication, autonomous, and oriented toward production impact over experimentation for its own sake.</p>\n<p>It would be ideal if you also have:</p>\n<p>• Safety systems experience: moderation, PII handling/redaction, guardrails.</p>\n<p>• Release operations: canary/shadowing, automated rollbacks, experiment platforms.</p>\n<p>• Prior work on search ranking, chat systems, document AI, or audio ML features.</p>\n<p>Hiring Process</p>\n<p>• Introduction call - 30 min</p>\n<p>• Hiring Manager interview - 30 min</p>\n<p>• Technical Rounds - Live-coding Interview - 45 min - AI Engineering Interview - 45 min</p>\n<p>• Culture-fit discussion - 30 min</p>\n<p>• References</p>\n<p>By applying, you agree to our Applicant Privacy Policy.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_58b03260-1e2","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Mistral AI","sameAs":"https://mistral.ai","logo":"https://logos.yubhub.co/mistral.ai.png"},"x-apply-url":"https://jobs.lever.co/mistral/c79ff8ed-6689-4dda-aec6-979a5dc767d0","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["TypeScript","Python","Production LLM experience","Evals and A/B testing","Observability","Product mindset","Clear communication"],"x-skills-preferred":["Safety systems experience","Release operations","Search ranking","Chat systems","Document AI","Audio ML features"],"datePosted":"2026-04-17T12:46:01.954Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Paris"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"TypeScript, Python, Production LLM experience, Evals and A/B testing, Observability, Product mindset, Clear communication, Safety systems experience, Release 
operations, Search ranking, Chat systems, Document AI, Audio ML features"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_6663d8f4-ea5"},"title":"AI Engineer, Product","description":"<p>About Mistral AI</p>\n<p>At Mistral AI, we believe in the power of AI to simplify tasks, save time, and enhance learning and creativity. Our technology is designed to integrate seamlessly into daily working life.</p>\n<p>We are a global company with teams distributed between France, USA, UK, Germany, and Singapore. Our diverse workforce thrives in competitive environments and is committed to driving innovation.</p>\n<p>Role Summary</p>\n<p>Embedded directly in a product team such as search, chat, documents, or audio, you&#39;ll improve AI-powered features through rigorous evaluation, prompt and orchestration design, and rapid experimentation. You&#39;ll own your domain&#39;s AI quality end-to-end: define what &#39;good&#39; looks like, measure it, run experiments, and ship what works.</p>\n<p>Responsibilities</p>\n<ul>\n<li>Design and run evaluations for your product area: reference tests, heuristics, model-graded checks tailored to search relevance, chat quality, document understanding, or audio performance.</li>\n<li>Define and track metrics that matter: task success, helpfulness, hallucination proxies, safety flags, latency, cost.</li>\n<li>Own prompt and orchestration design: write, test, and iterate on prompts and system prompts as a core part of your work.</li>\n<li>Run A/B tests on prompts, models, and configurations; analyze results; make rollout or rollback decisions from data.</li>\n<li>Set up observability for LLM calls: structured logging, tracing, dashboards, alerts.</li>\n<li>Operate model releases: canary and shadow traffic, sign-offs, SLO-based rollback criteria, regression detection.</li>\n<li>Improve core behaviors in your product area, whether that&#39;s memory policies, intent classification, 
routing, tool-call reliability, or retrieval quality.</li>\n<li>Create templates and documentation so other teams can author evals and ship safely.</li>\n<li>Partner with Science to diagnose regressions and lead post-mortems.</li>\n</ul>\n<p>About You</p>\n<ul>\n<li>3-4 years of experience; backgrounds that fit well include ML engineers moving closer to product, or software engineers with real AI/ML production experience.</li>\n<li>Strong TypeScript or Python skills - we have both tracks depending on team fit.</li>\n<li>Production LLM experience: prompts, tool/function calling, system prompts.</li>\n<li>Hands-on with evals and A/B testing; you can design metrics, not just run them.</li>\n<li>Comfortable implementing directly in product code, not only notebooks.</li>\n<li>Observability experience: logging, tracing, dashboards, alerting.</li>\n<li>Product mindset: form hypotheses, run experiments, interpret results, ship.</li>\n<li>Clear communication, autonomous, and oriented toward production impact over experimentation for its own sake.</li>\n</ul>\n<p>Benefits</p>\n<ul>\n<li>Competitive salary and equity package</li>\n<li>Health insurance</li>\n<li>Transportation allowance</li>\n<li>Sport allowance</li>\n<li>Meal vouchers</li>\n<li>Private pension plan</li>\n<li>Generous parental leave policy</li>\n</ul>","url":"https://yubhub.co/jobs/job_6663d8f4-ea5","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Mistral AI","sameAs":"https://mistral.ai"},"x-apply-url":"https://jobs.lever.co/mistral/c79ff8ed-6689-4dda-aec6-979a5dc767d0","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["TypeScript","Python","Production LLM experience","Evals and A/B testing","Observability","Product mindset"],"x-skills-preferred":["Safety systems 
experience","Release operations","Search ranking","Chat systems","Document AI","Audio ML features"],"datePosted":"2026-03-10T11:22:23.831Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Paris"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"TypeScript, Python, Production LLM experience, Evals and A/B testing, Observability, Product mindset, Safety systems experience, Release operations, Search ranking, Chat systems, Document AI, Audio ML features"}]}