{"version":"0.1","company":{"name":"YubHub","url":"https://yubhub.co","jobsUrl":"https://yubhub.co/jobs/skill/shell-basics"},"x-facet":{"type":"skill","slug":"shell-basics","display":"Shell basics","count":2},"x-feed-size-limit":100,"x-feed-sort":"enriched_at desc","x-feed-notice":"This feed contains at most 100 jobs (the most recently enriched). For the full corpus, use the paginated /stats/by-facet endpoint or /search.","x-generator":"yubhub-xml-generator","x-rights":"Free to redistribute with attribution: \"Data by YubHub (https://yubhub.co)\"","x-schema":"Each entry in `jobs` follows https://schema.org/JobPosting. YubHub-native raw fields carry `x-` prefix.","jobs":[{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_9e0a391d-70f"},"title":"Data Quality Specialist","description":"<p>We&#39;re seeking highly motivated Data Quality Specialists with strong analytical skills and a keen eye for detail to join our Human Data Annotation team within the Science organisation.</p>\n<p>This is a hybrid quality reviewing and tooling role: you&#39;ll spend the majority of your time reviewing and auditing code annotations against rubrics to ensure data used for training and evaluating AI models meets a high bar, and the remainder building, maintaining, and troubleshooting the internal tooling that annotators rely on day-to-day.</p>\n<p>You&#39;ll collaborate closely with the annotators, technical program manager, and engineer stakeholders, and contribute to refining the guidelines and processes that shape how our data is produced.</p>\n<p><strong>Key Responsibilities</strong></p>\n<ul>\n<li>Generate and validate high-quality data annotations, based on guidelines and continuous feedback, for the development and evaluation of AI models</li>\n<li>Surface systemic issues, edge cases, and gaps in guidelines back to annotation operations and technical stakeholders</li>\n<li>Produce annotations yourself when 
needed, modeling the quality bar expected of the team</li>\n<li>Build and maintain internal tools and automation that streamline annotator workflows such as visualization dashboards, batch configuration scripts, output management utilities, and similar</li>\n<li>Troubleshoot environment, tooling, and CLI/git issues for annotators on their local machines, liaising with IT and engineering as needed</li>\n</ul>\n<p><strong>About You</strong></p>\n<ul>\n<li>A degree in computer science, engineering, or a related field. Alternatively, 2 to 5 years of professional experience in software engineering, technical support, or developing tools</li>\n<li>Hands-on experience using code agents (e.g. Mistral’s vibe) in your own development workflow, and genuine interest in how they&#39;re evolving</li>\n<li>Proficient in at least one programming language (e.g. Python, JavaScript, or similar), with enough breadth to read and reason about code across a few core languages</li>\n<li>Able to apply consistent judgment against a rubric and surface edge cases, ambiguities, or gaps in guidelines</li>\n<li>Sustained focus and accuracy on detail-oriented, high-volume review work</li>\n<li>Comfortable working in a Unix-like terminal: shell basics, package managers, environment setup, and git workflows (branches, merges, resolving conflicts)</li>\n<li>Able to troubleshoot local development environment issues (dependencies, virtual environments, paths, permissions) across common operating systems</li>\n<li>Professional proficiency in English, with strong writing and comprehension skills</li>\n</ul>\n<p><strong>Nice to Have</strong></p>\n<ul>\n<li>Prior experience in data annotation for AI/ML, especially LLM training (SFT, RLHF, preference data), evals/benchmarks, or agentic data</li>\n<li>Experience building an annotation team through interviews and training</li>\n<li>Experience supporting technical users or troubleshooting developer environments (internal tools support, DevRel, teaching 
assistant for coding courses, etc.)</li>\n<li>Fluency across multiple programming languages, or domain depth in one of: frontend, backend, DevOps, MLOps, data engineering</li>\n<li>Familiarity with rubric-based evaluation concepts, inter-annotator agreement, or quality measurement for human-labeled data</li>\n<li>Experience developing, deploying, and managing internal tooling or automation scripts</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_9e0a391d-70f","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Mistral","sameAs":"https://mistral.com","logo":"https://logos.yubhub.co/mistral.com.png"},"x-apply-url":"https://jobs.lever.co/mistral/bd88179e-de69-4675-8a6c-74e2547a85ac","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":"Competitive cash salary and equity","x-skills-required":["Python","JavaScript","Unix-like terminal","Git","Shell basics","Package managers","Environment setup"],"x-skills-preferred":[],"datePosted":"2026-04-24T16:06:20.618Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"France"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, JavaScript, Unix-like terminal, Git, Shell basics, Package managers, Environment setup"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_037bb819-c51"},"title":"Data Quality Specialist","description":"<p>We&#39;re seeking highly motivated Data Quality Specialists with strong analytical skills and a keen eye for detail to join our Human Data Annotation team within the Science organisation.</p>\n<p>This is a hybrid quality reviewing and tooling role: you&#39;ll spend the majority of your time reviewing and auditing code annotations against rubrics to 
ensure data used for training and evaluating AI models meets a high bar, and the remainder building, maintaining, and troubleshooting the internal tooling that annotators rely on day-to-day.</p>\n<p>You&#39;ll collaborate closely with the annotators, technical program manager, and engineering stakeholders, and contribute to refining the guidelines and processes that shape how our data is produced.</p>\n<p><strong>Key Responsibilities</strong></p>\n<ul>\n<li>Generate and validate high-quality data annotations, based on guidelines and continuous feedback, for the development and evaluation of AI models</li>\n<li>Surface systemic issues, edge cases, and gaps in guidelines back to annotation operations and technical stakeholders</li>\n<li>Produce annotations yourself when needed, modeling the quality bar expected of the team</li>\n<li>Build and maintain internal tools and automation that streamline annotator workflows, such as visualization dashboards, batch configuration scripts, and output management utilities</li>\n<li>Troubleshoot environment, tooling, and CLI/git issues for annotators on their local machines, liaising with IT and engineering as needed</li>\n</ul>\n<p><strong>About You</strong></p>\n<ul>\n<li>A degree in computer science, engineering, or a related field. Alternatively, 2 to 5 years of professional experience in software engineering, technical support, or developing tools</li>\n<li>Hands-on experience using code agents (e.g. Mistral’s vibe) in your own development workflow, and genuine interest in how they&#39;re evolving</li>\n<li>Proficient in at least one programming language (e.g. 
Python, JavaScript, or similar), with enough breadth to read and reason about code across a few core languages</li>\n<li>Able to apply consistent judgment against a rubric and surface edge cases, ambiguities, or gaps in guidelines</li>\n<li>Sustained focus and accuracy on detail-oriented, high-volume review work</li>\n<li>Comfortable working in a Unix-like terminal: shell basics, package managers, environment setup, and git workflows (branches, merges, resolving conflicts)</li>\n<li>Able to troubleshoot local development environment issues (dependencies, virtual environments, paths, permissions) across common operating systems</li>\n<li>Professional proficiency in English, with strong writing and comprehension skills</li>\n</ul>\n<p><strong>Nice to Have</strong></p>\n<ul>\n<li>Prior experience in data annotation for AI/ML, especially LLM training (SFT, RLHF, preference data), evals/benchmarks, or agentic data</li>\n<li>Experience building an annotation team through interviews and training</li>\n<li>Experience supporting technical users or troubleshooting developer environments (internal tools support, DevRel, teaching assistant for coding courses, etc.)</li>\n<li>Fluency across multiple programming languages, or domain depth in one of: frontend, backend, DevOps, MLOps, data engineering</li>\n<li>Familiarity with rubric-based evaluation concepts, inter-annotator agreement, or quality measurement for human-labeled data</li>\n<li>Experience developing, deploying, and managing internal tooling or automation scripts</li>\n</ul>",
"url":"https://yubhub.co/jobs/job_037bb819-c51","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Mistral","sameAs":"https://mistral.com","logo":"https://logos.yubhub.co/mistral.com.png"},"x-apply-url":"https://jobs.lever.co/mistral/bd88179e-de69-4675-8a6c-74e2547a85ac","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":"Competitive cash salary and equity","x-skills-required":["Python","JavaScript","Unix-like terminal","Git","Shell basics","Package managers","Environment setup"],"x-skills-preferred":["Data annotation for AI/ML","LLM training","Evals/benchmarks","Agentic data","Frontend","Backend","DevOps","MLOps","Data engineering"],"datePosted":"2026-04-24T13:11:57.969Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"France"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, JavaScript, Unix-like terminal, Git, Shell basics, Package managers, Environment setup, Data annotation for AI/ML, LLM training, Evals/benchmarks, Agentic data, Frontend, Backend, DevOps, MLOps, Data engineering"}]}