{"version":"0.1","company":{"name":"YubHub","url":"https://yubhub.co","jobsUrl":"https://yubhub.co/jobs/skill/environment-setup"},"x-facet":{"type":"skill","slug":"environment-setup","display":"Environment Setup","count":3},"x-feed-size-limit":100,"x-feed-sort":"enriched_at desc","x-feed-notice":"This feed contains at most 100 jobs (the most recently enriched). For the full corpus, use the paginated /stats/by-facet endpoint or /search.","x-generator":"yubhub-xml-generator","x-rights":"Free to redistribute with attribution: \"Data by YubHub (https://yubhub.co)\"","x-schema":"Each entry in `jobs` follows https://schema.org/JobPosting. YubHub-native raw fields carry `x-` prefix.","jobs":[{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_9e0a391d-70f"},"title":"Data Quality Specialist","description":"<p>We&#39;re seeking highly motivated Data Quality Specialists with strong analytical skills and a keen eye for detail to join our Human Data Annotation team within the Science organisation.</p>\n<p>This is a hybrid quality reviewing and tooling role: you&#39;ll spend the majority of your time reviewing and auditing code annotations against rubrics to ensure data used for training and evaluating AI models meets a high bar, and the remainder building, maintaining, and troubleshooting the internal tooling that annotators rely on day-to-day.</p>\n<p>You&#39;ll collaborate closely with the annotators, technical program manager, and engineering stakeholders, and contribute to refining the guidelines and processes that shape how our data is produced.</p>\n<p><strong>Key Responsibilities</strong></p>\n<ul>\n<li>Generate and validate high-quality data annotations, based on guidelines and continuous feedback, for the development and evaluation of AI models</li>\n<li>Surface systemic issues, edge cases, and gaps in guidelines back to annotation operations and technical stakeholders</li>\n<li>Produce annotations 
yourself when needed, modeling the quality bar expected of the team</li>\n<li>Build and maintain internal tools and automation that streamline annotator workflows, such as visualization dashboards, batch configuration scripts, output management utilities, and similar</li>\n<li>Troubleshoot environment, tooling, and CLI/git issues for annotators on their local machines, liaising with IT and engineering as needed</li>\n</ul>\n<p><strong>About You</strong></p>\n<ul>\n<li>A degree in computer science, engineering, or a related field. Alternatively, 2 to 5 years of professional experience in software engineering, technical support, or developing tools</li>\n<li>Hands-on experience using code agents (e.g. Mistral’s vibe) in your own development workflow, and genuine interest in how they&#39;re evolving</li>\n<li>Proficient in at least one programming language (e.g. Python, JavaScript, or similar), with enough breadth to read and reason about code across a few core languages</li>\n<li>Able to apply consistent judgment against a rubric and surface edge cases, ambiguities, or gaps in guidelines</li>\n<li>Sustained focus and accuracy on detail-oriented, high-volume review work</li>\n<li>Comfortable working in a Unix-like terminal: shell basics, package managers, environment setup, and git workflows (branches, merges, resolving conflicts)</li>\n<li>Able to troubleshoot local development environment issues (dependencies, virtual environments, paths, permissions) across common operating systems</li>\n<li>Professional proficiency in English, with strong writing and comprehension skills</li>\n</ul>\n<p><strong>Nice to Have</strong></p>\n<ul>\n<li>Prior experience in data annotation for AI/ML, especially LLM training (SFT, RLHF, preference data), evals/benchmarks, or agentic data</li>\n<li>Experience building an annotation team through interviews and training</li>\n<li>Experience supporting technical users or troubleshooting developer environments (internal tools support, DevRel, 
teaching assistant for coding courses, etc.)</li>\n<li>Fluency across multiple programming languages, or domain depth in one of: frontend, backend, DevOps, MLOps, data engineering</li>\n<li>Familiarity with rubric-based evaluation concepts, inter-annotator agreement, or quality measurement for human-labeled data</li>\n<li>Experience developing, deploying, and managing internal tooling or automation scripts</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_9e0a391d-70f","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Mistral","sameAs":"https://mistral.com","logo":"https://logos.yubhub.co/mistral.com.png"},"x-apply-url":"https://jobs.lever.co/mistral/bd88179e-de69-4675-8a6c-74e2547a85ac","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":"Competitive cash salary and equity","x-skills-required":["Python","JavaScript","Unix-like terminal","Git","Shell basics","Package managers","Environment setup"],"x-skills-preferred":[],"datePosted":"2026-04-24T16:06:20.618Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"France"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, JavaScript, Unix-like terminal, Git, Shell basics, Package managers, Environment setup"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_619d9b7e-15e"},"title":"FBS Agile Dev Team Member III - Software Development Engineer in Test (SDET)","description":"<p>We are seeking a highly skilled Software Development Engineer in Test (SDET) to join our Agile Dev Team Member III team. As an SDET, you will be responsible for designing, developing, and executing automated tests for our software applications. 
You will work closely with cross-functional teams to identify and prioritize testing needs, develop test plans, and execute tests to ensure high-quality software delivery.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Design, develop, and execute automated tests for software applications using various testing frameworks and tools</li>\n<li>Collaborate with cross-functional teams to identify and prioritize testing needs, develop test plans, and execute tests</li>\n<li>Develop and maintain test automation frameworks and scripts to ensure efficient and effective testing</li>\n<li>Identify and report defects, and work with development teams to resolve issues</li>\n<li>Participate in Agile development methodologies, including daily stand-ups, sprint planning, and retrospectives</li>\n<li>Stay up-to-date with emerging trends and technologies in software testing and development</li>\n</ul>\n<p>Requirements:</p>\n<ul>\n<li>Bachelor&#39;s degree in Computer Science, Information Technology, or related field</li>\n<li>3+ years of experience in software testing and development, with a focus on automated testing</li>\n<li>Strong knowledge of testing frameworks and tools, such as Selenium, Appium, and JUnit</li>\n<li>Experience with Agile development methodologies and version control systems, such as Git</li>\n<li>Strong problem-solving and analytical skills, with attention to detail and ability to troubleshoot complex issues</li>\n<li>Excellent communication and collaboration skills, with ability to work effectively with cross-functional teams</li>\n</ul>\n<p>Preferred Skills:</p>\n<ul>\n<li>Experience with cloud-based testing platforms, such as AWS or Azure</li>\n<li>Knowledge of DevOps practices and tools, such as Jenkins or Docker</li>\n<li>Familiarity with scripting languages, such as Python or Java</li>\n<li>Experience with test data management and test environment setup</li>\n<li>Certification in software testing or development, such as ISTQB or Scrum 
Master</li>\n</ul>\n<ul>\n<li>Experience Level: Mid</li>\n<li>Employment Type: Full-time</li>\n<li>Workplace Type: Hybrid</li>\n<li>Category: Engineering</li>\n<li>Industry: Technology</li>\n<li>Salary Range: Competitive salary and performance-based bonuses</li>\n<li>Salary Min: 80000</li>\n<li>Salary Max: 120000</li>\n<li>Salary Currency: USD</li>\n<li>Salary Period: year</li>\n<li>Required Skills: Test Automation, Automation Framework, CI/CD pipeline, Playwright, Python, AI, Cloud, Docker, Microservices</li>\n<li>Preferred Skills: Cloud-based testing platforms, DevOps practices and tools, Scripting languages, Test data management, Test environment setup</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_619d9b7e-15e","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Capgemini","sameAs":"https://www.capgemini.com/us-en/about-us/who-we-are/","logo":"https://logos.yubhub.co/capgemini.com.png"},"x-apply-url":"https://jobs.workable.com/view/dJWBdEoqnYoPVz96zuuKhm/hybrid-fbs-agile-dev-team-member-iii---software-development-engineer-in-test-(sdet)-in-pune-at-capgemini","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":"Competitive salary and performance-based bonuses","x-skills-required":["Test Automation","Automation Framework","CI/CD pipeline","Playwright","Python","AI","Cloud","Docker","Microservices"],"x-skills-preferred":["Cloud-based testing platforms","DevOps practices and tools","Scripting languages","Test data management","Test environment setup"],"datePosted":"2026-04-24T14:18:36.086Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Pune"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Test Automation, Automation Framework, CI/CD pipeline, Playwright, Python, AI, Cloud, Docker, Microservices, Cloud-based testing platforms, DevOps practices and tools, Scripting languages, Test data management, Test 
environment setup"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_037bb819-c51"},"title":"Data Quality Specialist","description":"<p>We&#39;re seeking highly motivated Data Quality Specialists with strong analytical skills and a keen eye for detail to join our Human Data Annotation team within the Science organisation.</p>\n<p>This is a hybrid quality reviewing and tooling role: you&#39;ll spend the majority of your time reviewing and auditing code annotations against rubrics to ensure data used for training and evaluating AI models meets a high bar, and the remainder building, maintaining, and troubleshooting the internal tooling that annotators rely on day-to-day.</p>\n<p>You&#39;ll collaborate closely with the annotators, technical program manager, and engineering stakeholders, and contribute to refining the guidelines and processes that shape how our data is produced.</p>\n<p><strong>Key Responsibilities</strong></p>\n<ul>\n<li>Generate and validate high-quality data annotations, based on guidelines and continuous feedback, for the development and evaluation of AI models</li>\n<li>Surface systemic issues, edge cases, and gaps in guidelines back to annotation operations and technical stakeholders</li>\n<li>Produce annotations yourself when needed, modeling the quality bar expected of the team</li>\n<li>Build and maintain internal tools and automation that streamline annotator workflows, such as visualization dashboards, batch configuration scripts, output management utilities, and similar</li>\n<li>Troubleshoot environment, tooling, and CLI/git issues for annotators on their local machines, liaising with IT and engineering as needed</li>\n</ul>\n<p><strong>About You</strong></p>\n<ul>\n<li>A degree in computer science, engineering, or a related field. 
Alternatively, 2 to 5 years of professional experience in software engineering, technical support, or developing tools</li>\n<li>Hands-on experience using code agents (e.g. Mistral’s vibe) in your own development workflow, and genuine interest in how they&#39;re evolving</li>\n<li>Proficient in at least one programming language (e.g. Python, JavaScript, or similar), with enough breadth to read and reason about code across a few core languages</li>\n<li>Able to apply consistent judgment against a rubric and surface edge cases, ambiguities, or gaps in guidelines</li>\n<li>Sustained focus and accuracy on detail-oriented, high-volume review work</li>\n<li>Comfortable working in a Unix-like terminal: shell basics, package managers, environment setup, and git workflows (branches, merges, resolving conflicts)</li>\n<li>Able to troubleshoot local development environment issues (dependencies, virtual environments, paths, permissions) across common operating systems</li>\n<li>Professional proficiency in English, with strong writing and comprehension skills</li>\n</ul>\n<p><strong>Nice to Have</strong></p>\n<ul>\n<li>Prior experience in data annotation for AI/ML, especially LLM training (SFT, RLHF, preference data), evals/benchmarks, or agentic data</li>\n<li>Experience building an annotation team through interviews and training</li>\n<li>Experience supporting technical users or troubleshooting developer environments (internal tools support, DevRel, teaching assistant for coding courses, etc.)</li>\n<li>Fluency across multiple programming languages, or domain depth in one of: frontend, backend, DevOps, MLOps, data engineering</li>\n<li>Familiarity with rubric-based evaluation concepts, inter-annotator agreement, or quality measurement for human-labeled data</li>\n<li>Experience developing, deploying, and managing internal tooling or automation scripts</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_037bb819-c51","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Mistral","sameAs":"https://mistral.com","logo":"https://logos.yubhub.co/mistral.com.png"},"x-apply-url":"https://jobs.lever.co/mistral/bd88179e-de69-4675-8a6c-74e2547a85ac","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":"Competitive cash salary and equity","x-skills-required":["Python","JavaScript","Unix-like terminal","Git","Shell basics","Package managers","Environment setup"],"x-skills-preferred":["Data annotation for AI/ML","LLM training","Evals/benchmarks","Agentic data","Frontend","Backend","DevOps","MLOps","Data engineering"],"datePosted":"2026-04-24T13:11:57.969Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"France"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, JavaScript, Unix-like terminal, Git, Shell basics, Package managers, Environment setup, Data annotation for AI/ML, LLM training, Evals/benchmarks, Agentic data, Frontend, Backend, DevOps, MLOps, Data engineering"}]}