<?xml version="1.0" encoding="UTF-8"?>
<source>
  <jobs>
    <job>
      <externalid>6d7fadcc-6fa</externalid>
      <Title>Data Scientist Computer Vision</Title>
<Description><![CDATA[<p>At Bayer, we&#39;re seeking a talented Data Scientist with deep learning and machine learning expertise focused on image-based data to help shape the future of agriculture. In this role, you&#39;ll join a dynamic team that supports the development of Bayer Crop Science&#39;s next-generation products by applying computer vision to automate critical processes across the Plant Biotechnology organization.</p>
<p>The primary responsibilities of this role are to:</p>
<p>Solve real agricultural problems using deep learning and AI across image and other data modalities, translating complex models into tangible business and scientific impact.</p>
<p>Design and implement end-to-end machine learning pipelines for computer vision use cases, including segmentation, classification, detection, and multi-task learning.</p>
<p>Prototype, evaluate, and iterate on cutting-edge architectures such as CNNs, Vision Transformers, and foundation and large-scale vision models, ensuring state-of-the-art performance.</p>
<p>Optimize models for accuracy, robustness, and inference efficiency, including experimentation with hyperparameters, compression, and deployment-oriented optimizations.</p>
<p>Independently build scalable data pipelines for training, validation, and evaluation, including data ingestion, augmentation strategies, and active learning loops.</p>
<p>Collaborate cross-functionally with product, data, and software engineering teams to integrate models into production systems and deliver reliable, maintainable solutions.</p>
<p>Contribute to MLOps practices, including model versioning, deployment, monitoring, and retraining workflows using modern tooling and cloud-based platforms.</p>
<p>Build strong cross-functional relationships and actively engage with the broader Data Science Community to share best practices, align on standards, and co-create innovative solutions.</p>
<p>Present clear, compelling, and validated stories about experiments, results, and recommendations to peers, senior management, and internal customers to drive strategic and operational decisions.</p>
<p>We seek a candidate who possesses the following:</p>
<p>M.S. with 2+ years of experience or Ph.D. in Computer Science, Electrical Engineering, or a related field with a focus on machine learning or computer vision.</p>
<p>Proficiency in Python and experience with deep learning frameworks such as PyTorch or TensorFlow.</p>
<p>Hands-on experience with modern computer vision architectures including models such as ResNet, UNet, DeepLab, YOLO, SegFormer, SAM, and Vision Transformers.</p>
<p>Strong background in handling large-scale datasets and creating custom datasets, for example using frameworks such as Hugging Face Datasets.</p>
<p>Solid understanding of core machine learning concepts including loss functions, regularization, optimization, and learning rate scheduling.</p>
<p>Experience developing and deploying models using cloud-based ML platforms such as AWS SageMaker.</p>
<p>Familiarity with Unix environments, including bash, file systems, and core utilities.</p>
<p>Strong engineering practices including use of Git, Docker, CI/CD pipelines, modular codebase design, and unit testing.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$109,370.40 - $164,055.60</Salaryrange>
      <Skills>Python, PyTorch, TensorFlow, ResNet, UNet, DeepLab, YOLO, SegFormer, SAM, Vision Transformers, Hugging Face Datasets, AWS SageMaker, Git, Docker, CI/CD pipelines, modular codebase design, unit testing</Skills>
      <Category>Engineering</Category>
      <Industry>Manufacturing</Industry>
      <Employername>Bayer</Employername>
      <Employerlogo>https://logos.yubhub.co/talent.bayer.com.png</Employerlogo>
      <Employerdescription>Bayer is a multinational pharmaceutical and life sciences company with a presence in over 100 countries.</Employerdescription>
      <Employerwebsite>https://talent.bayer.com</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>109370.40</Compensationmin>
      <Compensationmax>164055.60</Compensationmax>
      <Applyto>https://talent.bayer.com/careers/job/562949976908666</Applyto>
      <Location></Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>32b83135-974</externalid>
      <Title>Software Engineer, Data Infrastructure - Research</Title>
      <Description><![CDATA[<p><strong>Software Engineer, Data Infrastructure - Research</strong></p>
<p><strong>Location</strong></p>
<p>San Francisco</p>
<p><strong>Employment Type</strong></p>
<p>Full time</p>
<p><strong>Department</strong></p>
<p>Scaling</p>
<p><strong>Compensation</strong></p>
<ul>
<li>$250K – $380K • Offers Equity</li>
</ul>
<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>
<ul>
<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>
<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>
<li>401(k) retirement plan with employer match</li>
<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>
<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>
<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>
<li>Mental health and wellness support</li>
<li>Employer-paid basic life and disability coverage</li>
<li>Annual learning and development stipend to fuel your professional growth</li>
<li>Daily meals in our offices, and meal delivery credits as eligible</li>
<li>Relocation support for eligible employees</li>
<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided</li>
</ul>
<p>More details about our benefits are available to candidates during the hiring process.</p>
<p>This role is at-will and OpenAI reserves the right to modify base pay and other compensation components at any time based on individual performance, team or company results, or market conditions.</p>
<p><strong>About the Team</strong></p>
<p>The Workload team is responsible for designing and running OpenAI’s LLM training and inference infrastructure that powers frontier models at massive scale. Our systems unify how researchers train and serve models, abstracting away the complexity of performance, parallelism, and execution across vast GPU/accelerator fleets. By providing this foundation, the Workload team ensures that researchers can focus on advancing model capabilities while we handle the scale, efficiency, and reliability required to bring those models to life.</p>
<p><strong>About the Role</strong></p>
<p>We are looking for an engineer to design and implement the dataset infrastructure that powers OpenAI’s next-generation training stack. You will be responsible for building standardized dataset interfaces, scaling pipelines across thousands of GPUs, and proactively testing for performance bottlenecks. In this role, you will collaborate closely with multimodal researchers and other infra groups to ensure datasets are unified, efficient, and easy to consume.</p>
<p><strong>In this role, you will:</strong></p>
<ul>
<li>Design and maintain standardized dataset APIs, including for multimodal (MM) data that cannot fit in memory.</li>
<li>Build proactive testing and scale validation pipelines for dataset loading at GPU scale.</li>
<li>Collaborate with teammates to integrate datasets seamlessly into training and inference pipelines, ensuring smooth adoption and a great user experience.</li>
<li>Document and maintain dataset interfaces so they are discoverable, consistent, and easy for other teams to adopt.</li>
<li>Establish safeguards and validation systems to ensure datasets remain reproducible and unchanged once standardized.</li>
<li>Debug and resolve performance bottlenecks in distributed dataset loading (e.g., straggler systems slowing global training).</li>
<li>Provide visualization and inspection tools to surface errors, bugs, or bottlenecks in datasets.</li>
</ul>
<p><strong>You might thrive in this role if you:</strong></p>
<ul>
<li>Have strong engineering fundamentals with experience in distributed systems, data pipelines, or infrastructure.</li>
<li>Have experience building APIs, modular code, and scalable abstractions, while recognizing that abstractions ultimately serve their users and that UX is an important part of abstraction design.</li>
<li>Are comfortable debugging bottlenecks across large fleets of machines.</li>
<li>Take pride in building infrastructure that “just works,” and find joy in being the guardian of reliability and scale.</li>
<li>Are collaborative, humble, and excited to own a foundational (if not glamorous) part of the ML stack.</li>
</ul>
<p><strong>Bonus points if you:</strong></p>
<ul>
<li>Have background knowledge in data math, probability, or distributed data theory.</li>
<li>Have worked with GPU-scale distributed systems or dataset scaling for real-time data.</li>
</ul>
<p><strong>About OpenAI</strong></p>
<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$250K – $380K • Offers Equity</Salaryrange>
      <Skills>distributed systems, data pipelines, infrastructure, APIs, modular code, scalable abstractions, data math, probability, distributed data theory, GPU-scale distributed systems, dataset scaling</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>OpenAI</Employername>
      <Employerlogo>https://logos.yubhub.co/openai.com.png</Employerlogo>
      <Employerdescription>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. The company was founded in 2015 and has since grown to become a leading player in the field of artificial intelligence.</Employerdescription>
      <Employerwebsite>https://jobs.ashbyhq.com</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>250000</Compensationmin>
      <Compensationmax>380000</Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/openai/b7a2e30f-c5f6-4710-b53e-64d64bcce189</Applyto>
      <Location>San Francisco</Location>
      <Country>United States</Country>
      <Postedate>2026-03-06</Postedate>
    </job>
  </jobs>
</source>