{"version":"0.1","company":{"name":"YubHub","url":"https://yubhub.co","jobsUrl":"https://yubhub.co/jobs/skill/distributed-computing"},"x-facet":{"type":"skill","slug":"distributed-computing","display":"Distributed Computing","count":51},"x-feed-size-limit":100,"x-feed-sort":"enriched_at desc","x-feed-notice":"This feed contains at most 100 jobs (the most recently enriched). For the full corpus, use the paginated /stats/by-facet endpoint or /search.","x-generator":"yubhub-xml-generator","x-rights":"Free to redistribute with attribution: \"Data by YubHub (https://yubhub.co)\"","x-schema":"Each entry in `jobs` follows https://schema.org/JobPosting. YubHub-native raw fields carry `x-` prefix.","jobs":[{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_8697757e-7b4"},"title":"Lead Robotics Software Engineer","description":"<p>As a Lead Robotics Software Engineer on our Tactical Recon &amp; Strike team, you&#39;ll be at the forefront of cutting-edge autonomous systems development. You&#39;ll tackle diverse challenges in autonomy, systems integration, robotics, and networking, making critical engineering decisions that directly impact mission success.</p>\n<p>Your role will be pivotal in ensuring Anduril&#39;s products work seamlessly together to achieve a variety of crucial outcomes. You&#39;ll develop innovative solutions for complex robotics problems, balance pragmatic engineering trade-offs with mission-critical requirements, and collaborate across teams to integrate software with hardware systems.</p>\n<p>Contributing to the entire product lifecycle, from concept to deployment, you&#39;ll rapidly prototype and iterate on software solutions. We&#39;re looking for someone who thrives in a fast-paced environment and isn&#39;t afraid to tackle ambiguous problems. 
Your &#39;Whatever It Takes&#39; mindset will be key in executing tasks efficiently, scalably, and pragmatically, always keeping the mission at the forefront of your work.</p>\n<p>This role offers the opportunity to make a significant impact on next-generation defense technology, working with state-of-the-art robotics and autonomous systems. You&#39;ll be part of a team that values innovation, quick iteration, and delivering high-quality solutions that meet real-world needs.</p>\n<p>Must be eligible to obtain and maintain an active U.S. Secret security clearance. This position will be located at our office in Atlanta, GA (relocation benefits provided.)</p>\n<p>Key Responsibilities:</p>\n<ul>\n<li>Develop and maintain core robotics libraries, including frame transformations, targeting, and guidance systems, that will be utilized across all Anduril robotics platforms</li>\n</ul>\n<ul>\n<li>Lead the development and implementation of major features for our products, such as designing and building Software-in-the-Loop simulators for advanced systems like Altius</li>\n</ul>\n<ul>\n<li>Lead and mentor a group of software engineers to help drive team success and to successfully hit tight project deadlines</li>\n</ul>\n<ul>\n<li>Optimize performance of existing products, primarily focused on our Altius Drone product line</li>\n</ul>\n<ul>\n<li>Collaborate closely with hardware and manufacturing teams throughout the product development lifecycle, providing timely feedback to influence and enhance final hardware designs</li>\n</ul>\n<ul>\n<li>Troubleshoot and resolve complex issues in deployed systems, ensuring optimal performance in the field</li>\n</ul>\n<ul>\n<li>Contribute to the design and implementation of multi-agent coordination systems for UAVs</li>\n</ul>\n<ul>\n<li>Participate in the full software development lifecycle, from concept and design through testing and deployment</li>\n</ul>\n<ul>\n<li>Stay current with emerging technologies and industry trends, 
recommending and implementing innovations to improve our products and processes</li>\n</ul>\n<p>Required Qualifications:</p>\n<ul>\n<li>Bachelor&#39;s degree in Robotics, Computer Science, or related field</li>\n</ul>\n<ul>\n<li>7+ years of professional software development experience</li>\n</ul>\n<ul>\n<li>Experience as a lead of a small software engineering team</li>\n</ul>\n<ul>\n<li>Strong proficiency in C++ or Rust, with experience in Linux development environments</li>\n</ul>\n<ul>\n<li>Demonstrated expertise in data structures, algorithms, concurrency, and code optimization</li>\n</ul>\n<ul>\n<li>Proven experience troubleshooting and analyzing remotely deployed software systems</li>\n</ul>\n<ul>\n<li>Hands-on experience working with and testing electrical and mechanical systems</li>\n</ul>\n<ul>\n<li>Ability to collaborate effectively with cross-functional teams, including hardware and manufacturing</li>\n</ul>\n<ul>\n<li>Strong problem-solving skills and a &#39;Whatever It Takes&#39; mindset</li>\n</ul>\n<ul>\n<li>Excellent communication skills, both written and verbal</li>\n</ul>\n<ul>\n<li>Eligible to obtain and maintain an active U.S. Secret security clearance</li>\n</ul>\n<ul>\n<li>Willingness to relocate to Atlanta, GA</li>\n</ul>\n<p>Preferred Qualifications:</p>\n<ul>\n<li>Master&#39;s or Ph.D. 
in a relevant field (e.g., Robotics, Computer Science, Electrical Engineering)</li>\n</ul>\n<ul>\n<li>Expertise in one or more advanced robotics areas: motion planning, perception, localization, mapping, or controls</li>\n</ul>\n<ul>\n<li>Experience with performance optimization and metrics for complex robotic systems</li>\n</ul>\n<ul>\n<li>Proficiency in Python, Rust, and/or Go, in addition to C++</li>\n</ul>\n<ul>\n<li>Hands-on experience programming for embedded systems and physical devices</li>\n</ul>\n<ul>\n<li>Background in multi-agent coordination, particularly with UAVs</li>\n</ul>\n<ul>\n<li>Demonstrated ability to solve complex frame transformation problems (e.g., target localization, multi-degree-of-freedom robotic arms)</li>\n</ul>\n<ul>\n<li>Experience with real-time operating systems and distributed computing</li>\n</ul>\n<ul>\n<li>Familiarity with machine learning and AI applications in robotics</li>\n</ul>\n<ul>\n<li>Knowledge of sensor fusion techniques and implementation</li>\n</ul>\n<ul>\n<li>Understanding of aerodynamics and flight dynamics as applied to UAV systems</li>\n</ul>\n<ul>\n<li>Experience with simulation environments for robotics testing and development</li>\n</ul>\n<ul>\n<li>Track record of contributions to open-source robotics projects or relevant publications</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_8697757e-7b4","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anduril Industries","sameAs":"https://www.anduril.com/","logo":"https://logos.yubhub.co/anduril.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/andurilindustries/jobs/5033836007","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$190,000-$252,000 USD","x-skills-required":["C++","Rust","Linux development environments","Data 
structures","Algorithms","Concurrency","Code optimization","Troubleshooting","Analysis","Electrical and mechanical systems"],"x-skills-preferred":["Python","Go","Embedded systems","Physical devices","Multi-agent coordination","UAVs","Frame transformation","Real-time operating systems","Distributed computing","Machine learning","AI","Sensor fusion","Aerodynamics","Flight dynamics"],"datePosted":"2026-04-18T15:58:40.280Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Atlanta, Georgia, United States"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"C++, Rust, Linux development environments, Data structures, Algorithms, Concurrency, Code optimization, Troubleshooting, Analysis, Electrical and mechanical systems, Python, Go, Embedded systems, Physical devices, Multi-agent coordination, UAVs, Frame transformation, Real-time operating systems, Distributed computing, Machine learning, AI, Sensor fusion, Aerodynamics, Flight dynamics","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":190000,"maxValue":252000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_fdc6f0f9-900"},"title":"Resident Solutions Architect - Communications, Media, Entertainment & Games","description":"<p>As a Resident Solutions Architect in our Professional Services team, you will work with clients on short to medium term customer engagements on their big data challenges using the Databricks platform.</p>\n<p>You will provide data engineering, data science, and cloud technology projects which require integrating with client systems, training, and other technical tasks to help customers to get most value out of their data.</p>\n<p>RSAs are billable and know how to complete projects according to specification with excellent customer service.</p>\n<p>You will report to the 
regional Manager/Lead.</p>\n<p>The impact you will have:</p>\n<ul>\n<li>Work on a variety of impactful customer technical projects which may include designing and building reference architectures, creating how-to&#39;s and productionalizing customer use cases</li>\n<li>Work with engagement managers to scope variety of professional services work with input from the customer</li>\n<li>Guide strategic customers as they implement transformational big data projects, 3rd party migrations, including end-to-end design, build and deployment of industry-leading big data and AI applications</li>\n<li>Consult on architecture and design; bootstrap or implement customer projects which leads to a customers&#39; successful understanding, evaluation and adoption of Databricks.</li>\n<li>Provide an escalated level of support for customer operational issues.</li>\n<li>Work with the Databricks technical team, Project Manager, Architect and Customer team to ensure the technical components of the engagement are delivered to meet customer&#39;s needs.</li>\n<li>Work with Engineering and Databricks Customer Support to provide product and implementation feedback and to guide rapid resolution for engagement specific product and support issues.</li>\n</ul>\n<p>What we look for:</p>\n<ul>\n<li>6+ years experience in data engineering, data platforms &amp; analytics</li>\n<li>Comfortable writing code in either Python or Scala</li>\n<li>Working knowledge of two or more common Cloud ecosystems (AWS, Azure, GCP) with expertise in at least one</li>\n<li>Deep experience with distributed computing with Apache Spark™ and knowledge of Spark runtime internals</li>\n<li>Familiarity with CI/CD for production deployments</li>\n<li>Working knowledge of MLOps</li>\n<li>Design and deployment of performant end-to-end data architectures</li>\n<li>Experience with technical project delivery - managing scope and timelines.</li>\n<li>Documentation and white-boarding skills.</li>\n<li>Experience working with clients 
and managing conflicts.</li>\n<li>Build skills in technical areas which support the deployment and integration of Databricks-based solutions to complete customer projects.</li>\n<li>Travel to customers 20% of the time</li>\n</ul>\n<p>Databricks Certification</p>\n<p>Pay Range Transparency</p>\n<p>Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role is listed below and represents the expected base salary range for non-commissionable roles or on-target earnings for commissionable roles.</p>\n<p>Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location.</p>\n<p>Based on the factors above, Databricks anticipates utilizing the full width of the range.</p>\n<p>The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above.</p>\n<p>For more information regarding which range your location is in visit our page here.</p>\n<p>Zone 1 Pay Range $180,656-$248,360 USD</p>\n<p>Zone 2 Pay Range $180,656-$248,360 USD</p>\n<p>Zone 3 Pay Range $180,656-$248,360 USD</p>\n<p>Zone 4 Pay Range $180,656-$248,360 USD</p>","url":"https://yubhub.co/jobs/job_fdc6f0f9-900","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8461168002","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$180,656-$248,360 USD","x-skills-required":["data engineering","data science","cloud technology","Apache Spark","distributed 
computing","CI/CD","MLOps","performant end-to-end data architectures","technical project delivery","documentation and white-boarding skills","client management"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:57:29.214Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Los Angeles, California"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"data engineering, data science, cloud technology, Apache Spark, distributed computing, CI/CD, MLOps, performant end-to-end data architectures, technical project delivery, documentation and white-boarding skills, client management","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":180656,"maxValue":248360,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_0a7cad02-cd5"},"title":"Resident Solutions Architect - Manufacturing","description":"<p>As a Resident Solutions Architect (RSA) on our Professional Services team, you will work with customers on short to medium term customer engagements on their big data challenges using the Databricks platform.</p>\n<p>You will provide data engineering, data science, and cloud technology projects which require integrating with client systems, training, and other technical tasks to help customers to get most value out of their data.</p>\n<p>RSAs are billable and know how to complete projects according to specification with excellent customer service.</p>\n<p>The impact you will have:</p>\n<ul>\n<li>Handle a variety of impactful customer technical projects which may include designing and building reference architectures, creating how-to&#39;s and productionalizing customer use cases</li>\n</ul>\n<ul>\n<li>Work with engagement managers to scope variety of professional services work with input from the customer</li>\n</ul>\n<ul>\n<li>Guide strategic customers as 
they implement transformational big data projects, 3rd party migrations, including end-to-end design, build and deployment of industry-leading big data and AI applications</li>\n</ul>\n<ul>\n<li>Consult on architecture and design; bootstrap or implement customer projects which leads to a customers&#39; successful understanding, evaluation and adoption of Databricks.</li>\n</ul>\n<ul>\n<li>Provide an escalated level of support for customer operational issues</li>\n</ul>\n<ul>\n<li>Collaborate with the Databricks Technical, Project Manager, Architect and Customer teams to ensure the technical components of the engagement are delivered to meet customer&#39;s needs</li>\n</ul>\n<ul>\n<li>Work with Engineering and Databricks Customer Support to provide product and implementation feedback and to guide rapid resolution for engagement specific product and support issues</li>\n</ul>\n<p>What we look for:</p>\n<ul>\n<li>6+ years experience in data engineering, data platforms &amp; analytics</li>\n</ul>\n<ul>\n<li>Comfortable writing code in either Python or Scala</li>\n</ul>\n<ul>\n<li>Working knowledge of two or more common Cloud ecosystems (AWS, Azure, GCP) with expertise in at least one</li>\n</ul>\n<ul>\n<li>Deep experience with distributed computing with Apache Spark™ and knowledge of Spark runtime internals</li>\n</ul>\n<ul>\n<li>Familiarity with CI/CD for production deployments</li>\n</ul>\n<ul>\n<li>Working knowledge of MLOps</li>\n</ul>\n<ul>\n<li>Design and deployment of performant end-to-end data architectures</li>\n</ul>\n<ul>\n<li>Experience with technical project delivery - managing scope and timelines</li>\n</ul>\n<ul>\n<li>Documentation and white-boarding skills</li>\n</ul>\n<ul>\n<li>Experience working with clients and managing conflicts</li>\n</ul>\n<ul>\n<li>Build skills in technical areas which support the deployment and integration of Databricks-based solutions to complete customer projects</li>\n</ul>\n<ul>\n<li>Ability to travel up to 30% when 
needed</li>\n</ul>\n<p>Pay Range Transparency</p>\n<p>Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role is listed below and represents the expected base salary range for non-commissionable roles or on-target earnings for commissionable roles.</p>\n<p>Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location.</p>\n<p>Based on the factors above, Databricks anticipates utilizing the full width of the range.</p>\n<p>The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above.</p>\n<p>For more information regarding which range your location is in visit our page here.</p>\n<p>Zone 1 Pay Range $180,656-$248,360 USD</p>\n<p>Zone 2 Pay Range $180,656-$248,360 USD</p>\n<p>Zone 3 Pay Range $180,656-$248,360 USD</p>\n<p>Zone 4 Pay Range $180,656-$248,360 USD</p>","url":"https://yubhub.co/jobs/job_0a7cad02-cd5","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8494155002","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$180,656-$248,360 USD","x-skills-required":["data engineering","data science","cloud technology","Apache Spark","CI/CD","MLOps","distributed computing","Python","Scala","AWS","Azure","GCP"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:55:20.115Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Philadelphia, 
Pennsylvania"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"data engineering, data science, cloud technology, Apache Spark, CI/CD, MLOps, distributed computing, Python, Scala, AWS, Azure, GCP","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":180656,"maxValue":248360,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_d2b1604a-c20"},"title":"Applied AI Engineer","description":"<p>We are seeking an Applied AI Engineer to join our team at Komodo Health. As an Applied AI Engineer, you will design and deploy end-to-end AI solutions that power real products and internal tools. You&#39;ll work at the intersection of applied research, engineering, and product development, bringing modern AI techniques into scalable production systems.</p>\n<p>You will collaborate closely with product, platform, and data teams to build AI capabilities that transform how healthcare data is explored, understood, and operationalized. Your work will involve designing, building, and deploying agent-based AI pipelines integrated into real customer-facing products, as well as building internal AI productivity tools that accelerate engineering workflows across Komodo.</p>\n<p>In this role, you will have the opportunity to work on a wide range of projects, from developing AI-powered applications to integrating AI capabilities across backend services and product interfaces. You will also contribute reusable patterns to Komodo&#39;s AI infrastructure and internal tooling ecosystem.</p>\n<p>To be successful in this role, you will need to have experience building production-grade AI systems or AI-powered applications, strong proficiency in Python, and experience working with LLMs, prompt engineering, or agent-based architectures. 
You will also need to be able to integrate AI capabilities across backend services and product interfaces, and have experience designing evaluation frameworks, testing strategies, or monitoring systems for AI features.</p>\n<p>If you are passionate about using AI to drive innovation and improvement in healthcare, and have the skills and experience to succeed in this role, we encourage you to apply.</p>\n<p><strong>Key Responsibilities</strong></p>\n<ul>\n<li>Design and deploy end-to-end AI solutions that power real products and internal tools</li>\n<li>Collaborate closely with product, platform, and data teams to build AI capabilities that transform how healthcare data is explored, understood, and operationalized</li>\n<li>Develop agent-based AI pipelines integrated into real customer-facing products</li>\n<li>Build internal AI productivity tools that accelerate engineering workflows across Komodo</li>\n<li>Contribute reusable patterns to Komodo&#39;s AI infrastructure and internal tooling ecosystem</li>\n</ul>\n<p><strong>Requirements</strong></p>\n<ul>\n<li>Experience building production-grade AI systems or AI-powered applications</li>\n<li>Strong proficiency in Python</li>\n<li>Experience working with LLMs, prompt engineering, or agent-based architectures</li>\n<li>Ability to integrate AI capabilities across backend services and product interfaces</li>\n<li>Experience designing evaluation frameworks, testing strategies, or monitoring systems for AI features</li>\n</ul>\n<p><strong>Nice to Have</strong></p>\n<ul>\n<li>Healthcare data expertise</li>\n<li>Experience with distributed computing frameworks (e.g., Spark, Snowflake, Databricks) for large-scale data processing</li>\n</ul>\n<p><strong>Location</strong></p>\n<p>This role is located in San Francisco, California, and is available for remote work.</p>\n<p><strong>Benefits</strong></p>\n<ul>\n<li>Comprehensive health, dental, and vision insurance</li>\n<li>Flexible time off and holidays</li>\n<li>401(k) with 
company match</li>\n<li>Disability insurance and life insurance</li>\n<li>Leaves of absence in accordance with applicable state and local laws and regulations and company policy</li>\n</ul>\n<p><strong>Equal Opportunity Employer</strong></p>\n<p>Komodo Health is an equal opportunity employer and welcomes applications from all qualified candidates. We are committed to diversity and inclusion in the workplace and strive to create a work environment that is free from discrimination and harassment.</p>","url":"https://yubhub.co/jobs/job_d2b1604a-c20","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Komodo Health","sameAs":"https://www.komodohealth.com/","logo":"https://logos.yubhub.co/komodohealth.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/komodohealth/jobs/8512178002","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":"$191,000 - $224,000 per year","x-skills-required":["Python","LLMs","Prompt Engineering","Agent-Based Architectures","Distributed Computing Frameworks","Spark","Snowflake","Databricks"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:50:46.591Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Healthcare","skills":"Python, LLMs, Prompt Engineering, Agent-Based Architectures, Distributed Computing Frameworks, Spark, Snowflake, Databricks","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":191000,"maxValue":224000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_08a5f496-732"},"title":"Robotics Software Engineer","description":"<p>As a Robotics Software Engineer 
on our Tactical Recon &amp; Strike team, you&#39;ll be at the forefront of cutting-edge autonomous systems development. You&#39;ll tackle diverse challenges in autonomy, systems integration, robotics, and networking, making critical engineering decisions that directly impact mission success.</p>\n<p>Your role will be pivotal in ensuring Anduril&#39;s products work seamlessly together to achieve a variety of crucial outcomes. You&#39;ll develop innovative solutions for complex robotics problems, balance pragmatic engineering trade-offs with mission-critical requirements, and collaborate across teams to integrate software with hardware systems.</p>\n<p>Contributing to the entire product lifecycle, from concept to deployment, you&#39;ll rapidly prototype and iterate on software solutions. We&#39;re looking for someone who thrives in a fast-paced environment and isn&#39;t afraid to tackle ambiguous problems. Your &#39;Whatever It Takes&#39; mindset will be key in executing tasks efficiently, scalably, and pragmatically, always keeping the mission at the forefront of your work.</p>\n<p>This role offers the opportunity to make a significant impact on next-generation defence technology, working with state-of-the-art robotics and autonomous systems. You&#39;ll be part of a team that values innovation, quick iteration, and delivering high-quality solutions that meet real-world needs.</p>\n<p>Must be eligible to obtain and maintain an active U.S. Secret security clearance. 
This position will be located at our office in Atlanta, GA (relocation benefits provided.)</p>\n<p>Key Responsibilities:</p>\n<ul>\n<li>Develop and maintain core robotics libraries, including frame transformations, targeting, and guidance systems, that will be utilized across all Anduril robotics platforms</li>\n</ul>\n<ul>\n<li>Lead the development and implementation of major features for our products, such as designing and building Software-in-the-Loop simulators for advanced systems like Altius</li>\n</ul>\n<ul>\n<li>Optimise performance of existing products, primarily focused on our Altius Drone product line</li>\n</ul>\n<ul>\n<li>Collaborate closely with hardware and manufacturing teams throughout the product development lifecycle, providing timely feedback to influence and enhance final hardware designs</li>\n</ul>\n<ul>\n<li>Troubleshoot and resolve complex issues in deployed systems, ensuring optimal performance in the field</li>\n</ul>\n<ul>\n<li>Contribute to the design and implementation of multi-agent coordination systems for UAVs</li>\n</ul>\n<ul>\n<li>Participate in the full software development lifecycle, from concept and design through testing and deployment</li>\n</ul>\n<ul>\n<li>Stay current with emerging technologies and industry trends, recommending and implementing innovations to improve our products and processes</li>\n</ul>\n<p>Required Qualifications:</p>\n<ul>\n<li>Bachelor&#39;s degree in Robotics, Computer Science, or related field</li>\n</ul>\n<ul>\n<li>3+ years of professional software development experience</li>\n</ul>\n<ul>\n<li>Strong proficiency in C++ or Rust, with experience in Linux development environments</li>\n</ul>\n<ul>\n<li>Demonstrated expertise in data structures, algorithms, concurrency, and code optimisation</li>\n</ul>\n<ul>\n<li>Proven experience troubleshooting and analysing remotely deployed software systems</li>\n</ul>\n<ul>\n<li>Hands-on experience working with and testing electrical and mechanical 
systems</li>\n</ul>\n<ul>\n<li>Ability to collaborate effectively with cross-functional teams, including hardware and manufacturing</li>\n</ul>\n<ul>\n<li>Strong problem-solving skills and a &#39;Whatever It Takes&#39; mindset</li>\n</ul>\n<ul>\n<li>Excellent communication skills, both written and verbal</li>\n</ul>\n<ul>\n<li>Eligible to obtain and maintain an active U.S. Secret security clearance</li>\n</ul>\n<ul>\n<li>Willingness to relocate to Atlanta, GA</li>\n</ul>\n<p>Preferred Qualifications:</p>\n<ul>\n<li>Master&#39;s or Ph.D. in a relevant field (e.g., Robotics, Computer Science, Electrical Engineering)</li>\n</ul>\n<ul>\n<li>Expertise in one or more advanced robotics areas: motion planning, perception, localisation, mapping, or controls</li>\n</ul>\n<ul>\n<li>Experience with performance optimisation and metrics for complex robotic systems</li>\n</ul>\n<ul>\n<li>Proficiency in Python, Rust, and/or Go, in addition to C++</li>\n</ul>\n<ul>\n<li>Hands-on experience programming for embedded systems and physical devices</li>\n</ul>\n<ul>\n<li>Background in multi-agent coordination, particularly with UAVs</li>\n</ul>\n<ul>\n<li>Demonstrated ability to solve complex frame transformation problems (e.g., target localisation, multi-degree-of-freedom robotic arms)</li>\n</ul>\n<ul>\n<li>Experience with real-time operating systems and distributed computing</li>\n</ul>\n<ul>\n<li>Familiarity with machine learning and AI applications in robotics</li>\n</ul>\n<ul>\n<li>Knowledge of sensor fusion techniques and implementation</li>\n</ul>\n<ul>\n<li>Understanding of aerodynamics and flight dynamics as applied to UAV systems</li>\n</ul>\n<ul>\n<li>Experience with simulation environments for robotics testing and development</li>\n</ul>\n<ul>\n<li>Track record of contributions to open-source robotics projects or relevant publications</li>\n</ul>","url":"https://yubhub.co/jobs/job_08a5f496-732","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anduril Industries","sameAs":"https://www.anduril.com/","logo":"https://logos.yubhub.co/anduril.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/andurilindustries/jobs/5078772007","x-work-arrangement":"onsite","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":"$165,000-$218,000 USD","x-skills-required":["C++","Rust","Linux development environments","Data structures","Algorithms","Concurrency","Code optimisation","Troubleshooting","Analysis","Electrical and mechanical systems","Collaboration","Problem-solving","Communication"],"x-skills-preferred":["Python","Go","Embedded systems","Physical devices","Multi-agent coordination","Motion planning","Perception","Localisation","Mapping","Controls","Performance optimisation","Real-time operating systems","Distributed computing","Machine learning","AI applications","Sensor fusion","Aerodynamics","Flight dynamics","Simulation environments"],"datePosted":"2026-04-18T15:49:37.735Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Atlanta, Georgia, United States"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"C++, Rust, Linux development environments, Data structures, Algorithms, Concurrency, Code optimisation, Troubleshooting, Analysis, Electrical and mechanical systems, Collaboration, Problem-solving, Communication, Python, Go, Embedded systems, Physical devices, Multi-agent coordination, Motion planning, Perception, Localisation, Mapping, Controls, Performance optimisation, Real-time operating systems, Distributed computing, Machine learning, AI applications, Sensor fusion, Aerodynamics, Flight dynamics, Simulation 
environments","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":165000,"maxValue":218000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_bac99a46-7f5"},"title":"Resident Solutions Architect - Communications, Media, Entertainment & Games","description":"<p>As a Resident Solutions Architect in our Professional Services team, you will work with clients on short- to medium-term customer engagements on their big data challenges using the Databricks platform.</p>\n<p>You will deliver data engineering, data science, and cloud technology projects that require integrating with client systems, training, and other technical tasks to help customers get the most value out of their data.</p>\n<p>RSAs are billable and know how to complete projects according to specification with excellent customer service.</p>\n<p>You will report to the regional Manager/Lead.</p>\n<p>The impact you will have:</p>\n<ul>\n<li>You will work on a variety of impactful customer technical projects which may include designing and building reference architectures, creating how-to&#39;s and productionalizing customer use cases</li>\n</ul>\n<ul>\n<li>Work with engagement managers to scope a variety of professional services work with input from the customer</li>\n</ul>\n<ul>\n<li>Guide strategic customers as they implement transformational big data projects and 3rd party migrations, including end-to-end design, build and deployment of industry-leading big data and AI applications</li>\n</ul>\n<ul>\n<li>Consult on architecture and design; bootstrap or implement customer projects which lead to a customer&#39;s successful understanding, evaluation and adoption of Databricks.</li>\n</ul>\n<ul>\n<li>Provide an escalated level of support for customer operational issues.</li>\n</ul>\n<ul>\n<li>You will work with the Databricks technical team, Project Manager, Architect and 
Customer team to ensure the technical components of the engagement are delivered to meet customer&#39;s needs.</li>\n</ul>\n<ul>\n<li>Work with Engineering and Databricks Customer Support to provide product and implementation feedback and to guide rapid resolution for engagement specific product and support issues.</li>\n</ul>\n<p>What we look for:</p>\n<ul>\n<li>6+ years experience in data engineering, data platforms &amp; analytics</li>\n</ul>\n<ul>\n<li>Comfortable writing code in either Python or Scala</li>\n</ul>\n<ul>\n<li>Working knowledge of two or more common Cloud ecosystems (AWS, Azure, GCP) with expertise in at least one</li>\n</ul>\n<ul>\n<li>Deep experience with distributed computing with Apache Spark™ and knowledge of Spark runtime internals</li>\n</ul>\n<ul>\n<li>Familiarity with CI/CD for production deployments</li>\n</ul>\n<ul>\n<li>Working knowledge of MLOps</li>\n</ul>\n<ul>\n<li>Design and deployment of performant end-to-end data architectures</li>\n</ul>\n<ul>\n<li>Experience with technical project delivery - managing scope and timelines.</li>\n</ul>\n<ul>\n<li>Documentation and white-boarding skills.</li>\n</ul>\n<ul>\n<li>Experience working with clients and managing conflicts.</li>\n</ul>\n<ul>\n<li>Build skills in technical areas which support the deployment and integration of Databricks-based solutions to complete customer projects.</li>\n</ul>\n<ul>\n<li>Travel to customers 20% of the time</li>\n</ul>\n<p>Databricks Certification</p>\n<p>Pay Range Transparency</p>\n<p>Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role is listed below and represents the expected base salary range for non-commissionable roles or on-target earnings for commissionable roles. Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location. 
Based on the factors above, Databricks anticipates utilizing the full width of the range. The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above. For more information regarding which range your location is in, visit our page here.</p>\n<p>Zone 1 Pay Range $180,656-$248,360 USD</p>\n<p>Zone 2 Pay Range $180,656-$248,360 USD</p>\n<p>Zone 3 Pay Range $180,656-$248,360 USD</p>\n<p>Zone 4 Pay Range $180,656-$248,360 USD</p>","url":"https://yubhub.co/jobs/job_bac99a46-7f5","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8461243002","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$180,656-$248,360 USD","x-skills-required":["data engineering","data science","cloud technology","Apache Spark","CI/CD","MLOps","distributed computing","Python","Scala","AWS","Azure","GCP"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:49:01.745Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Denver, Colorado"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"data engineering, data science, cloud technology, Apache Spark, CI/CD, MLOps, distributed computing, Python, Scala, AWS, Azure, GCP","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":180656,"maxValue":248360,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_087e2e06-4fb"},"title":"Staff Machine Learning Engineer, Ads Auction (Ads Marketplace 
Quality)","description":"<p>We&#39;re looking for a Staff Machine Learning Engineer to join our Ads Marketplace Quality team. As a key member of the team, you will be responsible for developing and executing a vision to improve our Ads Marketplace at Reddit. You will develop a deep understanding of our marketplace dynamics and identify areas of improvement by getting to the bottom of the data, then design, implement, and ship algorithms to production that improve our ads marketplace efficiency.</p>\n<p>In this role, you will specialize in improving and optimizing our ads auction and pricing mechanism, which will have a direct impact on the value delivered to both our advertisers and users. You will also have the opportunity to work on other org-wide strategic initiatives such as supply optimization and ad relevance, where you will drive and execute on Reddit’s vision to transform Reddit into an advertising platform that shows the right ads to the right users at the right time in the right context.</p>\n<p>As a Staff Machine Learning Engineer in the Ads Marketplace Quality team, you will be an industry technical leader with domain knowledge in ads marketplace dynamics, auction and pricing. You will research, formulate, and execute on our mission to build end-to-end algorithmic solutions and deliver value to all three sides of our marketplace.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Lead and oversee the strategy development, quarterly planning and day-to-day execution of initiatives related to ads marketplace, auction and pricing.</li>\n<li>Proactively further our understanding of marketplace dynamics and develop algorithms to improve the efficiency and effectiveness of our ads marketplace, auction and pricing.</li>\n<li>Oversee end-to-end ML workflows, from data ingestion and feature engineering to model training, evaluation, and deployment, that optimize the ads marketplace efficiency.</li>\n<li>Be a mentor, lead both junior and senior engineers in 
implementing technical designs and reviews, fostering a culture of innovation, technical excellence, and knowledge sharing across the organization.</li>\n<li>Be a cross-functional advocate for the team, collaborate with cross-functional teams (e.g., product management, data science, PMM, Sales, etc.) to innovate and build products.</li>\n</ul>\n<p>Required Qualifications:</p>\n<ul>\n<li>8+ years of experience with industry-level product development, with at least 5 years focused on the data-driven marketplace-optimization problem space at scale.</li>\n<li>Strong knowledge of ads marketplace optimization. Demonstrated experience architecting ads marketplace design, improving and optimizing ads auction and pricing mechanisms.</li>\n<li>Solid understanding of large-scale data processing, distributed computing, and data infrastructure (e.g., Spark, Kafka, Beam, Flink).</li>\n<li>Proficiency in machine learning frameworks (e.g., TensorFlow, PyTorch) and libraries for feature engineering, model training, and inference.</li>\n<li>Proficiency with programming languages (Java, Python, Golang, C++, or similar) and statistical analysis.</li>\n<li>Proven technical leadership in cross-functional settings, driving architectural decisions and influencing stakeholders (product, data science, privacy, legal).</li>\n<li>Excellent communication, mentoring, and collaboration skills to align teams on a long-term vision for ads marketplace optimization.</li>\n</ul>\n<p>Benefits:</p>\n<ul>\n<li>Comprehensive Healthcare Benefits</li>\n<li>401k Matching</li>\n<li>Workspace benefits for your home office</li>\n<li>Personal &amp; Professional development funds</li>\n<li>Family Planning Support</li>\n<li>Flexible Vacation (please use them!) 
&amp; Reddit Global Wellness Days</li>\n<li>4+ months paid Parental Leave</li>\n<li>Paid Volunteer time off</li>\n</ul>","url":"https://yubhub.co/jobs/job_087e2e06-4fb","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Reddit","sameAs":"https://www.redditinc.com","logo":"https://logos.yubhub.co/redditinc.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/reddit/jobs/7181821","x-work-arrangement":"remote","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"$230,000-$322,000 USD","x-skills-required":["machine learning","ads marketplace optimization","large-scale data processing","distributed computing","data infrastructure","Spark","Kafka","Beam","Flink","TensorFlow","PyTorch","feature engineering","model training","inference","programming languages","statistical analysis","technical leadership","cross-functional settings","architectural decisions","influencing stakeholders"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:45:11.272Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Remote - United States"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"machine learning, ads marketplace optimization, large-scale data processing, distributed computing, data infrastructure, Spark, Kafka, Beam, Flink, TensorFlow, PyTorch, feature engineering, model training, inference, programming languages, statistical analysis, technical leadership, cross-functional settings, architectural decisions, influencing 
stakeholders","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":230000,"maxValue":322000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_61b49b86-6c8"},"title":"Resident Solutions Architect - Manufacturing","description":"<p>As a Resident Solutions Architect (RSA) on our Professional Services team, you will work with customers on short- to medium-term customer engagements on their big data challenges using the Databricks platform.</p>\n<p>You will deliver data engineering, data science, and cloud technology projects that require integrating with client systems, training, and other technical tasks to help customers get the most value out of their data.</p>\n<p>RSAs are billable and know how to complete projects according to specification with excellent customer service.</p>\n<p>The impact you will have:</p>\n<ul>\n<li>Handle a variety of impactful customer technical projects which may include designing and building reference architectures, creating how-to&#39;s and productionalizing customer use cases</li>\n</ul>\n<ul>\n<li>Work with engagement managers to scope a variety of professional services work with input from the customer</li>\n</ul>\n<ul>\n<li>Guide strategic customers as they implement transformational big data projects and 3rd party migrations, including end-to-end design, build and deployment of industry-leading big data and AI applications</li>\n</ul>\n<ul>\n<li>Consult on architecture and design; bootstrap or implement customer projects which lead to a customer&#39;s successful understanding, evaluation and adoption of Databricks.</li>\n</ul>\n<ul>\n<li>Provide an escalated level of support for customer operational issues</li>\n</ul>\n<ul>\n<li>Collaborate with the Databricks Technical, Project Manager, Architect and Customer teams to ensure the technical components of the engagement are delivered to meet 
customer&#39;s needs</li>\n</ul>\n<ul>\n<li>Work with Engineering and Databricks Customer Support to provide product and implementation feedback and to guide rapid resolution for engagement specific product and support issues</li>\n</ul>\n<p>You will report to the regional Manager/Lead.</p>\n<p>What we look for:</p>\n<ul>\n<li>6+ years experience in data engineering, data platforms &amp; analytics</li>\n</ul>\n<ul>\n<li>Comfortable writing code in either Python or Scala</li>\n</ul>\n<ul>\n<li>Working knowledge of two or more common Cloud ecosystems (AWS, Azure, GCP) with expertise in at least one</li>\n</ul>\n<ul>\n<li>Deep experience with distributed computing with Apache Spark™ and knowledge of Spark runtime internals</li>\n</ul>\n<ul>\n<li>Familiarity with CI/CD for production deployments</li>\n</ul>\n<ul>\n<li>Working knowledge of MLOps</li>\n</ul>\n<ul>\n<li>Design and deployment of performant end-to-end data architectures</li>\n</ul>\n<ul>\n<li>Experience with technical project delivery - managing scope and timelines</li>\n</ul>\n<ul>\n<li>Documentation and white-boarding skills</li>\n</ul>\n<ul>\n<li>Experience working with clients and managing conflicts</li>\n</ul>\n<ul>\n<li>Build skills in technical areas which support the deployment and integration of Databricks-based solutions to complete customer projects</li>\n</ul>\n<ul>\n<li>Ability to travel up to 30% when needed</li>\n</ul>\n<p>Pay Range Transparency</p>\n<p>Databricks is committed to fair and equitable compensation practices. 
The pay range(s) for this role is listed below and represents the expected base salary range for non-commissionable roles or on-target earnings for commissionable roles.</p>\n<p>Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location.</p>\n<p>Based on the factors above, Databricks anticipates utilizing the full width of the range.</p>\n<p>The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above.</p>\n<p>For more information regarding which range your location is in, visit our page here.</p>\n<p>Zone 1 Pay Range $180,656-$248,360 USD</p>\n<p>Zone 2 Pay Range $180,656-$248,360 USD</p>\n<p>Zone 3 Pay Range $180,656-$248,360 USD</p>\n<p>Zone 4 Pay Range $180,656-$248,360 USD</p>","url":"https://yubhub.co/jobs/job_61b49b86-6c8","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8341313002","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$180,656-$248,360 USD","x-skills-required":["data engineering","data science","cloud technology","Apache Spark","CI/CD","MLOps","distributed computing","Python","Scala","AWS","Azure","GCP"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:44:54.724Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"New York City, New York"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"data engineering, data science, cloud technology, Apache 
Spark, CI/CD, MLOps, distributed computing, Python, Scala, AWS, Azure, GCP","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":180656,"maxValue":248360,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_e850d882-42f"},"title":"Research Engineer, Production Model Post-Training","description":"<p>As a Research Engineer on our Post-Training team, you&#39;ll work at the intersection of cutting-edge research and production engineering, implementing, scaling, and improving post-training techniques like Constitutional AI, RLHF, and other alignment methodologies.</p>\n<p>You&#39;ll train our base models through the complete post-training stack to deliver the production Claude models that users interact with.</p>\n<p>Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</p>\n<p>We conduct all interviews in Python, and this role may require responding to incidents on short notice, including on weekends.</p>\n<p>Responsibilities:</p>\n<p>Implement and optimize post-training techniques at scale on frontier models</p>\n<p>Conduct research to develop and optimize post-training recipes that directly improve production model quality</p>\n<p>Design, build, and run robust, efficient pipelines for model fine-tuning and evaluation</p>\n<p>Develop tools to measure and improve model performance across various dimensions</p>\n<p>Collaborate with research teams to translate emerging techniques into production-ready implementations</p>\n<p>Debug complex issues in training pipelines and model behavior</p>\n<p>Help establish best practices for reliable, reproducible model post-training</p>\n<p>You may be a good fit if you:</p>\n<p>Thrive in controlled chaos and are energized, rather than overwhelmed, when juggling multiple urgent 
priorities</p>\n<p>Adapt quickly to changing priorities</p>\n<p>Maintain clarity when debugging complex, time-sensitive issues</p>\n<p>Have strong software engineering skills with experience building complex ML systems</p>\n<p>Are comfortable working with large-scale distributed systems and high-performance computing</p>\n<p>Have experience with training, fine-tuning, or evaluating large language models</p>\n<p>Can balance research exploration with engineering rigor and operational reliability</p>\n<p>Are adept at analyzing and debugging model training processes</p>\n<p>Enjoy collaborating across research and engineering disciplines</p>\n<p>Can navigate ambiguity and make progress in fast-moving research environments</p>\n<p>Strong candidates may also:</p>\n<p>Have experience with LLMs</p>\n<p>Have a keen interest in AI safety and responsible deployment</p>\n<p>We welcome candidates at various experience levels, with a preference for senior engineers who have hands-on experience with frontier AI systems.</p>\n<p>However, proficiency in Python, deep learning frameworks, and distributed computing is required for this role.</p>\n<p>The annual compensation range for this role is $350,000-$500,000 USD.</p>","url":"https://yubhub.co/jobs/job_e850d882-42f","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com/","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/4613592008","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$350,000-$500,000 USD","x-skills-required":["Python","Deep learning frameworks","Distributed computing","ML systems","Large-scale distributed systems","High-performance computing","Training, fine-tuning, or evaluating large language 
models"],"x-skills-preferred":["LLMs","AI safety and responsible deployment"],"datePosted":"2026-04-18T15:43:26.573Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA | New York City, NY | Seattle, WA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, Deep learning frameworks, Distributed computing, ML systems, Large-scale distributed systems, High-performance computing, Training, fine-tuning, or evaluating large language models, LLMs, AI safety and responsible deployment","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":350000,"maxValue":500000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_2a2686d2-290"},"title":"Staff Analytics Engineer","description":"<p>At Twilio, we&#39;re shaping the future of communications, all from the comfort of our homes. We deliver innovative solutions to hundreds of thousands of businesses and empower millions of developers worldwide to craft personalized customer experiences.</p>\n<p>Our Data Science and Analytics team seeks to empower R&amp;D to make data-backed decisions that accelerate innovation and improve product performance. 
You will work closely within our team and across Product &amp; Engineering to design and maintain a robust analytics data layer that enables trusted reporting on R&amp;D metrics.</p>\n<p>In this role, you&#39;ll:</p>\n<ul>\n<li>Design and implement a formal analytics data layer using AWS Glue, Presto, and LookML</li>\n<li>Collaborate within the Data Science &amp; Analytics team and across Product &amp; Engineering to define, document, and maintain alignment on metric definition and data lineage</li>\n<li>Develop and maintain automated data reconciliation and quality checks to proactively identify and resolve discrepancies, ensuring accuracy and consistency of critical reports and dashboards</li>\n<li>Lead investigations into complex data anomalies, conduct root cause analysis, and communicate findings and solutions effectively to both technical and non-technical audiences</li>\n<li>Mentor and guide members of the data science and analytics team, establishing and enforcing best practices around data modeling, testing, documentation, and code review</li>\n</ul>\n<p>Twilio values diverse experiences from all kinds of industries, and we encourage everyone who meets the required qualifications to apply.</p>\n<p>If your career is just starting or hasn&#39;t followed a traditional path, don&#39;t let that stop you from considering Twilio. 
We are always looking for people who will bring something new to the table!</p>","url":"https://yubhub.co/jobs/job_2a2686d2-290","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Twilio","sameAs":"https://www.twilio.com/","logo":"https://logos.yubhub.co/twilio.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/twilio/jobs/7551660","x-work-arrangement":"remote","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"$155,520 - $194,400 (Colorado, Hawaii, Illinois, Maryland, Massachusetts, Minnesota, Vermont or Washington D.C.)\n$164,640 - $205,800 (New York, New Jersey, Washington State, or California (outside of the San Francisco Bay area))\n$182,960 - $228,700 (San Francisco Bay area, California)","x-skills-required":["AWS Glue","Presto","LookML","SQL","data modeling","data pipelines","data reconciliation","data quality checks"],"x-skills-preferred":["Python","distributed computing technologies","Hive","Spark","dashboarding tools","Looker","Tableau"],"datePosted":"2026-04-18T15:43:20.940Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Remote - US"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"AWS Glue, Presto, LookML, SQL, data modeling, data pipelines, data reconciliation, data quality checks, Python, distributed computing technologies, Hive, Spark, dashboarding tools, Looker, Tableau","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":155520,"maxValue":228700,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_b0c17b4f-3f4"},"title":"Research Engineer, Production Model Post-Training","description":"<p>About 
Anthropic</p>\n<p>Anthropic&#39;s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole.</p>\n<p>About the role</p>\n<p>Anthropic&#39;s production models undergo sophisticated post-training processes to enhance their capabilities, alignment, and safety. As a Research Engineer on our Post-Training team, you&#39;ll train our base models through the complete post-training stack to deliver the production Claude models that users interact with.</p>\n<p>You&#39;ll work at the intersection of cutting-edge research and production engineering, implementing, scaling, and improving post-training techniques like Constitutional AI, RLHF, and other alignment methodologies. Your work will directly impact the quality, safety, and capabilities of our production models.</p>\n<p>Responsibilities</p>\n<ul>\n<li>Implement and optimize post-training techniques at scale on frontier models</li>\n<li>Conduct research to develop and optimize post-training recipes that directly improve production model quality</li>\n<li>Design, build, and run robust, efficient pipelines for model fine-tuning and evaluation</li>\n<li>Develop tools to measure and improve model performance across various dimensions</li>\n<li>Collaborate with research teams to translate emerging techniques into production-ready implementations</li>\n<li>Debug complex issues in training pipelines and model behavior</li>\n<li>Help establish best practices for reliable, reproducible model post-training</li>\n</ul>\n<p>You may be a good fit if you:</p>\n<ul>\n<li>Thrive in controlled chaos and are energised, rather than overwhelmed, when juggling multiple urgent priorities</li>\n<li>Adapt quickly to changing priorities</li>\n<li>Maintain clarity when debugging complex, time-sensitive issues</li>\n<li>Have strong software engineering skills with experience building complex ML systems</li>\n<li>Are comfortable working with large-scale 
distributed systems and high-performance computing</li>\n<li>Have experience with training, fine-tuning, or evaluating large language models</li>\n<li>Can balance research exploration with engineering rigor and operational reliability</li>\n<li>Are adept at analyzing and debugging model training processes</li>\n<li>Enjoy collaborating across research and engineering disciplines</li>\n<li>Can navigate ambiguity and make progress in fast-moving research environments</li>\n</ul>\n<p>Strong candidates may also:</p>\n<ul>\n<li>Have experience with LLMs</li>\n<li>Have a keen interest in AI safety and responsible deployment</li>\n</ul>\n<p>Logistics</p>\n<ul>\n<li>Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience</li>\n<li>Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience</li>\n<li>Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position</li>\n<li>Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</li>\n<li>Visa sponsorship: We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</li>\n</ul>\n<p>We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. 
Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you&#39;re interested in this work.</p>\n<p>Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you&#39;re ever unsure about a communication, don&#39;t click any links; visit anthropic.com/careers directly for confirmed position openings.</p>\n<p>How we&#39;re different</p>\n<p>We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact, advancing our long-term goals of steerable, trustworthy AI, rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We&#39;re an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.</p>\n<p>The easiest way to understand our research directions is to read our recent research. 
This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI &amp; Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.</p>\n<p>Come work with us!</p>\n<p>Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues.</p>","url":"https://yubhub.co/jobs/job_b0c17b4f-3f4","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com/","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5112018008","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Python","Deep learning frameworks","Distributed computing","Large-scale distributed systems","High-performance computing","Training, fine-tuning, or evaluating large language models","Software engineering","Complex ML systems"],"x-skills-preferred":["LLMs","AI safety and responsible deployment"],"datePosted":"2026-04-18T15:43:07.939Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Zürich, CH"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, Deep learning frameworks, Distributed computing, Large-scale distributed systems, High-performance computing, Training, fine-tuning, or evaluating large language models, Software engineering, Complex ML systems, LLMs, AI safety and responsible 
deployment"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_9d5fcc78-b2b"},"title":"Resident Solutions Architect - Public Sector","description":"<p>As a Resident Solutions Architect in our Professional Services team, you will work with clients on short to medium term customer engagements on their big data challenges using the Databricks platform.</p>\n<p>You will provide data engineering, data science, and cloud technology projects which require integrating with client systems, training, and other technical tasks to help customers to get most value out of their data.</p>\n<p>RSAs are billable and know how to complete projects according to specification with excellent customer service.</p>\n<p>You will report to the regional Manager/Lead.</p>\n<p>The impact you will have:</p>\n<ul>\n<li>You will work on a variety of impactful customer technical projects which may include designing and building reference architectures, creating how-to&#39;s and productionalizing customer use cases</li>\n</ul>\n<ul>\n<li>Work with engagement managers to scope variety of professional services work with input from the customer</li>\n</ul>\n<ul>\n<li>Guide strategic customers as they implement transformational big data projects, 3rd party migrations, including end-to-end design, build and deployment of industry-leading big data and AI applications</li>\n</ul>\n<ul>\n<li>Consult on architecture and design; bootstrap or implement customer projects which leads to a customers&#39; successful understanding, evaluation and adoption of Databricks.</li>\n</ul>\n<ul>\n<li>Provide an escalated level of support for customer operational issues.</li>\n</ul>\n<ul>\n<li>You will work with the Databricks technical team, Project Manager, Architect and Customer team to ensure the technical components of the engagement are delivered to meet customer&#39;s needs.</li>\n</ul>\n<ul>\n<li>Work with Engineering and Databricks Customer 
Support to provide product and implementation feedback and to guide rapid resolution for engagement specific product and support issues.</li>\n</ul>\n<p>What we look for:</p>\n<ul>\n<li>6+ years experience in data engineering, data platforms &amp; analytics</li>\n</ul>\n<ul>\n<li>Comfortable writing code in either Python or Scala</li>\n</ul>\n<ul>\n<li>Working knowledge of two or more common Cloud ecosystems (AWS, Azure, GCP) with expertise in at least one</li>\n</ul>\n<ul>\n<li>Deep experience with distributed computing with Apache Spark™ and knowledge of Spark runtime internals</li>\n</ul>\n<ul>\n<li>Familiarity with CI/CD for production deployments</li>\n</ul>\n<ul>\n<li>Working knowledge of MLOps</li>\n</ul>\n<ul>\n<li>Design and deployment of performant end-to-end data architectures</li>\n</ul>\n<ul>\n<li>Experience with technical project delivery - managing scope and timelines.</li>\n</ul>\n<ul>\n<li>Documentation and white-boarding skills.</li>\n</ul>\n<ul>\n<li>Experience working with clients and managing conflicts.</li>\n</ul>\n<ul>\n<li>Build skills in technical areas which support the deployment and integration of Databricks-based solutions to complete customer projects.</li>\n</ul>\n<ul>\n<li>Travel to customers 20% of the time</li>\n</ul>\n<p>Databricks Certification</p>\n<p>Pay Range Transparency</p>\n<p>Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role is listed below and represents the expected base salary range for non-commissionable roles or on-target earnings for commissionable roles. Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location. Based on the factors above, Databricks anticipates utilizing the full width of the range. 
The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above. For more information regarding which range your location is in visit our page here.</p>\n<p>Zone 1 Pay Range $180,656-$248,360 USD</p>\n<p>Zone 2 Pay Range $180,656-$248,360 USD</p>\n<p>Zone 3 Pay Range $180,656-$248,360 USD</p>\n<p>Zone 4 Pay Range $180,656-$248,360 USD</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_9d5fcc78-b2b","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8423296002","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$180,656-$248,360 USD","x-skills-required":["data engineering","data science","cloud technology","Apache Spark","CI/CD","MLOps","performant end-to-end data architectures","technical project delivery","documentation and white-boarding skills","client management"],"x-skills-preferred":["Python","Scala","AWS","Azure","GCP","distributed computing","Spark runtime internals"],"datePosted":"2026-04-18T15:42:27.646Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Central - United States; Northeast - United States; Southeast - United States"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"data engineering, data science, cloud technology, Apache Spark, CI/CD, MLOps, performant end-to-end data architectures, technical project delivery, documentation and white-boarding skills, client management, Python, Scala, AWS, Azure, GCP, distributed computing, Spark runtime 
internals","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":180656,"maxValue":248360,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_8131cff5-1a9"},"title":"Resident Solutions Architect - Communications, Media, Entertainment & Games","description":"<p>As a Resident Solutions Architect in our Professional Services team, you will work with clients on short to medium term customer engagements on their big data challenges using the Databricks platform.</p>\n<p>You will provide data engineering, data science, and cloud technology projects which require integrating with client systems, training, and other technical tasks to help customers to get most value out of their data.</p>\n<p>RSAs are billable and know how to complete projects according to specification with excellent customer service.</p>\n<p>You will report to the regional Manager/Lead.</p>\n<p>The impact you will have:</p>\n<ul>\n<li>You will work on a variety of impactful customer technical projects which may include designing and building reference architectures, creating how-to&#39;s and productionalizing customer use cases</li>\n<li>Work with engagement managers to scope variety of professional services work with input from the customer</li>\n<li>Guide strategic customers as they implement transformational big data projects, 3rd party migrations, including end-to-end design, build and deployment of industry-leading big data and AI applications</li>\n<li>Consult on architecture and design; bootstrap or implement customer projects which leads to a customers&#39; successful understanding, evaluation and adoption of Databricks.</li>\n<li>Provide an escalated level of support for customer operational issues.</li>\n<li>You will work with the Databricks technical team, Project Manager, Architect and Customer team to ensure the technical components of the engagement 
are delivered to meet customer&#39;s needs.</li>\n<li>Work with Engineering and Databricks Customer Support to provide product and implementation feedback and to guide rapid resolution for engagement specific product and support issues.</li>\n</ul>\n<p>What we look for:</p>\n<ul>\n<li>6+ years experience in data engineering, data platforms &amp; analytics</li>\n<li>Comfortable writing code in either Python or Scala</li>\n<li>Working knowledge of two or more common Cloud ecosystems (AWS, Azure, GCP) with expertise in at least one</li>\n<li>Deep experience with distributed computing with Apache Spark™ and knowledge of Spark runtime internals</li>\n<li>Familiarity with CI/CD for production deployments</li>\n<li>Working knowledge of MLOps</li>\n<li>Design and deployment of performant end-to-end data architectures</li>\n<li>Experience with technical project delivery - managing scope and timelines.</li>\n<li>Documentation and white-boarding skills.</li>\n<li>Experience working with clients and managing conflicts.</li>\n<li>Build skills in technical areas which support the deployment and integration of Databricks-based solutions to complete customer projects.</li>\n<li>Travel to customers 20% of the time</li>\n</ul>\n<p>Databricks Certification</p>\n<p>Pay Range Transparency</p>\n<p>Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role is listed below and represents the expected base salary range for non-commissionable roles or on-target earnings for commissionable roles. Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location. Based on the factors above, Databricks anticipated utilizing the full width of the range. 
The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above. For more information regarding which range your location is in visit our page here.</p>\n<p>Zone 1 Pay Range $180,656-$248,360 USD Zone 2 Pay Range $180,656-$248,360 USD Zone 3 Pay Range $180,656-$248,360 USD Zone 4 Pay Range $180,656-$248,360 USD</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_8131cff5-1a9","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8341311002","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$180,656-$248,360 USD","x-skills-required":["data engineering","data science","cloud technology","Apache Spark","CI/CD","MLOps","distributed computing","Python","Scala","AWS","Azure","GCP"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:42:15.014Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"United States"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"data engineering, data science, cloud technology, Apache Spark, CI/CD, MLOps, distributed computing, Python, Scala, AWS, Azure, GCP","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":180656,"maxValue":248360,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_50d65da2-2e4"},"title":"Resident Solutions Architect - Healthcare & Life Sciences","description":"<p>As a Resident Solutions Architect in our Professional Services team, you will work with clients on short to 
medium term customer engagements on their big data challenges using the Databricks platform. Your responsibilities will include providing data engineering, data science, and cloud technology projects which require integrating with client systems, training, and other technical tasks to help customers get the most value out of their data.</p>\n<p>You will work on a variety of impactful customer technical projects, including designing and building reference architectures, creating how-to&#39;s and productionalizing customer use cases. You will also work with engagement managers to scope professional services work with input from the customer, guide strategic customers as they implement transformational big data projects, and consult on architecture and design.</p>\n<p>To be successful in this role, you will need to have 6+ years of experience in data engineering, data platforms, and analytics, be comfortable writing code in either Python or Scala, and have working knowledge of two or more common Cloud ecosystems (AWS, Azure, GCP) with expertise in at least one. 
You should also have deep experience with distributed computing with Apache Spark and knowledge of Spark runtime internals.</p>\n<p>The pay range for this role is $180,656-$248,360 USD, and the total compensation package may also include eligibility for annual performance bonus, equity, and benefits.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_50d65da2-2e4","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8494143002","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$180,656-$248,360 USD","x-skills-required":["data engineering","data platforms","analytics","Python","Scala","Cloud ecosystems","Apache Spark","distributed computing"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:42:13.802Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Chicago, Illinois"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"data engineering, data platforms, analytics, Python, Scala, Cloud ecosystems, Apache Spark, distributed computing","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":180656,"maxValue":248360,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_b647b7da-f8f"},"title":"Resident Solutions Architect - Public Sector","description":"<p>As a Resident Solutions Architect (RSA) on our Professional Services team, you will work with customers on short to medium term customer engagements on their big data challenges using the Databricks platform.</p>\n<p>You will provide data 
engineering, data science, and cloud technology projects which require integrating with client systems, training, and other technical tasks to help customers get the most value out of their data.</p>\n<p>RSAs are billable and know how to complete projects according to specification with excellent customer service.</p>\n<p>You will report to the regional Manager/Lead.</p>\n<p>The impact you will have:</p>\n<ul>\n<li>Handle a variety of impactful customer technical projects which may include designing and building reference architectures, creating how-to&#39;s and productionalizing customer use cases</li>\n</ul>\n<ul>\n<li>Work with engagement managers to scope a variety of professional services work with input from the customer</li>\n</ul>\n<ul>\n<li>Guide strategic customers as they implement transformational big data projects, 3rd party migrations, including end-to-end design, build and deployment of industry-leading big data and AI applications</li>\n</ul>\n<ul>\n<li>Consult on architecture and design; bootstrap or implement customer projects which lead to customers&#39; successful understanding, evaluation and adoption of Databricks.</li>\n</ul>\n<ul>\n<li>Provide an escalated level of support for customer operational issues</li>\n</ul>\n<ul>\n<li>Collaborate with the Databricks Technical, Project Manager, Architect and Customer teams to ensure the technical components of the engagement are delivered to meet customer&#39;s needs</li>\n</ul>\n<ul>\n<li>Work with Engineering and Databricks Customer Support to provide product and implementation feedback and to guide rapid resolution for engagement specific product and support issues</li>\n</ul>\n<p>What we look for:</p>\n<ul>\n<li>US Top Secret Clearance required for this position</li>\n</ul>\n<ul>\n<li>6+ years experience in data engineering, data platforms &amp; analytics</li>\n</ul>\n<ul>\n<li>Comfortable writing code in either Python or Scala</li>\n</ul>\n<ul>\n<li>Working knowledge of two or more common Cloud 
ecosystems (AWS, Azure, GCP) with expertise in at least one</li>\n</ul>\n<ul>\n<li>Deep experience with distributed computing with Apache Spark™ and knowledge of Spark runtime internals</li>\n</ul>\n<ul>\n<li>Familiarity with CI/CD for production deployments</li>\n</ul>\n<ul>\n<li>Working knowledge of MLOps</li>\n</ul>\n<ul>\n<li>Design and deployment of performant end-to-end data architectures</li>\n</ul>\n<ul>\n<li>Experience with technical project delivery - managing scope and timelines</li>\n</ul>\n<ul>\n<li>Documentation and white-boarding skills</li>\n</ul>\n<ul>\n<li>Experience working with clients and managing conflicts</li>\n</ul>\n<ul>\n<li>Build skills in technical areas which support the deployment and integration of Databricks-based solutions to complete customer projects</li>\n</ul>\n<ul>\n<li>Bachelor&#39;s degree in Computer Science, Information Systems, Engineering, or equivalent experience through work experience</li>\n</ul>\n<ul>\n<li>Ability to travel up to 30% when needed</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_b647b7da-f8f","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8494107002","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$180,656-$248,360 USD","x-skills-required":["data engineering","data science","cloud technology","Apache Spark","CI/CD","MLOps","distributed 
computing","Python","Scala","AWS","Azure","GCP"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:42:08.402Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Virginia"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"data engineering, data science, cloud technology, Apache Spark, CI/CD, MLOps, distributed computing, Python, Scala, AWS, Azure, GCP","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":180656,"maxValue":248360,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_8d1ca2f5-7be"},"title":"Resident Solutions Architect - Communications, Media, Entertainment & Games","description":"<p>As a Resident Solutions Architect in our Professional Services team, you will work with clients on short to medium term customer engagements on their big data challenges using the Databricks platform.</p>\n<p>You will provide data engineering, data science, and cloud technology projects which require integrating with client systems, training, and other technical tasks to help customers to get most value out of their data.</p>\n<p>RSAs are billable and know how to complete projects according to specification with excellent customer service.</p>\n<p>You will report to the regional Manager/Lead.</p>\n<p>The impact you will have:</p>\n<ul>\n<li>Work on a variety of impactful customer technical projects which may include designing and building reference architectures, creating how-to&#39;s and productionalizing customer use cases</li>\n<li>Work with engagement managers to scope variety of professional services work with input from the customer</li>\n<li>Guide strategic customers as they implement transformational big data projects, 3rd party migrations, including end-to-end design, build and deployment of industry-leading big data and AI 
applications</li>\n<li>Consult on architecture and design; bootstrap or implement customer projects which leads to a customers&#39; successful understanding, evaluation and adoption of Databricks.</li>\n<li>Provide an escalated level of support for customer operational issues.</li>\n<li>Work with the Databricks technical team, Project Manager, Architect and Customer team to ensure the technical components of the engagement are delivered to meet customer&#39;s needs.</li>\n<li>Work with Engineering and Databricks Customer Support to provide product and implementation feedback and to guide rapid resolution for engagement specific product and support issues.</li>\n</ul>\n<p>What we look for:</p>\n<ul>\n<li>6+ years experience in data engineering, data platforms &amp; analytics</li>\n<li>Comfortable writing code in either Python or Scala</li>\n<li>Working knowledge of two or more common Cloud ecosystems (AWS, Azure, GCP) with expertise in at least one</li>\n<li>Deep experience with distributed computing with Apache Spark™ and knowledge of Spark runtime internals</li>\n<li>Familiarity with CI/CD for production deployments</li>\n<li>Working knowledge of MLOps</li>\n<li>Design and deployment of performant end-to-end data architectures</li>\n<li>Experience with technical project delivery - managing scope and timelines.</li>\n<li>Documentation and white-boarding skills.</li>\n<li>Experience working with clients and managing conflicts.</li>\n<li>Build skills in technical areas which support the deployment and integration of Databricks-based solutions to complete customer projects.</li>\n<li>Travel to customers 20% of the time</li>\n</ul>\n<p>Databricks Certification</p>\n<p>Pay Range Transparency</p>\n<p>Databricks is committed to fair and equitable compensation practices. 
The pay range(s) for this role is listed below and represents the expected base salary range for non-commissionable roles or on-target earnings for commissionable roles.</p>\n<p>Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location.</p>\n<p>Based on the factors above, Databricks anticipated utilizing the full width of the range.</p>\n<p>The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above.</p>\n<p>For more information regarding which range your location is in visit our page here.</p>\n<p>Zone 1 Pay Range $180,656-$248,360 USD</p>\n<p>Zone 2 Pay Range $180,656-$248,360 USD</p>\n<p>Zone 3 Pay Range $180,656-$248,360 USD</p>\n<p>Zone 4 Pay Range $180,656-$248,360 USD</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_8d1ca2f5-7be","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8461220002","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$180,656-$248,360 USD","x-skills-required":["data engineering","data science","cloud technology","Apache Spark","CI/CD","MLOps","distributed computing","Python","Scala","AWS","Azure","GCP"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:42:04.881Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Chicago, Illinois"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"data engineering, data science, cloud technology, Apache Spark, 
CI/CD, MLOps, distributed computing, Python, Scala, AWS, Azure, GCP","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":180656,"maxValue":248360,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_6860353a-782"},"title":"Resident Solutions Architect - Communications, Media, Entertainment & Games","description":"<p>As a Resident Solutions Architect in our Professional Services team, you will work with clients on short to medium term customer engagements on their big data challenges using the Databricks platform.</p>\n<p>You will provide data engineering, data science, and cloud technology projects which require integrating with client systems, training, and other technical tasks to help customers to get most value out of their data.</p>\n<p>RSAs are billable and know how to complete projects according to specification with excellent customer service.</p>\n<p>You will report to the regional Manager/Lead.</p>\n<p>The impact you will have:</p>\n<ul>\n<li>Work on a variety of impactful customer technical projects which may include designing and building reference architectures, creating how-to&#39;s and productionalizing customer use cases</li>\n<li>Work with engagement managers to scope variety of professional services work with input from the customer</li>\n<li>Guide strategic customers as they implement transformational big data projects, 3rd party migrations, including end-to-end design, build and deployment of industry-leading big data and AI applications</li>\n<li>Consult on architecture and design; bootstrap or implement customer projects which leads to a customers&#39; successful understanding, evaluation and adoption of Databricks.</li>\n<li>Provide an escalated level of support for customer operational issues.</li>\n<li>Work with the Databricks technical team, Project Manager, Architect and Customer team to ensure the 
technical components of the engagement are delivered to meet customer&#39;s needs.</li>\n<li>Work with Engineering and Databricks Customer Support to provide product and implementation feedback and to guide rapid resolution for engagement specific product and support issues.</li>\n</ul>\n<p>What we look for:</p>\n<ul>\n<li>6+ years experience in data engineering, data platforms &amp; analytics</li>\n<li>Comfortable writing code in either Python or Scala</li>\n<li>Working knowledge of two or more common Cloud ecosystems (AWS, Azure, GCP) with expertise in at least one</li>\n<li>Deep experience with distributed computing with Apache Spark™ and knowledge of Spark runtime internals</li>\n<li>Familiarity with CI/CD for production deployments</li>\n<li>Working knowledge of MLOps</li>\n<li>Design and deployment of performant end-to-end data architectures</li>\n<li>Experience with technical project delivery - managing scope and timelines.</li>\n<li>Documentation and white-boarding skills.</li>\n<li>Experience working with clients and managing conflicts.</li>\n<li>Build skills in technical areas which support the deployment and integration of Databricks-based solutions to complete customer projects.</li>\n<li>Travel to customers 20% of the time</li>\n</ul>\n<p>Databricks Certification</p>\n<p>Pay Range Transparency</p>\n<p>Databricks is committed to fair and equitable compensation practices. 
The pay range(s) for this role is listed below and represents the expected base salary range for non-commissionable roles or on-target earnings for commissionable roles.</p>\n<p>Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location.</p>\n<p>Based on the factors above, Databricks anticipated utilizing the full width of the range.</p>\n<p>The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above.</p>\n<p>For more information regarding which range your location is in visit our page here.</p>\n<p>Zone 1 Pay Range $180,656-$248,360 USD</p>\n<p>Zone 2 Pay Range $180,656-$248,360 USD</p>\n<p>Zone 3 Pay Range $180,656-$248,360 USD</p>\n<p>Zone 4 Pay Range $180,656-$248,360 USD</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_6860353a-782","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8461241002","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$180,656-$248,360 USD","x-skills-required":["data engineering","data science","cloud technology","Apache Spark","CI/CD","MLOps","distributed computing","big data","AI"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:41:53.366Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Washington, D.C."}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"data engineering, data science, cloud technology, Apache Spark, CI/CD, MLOps, 
distributed computing, big data, AI","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":180656,"maxValue":248360,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_eb3ba652-daa"},"title":"Resident Solutions Architect - Communications, Media, Entertainment & Games","description":"<p>As a Resident Solutions Architect in our Professional Services team, you will work with clients on short to medium term customer engagements on their big data challenges using the Databricks platform.</p>\n<p>You will provide data engineering, data science, and cloud technology projects which require integrating with client systems, training, and other technical tasks to help customers to get most value out of their data.</p>\n<p>RSAs are billable and know how to complete projects according to specification with excellent customer service.</p>\n<p>You will report to the regional Manager/Lead.</p>\n<p>The impact you will have:</p>\n<ul>\n<li>Work on a variety of impactful customer technical projects which may include designing and building reference architectures, creating how-to&#39;s and productionalizing customer use cases</li>\n<li>Work with engagement managers to scope variety of professional services work with input from the customer</li>\n<li>Guide strategic customers as they implement transformational big data projects, 3rd party migrations, including end-to-end design, build and deployment of industry-leading big data and AI applications</li>\n<li>Consult on architecture and design; bootstrap or implement customer projects which leads to a customers&#39; successful understanding, evaluation and adoption of Databricks.</li>\n<li>Provide an escalated level of support for customer operational issues.</li>\n<li>Work with the Databricks technical team, Project Manager, Architect and Customer team to ensure the technical components of the 
engagement are delivered to meet customer&#39;s needs.</li>\n<li>Work with Engineering and Databricks Customer Support to provide product and implementation feedback and to guide rapid resolution for engagement specific product and support issues.</li>\n</ul>\n<p>What we look for:</p>\n<ul>\n<li>6+ years experience in data engineering, data platforms &amp; analytics</li>\n<li>Comfortable writing code in either Python or Scala</li>\n<li>Working knowledge of two or more common Cloud ecosystems (AWS, Azure, GCP) with expertise in at least one</li>\n<li>Deep experience with distributed computing with Apache Spark™ and knowledge of Spark runtime internals</li>\n<li>Familiarity with CI/CD for production deployments</li>\n<li>Working knowledge of MLOps</li>\n<li>Design and deployment of performant end-to-end data architectures</li>\n<li>Experience with technical project delivery - managing scope and timelines.</li>\n<li>Documentation and white-boarding skills.</li>\n<li>Experience working with clients and managing conflicts.</li>\n<li>Build skills in technical areas which support the deployment and integration of Databricks-based solutions to complete customer projects.</li>\n<li>Travel to customers 20% of the time</li>\n</ul>\n<p>Databricks Certification</p>\n<p>Pay Range Transparency</p>\n<p>Databricks is committed to fair and equitable compensation practices. 
The pay range(s) for this role is listed below and represents the expected base salary range for non-commissionable roles or on-target earnings for commissionable roles.</p>\n<p>Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location.</p>\n<p>Based on the factors above, Databricks anticipates utilizing the full width of the range.</p>\n<p>The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above.</p>\n<p>For more information regarding which range your location is in, visit our page here.</p>\n<p>Zone 1 Pay Range $180,656-$248,360 USD</p>\n<p>Zone 2 Pay Range $180,656-$248,360 USD</p>\n<p>Zone 3 Pay Range $180,656-$248,360 USD</p>\n<p>Zone 4 Pay Range $180,656-$248,360 USD</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_eb3ba652-daa","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8461163002","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$180,656-$248,360 USD","x-skills-required":["data engineering","data science","cloud technology","Apache Spark","CI/CD","MLOps","distributed computing","Python","Scala","AWS","Azure","GCP"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:41:52.535Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, California"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"data engineering, data science, cloud technology, 
Apache Spark, CI/CD, MLOps, distributed computing, Python, Scala, AWS, Azure, GCP","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":180656,"maxValue":248360,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_3827f936-fc2"},"title":"Resident Solutions Architect - Financial Services","description":"<p>Job Title: Resident Solutions Architect - Financial Services</p>\n<p>As a Senior Big Data Solutions Architect (Sr Resident Solutions Architect) in our Professional Services team, you will work with clients on short to medium term customer engagements on their big data challenges using the Databricks platform.</p>\n<p>You will provide data engineering, data science, and cloud technology projects which require integrating with client systems, training, and other technical tasks to help customers get the most value out of their data.</p>\n<p>RSAs are billable and know how to complete projects according to specification with excellent customer service.</p>\n<p>You will report to the regional Manager/Lead.</p>\n<p>The impact you will have:</p>\n<ul>\n<li>Work on a variety of impactful customer technical projects which may include designing and building reference architectures, creating how-to&#39;s and productionalizing customer use cases</li>\n</ul>\n<ul>\n<li>Work with engagement managers to scope a variety of professional services work with input from the customer</li>\n</ul>\n<ul>\n<li>Guide strategic customers as they implement transformational big data projects, 3rd party migrations, including end-to-end design, build and deployment of industry-leading big data and AI applications</li>\n</ul>\n<ul>\n<li>Consult on architecture and design; bootstrap hands-on projects which lead to customers&#39; successful understanding, evaluation and adoption of Databricks.</li>\n</ul>\n<ul>\n<li>Provide an escalated level of support for 
customer operational issues.</li>\n</ul>\n<ul>\n<li>You will work with the Databricks technical team, Project Manager, Architect and Customer team to ensure the technical components of the engagement are delivered to meet customer&#39;s needs.</li>\n</ul>\n<ul>\n<li>Work with Engineering and Databricks Customer Support to provide product and implementation feedback and to guide rapid resolution for engagement specific product and support issues.</li>\n</ul>\n<p>What we look for:</p>\n<ul>\n<li>9+ years experience in data engineering, data platforms &amp; analytics</li>\n</ul>\n<ul>\n<li>Comfortable writing code in either Python or Scala</li>\n</ul>\n<ul>\n<li>Working knowledge of two or more common Cloud ecosystems (AWS, Azure, GCP) with expertise in at least one</li>\n</ul>\n<ul>\n<li>Deep experience with distributed computing with Apache Spark™ and knowledge of Apache Spark™ runtime internals</li>\n</ul>\n<ul>\n<li>Familiarity with CI/CD for production deployments</li>\n</ul>\n<ul>\n<li>Working knowledge of MLOps</li>\n</ul>\n<ul>\n<li>Capable of design and deployment of highly performant end-to-end data architectures</li>\n</ul>\n<ul>\n<li>Experience with technical project delivery - managing scope and timelines.</li>\n</ul>\n<ul>\n<li>Documentation and white-boarding skills.</li>\n</ul>\n<ul>\n<li>Experience working with clients and managing conflicts.</li>\n</ul>\n<ul>\n<li>Experience in building scalable streaming and batch solutions using cloud-native components</li>\n</ul>\n<ul>\n<li>Travel to customers up to 20% of the time</li>\n</ul>\n<p>Nice to have: Databricks Certification</p>\n<p>Pay Range Transparency</p>\n<p>Databricks is committed to fair and equitable compensation practices. 
The pay range(s) for this role is listed below and represents the expected base salary range for non-commissionable roles or on-target earnings for commissionable roles.</p>\n<p>Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location.</p>\n<p>Based on the factors above, Databricks anticipates utilizing the full width of the range.</p>\n<p>The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above.</p>\n<p>For more information regarding which range your location is in, visit our page here.</p>\n<p>Zone 1 Pay Range $180,656-$248,360 USD</p>\n<p>Zone 2 Pay Range $180,656-$248,360 USD</p>\n<p>Zone 3 Pay Range $180,656-$248,360 USD</p>\n<p>Zone 4 Pay Range $180,656-$248,360 USD</p>\n<p>About Databricks</p>\n<p>Databricks is the data and AI company. 
More than 10,000 organizations worldwide, including Comcast, Condé Nast, Grammarly, and over 50% of the Fortune 500, rely on the Databricks Data Intelligence Platform to unify and democratize data, analytics and AI.</p>\n<p>Databricks is headquartered in San Francisco, with offices around the globe and was founded by the original creators of Lakehouse, Apache Spark™, Delta Lake and MLflow.</p>\n<p>To learn more, follow Databricks on Twitter, LinkedIn and Facebook.</p>\n<p>Benefits</p>\n<p>At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of our employees.</p>\n<p>For specific details on the benefits offered in your region, click here.</p>\n<p>Our Commitment to Diversity and Inclusion</p>\n<p>At Databricks, we are committed to fostering a diverse and inclusive culture where everyone can excel.</p>\n<p>We take great care to ensure that our hiring practices are inclusive and meet equal employment opportunity standards.</p>\n<p>Individuals looking for employment at Databricks are considered without regard to age, color, disability, ethnicity, family or marital status, gender identity or expression, language, national origin, physical and mental ability, political affiliation, race, religion, sexual orientation, socio-economic status, veteran status, and other protected characteristics.</p>\n<p>Compliance</p>\n<p>If access to export-controlled technology or source code is required for performance of job duties, it is within Employer&#39;s discretion whether to apply for a U.S. 
government license for such positions, and Employer may decline to proceed with an applicant on this basis alone.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_3827f936-fc2","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8461326002","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$180,656-$248,360 USD","x-skills-required":["data engineering","data science","cloud technology","Apache Spark","CI/CD","MLOps","distributed computing","Python","Scala","Cloud ecosystems","AWS","Azure","GCP"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:40:59.293Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"New York City, New York"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"data engineering, data science, cloud technology, Apache Spark, CI/CD, MLOps, distributed computing, Python, Scala, Cloud ecosystems, AWS, Azure, GCP","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":180656,"maxValue":248360,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_9223ca6d-d9e"},"title":"Resident Solutions Architect - Communications, Media, Entertainment & Games","description":"<p>As a Resident Solutions Architect in our Professional Services team, you will work with clients on short to medium term customer engagements on their big data challenges using the Databricks platform.</p>\n<p>You will provide data engineering, data science, and cloud technology projects 
which require integrating with client systems, training, and other technical tasks to help customers get the most value out of their data.</p>\n<p>RSAs are billable and know how to complete projects according to specification with excellent customer service.</p>\n<p>You will report to the regional Manager/Lead.</p>\n<p>The impact you will have:</p>\n<ul>\n<li>Work on a variety of impactful customer technical projects which may include designing and building reference architectures, creating how-to&#39;s and productionalizing customer use cases</li>\n<li>Work with engagement managers to scope a variety of professional services work with input from the customer</li>\n<li>Guide strategic customers as they implement transformational big data projects, 3rd party migrations, including end-to-end design, build and deployment of industry-leading big data and AI applications</li>\n<li>Consult on architecture and design; bootstrap or implement customer projects which lead to customers&#39; successful understanding, evaluation and adoption of Databricks.</li>\n<li>Provide an escalated level of support for customer operational issues.</li>\n<li>Work with the Databricks technical team, Project Manager, Architect and Customer team to ensure the technical components of the engagement are delivered to meet customer&#39;s needs.</li>\n<li>Work with Engineering and Databricks Customer Support to provide product and implementation feedback and to guide rapid resolution for engagement specific product and support issues.</li>\n</ul>\n<p>What we look for:</p>\n<ul>\n<li>6+ years experience in data engineering, data platforms &amp; analytics</li>\n<li>Comfortable writing code in either Python or Scala</li>\n<li>Working knowledge of two or more common Cloud ecosystems (AWS, Azure, GCP) with expertise in at least one</li>\n<li>Deep experience with distributed computing with Apache Spark™ and knowledge of Spark runtime internals</li>\n<li>Familiarity with CI/CD for production 
deployments</li>\n<li>Working knowledge of MLOps</li>\n<li>Design and deployment of performant end-to-end data architectures</li>\n<li>Experience with technical project delivery - managing scope and timelines.</li>\n<li>Documentation and white-boarding skills.</li>\n<li>Experience working with clients and managing conflicts.</li>\n<li>Build skills in technical areas which support the deployment and integration of Databricks-based solutions to complete customer projects.</li>\n<li>Travel to customers 20% of the time</li>\n</ul>\n<p>Databricks Certification</p>\n<p>Pay Range Transparency</p>\n<p>Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role is listed below and represents the expected base salary range for non-commissionable roles or on-target earnings for commissionable roles.</p>\n<p>Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location.</p>\n<p>Based on the factors above, Databricks anticipates utilizing the full width of the range.</p>\n<p>The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above.</p>\n<p>For more information regarding which range your location is in, visit our page here.</p>\n<p>Zone 1 Pay Range $180,656-$248,360 USD</p>\n<p>Zone 2 Pay Range $180,656-$248,360 USD</p>\n<p>Zone 3 Pay Range $180,656-$248,360 USD</p>\n<p>Zone 4 Pay Range $180,656-$248,360 USD</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_9223ca6d-d9e","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8461193002","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$180,656-$248,360 USD","x-skills-required":["data engineering","data science","cloud technology","Apache Spark","Python","Scala","CI/CD","MLOps","distributed computing"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:40:33.675Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Seattle, Washington"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"data engineering, data science, cloud technology, Apache Spark, Python, Scala, CI/CD, MLOps, distributed computing","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":180656,"maxValue":248360,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_8d65cea1-fd1"},"title":"Resident Solutions Architect - Communications, Media, Entertainment & Games","description":"<p>As a Resident Solutions Architect in our Professional Services team, you will work with clients on short to medium term customer engagements on their big data challenges using the Databricks platform.</p>\n<p>You will provide data engineering, data science, and cloud technology projects which require integrating with client systems, training, and other technical tasks to help customers get the most value out of their data.</p>\n<p>RSAs are billable and know how to complete projects according to specification with excellent customer service.</p>\n<p>You will report to the regional 
Manager/Lead.</p>\n<p>The impact you will have:</p>\n<ul>\n<li>Work on a variety of impactful customer technical projects which may include designing and building reference architectures, creating how-to&#39;s and productionalizing customer use cases</li>\n<li>Work with engagement managers to scope a variety of professional services work with input from the customer</li>\n<li>Guide strategic customers as they implement transformational big data projects, 3rd party migrations, including end-to-end design, build and deployment of industry-leading big data and AI applications</li>\n<li>Consult on architecture and design; bootstrap or implement customer projects which lead to customers&#39; successful understanding, evaluation and adoption of Databricks.</li>\n<li>Provide an escalated level of support for customer operational issues.</li>\n<li>Work with the Databricks technical team, Project Manager, Architect and Customer team to ensure the technical components of the engagement are delivered to meet customer&#39;s needs.</li>\n<li>Work with Engineering and Databricks Customer Support to provide product and implementation feedback and to guide rapid resolution for engagement specific product and support issues.</li>\n</ul>\n<p>What we look for:</p>\n<ul>\n<li>6+ years experience in data engineering, data platforms &amp; analytics</li>\n<li>Comfortable writing code in either Python or Scala</li>\n<li>Working knowledge of two or more common Cloud ecosystems (AWS, Azure, GCP) with expertise in at least one</li>\n<li>Deep experience with distributed computing with Apache Spark™ and knowledge of Spark runtime internals</li>\n<li>Familiarity with CI/CD for production deployments</li>\n<li>Working knowledge of MLOps</li>\n<li>Design and deployment of performant end-to-end data architectures</li>\n<li>Experience with technical project delivery - managing scope and timelines.</li>\n<li>Documentation and white-boarding skills.</li>\n<li>Experience working with clients and 
managing conflicts.</li>\n<li>Build skills in technical areas which support the deployment and integration of Databricks-based solutions to complete customer projects.</li>\n<li>Travel to customers 20% of the time</li>\n</ul>\n<p>Databricks Certification</p>\n<p>Pay Range Transparency</p>\n<p>Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role is listed below and represents the expected base salary range for non-commissionable roles or on-target earnings for commissionable roles.</p>\n<p>Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location.</p>\n<p>Based on the factors above, Databricks anticipates utilizing the full width of the range.</p>\n<p>The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above.</p>\n<p>For more information regarding which range your location is in, visit our page here.</p>\n<p>Zone 1 Pay Range $180,656-$248,360 USD</p>\n<p>Zone 2 Pay Range $180,656-$248,360 USD</p>\n<p>Zone 3 Pay Range $180,656-$248,360 USD</p>\n<p>Zone 4 Pay Range $180,656-$248,360 USD</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_8d65cea1-fd1","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8461219002","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$180,656-$248,360 USD","x-skills-required":["data engineering","data science","cloud technology","Apache Spark","CI/CD","MLOps","distributed 
computing","Python","Scala","AWS","Azure","GCP"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:40:01.213Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Austin, Texas"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"data engineering, data science, cloud technology, Apache Spark, CI/CD, MLOps, distributed computing, Python, Scala, AWS, Azure, GCP","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":180656,"maxValue":248360,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_507bea17-ad7"},"title":"Resident Solutions Architect - Communications, Media, Entertainment & Games","description":"<p>As a Resident Solutions Architect in our Professional Services team, you will work with clients on short to medium term customer engagements on their big data challenges using the Databricks platform.</p>\n<p>You will provide data engineering, data science, and cloud technology projects which require integrating with client systems, training, and other technical tasks to help customers get the most value out of their data.</p>\n<p>RSAs are billable and know how to complete projects according to specification with excellent customer service.</p>\n<p>You will report to the regional Manager/Lead.</p>\n<p>The impact you will have:</p>\n<ul>\n<li>Work on a variety of impactful customer technical projects which may include designing and building reference architectures, creating how-to&#39;s and productionalizing customer use cases</li>\n<li>Work with engagement managers to scope a variety of professional services work with input from the customer</li>\n<li>Guide strategic customers as they implement transformational big data projects, 3rd party migrations, including end-to-end design, build and deployment of industry-leading big data and AI 
applications</li>\n<li>Consult on architecture and design; bootstrap or implement customer projects which lead to customers&#39; successful understanding, evaluation and adoption of Databricks.</li>\n<li>Provide an escalated level of support for customer operational issues.</li>\n<li>Work with the Databricks technical team, Project Manager, Architect and Customer team to ensure the technical components of the engagement are delivered to meet customer&#39;s needs.</li>\n<li>Work with Engineering and Databricks Customer Support to provide product and implementation feedback and to guide rapid resolution for engagement specific product and support issues.</li>\n</ul>\n<p>What we look for:</p>\n<ul>\n<li>6+ years experience in data engineering, data platforms &amp; analytics</li>\n<li>Comfortable writing code in either Python or Scala</li>\n<li>Working knowledge of two or more common Cloud ecosystems (AWS, Azure, GCP) with expertise in at least one</li>\n<li>Deep experience with distributed computing with Apache Spark™ and knowledge of Spark runtime internals</li>\n<li>Familiarity with CI/CD for production deployments</li>\n<li>Working knowledge of MLOps</li>\n<li>Design and deployment of performant end-to-end data architectures</li>\n<li>Experience with technical project delivery - managing scope and timelines.</li>\n<li>Documentation and white-boarding skills.</li>\n<li>Experience working with clients and managing conflicts.</li>\n<li>Build skills in technical areas which support the deployment and integration of Databricks-based solutions to complete customer projects.</li>\n<li>Travel to customers 20% of the time</li>\n</ul>\n<p>Databricks Certification</p>\n<p>Pay Range Transparency</p>\n<p>Databricks is committed to fair and equitable compensation practices. 
The pay range(s) for this role is listed below and represents the expected base salary range for non-commissionable roles or on-target earnings for commissionable roles.</p>\n<p>Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location.</p>\n<p>Based on the factors above, Databricks anticipates utilizing the full width of the range.</p>\n<p>The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above.</p>\n<p>For more information regarding which range your location is in, visit our page here.</p>\n<p>Zone 1 Pay Range $180,656-$248,360 USD</p>\n<p>Zone 2 Pay Range $180,656-$248,360 USD</p>\n<p>Zone 3 Pay Range $180,656-$248,360 USD</p>\n<p>Zone 4 Pay Range $180,656-$248,360 USD</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_507bea17-ad7","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8461251002","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$180,656-$248,360 USD","x-skills-required":["data engineering","data science","cloud technology","Apache Spark","CI/CD","MLOps","distributed computing","Python","Scala","AWS","Azure","GCP"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:39:19.614Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Mountain View, California"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"data engineering, data science, cloud technology, 
Apache Spark, CI/CD, MLOps, distributed computing, Python, Scala, AWS, Azure, GCP","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":180656,"maxValue":248360,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_c6f5337c-c2f"},"title":"Research Engineer (Scaling Multimodal Data)","description":"<p>We&#39;re looking for a research engineer to help improve our in-house world models through better multimodal data. This role is about figuring out what data actually moves model quality, then building the datasets, pipelines, and experiments to prove it.</p>\n<p>The best generative models aren’t just a product of model architecture and compute; they are a product of the training data. The model output reflects someone’s obsession over what goes into the data, how it’s processed, and what gets thrown away. We’re looking for the person who does the obsessing and builds the tools to act on it at scale.</p>\n<p>This isn’t a role where someone hands you a dataset and asks you to clean it. 
You will decide what data we need, figure out where to get it, build the processing and curation systems, and close the loop with model training to make sure it actually works.</p>\n<p><strong>Responsibilities:</strong></p>\n<ul>\n<li>Discover, evaluate, and acquire training data</li>\n<li>Build data processing and curation systems</li>\n<li>Look at the actual data constantly</li>\n<li>Close the data → model → evaluation loop</li>\n<li>Deploy ML models for data enrichment</li>\n<li>Make systematic, documented decisions</li>\n</ul>\n<p><strong>Requirements:</strong></p>\n<ul>\n<li>Strong software engineering fundamentals</li>\n<li>Deep experience with image and video data at scale</li>\n<li>Experience with distributed computing</li>\n<li>Experience using ML models as components</li>\n<li>A research-oriented approach to data decisions</li>\n<li>Familiarity with the model training lifecycle</li>\n</ul>\n<p><strong>Nice to Have:</strong></p>\n<ul>\n<li>Familiarity with columnar and large-scale data storage formats and libraries</li>\n<li>Track record of independently discovering and integrating new data sources into a training pipeline</li>\n<li>Direct experience closing the data → model quality loop</li>\n<li>Strong visual intuition for data quality and diversity</li>\n</ul>\n<p><strong>What This Isn’t:</strong></p>\n<ul>\n<li>Not infrastructure</li>\n<li>Not pure research</li>\n<li>Not a role where you wait for instructions</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_c6f5337c-c2f","directApply":true,"hiringOrganization":{"@type":"Organization","name":"World 
Labs","sameAs":"https://world-labs.com/","logo":"https://logos.yubhub.co/world-labs.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/worldlabs/jobs/4164503009","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["software engineering fundamentals","image and video data at scale","distributed computing","ML models as components","research-oriented approach to data decisions","model training lifecycle"],"x-skills-preferred":["columnar and large-scale data storage formats and libraries","independently discovering and integrating new data sources","closing the data → model quality loop","visual intuition for data quality and diversity"],"datePosted":"2026-04-17T13:09:48.326Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"software engineering fundamentals, image and video data at scale, distributed computing, ML models as components, research-oriented approach to data decisions, model training lifecycle, columnar and large-scale data storage formats and libraries, independently discovering and integrating new data sources, closing the data → model quality loop, visual intuition for data quality and diversity"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_c01f9dc6-b17"},"title":"Data Scientist - Staff or Senior (United Kingdom)","description":"<p>In this role, you will build predictive models and apply scientific computing, statistical, and physics-based methods to find places with evidence of ore-forming processes and predict locations of ore-grade mineralization in 2D and 3D.</p>\n<p>You will help build a worldwide dataset for our exploration program, with careful attention to identifying and quantifying uncertainty in the data and predictions.</p>\n<p>You 
will create models and develop software to accelerate discovery of critical battery metals.</p>\n<p>You will join an outstanding team of data scientists and engineers and work closely with our world-renowned geoscientists to incorporate our best understanding of the chemical and physical processes that create ore deposits.</p>\n<p>Working with your geoscience colleagues, you will create 2D and 3D geologic predictions, identify exploration targets, design field programs to collect data, and use that data to reduce uncertainty in our predictions and guide the next phase of field work.</p>\n<p>Ultimately, your role is to help KoBold make valuable discoveries by building data tools to solve scientific problems.</p>\n<p>As one of the early members of this team, you will help build these tools from the ground up.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Help develop KoBold&#39;s proprietary software exploration tools.</li>\n<li>Find and curate geophysical, geochemical, geologic, and geographic data and integrate it into KoBold&#39;s proprietary data system.</li>\n<li>Build models to make statistically valid predictions about the locations of compositional anomalies within the Earth&#39;s crust.</li>\n<li>Create effective visualizations for evaluating model performance and enabling rapid interaction with the underlying data and key features.</li>\n<li>Develop and apply data processing, statistical, and physics-based techniques to geoscientific data, from computer vision to geophysical inversions, and use the results to guide our targeting efforts and inform our acquisition and exploration decisions.</li>\n<li>Present to and collaborate with our external partners and stakeholders.</li>\n</ul>\n<p>Qualifications:</p>\n<ul>\n<li>Technical skills, including extensive experience with Python&#39;s data science packages and general software engineering practices.</li>\n<li>Collaborative software development (git), and familiarity with software engineering best 
practices like unit test / integration test suites, and CI/CD pipelines.</li>\n<li>Cloud computing resources.</li>\n<li>Building predictive models, applying them to different problems, and evaluating and interpreting the results.</li>\n<li>Data from a variety of physical systems.</li>\n<li>Geospatial analyses and visualizations.</li>\n</ul>\n<p>Technical knowledge:</p>\n<ul>\n<li>Broad skills in and knowledge of applied statistics and Bayesian inference.</li>\n<li>Substantial understanding of machine learning algorithms.</li>\n</ul>\n<p>Training and work experience:</p>\n<ul>\n<li>An advanced degree in the physical sciences, engineering, computer science, or mathematics.</li>\n<li>A minimum work experience of 4 years post PhD or 8 years post MS, ideally as a data scientist or data engineer.</li>\n<li>Experience leading technical teams to apply novel scientific approaches to core business problems.</li>\n</ul>\n<p>Work practices and motivation:</p>\n<ul>\n<li>Ability to take ownership and responsibility of large projects.</li>\n<li>Ability to explain technical problems to and collaborate on solutions with domain experts.</li>\n<li>Communicates well on a collaborative, cross-functional team.</li>\n<li>Excitement about joining a fast-growing early-stage company, comfort with a dynamic work environment, and eagerness to take on a range of responsibilities.</li>\n<li>Ability to independently prioritize multiple tasks effectively.</li>\n<li>Intellectual curiosity and eagerness to learn about all aspects of mineral exploration, particularly in the geology domain.</li>\n<li>Enjoys constantly learning such that you are driving insights through using our tools in exploration and willing to work directly with geologists in the field.</li>\n<li>Keen not just to build cool technology, but to figure out what technical product to build to best achieve the business objectives of the company.</li>\n<li>A valid passport and willingness to travel to observe our work at Mingomba or at 
an exploration site around the world.</li>\n</ul>\n<p>Preferred skills include creating machine learning models on geospatial data, geostatistics, image processing or computer vision, and distributed computing applications for machine learning and other computations.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_c01f9dc6-b17","directApply":true,"hiringOrganization":{"@type":"Organization","name":"KoBold Metals","sameAs":"https://www.koboldmetals.com/","logo":"https://logos.yubhub.co/koboldmetals.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/koboldmetals/jobs/4677631005","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$140,000 - $240,000 (USD) plus equity and benefits","x-skills-required":["Python's data science packages","General software engineering practices","Collaborative software development (git)","Software engineering best practices","Cloud computing resources","Building predictive models","Applying models to different problems","Evaluating and interpreting results","Data from a variety of physical systems","Geospatial analyses and visualizations","Applied statistics and Bayesian inference","Machine learning algorithms"],"x-skills-preferred":["Creating machine learning models on geospatial data","Geostatistics","Image processing or computer vision","Distributed computing applications for machine learning and other computations"],"datePosted":"2026-04-17T12:40:11.334Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Remote"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python's data science packages, General software engineering practices, Collaborative software development (git), Software engineering best practices, Cloud computing resources, 
Building predictive models, Applying models to different problems, Evaluating and interpreting results, Data from a variety of physical systems, Geospatial analyses and visualizations, Applied statistics and Bayesian inference, Machine learning algorithms, Creating machine learning models on geospatial data, Geostatistics, Image processing or computer vision, Distributed computing applications for machine learning and other computations","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":140000,"maxValue":240000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_ccc99db2-dd4"},"title":"Data Scientist - Staff or Senior (Australia)","description":"<p>We are hiring a Data Scientist to help accelerate our mission. In this role, you will build predictive models and apply scientific computing, statistical, and physics-based methods to find places where there is evidence of ore-forming processes at work and to predict the locations of ore-grade mineralization in 2D and 3D. You will help build a worldwide dataset that underlies our exploration program, with careful attention to identifying and quantifying uncertainty in the data and in our predictions. You will create models and develop software to accelerate discovery of critical battery metals.</p>\n<p>You will join an outstanding team of data scientists and engineers and will work closely with KoBold&#39;s world-renowned geoscientists to incorporate our best understanding of the chemical and physical processes that create ore deposits. 
Working with your geoscience colleagues, you will create 2D and 3D geologic predictions, identify exploration targets, design field programs to collect data, and use that data to reduce the uncertainty in our predictions and guide the next phase of field work.</p>\n<p>Ultimately, your role is to help KoBold make valuable discoveries by building data tools to solve scientific problems. As one of the early members of this team, you will help build these tools from the ground up.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Help develop KoBold&#39;s proprietary software exploration tools.</li>\n<li>Find and curate geophysical, geochemical, geologic, and geographic data and integrate it into KoBold&#39;s proprietary data system.</li>\n<li>Build models to make statistically valid predictions about the locations of compositional anomalies within the Earth&#39;s crust.</li>\n<li>Create effective visualizations for evaluating model performance and enabling rapid interaction with the underlying data and key features.</li>\n<li>Develop and apply data processing, statistical, and physics-based techniques to geoscientific data, from computer vision to geophysical inversions, and use the results to guide our targeting efforts and inform our acquisition and exploration decisions.</li>\n<li>Present to and collaborate with our external partners and stakeholders.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_ccc99db2-dd4","directApply":true,"hiringOrganization":{"@type":"Organization","name":"KoBold Metals","sameAs":"https://www.koboldmetals.com/","logo":"https://logos.yubhub.co/koboldmetals.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/koboldmetals/jobs/4677639005","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$140,000 - $240,000 (USD) plus equity and 
benefits","x-skills-required":["Python's data science packages","General software engineering practices","Collaborative software development (git)","Cloud computing resources","Building predictive models","Applying machine learning algorithms","Data from a variety of physical systems","Geospatial analyses and visualizations"],"x-skills-preferred":["Creating machine learning models on geospatial data","Geostatistics","Image processing or computer vision","Distributed computing applications for machine learning and other computations"],"datePosted":"2026-04-17T12:39:51.870Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Remote"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python's data science packages, General software engineering practices, Collaborative software development (git), Cloud computing resources, Building predictive models, Applying machine learning algorithms, Data from a variety of physical systems, Geospatial analyses and visualizations, Creating machine learning models on geospatial data, Geostatistics, Image processing or computer vision, Distributed computing applications for machine learning and other computations","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":140000,"maxValue":240000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_deac6c59-0a4"},"title":"Senior R&D Engineer – AI for Simulation Tools","description":"<p>At Synopsys, we drive the innovations that shape the way we live and connect. Our technology is central to the Era of Pervasive Intelligence, from self-driving cars to learning machines. 
We lead in chip design, verification, and IP integration, empowering the creation of high-performance silicon chips and software content.</p>\n<p>You are a passionate engineer who thrives on solving complex technical challenges and is eager to shape the next generation of mechanical simulation tools. With a strong foundation in software development and AI, you are excited by the opportunity to blend agentic AI advancements with engineering simulation software.</p>\n<p>Driving and implementing the integration of conversational AI capabilities into advanced mechanical simulation software tools.\nDesigning, developing, maintaining, testing, and documenting software modules, including APIs, backend, and frontend components.\nCollaborating closely with multidisciplinary teams to leverage shared technology components and ensure seamless integration.\nWriting clean, maintainable, and efficient code, adhering to industry best practices and coding standards.\nInvestigating and resolving technical issues reported by QA or product support, ensuring robust and reliable software performance.\nParticipating in bug verification, release testing, and beta support activities to guarantee product quality.\nStaying abreast of the latest agentic AI advancements, applying them to enhance existing systems or develop novel solutions.</p>\n<p>Accelerate the integration of AI into Synopsys&#39; new mechanical simulation tools, transforming user experience and functionality.\nDeliver high-impact, high-visibility AI solutions for a pioneering product line in the simulation and engineering domain.\nShape innovative tools that enable engineers and researchers to solve complex mechanical modeling problems more efficiently.\nContribute to the advancement of modeling through intelligent automation and conversational interfaces.\nDrive Synopsys&#39; reputation as a leader in AI-powered engineering solutions, empowering customers across industries.</p>\n<p>MS in Engineering, Computer Science, 
Natural Science, or a related field.\nAcademic, research, or industry experience in software development.\nProficiency in Python\nExperience with LLMs or generative AI techniques and agentic AI tools.\nExperience with additional programming languages such as Typescript, Go, or C++.\nFamiliarity with distributed computing technologies (micro-service architectures, RPC frameworks, REST, WebSocket APIs).\nExperience with containerization tools (e.g., Docker), CI/CD pipelines (e.g., GitHub), and web frontend frameworks (e.g., Angular, React) is a plus.\nBackground in Computer-Aided Engineering (CAE), Finite Element Analysis (FEA), Computational Fluid Dynamics (CFD), or CAD modeling is highly valued.</p>\n<p>Passionate about understanding and solving complex problems with innovative technical solutions.\nSelf-driven with a continuous learning mindset and enthusiasm for emerging technologies.\nInquisitive, rigorous, and detail-oriented.\nExcellent communicator, fluent in English.\nAdaptable, able to work independently and collaboratively within distributed teams.</p>\n<p>You will join a collaborative, agile team dedicated to developing next-generation tools for pre- and post-processing of mechanical simulations. The team values excellence and continuous improvement, working together to achieve ambitious goals and deliver innovative solutions that redefine simulation software.</p>\n<p>We offer a comprehensive range of benefits to cater to your needs. Our total rewards include both monetary and non-monetary offerings. Your recruiter will provide more details about the salary range and benefits during the hiring process.</p>\n<p>At Synopsys, we want talented people of every background to feel valued and supported to do their best work. 
Synopsys considers all applicants for employment without regard to race, color, religion, national origin, gender, sexual orientation, age, military veteran status, or disability.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_deac6c59-0a4","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Synopsys","sameAs":"https://careers.synopsys.com","logo":"https://logos.yubhub.co/careers.synopsys.com.png"},"x-apply-url":"https://careers.synopsys.com/job/zurich/senior-r-and-d-engineer-ai-for-simulation-tools-m-f-d/44408/92956733552","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Python","LLMs or generative AI techniques","Agentic AI tools","Typescript","Go","C++","Distributed computing technologies","Containerization tools","CI/CD pipelines","Web frontend frameworks"],"x-skills-preferred":[],"datePosted":"2026-04-05T13:22:11.324Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Zurich"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, LLMs or generative AI techniques, Agentic AI tools, Typescript, Go, C++, Distributed computing technologies, Containerization tools, CI/CD pipelines, Web frontend frameworks"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_38debdc4-b87"},"title":"GPU R&D Engineer (CUDA programming)","description":"<p>You are a passionate technology leader with deep expertise in GPU-accelerated computing and algorithm design. 
With over a decade of experience in software engineering, you thrive in environments that challenge you to innovate and push boundaries.</p>\n<p>As a GPU R&amp;D Engineer at Synopsys, you will be responsible for optimizing and enhancing existing GPU implementations for cutting-edge ILT (Inverse Lithography Technology) software. You will also design, develop, and deploy new GPU-accelerated algorithms for handling large-scale geometric data in mask synthesis tools.</p>\n<p>Key responsibilities include:</p>\n<ul>\n<li>Optimizing and enhancing existing GPU implementations for cutting-edge ILT software</li>\n<li>Designing, developing, and deploying new GPU-accelerated algorithms for handling large-scale geometric data in mask synthesis tools</li>\n<li>Collaborating with software, hardware, and QA teams to ensure seamless integration of advanced GPU features into Synopsys solutions</li>\n<li>Leading benchmarking and performance testing efforts to maximize throughput and efficiency of GPU algorithms</li>\n<li>Conducting research and staying current on GPU technology advancements, integrating the latest trends into Synopsys EDA products</li>\n<li>Interfacing with customers and hardware vendors to deliver optimal solutions and support rapid chip manufacturing cycles</li>\n</ul>\n<p>This role requires a strong foundation in algorithms and data structures, with proven experience optimizing for performance. You should also have exceptional troubleshooting skills and the ability to resolve complex integration challenges.</p>\n<p>In return, you will have the opportunity to make a tangible impact in the world of electronic design automation and lead initiatives that shape the next generation of semiconductor technology.</p>\n<p>The team you will be a part of is a dynamic, diverse group of engineers focused on advancing mask synthesis and lithography solutions within Synopsys. 
The team is renowned for its innovative spirit, technical excellence, and collaborative approach, working closely with customers and hardware partners to deliver industry-leading EDA tools.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_38debdc4-b87","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Synopsys","sameAs":"https://careers.synopsys.com","logo":"https://logos.yubhub.co/careers.synopsys.com.png"},"x-apply-url":"https://careers.synopsys.com/job/bengaluru/gpu-r-and-d-engineer-cuda-programming/44408/91681543296","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Advanced knowledge of CUDA or similar GPU computing technologies","Proficiency in C/C++, Python, and distributed computing environments","Strong foundation in algorithms and data structures, with proven experience optimizing for performance","Exceptional troubleshooting skills and ability to resolve complex integration challenges","Experience with computational geometry algorithms, including Beziers, NURBS, and B-splines"],"x-skills-preferred":["Background in designing algorithms for Optical Proximity Correction and Inverse Lithography Technology","Experience with large-scale data handling and distributed systems"],"datePosted":"2026-04-05T13:22:03.873Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Bengaluru"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Advanced knowledge of CUDA or similar GPU computing technologies, Proficiency in C/C++, Python, and distributed computing environments, Strong foundation in algorithms and data structures, with proven experience optimizing for performance, Exceptional troubleshooting skills and ability to resolve complex integration challenges, 
Experience with computational geometry algorithms, including Beziers, NURBS, and B-splines, Background in designing algorithms for Optical Proximity Correction and Inverse Lithography Technology, Experience with large-scale data handling and distributed systems"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_ce88828f-470"},"title":"Solutions Architect, AI and ML","description":"<p>We are building the world&#39;s leading AI company and are looking for an experienced Cloud Solution Architect to assist customers with adoption of GPU hardware and software, as well as building and deploying Machine Learning (ML), Deep Learning (DL), and data analytics solutions on various Cloud Computing Platforms.</p>\n<p>As part of the Solutions Architecture team, we work with some of the most exciting computing hardware and software technologies including the latest breakthroughs in machine learning and data science. A Solutions Architect is the first line of technical expertise between NVIDIA and our customers so you will engage directly with developers, researchers, and data scientists with some of NVIDIA&#39;s most strategic technology customers as well as work directly with business and engineering teams on product strategy.</p>\n<p><strong>What you will be doing:</strong></p>\n<ul>\n<li>Working with Cloud Service Providers to develop and demonstrate solutions based on NVIDIA&#39;s ML/DL and data science software and hardware technologies</li>\n</ul>\n<ul>\n<li>Build and deploy AI/ML solutions at scale using NVIDIA&#39;s AI software on cloud-based GPU platforms.</li>\n</ul>\n<ul>\n<li>Build custom PoCs for solutions that address customers&#39; critical business needs applying NVIDIA hardware and software technology</li>\n</ul>\n<ul>\n<li>Partner with Sales Account Managers or Developer Relations Managers to identify and secure new business opportunities for NVIDIA products and solutions for ML/DL and other 
software solutions</li>\n</ul>\n<ul>\n<li>Prepare and deliver technical content to customers including presentations about purpose-built solutions, workshops about NVIDIA products and solutions, etc.</li>\n</ul>\n<ul>\n<li>Conduct regular technical customer meetings for project/product roadmap, feature discussions, and intro to new technologies. Establish close technical ties to the customer to facilitate rapid resolution of customer issues</li>\n</ul>\n<p><strong>What we need to see:</strong></p>\n<ul>\n<li>3+ years of Solutions Engineering (or similar Sales Engineering roles) or equivalent experience</li>\n</ul>\n<ul>\n<li>3+ years of work-related experience in Deep Learning and Machine Learning, including deep learning frameworks TensorFlow or PyTorch, GPU, and CUDA experience extremely helpful.</li>\n</ul>\n<ul>\n<li>BS/MS/PhD in Electrical/Computer Engineering, Computer Science, Statistics, Physics, or other Engineering fields or equivalent experience.</li>\n</ul>\n<ul>\n<li>Established track record of deploying solutions in cloud computing environments including AWS, GCP, or Azure</li>\n</ul>\n<ul>\n<li>Knowledge of DevOps/ML Ops technologies such as Docker/containers, Kubernetes, data center deployments</li>\n</ul>\n<ul>\n<li>Ability to use at least one scripting language (i.e., Python)</li>\n</ul>\n<ul>\n<li>Good programming and debugging skills</li>\n</ul>\n<ul>\n<li>Ability to communicate your ideas/code clearly through documents, presentation etc.</li>\n</ul>\n<p><strong>Ways to stand out from the crowd:</strong></p>\n<ul>\n<li>AWS, GCP or Azure Professional Solution Architect Certification.</li>\n</ul>\n<ul>\n<li>Hands-on experience with NVIDIA GPUs and SDKs (e.g. 
CUDA, RAPIDS, Triton etc.)</li>\n</ul>\n<ul>\n<li>System-level experience specifically GPU-based systems</li>\n</ul>\n<ul>\n<li>Experience with Deep Learning at scale</li>\n</ul>\n<ul>\n<li>Familiarity with parallel programming and distributed computing platforms</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_ce88828f-470","directApply":true,"hiringOrganization":{"@type":"Organization","name":"NVIDIA","sameAs":"https://nvidia.wd5.myworkdayjobs.com","logo":"https://logos.yubhub.co/nvidia.com.png"},"x-apply-url":"https://nvidia.wd5.myworkdayjobs.com/en-US/NVIDIAExternalCareerSite/job/US-WA-Redmond/Solutions-Architect--AI-and-ML_JR2000691","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Solutions Engineering","Deep Learning and Machine Learning","TensorFlow or PyTorch","GPU and CUDA experience","BS/MS/PhD in Electrical/Computer Engineering, Computer Science, Statistics, Physics, or other Engineering fields","DevOps/ML Ops technologies","Docker/containers, Kubernetes, data center deployments","Scripting language (i.e., Python)","Good programming and debugging skills","Ability to communicate your ideas/code clearly through documents, presentation etc."],"x-skills-preferred":["AWS, GCP or Azure Professional Solution Architect Certification","Hands-on experience with NVIDIA GPUs and SDKs (e.g. 
CUDA, RAPIDS, Triton etc.)","System-level experience specifically GPU-based systems","Experience with Deep Learning at scale","Familiarity with parallel programming and distributed computing platforms"],"datePosted":"2026-03-09T20:46:16.733Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Redmond, Santa Clara, Seattle"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Solutions Engineering, Deep Learning and Machine Learning, TensorFlow or PyTorch, GPU and CUDA experience, BS/MS/PhD in Electrical/Computer Engineering, Computer Science, Statistics, Physics, or other Engineering fields, DevOps/ML Ops technologies, Docker/containers, Kubernetes, data center deployments, Scripting language (i.e., Python), Good programming and debugging skills, Ability to communicate your ideas/code clearly through documents, presentation etc., AWS, GCP or Azure Professional Solution Architect Certification, Hands-on experience with NVIDIA GPUs and SDKs (e.g. CUDA, RAPIDS, Triton etc.), System-level experience specifically GPU-based systems, Experience with Deep Learning at scale, Familiarity with parallel programming and distributed computing platforms"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_b55675c9-0db"},"title":"Head of Engineering (Platform)","description":"<p><strong>Head of Engineering (Platform)</strong></p>\n<p>Fuse Energy is seeking a Head of Engineering (Platform) to lead the development of our core backend systems and platform infrastructure. 
As a key member of our team, you will own the architecture and scalability of the platform, ensuring we build robust, high-performance systems that enable rapid product iteration and exceptional customer experiences.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Own the backend platform architecture, infrastructure, and foundational services</li>\n<li>Drive the evolution of our platform to support scale, performance, and reliability</li>\n<li>Build a real-time digital twin of renewable generation and customer demand</li>\n<li>Design and manage high-volume data pipelines for energy consumption and system telemetry</li>\n<li>Lead the development of integration layers and messaging interfaces with third-party services</li>\n<li>Establish engineering best practices for observability, CI/CD, testing, and scalability</li>\n<li>Partner closely with product and backend teams to support rapid development cycles</li>\n</ul>\n<p><strong>Requirements</strong></p>\n<ul>\n<li>Proven track record as a senior software engineer or tech lead, ideally with platform/backend focus</li>\n<li>5+ years experience in software engineering, with 2+ years in a leadership role</li>\n<li>Experience building and operating production-grade systems at scale</li>\n<li>Strong understanding of system design, distributed computing, and cloud infrastructure</li>\n<li>Clear and proactive communication, with the ability to align cross-functional teams</li>\n<li>Hands-on approach to solving problems and making strategic decisions</li>\n</ul>\n<p><strong>Bonus</strong></p>\n<ul>\n<li>Experience with Infrastructure as Code (e.g., AWS CDK, Terraform)</li>\n<li>Experience with event-driven architecture, messaging queues, or stream processing</li>\n<li>Familiarity with building internal platforms or developer tooling</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<ul>\n<li>Competitive salary and an equity sign-on bonus</li>\n<li>Biannual bonus scheme</li>\n<li>Fully expensed tech to match your 
needs</li>\n<li>Paid annual leave</li>\n<li>Breakfast and dinner allowance for office based employees</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_b55675c9-0db","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Fuse Energy","sameAs":"https://jobs.workable.com","logo":"https://logos.yubhub.co/view.com.png"},"x-apply-url":"https://jobs.workable.com/view/dSZh2emP6XmnvYfQnTTL5q/hybrid-head-of-engineering-(platform)-in-london-at-fuse-energy","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["backend platform architecture","infrastructure as code","event-driven architecture","messaging queues","stream processing","system design","distributed computing","cloud infrastructure"],"x-skills-preferred":["AWS CDK","Terraform","CI/CD","testing","scalability"],"datePosted":"2026-03-09T16:55:17.477Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"London, England"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"backend platform architecture, infrastructure as code, event-driven architecture, messaging queues, stream processing, system design, distributed computing, cloud infrastructure, AWS CDK, Terraform, CI/CD, testing, scalability"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_a97094d0-e90"},"title":"Research Engineer, Production Model Post-Training","description":"<p><strong>About the role</strong></p>\n<p>Anthropic&#39;s production models undergo sophisticated post-training processes to enhance their capabilities, alignment, and safety. 
As a Research Engineer on our Post-Training team, you&#39;ll train our base models through the complete post-training stack to deliver the production Claude models that users interact with.</p>\n<p>You&#39;ll work at the intersection of cutting-edge research and production engineering, implementing, scaling, and improving post-training techniques like Constitutional AI, RLHF, and other alignment methodologies. Your work will directly impact the quality, safety, and capabilities of our production models.</p>\n<p><em>Note: For this role, we conduct all interviews in Python. This role may require responding to incidents on short notice, including on weekends.</em></p>\n<p><strong>Responsibilities:</strong></p>\n<ul>\n<li>Implement and optimize post-training techniques at scale on frontier models</li>\n<li>Conduct research to develop and optimize post-training recipes that directly improve production model quality</li>\n<li>Design, build, and run robust, efficient pipelines for model fine-tuning and evaluation</li>\n<li>Develop tools to measure and improve model performance across various dimensions</li>\n<li>Collaborate with research teams to translate emerging techniques into production-ready implementations</li>\n<li>Debug complex issues in training pipelines and model behavior</li>\n<li>Help establish best practices for reliable, reproducible model post-training</li>\n</ul>\n<p><strong>You may be a good fit if you:</strong></p>\n<ul>\n<li>Thrive in controlled chaos and are energised, rather than overwhelmed, when juggling multiple urgent priorities</li>\n<li>Adapt quickly to changing priorities</li>\n<li>Maintain clarity when debugging complex, time-sensitive issues</li>\n<li>Have strong software engineering skills with experience building complex ML systems</li>\n<li>Are comfortable working with large-scale distributed systems and high-performance computing</li>\n<li>Have experience with training, fine-tuning, or evaluating large language models</li>\n<li>Can balance 
research exploration with engineering rigor and operational reliability</li>\n<li>Are adept at analysing and debugging model training processes</li>\n<li>Enjoy collaborating across research and engineering disciplines</li>\n<li>Can navigate ambiguity and make progress in fast-moving research environments</li>\n</ul>\n<p><strong>Strong candidates may also:</strong></p>\n<ul>\n<li>Have experience with LLMs</li>\n<li>Have a keen interest in AI safety and responsible deployment</li>\n</ul>\n<p>We welcome candidates at various experience levels, with a preference for senior engineers who have hands-on experience with frontier AI systems. However, proficiency in Python, deep learning frameworks, and distributed computing is required for this role.</p>\n<p>The annual compensation range for this role is listed below.</p>\n<p>For sales roles, the range provided is the role’s On Target Earnings (&quot;OTE&quot;) range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.</p>\n<p>Annual Salary:</p>\n<p>$350,000 - $500,000 USD</p>\n<p><strong>Logistics</strong></p>\n<p><strong>Education requirements:</strong> We require at least a Bachelor&#39;s degree in a related field or equivalent experience.</p>\n<p><strong>Location-based hybrid policy:</strong> Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</p>\n<p><strong>Visa sponsorship:</strong> We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</p>\n<p><strong>We encourage you to apply even if you do not believe you meet every single qualification.</strong> Not all strong candidates will meet every single qualification as listed. 
Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you&#39;re interested in this work.</p>\n<p><strong>Your safety matters to us.</strong> To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you&#39;re ever unsure about a communication, don&#39;t click any links—visit anthropic.com/careers directly for confirmed position openings.</p>\n<p><strong>How we&#39;re different</strong></p>\n<p>We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We&#39;re an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. 
As such, we greatly value communication skills.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_a97094d0-e90","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://job-boards.greenhouse.io","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/4613592008","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$350,000 - $500,000USD","x-skills-required":["Python","Deep learning frameworks","Distributed computing","Large-scale distributed systems","High-performance computing","Training, fine-tuning, or evaluating large language models"],"x-skills-preferred":["Experience with LLMs","AI safety and responsible deployment"],"datePosted":"2026-03-08T13:47:28.524Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA | New York City, NY | Seattle, WA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, Deep learning frameworks, Distributed computing, Large-scale distributed systems, High-performance computing, Training, fine-tuning, or evaluating large language models, Experience with LLMs, AI safety and responsible deployment","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":350000,"maxValue":500000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_ca30dbae-0f6"},"title":"Research Engineer, Production Model Post-Training","description":"<p><strong>About the role</strong></p>\n<p>Anthropic&#39;s production models undergo sophisticated post-training processes to enhance their capabilities, alignment, and safety. 
As a Research Engineer on our Post-Training team, you&#39;ll train our base models through the complete post-training stack to deliver the production Claude models that users interact with.</p>\n<p>You&#39;ll work at the intersection of cutting-edge research and production engineering, implementing, scaling, and improving post-training techniques like Constitutional AI, RLHF, and other alignment methodologies. Your work will directly impact the quality, safety, and capabilities of our production models.</p>\n<p><em>Note: For this role, we conduct all interviews in Python. This role may require responding to incidents on short notice, including on weekends.</em></p>\n<p><strong>Responsibilities:</strong></p>\n<ul>\n<li>Implement and optimize post-training techniques at scale on frontier models</li>\n</ul>\n<ul>\n<li>Conduct research to develop and optimize post-training recipes that directly improve production model quality</li>\n</ul>\n<ul>\n<li>Design, build, and run robust, efficient pipelines for model fine-tuning and evaluation</li>\n</ul>\n<ul>\n<li>Develop tools to measure and improve model performance across various dimensions</li>\n</ul>\n<ul>\n<li>Collaborate with research teams to translate emerging techniques into production-ready implementations</li>\n</ul>\n<ul>\n<li>Debug complex issues in training pipelines and model behavior</li>\n</ul>\n<ul>\n<li>Help establish best practices for reliable, reproducible model post-training</li>\n</ul>\n<p><strong>You may be a good fit if you:</strong></p>\n<ul>\n<li>Thrive in controlled chaos and are energised, rather than overwhelmed, when juggling multiple urgent priorities</li>\n</ul>\n<ul>\n<li>Adapt quickly to changing priorities</li>\n</ul>\n<ul>\n<li>Maintain clarity when debugging complex, time-sensitive issues</li>\n</ul>\n<ul>\n<li>Have strong software engineering skills with experience building complex ML systems</li>\n</ul>\n<ul>\n<li>Are comfortable working with large-scale distributed systems and 
high-performance computing</li>\n</ul>\n<ul>\n<li>Have experience with training, fine-tuning, or evaluating large language models</li>\n</ul>\n<ul>\n<li>Can balance research exploration with engineering rigor and operational reliability</li>\n</ul>\n<ul>\n<li>Are adept at analyzing and debugging model training processes</li>\n</ul>\n<ul>\n<li>Enjoy collaborating across research and engineering disciplines</li>\n</ul>\n<ul>\n<li>Can navigate ambiguity and make progress in fast-moving research environments</li>\n</ul>\n<p><strong>Strong candidates may also:</strong></p>\n<ul>\n<li>Have experience with LLMs</li>\n</ul>\n<ul>\n<li>Have a keen interest in AI safety and responsible deployment</li>\n</ul>\n<p><strong>Logistics</strong></p>\n<p><strong>Education requirements:</strong> We require at least a Bachelor&#39;s degree in a related field or equivalent experience. <strong>Location-based hybrid policy:</strong> Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</p>\n<p><strong>Visa sponsorship:</strong> We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</p>\n<p><strong>We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you&#39;re interested in this work.</strong></p>\n<p><strong>Your safety matters to us. 
To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you&#39;re ever unsure about a communication, don&#39;t click any links—visit anthropic.com/careers directly for confirmed position openings.</strong></p>\n<p><strong>How we&#39;re different</strong></p>\n<p>We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We&#39;re an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.</p>\n<p>The easiest way to understand our research directions is to read our recent research. 
This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI &amp; Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.</p>\n<p><strong>Come work with us!</strong></p>","url":"https://yubhub.co/jobs/job_ca30dbae-0f6","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://job-boards.greenhouse.io","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5112018008","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Python","Deep learning frameworks","Distributed computing","Large language models","ML systems","High-performance computing"],"x-skills-preferred":["LLMs","AI safety","Responsible deployment"],"datePosted":"2026-03-08T13:46:28.215Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Zürich"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, Deep learning frameworks, Distributed computing, Large language models, ML systems, High-performance computing, LLMs, AI safety, Responsible deployment"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_67dcf42f-2dc"},"title":"Engineering Manager ChatGPT Infra","description":"<p><strong>Engineering Manager ChatGPT Infra</strong></p>\n<p><strong>Location</strong></p>\n<p>London, UK</p>\n<p><strong>Employment Type</strong></p>\n<p>Full time</p>\n<p><strong>Department</strong></p>\n<p>Applied AI</p>\n<p><strong>About the Team:</strong></p>\n<p>The ChatGPT Infrastructure team is responsible for the 
platform that powers ChatGPT, one of the fastest-growing consumer products in history. We build, scale, and operate the infrastructure that enables rapid experimentation, reliable deployment, and global delivery of AI-powered experiences. As we expand our global footprint, we’re investing in establishing a leadership presence in London to help shape our growing office and drive collaboration across OpenAI’s international teams.</p>\n<p><strong>About the Role:</strong></p>\n<p>We’re looking for an experienced Engineering Manager to lead the ChatGPT Infra team from our London office. In this dual role, you’ll be both a technical leader and the site lead for our London engineering hub. You’ll be responsible for building and mentoring a world-class infra team, helping to scale ChatGPT infrastructure, and fostering a strong, inclusive engineering culture at our growing international site.</p>\n<p>You will:</p>\n<ul>\n<li>Lead a team of infrastructure engineers focused on availability, scalability, and performance for ChatGPT.</li>\n</ul>\n<ul>\n<li>Collaborate closely with product and research teams to deliver a seamless and robust experience to millions of users.</li>\n</ul>\n<ul>\n<li>Define and drive technical strategy for key components such as deployment pipelines, service mesh, observability, and CI/CD systems.</li>\n</ul>\n<ul>\n<li>Partner with recruiting to grow the London engineering team and represent OpenAI in the local tech community.</li>\n</ul>\n<ul>\n<li>Serve as a cultural ambassador and people manager, supporting cross-functional collaboration and site operations.</li>\n</ul>\n<ul>\n<li>Operate with a high degree of autonomy and ownership, with support from global leaders and peers.</li>\n</ul>\n<p><strong>Qualifications:</strong></p>\n<ul>\n<li>7+ years of hands-on engineering experience, ideally in high-scale systems, distributed computing, or developer platforms.</li>\n</ul>\n<ul>\n<li>Demonstrated success in 
leading cross-functional projects and collaborating across product, infra, and research orgs.</li>\n</ul>\n<ul>\n<li>Passion for building strong, inclusive teams and mentoring engineers of all experience levels.</li>\n</ul>\n<ul>\n<li>Experience operating production services in cloud environments (e.g., AWS, GCP, Azure).</li>\n</ul>\n<ul>\n<li>Comfortable wearing multiple hats — from deep technical discussions to team planning and office leadership.</li>\n</ul>\n<ul>\n<li>Based in or willing to relocate to London.</li>\n</ul>\n<p><strong>About OpenAI</strong></p>\n<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.</p>","url":"https://yubhub.co/jobs/job_67dcf42f-2dc","directApply":true,"hiringOrganization":{"@type":"Organization","name":"OpenAI","sameAs":"https://jobs.ashbyhq.com","logo":"https://logos.yubhub.co/openai.com.png"},"x-apply-url":"https://jobs.ashbyhq.com/openai/5a4ba7cb-4ba2-41d3-8e02-840617a0f571","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["high-scale systems","distributed computing","developer platforms","cloud environments","AWS","GCP","Azure","deployment pipelines","service mesh","observability","CI/CD systems"],"x-skills-preferred":["leadership","team management","cross-functional collaboration","site 
operations"],"datePosted":"2026-03-06T18:20:48.510Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"London, UK"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"high-scale systems, distributed computing, developer platforms, cloud environments, AWS, GCP, Azure, deployment pipelines, service mesh, observability, CI/CD systems, leadership, team management, cross-functional collaboration, site operations"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_a5dfb84a-c37"},"title":"Member of Technical Staff, Evaluations Engineering","description":"<p><strong>Summary</strong></p>\n<p>Microsoft AI are looking for a talented Member of Technical Staff, Evaluations Engineer to help build the next wave of capabilities of our personalized AI assistant, Copilot. We&#39;re looking for someone who will bring an abundance of positive energy, empathy, and kindness to the team every day, in addition to being highly effective.</p>\n<p><strong>About the Role</strong></p>\n<p>We&#39;re looking for someone who will actively contribute to the development of AI models that are powering our innovative products. You will wear multiple hats and work on engineering, research, and everything in between. 
Your contributions will span model architecture, data curation, training and inference infrastructures, evaluation protocols, alignment and reinforcement learning from human feedback (RLHF), and many other exciting topics at the cutting edge of AI.</p>\n<p><strong>Accountabilities</strong></p>\n<ul>\n<li>Develop and tune the pretraining scalable software for Nvidia GB200 72NVL CX8 and AMD MIxxx architectures.</li>\n<li>Benchmark GB200 and AMD MIxxx GPU clusters.</li>\n<li>Gather data and insights to develop the pretraining compute roadmap.</li>\n</ul>\n<p><strong>The Candidate we&#39;re looking for</strong></p>\n<p><strong>Experience:</strong></p>\n<ul>\n<li>Bachelor’s Degree in Computer Science or related technical field AND 6+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR equivalent experience.</li>\n</ul>\n<p><strong>Technical skills:</strong></p>\n<ul>\n<li>Experience with generative AI.</li>\n<li>Experience with distributed computing.</li>\n</ul>\n<p><strong>Personal attributes:</strong></p>\n<ul>\n<li>Enjoy working in a fast-paced, design-driven, product development cycle.</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<ul>\n<li>Software Engineering IC5 – The typical base pay range for this role across the U.S. is USD $139,900 – $274,800 per year.</li>\n<li>Starting January 26, 2026, MAI employees are expected to work from a designated Microsoft office at least four days a week if they live within 50 miles (U.S.) 
or 25 miles (non-U.S., country-specific) of that location.</li>\n</ul>","url":"https://yubhub.co/jobs/job_a5dfb84a-c37","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Microsoft AI","sameAs":"https://microsoft.ai","logo":"https://logos.yubhub.co/microsoft.ai.png"},"x-apply-url":"https://microsoft.ai/job/member-of-technical-staff-evaluations-engineering-mai-superintelligence-team-3/","x-work-arrangement":"hybrid","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"USD $139,900 – $274,800 per year","x-skills-required":["C","C++","C#","Java","JavaScript","Python","Generative AI","Distributed Computing"],"x-skills-preferred":["Experience with AI","Experience with machine learning"],"datePosted":"2026-03-06T07:33:24.706Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"New York"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"C, C++, C#, Java, JavaScript, Python, Generative AI, Distributed Computing, Experience with AI, Experience with machine learning","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":139900,"maxValue":274800,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_f8953efe-b98"},"title":"Member of Technical Staff, Evaluations Engineering","description":"<p><strong>Summary</strong></p>\n<p>Microsoft AI are looking for a talented Member of Technical Staff, Evaluations Engineer to help build the next wave of capabilities of our personalized AI assistant, Copilot. 
We’re looking for someone who will bring an abundance of positive energy, empathy, and kindness to the team every day, in addition to being highly effective.</p>\n<p><strong>About the Role</strong></p>\n<p>We are looking for a highly skilled and experienced engineer to join our Evaluations Engineering team. As a Member of Technical Staff, Evaluations Engineer, you will be responsible for developing and tuning the pretraining scalable software for Nvidia GB200 72NVL CX8 and AMD MIxxx architectures. You will also be responsible for benchmarking GB200 and AMD MIxxx GPU clusters and gathering data and insights to develop the pretraining compute roadmap, and you should care deeply about conversational AI and its deployment.</p>\n<p><strong>Accountabilities</strong></p>\n<ul>\n<li>Develop and tune the pretraining scalable software for Nvidia GB200 72NVL CX8 and AMD MIxxx architectures.</li>\n<li>Benchmark GB200 and AMD MIxxx GPU clusters.</li>\n<li>Gather data and insights to develop the pretraining compute roadmap.</li>\n</ul>\n<p><strong>The Candidate we&#39;re looking for</strong></p>\n<p><strong>Experience:</strong></p>\n<ul>\n<li>Bachelor’s Degree in Computer Science or related technical field AND 6+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR equivalent experience.</li>\n</ul>\n<p><strong>Technical skills:</strong></p>\n<ul>\n<li>Experience with generative AI.</li>\n<li>Experience with distributed computing.</li>\n</ul>\n<p><strong>Personal attributes:</strong></p>\n<ul>\n<li>Enjoy working in a fast-paced, design-driven, product development cycle.</li>\n<li>Embody our Culture and Values.</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<ul>\n<li>Software Engineering IC5 – The typical base pay range for this role across the U.S. is USD $139,900 – $274,800 per year.</li>\n<li>Software Engineering IC6 – The typical base pay range for this role across the U.S. 
is USD $163,000 – $296,400 per year.</li>\n</ul>","url":"https://yubhub.co/jobs/job_f8953efe-b98","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Microsoft AI","sameAs":"https://microsoft.ai","logo":"https://logos.yubhub.co/microsoft.ai.png"},"x-apply-url":"https://microsoft.ai/job/member-of-technical-staff-evaluations-engineering-mai-superintelligence-team-2/","x-work-arrangement":"onsite","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"USD $139,900 – $274,800 per year","x-skills-required":["C","C++","C#","Java","JavaScript","Python","Generative AI","Distributed Computing"],"x-skills-preferred":["Experience with Nvidia GB200 72NVL CX8 and AMD MIxxx architectures","Experience with benchmarking GPU clusters"],"datePosted":"2026-03-06T07:32:38.526Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Redmond"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"C, C++, C#, Java, JavaScript, Python, Generative AI, Distributed Computing, Experience with Nvidia GB200 72NVL CX8 and AMD MIxxx architectures, Experience with benchmarking GPU clusters","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":139900,"maxValue":274800,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_046c8733-208"},"title":"Member of Technical Staff, Evaluations Engineering","description":"<p><strong>Summary</strong></p>\n<p>Microsoft AI are looking for a talented Member of Technical Staff, Evaluations Engineer to help build the next wave of capabilities of our personalized AI assistant, Copilot. 
We’re looking for someone who will bring an abundance of positive energy, empathy, and kindness to the team every day, in addition to being highly effective.</p>\n<p><strong>About the Role</strong></p>\n<p>We are looking for a highly skilled and experienced Evaluations Engineer to join our team. As an Evaluations Engineer, you will be responsible for developing and tuning the pretraining scalable software for Nvidia GB200 72NVL CX8 and AMD MIxxx architectures. You will also be responsible for benchmarking GB200 and AMD MIxxx GPU clusters and gathering data and insights to develop the pretraining compute roadmap, and you should care deeply about conversational AI and its deployment.</p>\n<p><strong>Accountabilities</strong></p>\n<ul>\n<li>Develop and tune the pretraining scalable software for Nvidia GB200 72NVL CX8 and AMD MIxxx architectures.</li>\n<li>Benchmark GB200 and AMD MIxxx GPU clusters.</li>\n<li>Gather data and insights to develop the pretraining compute roadmap.</li>\n</ul>\n<p><strong>The Candidate we&#39;re looking for</strong></p>\n<p><strong>Experience:</strong></p>\n<ul>\n<li>Bachelor’s Degree in Computer Science or related technical field AND 6+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR equivalent experience.</li>\n</ul>\n<p><strong>Technical skills:</strong></p>\n<ul>\n<li>Experience with generative AI.</li>\n<li>Experience with distributed computing.</li>\n</ul>\n<p><strong>Personal attributes:</strong></p>\n<ul>\n<li>Enjoy working in a fast-paced, design-driven, product development cycle.</li>\n<li>Embody our Culture and Values.</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<ul>\n<li>Software Engineering IC5 – The typical base pay range for this role across the U.S. 
is USD $139,900 – $274,800 per year.</li>\n<li>Starting January 26, 2026, MAI employees are expected to work from a designated Microsoft office at least four days a week if they live within 50 miles (U.S.) or 25 miles (non-U.S., country-specific) of that location.</li>\n</ul>","url":"https://yubhub.co/jobs/job_046c8733-208","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Microsoft AI","sameAs":"https://microsoft.ai","logo":"https://logos.yubhub.co/microsoft.ai.png"},"x-apply-url":"https://microsoft.ai/job/member-of-technical-staff-evaluations-engineering-mai-superintelligence-team/","x-work-arrangement":"hybrid","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"USD $139,900 – $274,800 per year","x-skills-required":["C","C++","C#","Java","JavaScript","Python","Generative AI","Distributed Computing"],"x-skills-preferred":["Experience with Nvidia GB200 72NVL CX8 and AMD MIxxx architectures","Experience with conversational AI and its deployment"],"datePosted":"2026-03-06T07:31:52.983Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Mountain View"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"C, C++, C#, Java, JavaScript, Python, Generative AI, Distributed Computing, Experience with Nvidia GB200 72NVL CX8 and AMD MIxxx architectures, Experience with conversational AI and its deployment","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":139900,"maxValue":274800,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_42b795e2-7cb"},"title":"Member of Technical Staff - Software Engineer (AI infra)","description":"<p><strong>Summary</strong></p>\n<p>Microsoft AI are 
looking for a talented Member of Technical Staff - Software Engineer to help build the next wave of capabilities of our personalized AI assistant, Copilot. We’re looking for someone who will bring an abundance of positive energy, empathy, and kindness to the team every day, in addition to being highly effective.</p>\n<p><strong>About the Role</strong></p>\n<p>We are looking for a highly skilled software engineer to join our team and actively contribute to the development of AI models that are powering our innovative products. You will wear multiple hats and work on engineering, research, and everything in between. Your contributions will span model architecture, data curation, training and inference infrastructures, evaluation protocols, alignment and reinforcement learning from human feedback (RLHF), and many other exciting topics at the cutting edge of AI.</p>\n<p><strong>Accountabilities</strong></p>\n<ul>\n<li>Develop and tune the pretraining scalable software for Nvidia GB200 72NVL CX8 and AMD MIxxx architectures.</li>\n<li>Benchmark GB200 and AMD MIxxx GPU clusters.</li>\n<li>Gather data and insights to develop the pretraining compute roadmap.</li>\n</ul>\n<p><strong>The Candidate we&#39;re looking for</strong></p>\n<p><strong>Experience:</strong></p>\n<ul>\n<li>Bachelor’s Degree in Computer Science, or related technical discipline AND 4+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR equivalent experience.</li>\n</ul>\n<p><strong>Technical skills:</strong></p>\n<ul>\n<li>Experience with generative AI.</li>\n<li>Experience with distributed computing.</li>\n</ul>\n<p><strong>Personal attributes:</strong></p>\n<ul>\n<li>Enjoy working in a fast-paced, design-driven, product development cycle.</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<ul>\n<li>Competitive salary and benefits 
package.</li>\n<li>Opportunity to work on cutting-edge AI projects.</li>\n<li>Collaborative and dynamic work environment.</li>\n</ul>","url":"https://yubhub.co/jobs/job_42b795e2-7cb","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Microsoft AI","sameAs":"https://microsoft.ai","logo":"https://logos.yubhub.co/microsoft.ai.png"},"x-apply-url":"https://microsoft.ai/job/member-of-technical-staff-software-engineer-ai-infra-mai-superintelligence-team/","x-work-arrangement":"onsite","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["C","C++","C#","Java","JavaScript","Python","Generative AI","Distributed computing"],"x-skills-preferred":["Leadership","Project management","Data analysis"],"datePosted":"2026-03-06T07:29:32.292Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Zürich"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"C, C++, C#, Java, JavaScript, Python, Generative AI, Distributed computing, Leadership, Project management, Data analysis"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_9d194d98-aa3"},"title":"Member of Technical Staff, Pre-Training Infrastructure","description":"<p><strong>Summary</strong></p>\n<p>Microsoft AI are looking for a talented Member of Technical Staff, Pre-Training Infrastructure, to help build the next wave of capabilities for our personalized AI assistant, Copilot. 
We’re seeking someone who brings an abundance of positive energy, empathy, and kindness to the team every day, in addition to being highly effective.</p>\n<p><strong>About the Role</strong></p>\n<p>We are seeking a highly skilled and experienced engineer to join our team as a Member of Technical Staff, Pre-Training Infrastructure. The successful candidate will be responsible for designing, implementing, testing, and optimizing distributed training infrastructure in Python and C++ for large-scale GPU clusters. They will also profile, benchmark, and debug performance bottlenecks across compute, memory, networking, and storage subsystems.</p>\n<p><strong>Accountabilities</strong></p>\n<ul>\n<li>Design, implement, test, and optimize distributed training infrastructure in Python and C++ for large-scale GPU clusters.</li>\n<li>Profile, benchmark, and debug performance bottlenecks across compute, memory, networking, and storage subsystems.</li>\n</ul>\n<p><strong>The Candidate we&#39;re looking for</strong></p>\n<p><strong>Experience:</strong></p>\n<ul>\n<li>Bachelor’s Degree in Computer Science or related technical field AND 6+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR equivalent experience.</li>\n</ul>\n<p><strong>Technical skills:</strong></p>\n<ul>\n<li>Experience in distributed computing and large-scale systems.</li>\n<li>Experience with GPU programming (CUDA, NCCL) and frameworks such as PyTorch.</li>\n</ul>\n<p><strong>Personal attributes:</strong></p>\n<ul>\n<li>Proven ability to profile, benchmark, and optimize performance-critical systems.</li>\n<li>Experience in leading technical projects and supporting architectural decisions with data.</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<ul>\n<li>Competitive salary and benefits package.</li>\n<li>Opportunity to work on cutting-edge AI projects.</li>\n<li>Collaborative and dynamic work environment.</li>\n</ul>
","url":"https://yubhub.co/jobs/job_9d194d98-aa3","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Microsoft AI","sameAs":"https://microsoft.ai","logo":"https://logos.yubhub.co/microsoft.ai.png"},"x-apply-url":"https://microsoft.ai/job/member-of-technical-staff-pre-training-infrastructure-mai-superintelligence-team-3/","x-work-arrangement":"onsite","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["distributed computing","GPU programming","PyTorch","C++","Python"],"x-skills-preferred":["machine learning","natural language processing","computer vision"],"datePosted":"2026-03-06T07:29:28.007Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"New York"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"distributed computing, GPU programming, PyTorch, C++, Python, machine learning, natural language processing, computer vision"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_a4ac455a-c22"},"title":"Senior Applied Scientist","description":"<p><strong>Summary</strong></p>\n<p>Microsoft AI are looking for a talented Senior Applied Scientist at their Redmond office. This role sits at the heart of strategic decision-making, transforming complex data into actionable insights for a company that&#39;s revolutionising the technology industry. 
You&#39;ll work directly with leadership to shape the company&#39;s direction in the development of large-scale, Azure-based intelligence platforms.</p>\n<p><strong>About the Role</strong></p>\n<p>As a Senior Applied Scientist, you will lead the development of machine learning solutions leveraging SOTA technologies in GenAI to build predictive models for generating recommendations, detecting anomalies, generating automated insights with reasoning, and ensuring the platform delivers accurate, actionable intelligence at scale. You will drive experimentation and validation of models, mentor junior scientists, and contribute to model governance and Responsible AI practices.</p>\n<p><strong>Accountabilities</strong></p>\n<ul>\n<li>Lead the design and implementation of machine learning models for recommendations, anomaly detection, and actionable insights.</li>\n<li>Drive experimentation and validation of models.</li>\n<li>Mentor junior scientists and contribute to model governance and Responsible AI practices.</li>\n<li>Partner with engineering and BI teams to operationalize insights into dashboards and alerting systems.</li>\n</ul>\n<p><strong>The Candidate we&#39;re looking for</strong></p>\n<p><strong>Experience:</strong></p>\n<ul>\n<li>4+ years related experience (e.g., statistics, predictive analytics, research) OR Master&#39;s Degree in Statistics, Econometrics, Computer Science, Electrical or Computer Engineering, or related field AND 3+ years related experience (e.g., statistics, predictive analytics, research).</li>\n</ul>\n<p><strong>Technical skills:</strong></p>\n<ul>\n<li>Proficiency in programming for data science (e.g. using Python or R for data analysis and modeling) and experience with data querying languages (e.g. 
SQL).</li>\n<li>Big Data &amp; Distributed Computing: Hands-on experience with large-scale data processing using tools like Apache Spark or Azure Databricks for training and inference workflows.</li>\n</ul>\n<p><strong>Personal attributes:</strong></p>\n<ul>\n<li>Strong communication and collaboration skills.</li>\n<li>Ability to work in a fast-paced environment and adapt to changing priorities.</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<ul>\n<li>Competitive salary.</li>\n<li>Comprehensive benefits package.</li>\n<li>Opportunities for professional growth and development.</li>\n<li>Collaborative and dynamic work environment.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_a4ac455a-c22","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Microsoft AI","sameAs":"https://microsoft.ai","logo":"https://logos.yubhub.co/microsoft.ai.png"},"x-apply-url":"https://microsoft.ai/job/senior-applied-scientist-25/","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"USD $119,800 – $234,700 per year","x-skills-required":["machine learning","statistics","data science","programming","data querying languages"],"x-skills-preferred":["big data","distributed computing","Apache Spark","Azure Databricks"],"datePosted":"2026-03-06T07:29:17.636Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Redmond"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"machine learning, statistics, data science, programming, data querying languages, big data, distributed computing, Apache Spark, Azure 
Databricks","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":119800,"maxValue":234700,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_86dc3bca-de2"},"title":"Member of Technical Staff, LLM Inference","description":"<p><strong>Summary</strong></p>\n<p>Microsoft AI are looking for a talented Member of Technical Staff, LLM Inference to join their MAI Superintelligence Team in New York. This role sits at the heart of strategic decision-making, turning market data into actionable insights for a company that&#39;s revolutionising AI research and development. You&#39;ll work directly with leadership to shape the company&#39;s direction in the AI and machine learning markets.</p>\n<p><strong>About the Role</strong></p>\n<p>Our Inference team is responsible for building and maintaining the tools and systems that enable Microsoft AI researchers to run models easily and efficiently. Our work empowers researchers to run models in RL, synthetic data generation, evals, and more. We are joint stewards of one of the largest compute fleets in the world. The team is responsible for optimizing compute efficiency on our heterogeneous data centers as well as enabling cutting-edge research and production deployment. We are an applied research team that is embedded directly in Microsoft AI’s research org to work as closely as possible with researchers. 
We are vertically integrated, owning everything from kernels to architecture co-design to distributed systems to profiling and testing tools.</p>\n<p><strong>Accountabilities</strong></p>\n<ul>\n<li>Work alongside researchers and engineers to implement frontier AI research ideas.</li>\n<li>Introduce new systems, tools, and techniques to improve model inference performance.</li>\n<li>Build tools to help debug performance bottlenecks, numeric instabilities, and distributed systems issues.</li>\n<li>Build tools and establish processes to enhance the team’s collective productivity.</li>\n</ul>\n<p><strong>The Candidate we&#39;re looking for</strong></p>\n<p><strong>Experience:</strong></p>\n<ul>\n<li>Bachelor’s Degree in Computer Science or related technical field AND 6+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR equivalent experience.</li>\n</ul>\n<p><strong>Technical skills:</strong></p>\n<ul>\n<li>Experience with generative AI.</li>\n<li>Experience with distributed computing.</li>\n<li>Python and Python ecosystem (e.g. uv, pybind/nanobind, FastAPI) expertise.</li>\n</ul>\n<p><strong>Personal attributes:</strong></p>\n<ul>\n<li>Results-oriented, have a bias toward action, and enjoy owning problems end-to-end.</li>\n<li>Embody our Culture and Values.</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<ul>\n<li>Software Engineering IC5 – The typical base pay range for this role across the U.S. is USD $139,900 – $274,800 per year.</li>\n<li>Certain roles may be eligible for benefits and other compensation. 
Find additional benefits and pay information here: https://careers.microsoft.com/us/en/us-corporate-pay</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_86dc3bca-de2","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Microsoft AI","sameAs":"https://microsoft.ai","logo":"https://logos.yubhub.co/microsoft.ai.png"},"x-apply-url":"https://microsoft.ai/job/member-of-technical-staff-llm-inference-mai-superintelligence-team-3/","x-work-arrangement":"onsite","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"USD $139,900 – $274,800 per year","x-skills-required":["generative AI","distributed computing","Python","C","C++","C#","Java","JavaScript"],"x-skills-preferred":["experience with large scale production inference","experience with GPU kernel programming","experience benchmarking, profiling, and optimizing PyTorch generative AI models"],"datePosted":"2026-03-06T07:29:16.210Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"New York"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"generative AI, distributed computing, Python, C, C++, C#, Java, JavaScript, experience with large scale production inference, experience with GPU kernel programming, experience benchmarking, profiling, and optimizing PyTorch generative AI models","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":139900,"maxValue":274800,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_675d41e9-5f9"},"title":"Member of Technical Staff, Reinforcement Learning Systems","description":"<p><strong>Summary</strong></p>\n<p>Microsoft AI are looking for a talented Member of Technical Staff, Reinforcement Learning 
Systems to help build the world&#39;s most advanced reinforcement learning systems. This role sits at the heart of strategic decision-making, turning market data into actionable insights for a company that&#39;s revolutionising AI technology.</p>\n<p><strong>About the Role</strong></p>\n<p>We are responsible for designing, developing, and operating the large-scale reinforcement learning systems that power several use cases across the Superintelligence team. We are looking for individuals who can contribute to cutting-edge research and help bridge the gap between cutting-edge research and robust, production-grade distributed systems. The ideal candidate has both distributed systems expertise and a scientific mindset and will be able to build complex and scalable systems from the ground up, identify and resolve performance bottlenecks, debug complex, cross-system issues with extremely high attention to detail, and contribute to solving scientific and research challenges.</p>\n<p><strong>Accountabilities</strong></p>\n<ul>\n<li>Develop and tune the scalable pretraining software for Nvidia GB200 72NVL CX8 and AMD MIxxx architectures.</li>\n<li>Benchmark GB200 and AMD MIxxx GPU clusters.</li>\n<li>Gather data and insights to develop the pretraining compute roadmap.</li>\n</ul>\n<p><strong>The Candidate we&#39;re looking for</strong></p>\n<p><strong>Experience:</strong></p>\n<ul>\n<li>Bachelor&#39;s Degree in Computer Science or related technical field AND 6+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR equivalent experience.</li>\n</ul>\n<p><strong>Technical skills:</strong></p>\n<ul>\n<li>Experience with generative AI.</li>\n<li>Experience with distributed computing.</li>\n</ul>\n<p><strong>Personal attributes:</strong></p>\n<ul>\n<li>A high degree of craftsmanship and close attention to detail.</li>\n<li>Enjoy working in a fast-paced, design-driven, product development 
cycle.</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<ul>\n<li>Software Engineering IC5 – The typical base pay range for this role across the U.S. is USD $139,900 – $274,800 per year.</li>\n<li>There is a different range applicable to specific work locations, within the San Francisco Bay area and New York City metropolitan area.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_675d41e9-5f9","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Microsoft AI","sameAs":"https://microsoft.ai","logo":"https://logos.yubhub.co/microsoft.ai.png"},"x-apply-url":"https://microsoft.ai/job/member-of-technical-staff-reinforcement-learning-systems-mai-superintelligence-team-3/","x-work-arrangement":"onsite","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"USD $139,900 – $274,800 per year","x-skills-required":["C","C++","C#","Java","JavaScript","Python","Generative AI","Distributed Computing"],"x-skills-preferred":["Experience with Nvidia GB200 72NVL CX8 and AMD MIxxx architectures","Experience with GPU clusters"],"datePosted":"2026-03-06T07:29:05.671Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"New York"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"C, C++, C#, Java, JavaScript, Python, Generative AI, Distributed Computing, Experience with Nvidia GB200 72NVL CX8 and AMD MIxxx architectures, Experience with GPU clusters","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":139900,"maxValue":274800,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_24556bdc-0a0"},"title":"Senior Applied Scientist","description":"<p><strong>Summary</strong></p>\n<p>Microsoft AI are looking for 
a talented Senior Applied Scientist at their Redmond office. This role sits at the heart of strategic decision-making, transforming complex data into high-quality and rich actionable insights for Microsoft Advertising stakeholders. You&#39;ll work directly with leadership to shape the company&#39;s direction in the AI and machine learning markets.</p>\n<p><strong>About the Role</strong></p>\n<p>As a Senior Applied Scientist, you will lead the development of machine learning solutions leveraging SOTA technologies in GenAI to build predictive models for generating recommendations, detecting anomalies, generating automated insights with reasoning, and ensuring the platform delivers accurate, actionable intelligence at scale. You will drive experimentation and validation of models, mentor junior scientists, and contribute to model governance and Responsible AI practices.</p>\n<p><strong>Accountabilities</strong></p>\n<ul>\n<li>Lead the design and implementation of machine learning models for recommendations, anomaly detection, and actionable insights.</li>\n<li>Drive experimentation and validation of models.</li>\n<li>Mentor junior scientists and contribute to model governance and Responsible AI practices.</li>\n<li>Partner with engineering and BI teams to operationalize insights into dashboards and alerting systems.</li>\n</ul>\n<p><strong>The Candidate we&#39;re looking for</strong></p>\n<p><strong>Experience:</strong></p>\n<ul>\n<li>Bachelor’s Degree in Statistics, Econometrics, Computer Science, Electrical or Computer Engineering, or related field AND 4+ years related experience (e.g., statistics, predictive analytics, research) OR Master’s Degree in Statistics, Econometrics, Computer Science, Electrical or Computer Engineering, or related field AND 3+ years related experience (e.g., statistics, predictive analytics, research) OR Doctorate in Statistics, Econometrics, Computer Science, Electrical or Computer Engineering, or related field AND 1+ year(s) related 
experience (e.g., statistics, predictive analytics, research) OR equivalent experience.</li>\n</ul>\n<p><strong>Technical skills:</strong></p>\n<ul>\n<li>Proficiency in programming for data science (e.g. using Python or R for data analysis and modeling) and experience with data querying languages (e.g. SQL).</li>\n<li>Big Data &amp; Distributed Computing: Hands-on experience with large-scale data processing using tools like Apache Spark or Azure Databricks for training and inference workflows.</li>\n<li>Advanced Analytics: Skilled in time-series analysis and anomaly detection techniques (e.g., ARIMA, isolation forests) applied to business contexts for actionable insights.</li>\n</ul>\n<p><strong>Personal attributes:</strong></p>\n<ul>\n<li>Strong communication and collaboration skills.</li>\n<li>Ability to work in a fast-paced environment and adapt to changing priorities.</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<ul>\n<li>Competitive salary.</li>\n<li>Comprehensive benefits package.</li>\n<li>Opportunities for professional growth and development.</li>\n<li>Collaborative and dynamic work environment.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_24556bdc-0a0","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Microsoft AI","sameAs":"https://microsoft.ai","logo":"https://logos.yubhub.co/microsoft.ai.png"},"x-apply-url":"https://microsoft.ai/job/senior-applied-scientist-4/","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"USD $119,800 – $234,700 per year","x-skills-required":["machine learning","artificial intelligence","data science","programming","data querying languages"],"x-skills-preferred":["big data","distributed computing","advanced analytics","time-series analysis","anomaly 
detection"],"datePosted":"2026-03-06T07:29:01.364Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Redmond"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"machine learning, artificial intelligence, data science, programming, data querying languages, big data, distributed computing, advanced analytics, time-series analysis, anomaly detection","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":119800,"maxValue":234700,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_a40437fb-92e"},"title":"Member of Technical Staff, Reinforcement Learning Systems","description":"<p><strong>Summary</strong></p>\n<p>Microsoft AI are looking for a talented Member of Technical Staff, Reinforcement Learning Systems to help build the world&#39;s most advanced reinforcement learning systems. This role sits at the heart of strategic decision-making, turning market data into actionable insights for a company that&#39;s revolutionising AI technology.</p>\n<p><strong>About the Role</strong></p>\n<p>We are responsible for designing, developing, and operating the large-scale reinforcement learning systems that power several use cases across the Superintelligence team. We are looking for individuals who can contribute to cutting-edge research and help bridge the gap between cutting-edge research and robust, production-grade distributed systems. 
The ideal candidate has both distributed systems expertise and a scientific mindset and will be able to build complex and scalable systems from the ground up, identify and resolve performance bottlenecks, debug complex, cross-system issues with extremely high attention to detail, and contribute to solving scientific and research challenges.</p>\n<p><strong>Accountabilities</strong></p>\n<ul>\n<li>Develop and tune the scalable pretraining software for Nvidia GB200 72NVL CX8 and AMD MIxxx architectures.</li>\n<li>Benchmark GB200 and AMD MIxxx GPU clusters.</li>\n<li>Gather data and insights to develop the pretraining compute roadmap.</li>\n</ul>\n<p><strong>The Candidate we&#39;re looking for</strong></p>\n<p><strong>Experience:</strong></p>\n<ul>\n<li>Bachelor&#39;s Degree in Computer Science or related technical field AND 6+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR equivalent experience.</li>\n</ul>\n<p><strong>Technical skills:</strong></p>\n<ul>\n<li>Experience with generative AI.</li>\n<li>Experience with distributed computing.</li>\n</ul>\n<p><strong>Personal attributes:</strong></p>\n<ul>\n<li>A high degree of craftsmanship and close attention to detail.</li>\n<li>Enjoy working in a fast-paced, design-driven, product development cycle.</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<ul>\n<li>Software Engineering IC5 – The typical base pay range for this role across the U.S. 
is USD $139,900 – $274,800 per year.</li>\n<li>There is a different range applicable to specific work locations, within the San Francisco Bay area and New York City metropolitan area.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_a40437fb-92e","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Microsoft AI","sameAs":"https://microsoft.ai","logo":"https://logos.yubhub.co/microsoft.ai.png"},"x-apply-url":"https://microsoft.ai/job/member-of-technical-staff-reinforcement-learning-systems-mai-superintelligence-team-2/","x-work-arrangement":"onsite","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"USD $139,900 – $274,800 per year","x-skills-required":["C","C++","C#","Java","JavaScript","Python","Generative AI","Distributed Computing"],"x-skills-preferred":["Experience with Nvidia GB200 72NVL CX8 and AMD MIxxx architectures","Experience with large-scale reinforcement learning systems"],"datePosted":"2026-03-06T07:28:47.269Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Redmond"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"C, C++, C#, Java, JavaScript, Python, Generative AI, Distributed Computing, Experience with Nvidia GB200 72NVL CX8 and AMD MIxxx architectures, Experience with large-scale reinforcement learning systems","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":139900,"maxValue":274800,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_441fa43d-100"},"title":"Member of Technical Staff, LLM Inference","description":"<p><strong>Summary</strong></p>\n<p>Microsoft AI are looking for a talented Member of Technical Staff, LLM Inference to 
join their team in Redmond. This role will involve working alongside researchers and engineers to implement frontier AI research ideas and introduce new systems, tools, and techniques to improve model inference performance.</p>\n<p><strong>About the Role</strong></p>\n<p>As a Member of Technical Staff, LLM Inference, you will be responsible for building and maintaining the tools and systems that enable Microsoft AI researchers to run models easily and efficiently. This will involve working on a variety of tasks, including building tools to help debug performance bottlenecks, numeric instabilities, and distributed systems issues. You will also be responsible for building tools and establishing processes to enhance the team&#39;s collective productivity.</p>\n<p><strong>Accountabilities</strong></p>\n<ul>\n<li>Work alongside researchers and engineers to implement frontier AI research ideas</li>\n<li>Introduce new systems, tools, and techniques to improve model inference performance</li>\n<li>Build tools to help debug performance bottlenecks, numeric instabilities, and distributed systems issues</li>\n<li>Build tools and establish processes to enhance the team&#39;s collective productivity</li>\n</ul>\n<p><strong>The Candidate we&#39;re looking for</strong></p>\n<p><strong>Experience:</strong></p>\n<ul>\n<li>6+ years of technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python</li>\n</ul>\n<p><strong>Technical skills:</strong></p>\n<ul>\n<li>Experience with generative AI</li>\n<li>Experience with distributed computing</li>\n<li>Python and Python ecosystem (e.g. 
uv, pybind/nanobind, FastAPI) expertise</li>\n</ul>\n<p><strong>Personal attributes:</strong></p>\n<ul>\n<li>Results-oriented, have a bias toward action, and enjoy owning problems end-to-end</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<ul>\n<li>Competitive salary</li>\n<li>Comprehensive benefits package</li>\n<li>Opportunities for professional growth and development</li>\n<li>Collaborative and dynamic work environment</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_441fa43d-100","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Microsoft AI","sameAs":"https://microsoft.ai","logo":"https://logos.yubhub.co/microsoft.ai.png"},"x-apply-url":"https://microsoft.ai/job/member-of-technical-staff-llm-inference-mai-superintelligence-team-2/","x-work-arrangement":"onsite","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"USD $139,900 – $274,800 per year","x-skills-required":["C","C++","C#","Java","JavaScript","Python","generative AI","distributed computing","Python ecosystem"],"x-skills-preferred":["experience with large scale production inference","experience with GPU kernel programming","experience benchmarking, profiling, and optimizing PyTorch generative AI models"],"datePosted":"2026-03-06T07:28:37.837Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Redmond"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"C, C++, C#, Java, JavaScript, Python, generative AI, distributed computing, Python ecosystem, experience with large scale production inference, experience with GPU kernel programming, experience benchmarking, profiling, and optimizing PyTorch generative AI 
models","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":139900,"maxValue":274800,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_025813fe-4e7"},"title":"Member of Technical Staff, Pre-Training Infrastructure","description":"<p><strong>Summary</strong></p>\n<p>Microsoft AI are looking for a talented Member of Technical Staff, Pre-Training Infrastructure, to help build the next wave of capabilities for our personalized AI assistant, Copilot. We’re seeking someone who brings an abundance of positive energy, empathy, and kindness to the team every day, in addition to being highly effective.</p>\n<p><strong>About the Role</strong></p>\n<p>We are seeking a highly skilled and experienced engineer to join our team as a Member of Technical Staff, Pre-Training Infrastructure. The successful candidate will be responsible for designing, implementing, testing, and optimizing distributed training infrastructure in Python and C++ for large-scale GPU clusters. 
They will also profile, benchmark, and debug performance bottlenecks across compute, memory, networking, and storage subsystems.</p>\n<p><strong>Accountabilities</strong></p>\n<ul>\n<li>Design, implement, test, and optimize distributed training infrastructure in Python and C++ for large-scale GPU clusters.</li>\n<li>Profile, benchmark, and debug performance bottlenecks across compute, memory, networking, and storage subsystems.</li>\n</ul>\n<p><strong>The Candidate we&#39;re looking for</strong></p>\n<p><strong>Experience:</strong></p>\n<ul>\n<li>Bachelor’s Degree in Computer Science or related technical field AND 6+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR equivalent experience.</li>\n</ul>\n<p><strong>Technical skills:</strong></p>\n<ul>\n<li>Experience in distributed computing and large-scale systems.</li>\n<li>Experience with GPU programming (CUDA, NCCL) and frameworks such as PyTorch.</li>\n</ul>\n<p><strong>Personal attributes:</strong></p>\n<ul>\n<li>Proven ability to profile, benchmark, and optimize performance-critical systems.</li>\n<li>Experience in leading technical projects and supporting architectural decisions with data.</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<ul>\n<li>Competitive salary and benefits package.</li>\n<li>Opportunity to work on cutting-edge AI projects.</li>\n<li>Collaborative and dynamic work environment.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_025813fe-4e7","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Microsoft 
AI","sameAs":"https://microsoft.ai","logo":"https://logos.yubhub.co/microsoft.ai.png"},"x-apply-url":"https://microsoft.ai/job/member-of-technical-staff-pre-training-infrastructure-mai-superintelligence-team-2/","x-work-arrangement":"onsite","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["distributed computing","GPU programming","PyTorch","C++","Python"],"x-skills-preferred":["performance optimization","leadership","data analysis"],"datePosted":"2026-03-06T07:28:26.443Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Redmond"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"distributed computing, GPU programming, PyTorch, C++, Python, performance optimization, leadership, data analysis"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_b0dff67a-5b5"},"title":"Member of Technical Staff, Reinforcement Learning Systems","description":"<p><strong>Summary</strong></p>\n<p>Microsoft AI are looking for a talented Member of Technical Staff, Reinforcement Learning Systems to help build the world&#39;s most advanced reinforcement learning systems. This role sits at the heart of strategic decision-making, turning market data into actionable insights for a company that&#39;s revolutionising AI technology.</p>\n<p><strong>About the Role</strong></p>\n<p>We are responsible for designing, developing, and operating the large-scale reinforcement learning systems that power several use cases across the Superintelligence team. We are looking for individuals who can contribute to cutting-edge research and help bridge the gap between cutting-edge research and robust, production-grade distributed systems. 
The ideal candidate has both distributed systems expertise and a scientific mindset and will be able to build complex and scalable systems from the ground up, identify and resolve performance bottlenecks, debug complex, cross-system issues with extremely high attention to detail, and contribute to solving scientific and research challenges.</p>\n<p><strong>Accountabilities</strong></p>\n<ul>\n<li>Develop and tune the scalable pretraining software for Nvidia GB200 72NVL CX8 and AMD MIxxx architectures.</li>\n<li>Benchmark GB200 and AMD MIxxx GPU clusters.</li>\n<li>Gather data and insights to develop the pretraining compute roadmap.</li>\n</ul>\n<p><strong>The Candidate we&#39;re looking for</strong></p>\n<p><strong>Experience:</strong></p>\n<ul>\n<li>Bachelor&#39;s Degree in Computer Science or related technical field AND 6+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR equivalent experience.</li>\n</ul>\n<p><strong>Technical skills:</strong></p>\n<ul>\n<li>Experience with generative AI.</li>\n<li>Experience with distributed computing.</li>\n</ul>\n<p><strong>Personal attributes:</strong></p>\n<ul>\n<li>A high degree of craftsmanship and close attention to detail.</li>\n<li>Enjoy working in a fast-paced, design-driven, product development cycle.</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<ul>\n<li>Software Engineering IC5 – The typical base pay range for this role across the U.S. 
is USD $139,900 – $274,800 per year.</li>\n<li>A different range applies to specific work locations within the San Francisco Bay Area and New York City metropolitan area.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_b0dff67a-5b5","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Microsoft AI","sameAs":"https://microsoft.ai","logo":"https://logos.yubhub.co/microsoft.ai.png"},"x-apply-url":"https://microsoft.ai/job/member-of-technical-staff-reinforcement-learning-systems-mai-superintelligence-team/","x-work-arrangement":"onsite","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"USD $139,900 – $274,800 per year","x-skills-required":["C","C++","C#","Java","JavaScript","Python","Generative AI","Distributed computing"],"x-skills-preferred":["Experience with Nvidia GB200 72NVL CX8 and AMD MIxxx architectures","Experience with GPU clusters"],"datePosted":"2026-03-06T07:28:16.942Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Mountain View"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"C, C++, C#, Java, JavaScript, Python, Generative AI, Distributed computing, Experience with Nvidia GB200 72NVL CX8 and AMD MIxxx architectures, Experience with GPU clusters","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":139900,"maxValue":274800,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_e37327fd-d8f"},"title":"Member of Technical Staff, Pre-Training Infrastructure","description":"<p><strong>Summary</strong></p>\n<p>Microsoft AI are looking for a talented Member of Technical Staff, Pre-Training Infrastructure, to help build the next wave of capabilities for 
our personalized AI assistant, Copilot. We’re seeking someone who brings an abundance of positive energy, empathy, and kindness to the team every day, in addition to being highly effective.</p>\n<p><strong>About the Role</strong></p>\n<p>We are seeking a highly skilled and experienced engineer to join our team as a Member of Technical Staff, Pre-Training Infrastructure. The successful candidate will be responsible for designing, implementing, testing, and optimizing distributed training infrastructure in Python and C++ for large-scale GPU clusters. They will also profile, benchmark, and debug performance bottlenecks across compute, memory, networking, and storage subsystems.</p>\n<p><strong>Accountabilities</strong></p>\n<ul>\n<li>Design, implement, test, and optimize distributed training infrastructure in Python and C++ for large-scale GPU clusters.</li>\n<li>Profile, benchmark, and debug performance bottlenecks across compute, memory, networking, and storage subsystems.</li>\n</ul>\n<p><strong>The Candidate we&#39;re looking for</strong></p>\n<p><strong>Experience:</strong></p>\n<ul>\n<li>Bachelor’s Degree in Computer Science or related technical field AND 6+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR equivalent experience.</li>\n</ul>\n<p><strong>Technical skills:</strong></p>\n<ul>\n<li>Experience in distributed computing and large-scale systems.</li>\n<li>Experience with GPU programming (CUDA, NCCL) and frameworks such as PyTorch.</li>\n</ul>\n<p><strong>Personal attributes:</strong></p>\n<ul>\n<li>Proven ability to profile, benchmark, and optimize performance-critical systems.</li>\n<li>Experience in leading technical projects and supporting architectural decisions with data.</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<ul>\n<li>Competitive salary and benefits package.</li>\n<li>Opportunity to work on cutting-edge AI projects.</li>\n<li>Collaborative and dynamic 
work environment.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_e37327fd-d8f","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Microsoft AI","sameAs":"https://microsoft.ai","logo":"https://logos.yubhub.co/microsoft.ai.png"},"x-apply-url":"https://microsoft.ai/job/member-of-technical-staff-pre-training-infrastructure-mai-superintelligence-team/","x-work-arrangement":"onsite","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["distributed computing","GPU programming","PyTorch","C++","Python"],"x-skills-preferred":["performance optimization","leadership","data analysis"],"datePosted":"2026-03-06T07:27:59.948Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Mountain View"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"distributed computing, GPU programming, PyTorch, C++, Python, performance optimization, leadership, data analysis"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_3ed62c63-fc2"},"title":"Member of Technical Staff, LLM Inference","description":"<p><strong>Summary</strong></p>\n<p>Microsoft AI are looking for a talented Member of Technical Staff, LLM Inference to join their MAI Superintelligence Team. This role sits at the heart of strategic decision-making, turning market data into actionable insights for a company that&#39;s revolutionising AI research and development. You&#39;ll work directly with leadership to shape the company&#39;s direction in the AI and machine learning markets.</p>\n<p><strong>About the Role</strong></p>\n<p>Our Inference team is responsible for building and maintaining the tools and systems that enable Microsoft AI researchers to run models easily and efficiently. 
Our work empowers researchers to run models in RL, synthetic data generation, evals, and more. We are joint stewards of one of the largest compute fleets in the world. The team is responsible for optimizing compute efficiency on our heterogeneous data centers as well as enabling cutting-edge research and production deployment. We are an applied research team that is embedded directly in Microsoft AI’s research org to work as closely as possible with researchers. We are vertically integrated, owning everything from kernels to architecture co-design to distributed systems to profiling and testing tools.</p>\n<p><strong>Accountabilities</strong></p>\n<ul>\n<li>Work alongside researchers and engineers to implement frontier AI research ideas.</li>\n<li>Introduce new systems, tools, and techniques to improve model inference performance.</li>\n<li>Build tools to help debug performance bottlenecks, numeric instabilities, and distributed systems issues.</li>\n<li>Build tools and establish processes to enhance the team’s collective productivity.</li>\n</ul>\n<p><strong>The Candidate we&#39;re looking for</strong></p>\n<p><strong>Experience:</strong></p>\n<ul>\n<li>Bachelor’s Degree in Computer Science or related technical field AND 6+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR equivalent experience.</li>\n</ul>\n<p><strong>Technical skills:</strong></p>\n<ul>\n<li>Experience with generative AI.</li>\n<li>Experience with distributed computing.</li>\n<li>Python and Python ecosystem (eg. 
uv, pybind/nanobind, FastAPI) expertise.</li>\n</ul>\n<p><strong>Personal attributes:</strong></p>\n<ul>\n<li>Results-oriented, have a bias toward action, and enjoy owning problems end-to-end.</li>\n<li>Value clear communication, improving team processes, and being a supportive team player.</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<ul>\n<li>Software Engineering IC5 – The typical base pay range for this role across the U.S. is USD $139,900 – $274,800 per year.</li>\n<li>Certain roles may be eligible for benefits and other compensation. Find additional benefits and pay information here: https://careers.microsoft.com/us/en/us-corporate-pay</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_3ed62c63-fc2","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Microsoft AI","sameAs":"https://microsoft.ai","logo":"https://logos.yubhub.co/microsoft.ai.png"},"x-apply-url":"https://microsoft.ai/job/member-of-technical-staff-llm-inference-mai-superintelligence-team/","x-work-arrangement":"onsite","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"USD $139,900 – $274,800 per year","x-skills-required":["generative AI","distributed computing","Python","C","C++","C#","Java","JavaScript"],"x-skills-preferred":["experience with large scale production inference","experience with GPU kernel programming","experience benchmarking, profiling, and optimizing PyTorch generative AI models"],"datePosted":"2026-03-06T07:27:53.969Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Mountain View"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"generative AI, distributed computing, Python, C, C++, C#, Java, JavaScript, experience with large scale production inference, experience with GPU kernel programming, experience benchmarking, 
profiling, and optimizing PyTorch generative AI models","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":139900,"maxValue":274800,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_98a7b04f-0dc"},"title":"Senior Data Scientist, Fortnite Ecosystem","description":"<p>We are seeking a Senior Data Scientist to join our Data &amp; Analytics team. As a Senior Data Scientist, you will be responsible for advancing Fortnite and cultivating an ecosystem where games of all kinds can thrive. You will partner closely with the Fortnite Ecosystem Growth team to drive strategy and evaluate initiatives across the Developer Economy, IP Development, Creator Relations, and Genre Campaigns.</p>\n<p><strong>What you&#39;ll do</strong></p>\n<ul>\n<li>Partner with design and product management leaders to break down ambiguous problems, identify key business opportunities, and leverage data to establish essential metrics and deliver insights that will drive and shape the strategic direction.</li>\n<li>Transform raw data into data models, production metrics, scaled reporting, and insights to improve user experience and engagement.</li>\n</ul>\n<p><strong>What you need</strong></p>\n<ul>\n<li>7+ years of industry or relevant experience, with a good understanding of live service video games</li>\n<li>Strong product intuition and ability to shape strategy for a complex ecosystem</li>\n<li>Demonstrated background in influencing products by applying data and measurement to drive alignment</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_98a7b04f-0dc","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Epic 
Games","sameAs":"https://www.epicgames.com","logo":"https://logos.yubhub.co/epicgames.com.png"},"x-apply-url":"https://www.epicgames.com/en-US/careers/jobs/5730982004","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["data science","analytics","product strategy","live service video games"],"x-skills-preferred":["data visualization","experimental design","causal inference methods","SQL","distributed computing","code version control","orchestration"],"datePosted":"2026-03-05T21:08:07.120Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Montreal, Canada"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"data science, analytics, product strategy, live service video games, data visualization, experimental design, causal inference methods, SQL, distributed computing, code version control, orchestration"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_84915e17-e79"},"title":"Senior Technical Artist - Capture","description":"<p>We are looking for a Senior Technical Artist to lead our Capture Technical Art team, supporting Motion Capture, 3D Scanning, and performance acquisition pipelines. You do not need deep prior Capture-specific experience but you bring pipeline fluency and an understanding of how data flows across tools, sites, and teams. 
You will guide a small group of Technical Artists, providing both direct leadership and hands-on technical contributions.</p>\n<p><strong>What you&#39;ll do</strong></p>\n<ul>\n<li>Manage the Capture Technical Art team</li>\n<li>Develop and extend production tools and codebases</li>\n</ul>\n<p><strong>What you need</strong></p>\n<ul>\n<li>Expertise in Python development for production systems, APIs, and automation</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_84915e17-e79","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Electronic Arts","sameAs":"https://jobs.ea.com","logo":"https://logos.yubhub.co/jobs.ea.com.png"},"x-apply-url":"https://jobs.ea.com/en_US/careers/JobDetail/Senior-Technical-Artist-Capture/212200","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$119,600 - $167,300 CAD (British Columbia) or $138,400 - $211,700 USD (California)","x-skills-required":["Python development","GitLab CI/CD pipeline design","Distributed computing","Batch processing APIs","Compute farm orchestration"],"x-skills-preferred":["Motion capture or 3D scanning workflows","Perforce","PySide","Confluence","Jira","Coda"],"datePosted":"2026-01-01T16:58:24.343Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Los Angeles - Chatsworth, California, United States of America"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python development, GitLab CI/CD pipeline design, Distributed computing, Batch processing APIs, Compute farm orchestration, Motion capture or 3D scanning workflows, Perforce, PySide, Confluence, Jira, 
Coda","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":119600,"maxValue":211700,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_4a7597fd-d7a"},"title":"Senior Data Engineer","description":"<p>Joining Razer will place you on a global mission to revolutionize the way the world games. Razer is a place to do great work, offering you the opportunity to make an impact globally while working with a global team located across 5 continents. Razer is also a great place to work, providing you the unique, gamer-centric #LifeAtRazer experience that will set you on an accelerated growth path, both personally and professionally.</p>\n<p><strong>What you&#39;ll do</strong></p>\n<p>We are looking for a Senior Data Engineer to lead the technical initiatives for AI Data Engineering, enabling scalable, high-performance data pipelines that power AI and machine learning applications. This role will focus on architecting, optimizing, and managing data infrastructure to support AI model training, feature engineering, and real-time inference. 
You will collaborate closely with AI/ML engineers, data scientists, and platform teams to build the next generation of AI-driven products.</p>\n<ul>\n<li>Lead AI Data Engineering initiatives by driving the design and development of robust data pipelines for AI/ML workloads, ensuring efficiency, scalability, and reliability.</li>\n<li>Design and implement data architectures that support AI model training, including feature stores, vector databases, and real-time streaming solutions.</li>\n<li>Develop high performance data pipelines that process structured, semi-structured, and unstructured data at scale, supporting the various AI applications</li>\n</ul>\n<p><strong>What you need</strong></p>\n<ul>\n<li>Hands on experience working with Vector/Graph;Neo4j</li>\n<li>3+ years of experience in data engineering, working on AI/ML-driven data architectures</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_4a7597fd-d7a","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Razer","sameAs":"https://razer.wd3.myworkdayjobs.com","logo":"https://logos.yubhub.co/razer.com.png"},"x-apply-url":"https://razer.wd3.myworkdayjobs.com/en-US/Careers/job/Singapore/Senior-Data-Engineer_JR2025005485","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Hands on experience working with Vector/Graph;Neo4j","3+ years of experience in data engineering, working on AI/ML-driven data architectures"],"x-skills-preferred":["Python","SQL","Experience in developing and deploying applications running on cloud infrastructure such as AWS, Azure or Google Cloud Platform using Infrastructure as code tools such as Terraform, containerization tools like Dockers, container orchestration platforms like Kubernetes","Experience using orchestration tools like Airflow or Prefect, distributed 
computing framework like Spark or Dask, data transformation tool like Data Build Tool (DBT)","Excellent with various data processing techniques (both streaming and batch), managing and optimizing data storage (Data Lake, Lake House and Database, SQL, and NoSQL) is essential."],"datePosted":"2026-01-01T15:49:59.491Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Singapore"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Hands on experience working with Vector/Graph;Neo4j, 3+ years of experience in data engineering, working on AI/ML-driven data architectures, Python, SQL, Experience in developing and deploying applications running on cloud infrastructure such as AWS, Azure or Google Cloud Platform using Infrastructure as code tools such as Terraform, containerization tools like Dockers, container orchestration platforms like Kubernetes, Experience using orchestration tools like Airflow or Prefect, distributed computing framework like Spark or Dask, data transformation tool like Data Build Tool (DBT), Excellent with various data processing techniques (both streaming and batch), managing and optimizing data storage (Data Lake, Lake House and Database, SQL, and NoSQL) is essential."},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_e5eb908e-6f9"},"title":"Senior Data Engineer","description":"<p>We are looking for a Senior Data Engineer to lead the technical initiatives for AI Data Engineering, enabling scalable, high-performance data pipelines that power AI and machine learning applications. 
This role will focus on architecting, optimizing, and managing data infrastructure to support AI model training, feature engineering, and real-time inference.</p>\n<p><strong>What you&#39;ll do</strong></p>\n<ul>\n<li>Lead AI Data Engineering initiatives by driving the design and development of robust data pipelines for AI/ML workloads, ensuring efficiency, scalability, and reliability.</li>\n<li>Design and implement data architectures that support AI model training, including feature stores, vector databases, and real-time streaming solutions.</li>\n</ul>\n<p><strong>What you need</strong></p>\n<ul>\n<li>Hands on experience working with Vector/Graph;Neo4j</li>\n<li>3+ years of experience in data engineering, working on AI/ML-driven data architectures</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_e5eb908e-6f9","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Razer","sameAs":"https://razer.wd3.myworkdayjobs.com","logo":"https://logos.yubhub.co/razer.com.png"},"x-apply-url":"https://razer.wd3.myworkdayjobs.com/en-US/Careers/job/Singapore/Senior-Data-Engineer_JR2025005485","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Vector/Graph;Neo4j","data engineering","AI/ML-driven data architectures"],"x-skills-preferred":["Python","SQL","Terraform","containerization tools like Dockers","container orchestration platforms like Kubernetes","orchestration tools like Airflow or 
Prefect","distributed computing framework like Spark or Dask","data transformation tool like Data Build Tool (DBT)"],"datePosted":"2025-12-26T10:53:07.867Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Singapore"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Vector/Graph;Neo4j, data engineering, AI/ML-driven data architectures, Python, SQL, Terraform, containerization tools like Dockers, container orchestration platforms like Kubernetes, orchestration tools like Airflow or Prefect, distributed computing framework like Spark or Dask, data transformation tool like Data Build Tool (DBT)"}]}