{"version":"0.1","company":{"name":"YubHub","url":"https://yubhub.co","jobsUrl":"https://yubhub.co/jobs/skill/world-models"},"x-facet":{"type":"skill","slug":"world-models","display":"World Models","count":4},"x-feed-size-limit":100,"x-feed-sort":"enriched_at desc","x-feed-notice":"This feed contains at most 100 jobs (the most recently enriched). For the full corpus, use the paginated /stats/by-facet endpoint or /search.","x-generator":"yubhub-xml-generator","x-rights":"Free to redistribute with attribution: \"Data by YubHub (https://yubhub.co)\"","x-schema":"Each entry in `jobs` follows https://schema.org/JobPosting. YubHub-native raw fields carry `x-` prefix.","jobs":[{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_28b01ce3-8a3"},"title":"Member of Technical Staff - Imagine Model","description":"<p>As a Member of Technical Staff on the Imagine Model Team, you will develop cutting-edge AI experiences beyond text, with a strong focus on enabling high-fidelity understanding and generation across image and video modalities, while also incorporating audio where it enhances visual content.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Create and drive engineering agendas to advance multimodal capabilities, with emphasis on image and video generation, editing, understanding, controllable/long-horizon synthesis, agentic planning, RL training, and world simulation (including audio integration for richer video experiences).</li>\n<li>Improve data quality through annotation, filtering, augmentation, synthetic generation, captioning, and in-depth data studies, particularly for visual and audio data.</li>\n<li>Design evaluation frameworks, metrics, benchmarks, evals, and reward models tailored to image/video/audio quality and coherence.</li>\n<li>Implement efficient algorithms for state-of-the-art model performance, including real-time inference, distillation, and scalable serving for visual 
content.</li>\n<li>Develop scalable data collection and processing pipelines for multimodal (primarily image/video-focused) datasets.</li>\n<li>Collaborate cross-functionally to integrate AI solutions into production and rapidly iterate based on user feedback.</li>\n</ul>\n<p>Basic Qualifications:</p>\n<ul>\n<li>Track record in leading studies that significantly improve neural network capabilities and performance through better data or modeling.</li>\n<li>Experience in data-driven experiment designs, systematic analysis, and iterative model debugging.</li>\n<li>Experience developing or working with large-scale distributed machine learning systems.</li>\n<li>Ability to deliver optimal end-to-end user experiences.</li>\n<li>Hands-on contributor with initiative, excellence, strong work ethic, prioritization skills, and excellent communication.</li>\n</ul>\n<p>Preferred Skills and Experience:</p>\n<ul>\n<li>Experience in SFT, RL, evals, human/synthetic data collection, or agentic systems.</li>\n<li>Proficiency in Python, JAX/XLA, PyTorch, Rust/C++, Spark, Ray, and related large-scale frameworks.</li>\n<li>Domain expertise in multimodal applications such as graphics engines, rendering techniques, image/video understanding and generation, world models, real-time simulation, or controllable/long-horizon visual content creation (audio/speech processing or music/audio generation experience is a plus where it supports video).</li>\n<li>Experience with agentic RL training, controllable/long-horizon generation, or multimodal agents that reason and act across modalities (especially in visual domains).</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_28b01ce3-8a3","directApply":true,"hiringOrganization":{"@type":"Organization","name":"xAI","sameAs":"https://www.xai.com/","logo":"https://logos.yubhub.co/xai.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/xai/jobs/5051985007","x-work-arrangement":"hybrid","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"$180,000 - $440,000 USD","x-skills-required":["Python","JAX/XLA","PyTorch","Rust/C++","Spark","Ray","multimodal applications","agentic systems","RL training","controllable/long-horizon generation"],"x-skills-preferred":["SFT","evals","human/synthetic data collection","graphics engines","rendering techniques","image/video understanding and generation","world models","real-time simulation"],"datePosted":"2026-04-18T15:24:12.847Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Palo Alto, CA; Seattle, WA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, JAX/XLA, PyTorch, Rust/C++, Spark, Ray, multimodal applications, agentic systems, RL training, controllable/long-horizon generation, SFT, evals, human/synthetic data collection, graphics engines, rendering techniques, image/video understanding and generation, world models, real-time simulation","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":180000,"maxValue":440000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_7896f519-fc9"},"title":"Research Scientist, Safety and Alignment for Humanoid Robotics","description":"<p>We&#39;re seeking a Research Scientist to join our Robotics team, whose mission is to build embodied AI responsibly to benefit people in the physical world. 
As a Research Scientist, you will design, implement, train, and evaluate large models and algorithms for humanoid robots. Your areas of focus will include algorithmic and model development to improve a robot agent&#39;s understanding of its own embodiment and VLA capabilities, learned policies for appropriate responses around people, and responses in atypical situations such as actuator faults. You will also work on Human Robot Interaction, write software to implement research ideas, and leverage your expertise to participate in a wide variety of research, including learning from simulation, reinforcement learning, learning from demonstrations, vision-language-action models, transformers, video generation, robot control, and more.</p>\n<p>To succeed in this role, you will need a PhD in a technical field or equivalent practical experience, knowledge of the latest in large machine learning research, and experience working with real-world robots. Expertise in using large datasets with deep neural networks to make real robots useful is also an advantage.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_7896f519-fc9","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Google DeepMind","sameAs":"https://deepmind.com/","logo":"https://logos.yubhub.co/deepmind.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/deepmind/jobs/7576917","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$141,000 - $202,000 + bonus + equity + benefits","x-skills-required":["PhD in a technical field or equivalent practical experience","Knowledge of the latest in large machine learning research","Experience working with real-world robots","Research track record in one or more of the following topics: Humanoid Whole Body Control, Vision Language Action models; Motion Planning, Force Control, AI 
Safety, Diffusion Policies, World Models, Imitation Learning and Reinforcement Learning, Sim2Real Transfer, Alignment Techniques"],"x-skills-preferred":["Humanoid Whole Body Control","Vision Language Action models","Motion Planning","Force Control","AI Safety","Diffusion Policies","World Models","Imitation Learning and Reinforcement Learning","Sim2Real Transfer","Alignment Techniques"],"datePosted":"2026-03-31T18:25:50.361Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"New York City, New York, US"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"PhD in a technical field or equivalent practical experience, Knowledge of the latest in large machine learning research, Experience working with real-world robots, Research track record in one or more of the following topics: Humanoid Whole Body Control, Vision Language Action models; Motion Planning, Force Control, AI Safety, Diffusion Policies, World Models, Imitation Learning and Reinforcement Learning, Sim2Real Transfer, Alignment Techniques","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":141000,"maxValue":202000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_f3d5bc25-c76"},"title":"Research Scientist, Safety and Alignment for Humanoid Robotics","description":"<p>At Google DeepMind, we&#39;re a team of scientists, engineers, and machine learning experts working together to advance the state of the art in artificial intelligence. 
We&#39;re looking for Research Scientists to join the Robotics team whose mission is to &#39;Build embodied AI responsibly to benefit people in the physical world.&#39;</p>\n<p>Our team is focused on ensuring safe humanoid robot actions spanning agentic reasoning, HRI scenarios, and physical safety with VLA models. As a Research Scientist, you will design, implement, train, and evaluate large models and algorithms for humanoid robots. You will make breakthroughs and unlock new humanoid safety capabilities, including algorithmic and model development to improve a robot agent&#39;s understanding of its own embodiment and VLA capabilities.</p>\n<p>You will write software to implement research ideas and iterate quickly. You will leverage your expertise to participate in a wide variety of research, including learning from simulation, reinforcement learning, learning from demonstrations, vision-language-action models, transformers, video generation, robot control, humanoid robots, and more.</p>\n<p>You will work effectively with a large collaborative team with fast-paced agendas to meet ambitious research goals. You will generate creative ideas, set up experiments, and test hypotheses. You will report and present research findings clearly and efficiently both internally and externally.</p>\n<p>To be successful as a Research Scientist at Google DeepMind, we look for PhDs in technical fields or equivalent practical experience. You should have knowledge of the latest in large machine learning research and experience working with real-world robots. 
Expertise with a subset of the following topics would be an advantage: Humanoid Whole Body Control, Vision Language Action models, Motion Planning, Force Control, AI Safety, Diffusion Policies, World Models, Imitation Learning, and Reinforcement Learning.</p>\n<p>The US base salary range for this full-time position is between $141,000 - $202,000 + bonus + equity + benefits.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_f3d5bc25-c76","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Google DeepMind","sameAs":"https://deepmind.com/","logo":"https://logos.yubhub.co/deepmind.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/deepmind/jobs/7576917","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$141,000 - $202,000 + bonus + equity + benefits","x-skills-required":["PhD in a technical field","Knowledge of large machine learning research","Experience working with real-world robots","Humanoid Whole Body Control","Vision Language Action models","Motion Planning","Force Control","AI Safety","Diffusion Policies","World Models","Imitation Learning","Reinforcement Learning"],"x-skills-preferred":[],"datePosted":"2026-03-16T14:41:46.344Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"New York City, New York, US"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"PhD in a technical field, Knowledge of large machine learning research, Experience working with real-world robots, Humanoid Whole Body Control, Vision Language Action models, Motion Planning, Force Control, AI Safety, Diffusion Policies, World Models, Imitation Learning, Reinforcement 
Learning","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":141000,"maxValue":202000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_3386dafd-10b"},"title":"Senior Research Engineer - Interactive Avatars","description":"<p><strong>Senior Research Engineer - Interactive Avatars</strong></p>\n<p><strong>About the role</strong></p>\n<p>As a Research Engineer, you will join a team of 40+ Researchers and Engineers within the R&amp;D Department working on cutting edge challenges in the Generative AI space, with a focus on avatar-centric interactive video diffusion models. Within the team you’ll have the opportunity to work on the applied side of our research efforts and directly impact our solutions that are used worldwide by over 60,000 businesses.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Adapt diffusion models to incorporate diverse conditioning signals (e.g., audio, motion, interaction cues).</li>\n<li>Develop methods for streaming infinitely long video sequences at real-time rates.</li>\n<li>Work on the perceptual layer of interactive agents, including understanding user audio and generating appropriate contextual reactions.</li>\n<li>Improve lip-sync accuracy, motion realism, and overall visual quality in video diffusion models.</li>\n<li>Build robust evaluation frameworks and test suites to enable continuous quality tracking.</li>\n<li>Collaborate closely with our data team to define data needs and ensure high-quality datasets.</li>\n<li>Stay up to date with research in world models, interactive human/agent modeling, diffusion models, and related areas.</li>\n</ul>\n<p><strong>What we&#39;re looking for:</strong></p>\n<ul>\n<li>Comfortable owning and executing on the responsibilities listed above.</li>\n<li>Strong ML (e.g., diffusion, GANs, VAEs) and computer vision background with relevant industry 
experience.</li>\n<li>Hands-on experience with diffusion models (ideally avatar-centric or video-focused) and up to date with recent advances.</li>\n<li>Proficient in PyTorch and familiar with modern ML frameworks and tooling.</li>\n<li>Strong Python engineering skills, confident with git and version control, and a commitment to clean, maintainable research code.</li>\n<li>Outcome-driven, detail-oriented, and motivated to push state-of-the-art research into real product impact.</li>\n<li>Clear communicator of hypotheses, experiments, and results.</li>\n</ul>\n<p><strong>What will make you stand out:</strong></p>\n<ul>\n<li>Experience with audio-conditioned video diffusion models and deep knowledge of recent video DiT architectures.</li>\n<li>Demonstrated ability to own the full model development pipeline end to end, from data preparation to model design, training, and evaluation.</li>\n<li>A strong publication record in areas such as world models, interactive agents, or video diffusion models.</li>\n</ul>\n<p><strong>Why join us?</strong></p>\n<p>We’re living the golden age of AI. The next decade will yield the next iconic companies, and we dare to say we have what it takes to become one. Here’s why:</p>\n<p><strong>Our culture</strong></p>\n<p>At Synthesia we’re passionate about building, not talking, planning or politicising. We strive to hire the smartest, kindest and most unrelenting people and let them do their best work without distractions. Our work principles serve as our charter for how we make decisions, give feedback and structure our work to empower everyone to go as fast as possible. <strong>You can find out more about these principles here.</strong></p>\n<p><strong>Serving 50,000+ customers (and 50% of the Fortune 500)</strong></p>\n<p>We’re trusted by leading brands such as Heineken, Zoom, Xerox, McDonald’s and more. 
Read stories from happy customers and what 1,200+ people say on G2.</p>\n<p><strong>Proprietary AI technology</strong></p>\n<p>Since 2017, we’ve been pioneering advancements in Generative AI. Our AI technology is built in-house, by a team of world-class AI researchers and engineers. Learn more about our AI Research Lab and the team behind it.</p>\n<p><strong>AI Safety, Ethics and Security</strong></p>\n<p>AI safety, ethics, and security are fundamental to our mission. While the full scope of Artificial Intelligence&#39;s impact on our society is still unfolding, our position is clear: <strong>People first. Always.</strong> Learn more about our commitments to AI Ethics, Safety &amp; Security.</p>\n<p><strong>The good stuff...</strong></p>\n<ul>\n<li>Competitive compensation (salary + stock options + bonus)</li>\n<li>Hybrid work setting with an office in London, Amsterdam, Zurich, Munich, or remote in Europe.</li>\n<li>25 days of annual leave + public holidays</li>\n<li>Great company culture with the option to join regular planning and socials at our hubs</li>\n<li>+ other benefits depending on your location</li>\n</ul>\n<p>You can see more about Who we are and How we work here: https://www.synthesia.io/careers</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_3386dafd-10b","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Synthesia","sameAs":"https://www.synthesia.io/","logo":"https://logos.yubhub.co/synthesia.io.png"},"x-apply-url":"https://jobs.ashbyhq.com/synthesia/2a637b27-92db-4752-8c0d-343e267ea299","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"Competitive compensation (salary + stock options + bonus)","x-skills-required":["ML","computer vision","PyTorch","modern ML frameworks and tooling","Python engineering skills","git and version 
control"],"x-skills-preferred":["audio-conditioned video diffusion models","deep knowledge of recent video DiT architectures","publication record in areas such as world models, interactive agents, or video diffusion models"],"datePosted":"2026-03-06T18:34:52.754Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"London"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"ML, computer vision, PyTorch, modern ML frameworks and tooling, Python engineering skills, git and version control, audio-conditioned video diffusion models, deep knowledge of recent video DiT architectures, publication record in areas such as world models, interactive agents, or video diffusion models"}]}