{"version":"0.1","company":{"name":"YubHub","url":"https://yubhub.co","jobsUrl":"https://yubhub.co/jobs/skill/python"},"x-facet":{"type":"skill","slug":"python","display":"Python","count":100},"x-feed-size-limit":100,"x-feed-sort":"enriched_at desc","x-feed-notice":"This feed contains at most 100 jobs (the most recently enriched). For the full corpus, use the paginated /stats/by-facet endpoint or /search.","x-generator":"yubhub-xml-generator","x-rights":"Free to redistribute with attribution: \"Data by YubHub (https://yubhub.co)\"","x-schema":"Each entry in `jobs` follows https://schema.org/JobPosting. YubHub-native raw fields carry `x-` prefix.","jobs":[{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_6e0788ce-490"},"title":"AI Product Manager","description":"<p>Job Title: AI Product Manager</p>\n<p>We are seeking an experienced AI Product Manager to lead AI-focused product enablement and adoption initiatives across our global organisation. The ideal candidate will have a mix of technical experience in AI/ML/LLM products and APIs, process automation, and proficiency in survey design and data analytics.</p>\n<p>Key Responsibilities:</p>\n<p>Awareness of AI Ecosystem and Trends: Use tools such as Claude Code, Codex, and Cursor, and maintain proficiency with them. Follow developments among major players in the commercial and open-source AI/LLM space, including Anthropic, OpenAI, Google, xAI, and others.</p>\n<p>Communication: Communicate clearly and effectively with internal and external vendors and stakeholders about requirements, feature requests, scope, expectations, priorities, product releases, and timelines. 
Generate clear, consistent, and accurate documentation about products, tradeoffs, decisions, and value proposition of various efforts.</p>\n<p>AI Product Enablement and Adoption: Conduct consulting-style engagements with technical and non-technical teams to onboard them with AI tools and products available within Millennium. Collaborate with users to understand pain points, needs, feature requests, and requirements to design and execute product development with usability and scalability in mind.</p>\n<p>Feedback Collection and Data Analytics: Analyze usage and feedback data with accuracy and quality using SQL and Python to identify trends, gaps, and opportunities for product improvements. Use insights from your analysis to refine product roadmaps and enablement initiatives to maximise impact across the firm.</p>\n<p>Qualifications:</p>\n<p>Education: Bachelor&#39;s degree or higher in Computer Science, Data Science, Engineering, or a related technical field. Must have technical knowledge and experience.</p>\n<p>Experience: 5-7+ years of experience in a combination of AI/ML/LLM engineering, startup, technical education, consulting, data science, product analytics, or product management roles. Some technical / engineering / building experience is a hard requirement, e.g. you must be able to use the CLI and analyse data.</p>\n<p>Technical Proficiency: Strong knowledge of AI/ML concepts, especially Large Language Models (LLMs). Understanding of their capabilities and failings as a technology. Proficiency in SQL and Python for data analysis and visualisation. Proficiency in survey software and analytics tools (e.g. Qualtrics). Strong familiarity with effective SDLC and CI/CD principles.</p>\n<p>Product Management: Proven ability to use clear judgment and organisational skills to manage complex, cross-functional products involving technical and non-technical teams. 
Intellectual curiosity and strong aptitude for prioritisation.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_6e0788ce-490","directApply":true,"hiringOrganization":{"@type":"Organization","name":"IT Infrastructure","sameAs":"https://mlp.eightfold.ai","logo":"https://logos.yubhub.co/mlp.eightfold.ai.png"},"x-apply-url":"https://mlp.eightfold.ai/careers/job/755955422287","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Claude Code","Codex","Cursor","SQL","Python","Qualtrics","SDLC","CI/CD"],"x-skills-preferred":[],"datePosted":"2026-04-18T22:14:37.294Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Singapore, Singapore"}},"employmentType":"FULL_TIME","occupationalCategory":"IT","industry":"Technology","skills":"Claude Code, Codex, Cursor, SQL, Python, Qualtrics, SDLC, CI/CD"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_b33cbd91-bc9"},"title":"Systematic Production Support Engineer","description":"<p>We are seeking an experienced Systematic Production Support Engineer to help us scale our systematic operations and support engineering capabilities. This role directly supports portfolio management teams across Millennium, with operational excellence at the core. Our efforts are focused on delivering the highest quality returns to our investors – providing a world-class and reliable trading and technology platform is essential to this mission.</p>\n<p>As a Systematic Production Support Engineer, you will be responsible for building, developing, and maintaining a reliable, scalable, and integrated platform for trading strategy monitoring, reporting, and operations. 
You will work closely with portfolio managers and other internal customers to reduce operational risk through the implementation of monitoring, reporting, and trade workflow solutions, as well as automated systems and processes focused on trading and operations.</p>\n<p>Key responsibilities include:</p>\n<ul>\n<li>Building, developing, and maintaining a reliable, scalable, and integrated platform for trading strategy monitoring, reporting, and operations</li>\n<li>Working with portfolio managers and other internal customers to reduce operational risk through the implementation of monitoring, reporting, and trade workflow solutions</li>\n<li>Implementing automated systems and processes focused on trading and operations</li>\n<li>Streamlining development and deployment processes</li>\n</ul>\n<p>Technical qualifications include:</p>\n<ul>\n<li>5+ years of development experience in Python</li>\n<li>Experience working in a Linux/Unix environment</li>\n<li>Experience working with PostgreSQL or other relational databases</li>\n</ul>\n<p>Preferred skills and experience include:</p>\n<ul>\n<li>Understanding of NLP, supervised/non-supervised learning, and Generative AI models</li>\n<li>Experience operating and monitoring low-latency trading environments</li>\n<li>Familiarity with quantitative finance and electronic trading concepts</li>\n<li>Familiarity with financial data</li>\n<li>Broad understanding of equities, futures, FX, or other financial instruments</li>\n<li>Experience designing and developing distributed systems with a focus on backend development in C/C++, Java, Scala, Go, or C#</li>\n<li>Experience with Apache/Confluent Kafka</li>\n<li>Experience automating SDLC pipelines (e.g., Jenkins, TeamCity, or AWS CodePipeline)</li>\n<li>Experience with containerization and orchestration technologies</li>\n<li>Experience building and deploying systems that utilize services provided by AWS, GCP, or Azure</li>\n<li>Contributions to open-source projects</li>\n</ul>\n<p>This 
is a unique opportunity to drive significant value creation for one of the world&#39;s leading investment managers.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_b33cbd91-bc9","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Unknown","sameAs":"https://mlp.eightfold.ai","logo":"https://logos.yubhub.co/mlp.eightfold.ai.png"},"x-apply-url":"https://mlp.eightfold.ai/careers/job/755954716155","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Python","Linux/Unix","PostgreSQL","NLP","supervised/non-supervised learning","Generative AI models","low-latency trading environments","quantitative finance","electronic trading concepts","financial data","equities","futures","FX","distributed systems","backend development","C/C++","Java","Scala","Go","C#","Apache/Confluent Kafka","SDLC pipelines","containerization","orchestration technologies","AWS","GCP","Azure"],"x-skills-preferred":["Understanding of NLP, supervised/non-supervised learning, and Generative AI models","Experience operating and monitoring low-latency trading environments","Familiarity with quantitative finance and electronic trading concepts","Familiarity with financial data","Broad understanding of equities, futures, FX, or other financial instruments","Experience designing and developing distributed systems with a focus on backend development in C/C++, Java, Scala, Go, or C#","Experience with Apache/Confluent Kafka","Experience automating SDLC pipelines (e.g., Jenkins, TeamCity, or AWS CodePipeline)","Experience with containerization and orchestration technologies","Experience building and deploying systems that utilize services provided by AWS, GCP, or Azure","Contributions to open-source 
projects"],"datePosted":"2026-04-18T22:14:36.583Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Miami, Florida, United States of America"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Finance","skills":"Python, Linux/Unix, PostgreSQL, NLP, supervised/non-supervised learning, Generative AI models, low-latency trading environments, quantitative finance, electronic trading concepts, financial data, equities, futures, FX, distributed systems, backend development, C/C++, Java, Scala, Go, C#, Apache/Confluent Kafka, SDLC pipelines, containerization, orchestration technologies, AWS, GCP, Azure, Understanding of NLP, supervised/non-supervised learning, and Generative AI models, Experience operating and monitoring low-latency trading environments, Familiarity with quantitative finance and electronic trading concepts, Familiarity with financial data, Broad understanding of equities, futures, FX, or other financial instruments, Experience designing and developing distributed systems with a focus on backend development in C/C++, Java, Scala, Go, or C#, Experience with Apache/Confluent Kafka, Experience automating SDLC pipelines (e.g., Jenkins, TeamCity, or AWS CodePipeline), Experience with containerization and orchestration technologies, Experience building and deploying systems that utilize services provided by AWS, GCP, or Azure, Contributions to open-source projects"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_b93800dd-3d2"},"title":"Production Engineering Support Manager – Liquidity Provision Technology","description":"<p>We are seeking a Production Engineering Support Manager to join our team. As a Production Engineering Support Manager, you will provide leadership and guidance to coach, motivate and lead team members to their optimum performance levels and career development. 
You will solve technical trading-related issues, independently where possible or leveraging teammates as necessary. You will escalate to application and/or infrastructure subject matter experts (internally or at vendors) when appropriate. You will manage communications to the trading staff and internal stakeholders, primarily our execution services team regarding issue/resolution.</p>\n<p>Collaborate with other technical support engineers who may need assistance working on an issue; utilize his/her area of expertise to help others in order to quickly facilitate solutions for the customer. Build and foster working relationships with trading groups with a focus on execution services team. Work with global counterparts to provide seamless 24/7 global coverage.</p>\n<p>Trading Infrastructure / Platform Status Communications – disseminate messages to the appropriate trading staff regarding trading infrastructure / platform issues, exchange updates, etc. Uplift environment management tools to reduce risk and streamline efficiency of support team. Assist with automating processes to achieve efficiency and streamlined trade support.</p>\n<p>Document and create new knowledge base to provide the most effective solutions to trading issues. Deployment of, support of, and monitoring of the firm’s internal trading systems. Coordinate with vendors, internal application owners, infrastructure owners, and tech support to ensure trading platforms are correctly installed, configured, and tested.</p>\n<p>Liaise with development and infrastructure teams, prioritize tool enhancements, and coordinate and participate in software/new version releases. 
</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_b93800dd-3d2","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Equity IT","sameAs":"https://mlp.eightfold.ai","logo":"https://logos.yubhub.co/mlp.eightfold.ai.png"},"x-apply-url":"https://mlp.eightfold.ai/careers/job/755953129734","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$175,000 to $250,000","x-skills-required":["Linux","shell scripting","python","SQL","financial technology","FIX protocols","AI technologies","version control systems","SDLC processes","columnar database","AWS"],"x-skills-preferred":[],"datePosted":"2026-04-18T22:14:31.477Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"New York, New York, United States of America"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Finance","skills":"Linux, shell scripting, python, SQL, financial technology, FIX protocols, AI technologies, version control systems, SDLC processes, columnar database, AWS","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":175000,"maxValue":250000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_1bd2d1b2-84f"},"title":"Senior Machine Learning Researcher","description":"<p>We are seeking a senior machine learning researcher to join our Core AI team.</p>\n<p>As part of the team, you will help solve complex business problems by developing viable cutting-edge AI/ML solutions.</p>\n<p>You will develop and implement creative solutions that fundamentally transform business processes, delivering breakthrough improvements rather 
than incremental changes.</p>\n<p>You will work closely with other AI/ML researchers and engineers, SWEs, product owners/managers, and business stakeholders, and participate in the full lifecycle of solution development, including requirements gathering with business, experimentation and algorithmic exploration, development, and assistance with productization.</p>\n<p>Key Responsibilities:</p>\n<ul>\n<li>Work independently or as part of a team to help design and implement high accuracy and delightful user experience solutions utilizing ML, NLP, GenAI, Agentic technologies.</li>\n<li>Participate in all aspects of solution development, including ideation and requirement gathering with business stakeholders, experimentation and exploration to identify strong solution approaches, solution development, etc.</li>\n<li>Prototype, test, and iterate on novel AI models and approaches to solve complex business challenges.</li>\n<li>Collaborate with cross-functional teams to identify opportunities where AI can create significant business value, and transition solutions into production systems.</li>\n<li>Research and stay updated with the latest advancements in machine learning and AI technologies.</li>\n<li>Participate in code reviews, technical discussions, and knowledge sharing sessions.</li>\n<li>Communicate technical concepts and transformative ideas effectively to both technical and non-technical stakeholders.</li>\n</ul>\n<p>Required Skills &amp; Qualifications:</p>\n<ul>\n<li>Bachelor&#39;s with 10+ years, Master&#39;s with 7+ years, or PhD with 5+ years in Computer Science, Data Science, Machine Learning, or related field.</li>\n<li>Deep expertise and proven ability in developing high accuracy/value solutions to business problems in the NLP, Generative AI, Agentic AI, and/or ML space.</li>\n<li>Hands-on experience with data processing, experimentation, and 
exploration.</li>\n<li>Strong programming skills in Python.</li>\n<li>Experience with cloud platforms (AWS, Azure, GCP) for deploying ML solutions.</li>\n<li>Excellent problem-solving skills and attention to detail.</li>\n<li>Strong communication skills to collaborate with technical and non-technical stakeholders.</li>\n<li>Ability to work independently and collaboratively.</li>\n</ul>\n<p>Additional Preferred Skills &amp; Qualifications:</p>\n<ul>\n<li>Understanding of the financial markets, including experience with financial datasets, is strongly preferred.</li>\n<li>Experience with ML frameworks such as PyTorch, TensorFlow.</li>\n<li>Familiarity with MLOps practices and tools such as SageMaker, MLflow, or Airflow.</li>\n<li>Previous experience working in an Agile environment.</li>\n</ul>\n<p>Millennium pays a total compensation package which includes a base salary, discretionary performance bonus, and a comprehensive benefits package. 
The estimated base salary range for this position is $175,000 to $250,000, which is specific to New York and may change in the future.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_1bd2d1b2-84f","directApply":true,"hiringOrganization":{"@type":"Organization","name":"IT - Artificial Intelligence","sameAs":"https://mlp.eightfold.ai","logo":"https://logos.yubhub.co/mlp.eightfold.ai.png"},"x-apply-url":"https://mlp.eightfold.ai/careers/job/755954012324","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$175,000 to $250,000","x-skills-required":["Python","Machine Learning","NLP","GenAI","Agentic technologies","Data processing","Experimentation","Exploration","Cloud platforms (AWS, Azure, GCP)","Problem-solving skills","Communication skills"],"x-skills-preferred":["PyTorch","TensorFlow","MLOps practices and tools (SageMaker, MLflow, Airflow)","Agile environment"],"datePosted":"2026-04-18T22:14:27.951Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"New York, New York, United States of America"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, Machine Learning, NLP, GenAI, Agentic technologies, Data processing, Experimentation, Exploration, Cloud platforms (AWS, Azure, GCP), Problem-solving skills, Communication skills, PyTorch, TensorFlow, MLOps practices and tools (SageMaker, MLflow, Airflow), Agile environment","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":175000,"maxValue":250000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_d676e040-d22"},"title":"Operations Specialist (Corporate Actions)","description":"<p>Operations Specialist (Corporate 
Actions)</p>\n<p>The Operations Specialist will be responsible for monitoring, coordinating, booking and communicating all Corporate Actions to the relevant trading teams. This role will also involve daily responsibilities revolving around desk support, same day trade matching, full reconciliations and PM/broker/counterparty queries as well as ad-hoc PnL issues relating to corporate action bookings and entitlements.</p>\n<p>Principal Responsibilities</p>\n<ul>\n<li>Actively managing the Corporate Action process for the EMEA markets across all event types</li>\n<li>Interacting with trading groups across multiple strategies, accommodating their specific requirements, as well as several Prime Brokers (PBs) and potential vendors</li>\n<li>Processing cross-border deals and understanding the mechanics of such deals, including funding queries, FX risk and PnL exposure</li>\n<li>Actively monitoring and processing the paydate process, including bookings made to internal lines, pricing appropriately and working with PnL / valuation teams</li>\n<li>Intraday trade monitoring - working with PMs/Execution desk/IT to ensure correct capture</li>\n<li>Mitigating risk by ensuring timely completion of daily trade / cash / position / market value reconciliations</li>\n<li>Working closely with the operations team based in India and acting as a focal point for Corporate Action related queries</li>\n<li>Educating and assisting team members on all asset servicing issues</li>\n<li>Acting as a primary contact for all EMEA related asset servicing queries and issues</li>\n<li>Assisting the development of proprietary systems</li>\n<li>Continuously improving current systems, working alongside regional teams and business analysts / technology</li>\n</ul>\n<p>Qualifications/Skills Required</p>\n<ul>\n<li>5+ years relevant experience in an Asset Servicing/Corporate Action team and/or equity operations/support</li>\n<li>Strong knowledge of Corporate Actions processing from front to back</li>\n<li>Must have a practical working knowledge of equity swap mechanics in a Buy vs Sell side environment</li>\n<li>Excellent Excel skills and familiarity with SQL / Python</li>\n<li>Ability 
to implement and maintain new systems, procedures, and controls</li>\n<li>Strong and confident communication and interpersonal skills, with a clear ability to face off to the trading desk and PMs</li>\n<li>Must be able to work under pressure and meet strict deadlines</li>\n<li>Omgeo / CTM / CTC / Traiana experience desired</li>\n<li>Detail oriented; demonstrates thoroughness and strong ownership of work</li>\n<li>Must be able to work independently using well-honed analytical skills and abstract reasoning</li>\n<li>Good team player with a strong willingness to participate and help others</li>\n<li>Able to prioritize in a fast moving, high pressure, constantly changing environment; good sense of urgency</li>\n<li>Willingness to undertake new challenges and opportunities should they present themselves</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_d676e040-d22","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Core Operations","sameAs":"https://mlp.eightfold.ai","logo":"https://logos.yubhub.co/mlp.eightfold.ai.png"},"x-apply-url":"https://mlp.eightfold.ai/careers/job/755955300494","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Corporate Actions processing","Equity swap mechanics","Excel","SQL","Python","Omgeo","CTM","CTC","Traiana"],"x-skills-preferred":[],"datePosted":"2026-04-18T22:14:27.064Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"London, United Kingdom"}},"employmentType":"FULL_TIME","occupationalCategory":"Operations","industry":"Finance","skills":"Corporate Actions processing, Equity swap mechanics, Excel, SQL, Python, Omgeo, CTM, CTC, Traiana"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_b034e28c-a0c"},"title":"Quantitative Developer (C++) -  Central Liquidity 
Strategies","description":"<p>We are seeking a Quantitative Developer to join our team who will design, architect, and implement low-latency C++ systems that are robust, resilient, and accurate. Our team is part of the firm&#39;s central trading teams, focusing on creating a low-latency framework for algorithmic trading.</p>\n<p>The successful candidate will be directly involved in a critical path for high-volume trading with a core focus on technical and economic performance. They will work closely with quantitative research to optimize the firm&#39;s overall execution performance.</p>\n<p>Key responsibilities include:</p>\n<ul>\n<li>Building out the C++ low-latency framework for algorithmic trading</li>\n<li>Developing execution algorithms, order management systems, strategy containers, market data handlers, and trading interfaces</li>\n<li>Enhancing the platform&#39;s efficiency using network and systems programming</li>\n<li>Creating systems, interfaces, and tools for historical market data and trading simulations</li>\n<li>Assisting in building and maintaining automated tests, performance benchmark framework, and other tools</li>\n</ul>\n<p>The ideal candidate will have 5+ years of professional experience in a front-office, financial services environment as a senior contributor, with a strong background in data structures, algorithms, and object-oriented programming in C++. They should be proficient with new features of C++17/C++20/C++23, multithreading, and asynchronous environments. 
A degree in computer science or a related field is required.</p>\n<p>The estimated base salary range for this position is $160,000 to $250,000, which is specific to New York and may change in the future.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_b034e28c-a0c","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Electronic Trading Solutions","sameAs":"https://mlp.eightfold.ai","logo":"https://logos.yubhub.co/mlp.eightfold.ai.png"},"x-apply-url":"https://mlp.eightfold.ai/careers/job/755954374057","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$160,000 to $250,000","x-skills-required":["C++","data structures","algorithms","object-oriented programming","multithreading","asynchronous environments","Linux system internals","networking","low-latency and real-time system design and implementation"],"x-skills-preferred":["Python","quantitative research","data-oriented processing"],"datePosted":"2026-04-18T22:14:23.253Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"New York, New York, United States of America"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Finance","skills":"C++, data structures, algorithms, object-oriented programming, multithreading, asynchronous environments, Linux system internals, networking, low-latency and real-time system design and implementation, Python, quantitative research, data-oriented processing","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":160000,"maxValue":250000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_86156963-6d9"},"title":"Commodities Finance Manager","description":"<p>We are seeking a highly skilled 
Commodities Finance Manager to join our Fund Accounting team. The ideal candidate will possess deep expertise in commodities markets, fund accounting principles, and operational processes within a hedge fund environment.</p>\n<p>Principal Responsibilities:</p>\n<ul>\n<li>Oversee the accounting and reconciliation of commodities-related transactions, including futures, options, swaps, and physical commodities.</li>\n<li>Ensure timely and accurate preparation of monthly and quarterly NAV calculations for commodities entities.</li>\n<li>Collaborate with the broader accounting team to ensure compliance with internal policies and external regulatory requirements.</li>\n<li>Work closely with trading desks, operations, and counterparties to resolve discrepancies and ensure accurate reporting.</li>\n<li>Provide insights into market trends and their impact on portfolio valuation and risk metrics.</li>\n<li>Identify and implement process improvements to enhance the efficiency and accuracy of commodities accounting workflows.</li>\n<li>Leverage technology and automation tools to streamline reporting and reconciliation processes.</li>\n<li>Ensure adherence to relevant regulatory frameworks, including GAAP, IFRS, and other applicable standards.</li>\n<li>Prepare audit documentation and liaise with external auditors as needed.</li>\n</ul>\n<p>Qualifications/Skills Required:</p>\n<ul>\n<li>Education: Bachelor’s degree in Accounting, Finance, Economics, or a related field.</li>\n<li>Experience: 10+ years of experience in fund accounting, with a focus on commodities markets.</li>\n<li>Technical Skills: Proficiency in fund accounting systems (e.g., Geneva).</li>\n<li>Advanced Excel skills; familiarity with data visualization tools and programming languages (e.g., Python, SQL) is a plus.</li>\n<li>Strong understanding of commodities markets, including derivatives and physical assets.</li>\n<li>Familiarity with regulatory requirements impacting hedge funds and commodities 
trading.</li>\n</ul>\n<p>Why Join Millennium Management?</p>\n<p>Millennium Management is a premier global hedge fund with a reputation for excellence and innovation. As a Commodities Manager in Fund Accounting, you will have the opportunity to work alongside some of the brightest minds in the industry, contribute to the success of a world-class investment platform, and advance your career in a dynamic and rewarding environment.</p>\n<p>Millennium pays a total compensation package which includes a base salary, discretionary performance bonus, and a comprehensive benefits package. The estimated base salary range for this position is $160,000 to $250,000, which is specific to New York and may change in the future.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_86156963-6d9","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Fund Accounting","sameAs":"https://mlp.eightfold.ai","logo":"https://logos.yubhub.co/mlp.eightfold.ai.png"},"x-apply-url":"https://mlp.eightfold.ai/careers/job/755953514852","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$160,000 to $250,000","x-skills-required":["fund accounting","commodities markets","GAAP","IFRS","Geneva","Excel","Python","SQL"],"x-skills-preferred":[],"datePosted":"2026-04-18T22:14:21.927Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"New York, New York, United States of America"}},"employmentType":"FULL_TIME","occupationalCategory":"Finance","industry":"Finance","skills":"fund accounting, commodities markets, GAAP, IFRS, Geneva, Excel, Python, 
SQL","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":160000,"maxValue":250000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_d96db681-39c"},"title":"Operations Control Specialist","description":"<p>We are building a specialized team within Operations focused on designing and implementing controls and analytics for trade surveillance and trade processing.</p>\n<p>The Operations Control Specialist will combine technical skills and business knowledge to build control metrics, dashboards, and automated checks on surveillance and trade data.</p>\n<p>This role offers exposure to products across multiple asset classes, trading platforms, exchange symbology, and global markets.</p>\n<p>Key Responsibilities:</p>\n<ul>\n<li>Design, implement, and automate reconciliations and health checks across large trade and surveillance datasets (e.g., OMS and upstream/downstream systems).</li>\n<li>Build monitoring dashboards and control indicators (metrics/KPIs) to validate that trades are correctly generated, routed, and reported.</li>\n<li>Develop and maintain data quality checks (completeness, accuracy, timeliness) and automated exception reporting for trade and alert data.</li>\n<li>Partner with Technology on system migrations and application redesign, defining control requirements and validating outcomes through testing and analytics.</li>\n<li>Support data governance efforts by identifying data issues, documenting data lineage, and contributing to standards for critical trade and surveillance data.</li>\n<li>Design and automate reporting using data analytics, automation, and AI-enabled tools to reduce manual processes and improve transparency.</li>\n<li>Collaborate with Operations, Compliance, and Middle Office stakeholders to understand workflows, refine 
control logic, and address issues identified by analytics.</li>\n</ul>\n<p>Qualifications &amp; Skills:</p>\n<ul>\n<li>5+ years of experience in the financial industry, ideally in Operations, Middle Office, Risk, or Surveillance/Compliance analytics.</li>\n<li>Strong SQL skills (e.g., SQL, PL/SQL, T‑SQL) and experience working with large, complex datasets.</li>\n<li>Programming experience in Python (or a similar language) for data analysis, automation, and scripting.</li>\n<li>Familiarity with institutional trading workflows and trading / surveillance technology (e.g., order management systems, trade reporting, or surveillance platforms).</li>\n<li>Experience with data visualization, reporting, and analytics tools used to build dashboards and control reporting.</li>\n<li>Strong analytical and problem-solving skills; able to diagnose data and process issues and propose practical solutions.</li>\n<li>Excellent communication skills, with the ability to explain complex data and technical concepts to non-technical stakeholders.</li>\n<li>Strong interpersonal skills and comfort working with cross-functional teams (Operations, Compliance, Technology, Front Office).</li>\n<li>Highly organized self-starter with the ability to prioritize, manage multiple tasks, and take end-to-end ownership.</li>\n<li>Detail-oriented and proactive in identifying, investigating, and resolving data or process issues.</li>\n<li>Basic to intermediate understanding of financial instruments and products across asset classes.</li>\n</ul>\n<p>Millennium pays a total compensation package which includes a base salary, discretionary performance bonus, and a comprehensive benefits package.</p>\n<p>The estimated base salary range for this position is $160,000 to $250,000, which is specific to New York and may change in the future.</p>\n<p 
style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_d96db681-39c","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Regulatory Reporting Ops","sameAs":"https://mlp.eightfold.ai","logo":"https://logos.yubhub.co/mlp.eightfold.ai.png"},"x-apply-url":"https://mlp.eightfold.ai/careers/job/755955532760","x-work-arrangement":"onsite","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":"$160,000 to $250,000","x-skills-required":["SQL","Python","Data analysis","Automation","Scripting","Institutional trading workflows","Trading / surveillance technology","Data visualization","Reporting","Analytics","Analytical skills","Problem-solving skills","Communication skills","Interpersonal skills"],"x-skills-preferred":[],"datePosted":"2026-04-18T22:14:21.906Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"New York, New York, United States of America"}},"employmentType":"FULL_TIME","occupationalCategory":"Finance","industry":"Finance","skills":"SQL, Python, Data analysis, Automation, Scripting, Institutional trading workflows, Trading / surveillance technology, Data visualization, Reporting, Analytics, Analytical skills, Problem-solving skills, Communication skills, Interpersonal skills","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":160000,"maxValue":250000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_6722b21a-d69"},"title":"AI Product Manager - Infrastructure","description":"<p>The AI Product Manager role within Infrastructure at Millennium will lead AI-focused product enablement and adoption initiatives across our global organization.</p>\n<p>The ideal candidate will have a mix of technical experience in AI/ML/LLM products and APIs, process 
automation, and proficiency in survey design and data analytics.</p>\n<p>They will play a key role in identifying pain points in the organization, understanding the landscape of existing AI solutions in the market (or what can be built internally), matching solutions to those problems, and onboarding teams with new and existing AI tools to foster a culture of AI awareness and innovation across Millennium.</p>\n<p>Key Responsibilities:</p>\n<p>Awareness of AI Ecosystem and Trends - Use Claude Code / Codex / Cursor / etc. as well as Skills in these tools.</p>\n<p>Follow developments among major players in the commercial and open-source AI/LLM space, including Anthropic, OpenAI, Google, xAI, and others.</p>\n<p>Communication - Communicate clearly and effectively with internal and external vendors and stakeholders about requirements, feature requests, scope, expectations, priorities, product releases, and timelines.</p>\n<p>Generate clear, consistent, and accurate documentation about products, tradeoffs, decisions, and value proposition of various efforts.</p>\n<p>AI Product Enablement and Adoption - Conduct consulting-style engagements with technical and non-technical teams to onboard them with AI tools and products available within Millennium.</p>\n<p>Collaborate with users to understand pain points, needs, feature requests, and requirements to design and execute product development with usability and scalability in mind.</p>\n<p>Feedback Collection and Data Analytics - Analyze usage and feedback data with accuracy and quality using SQL and Python to identify trends, gaps, and opportunities for product improvements.</p>\n<p>Use insights from your analysis to refine product roadmaps and enablement initiatives to maximize impact across the firm.</p>\n<p>Required Skills/Qualifications:</p>\n<p>Bachelor’s degree or higher in Computer Science, Data Science, Engineering, or a related technical field.</p>\n<p>Must have technical knowledge and experience.</p>\n<p>5-7+ years 
of experience in a combination of AI/ML/LLM engineering, startup, technical education, consulting, data science, product analytics, or product management roles.</p>\n<p>Some technical / engineering / building experience is a hard requirement, e.g. you must be able to use the CLI and analyze data.</p>\n<p>Strong knowledge of AI/ML concepts, especially Large Language Models (LLMs).</p>\n<p>Understanding of their capabilities and failings as a technology.</p>\n<p>Proficiency in SQL and Python for data analysis and visualization.</p>\n<p>Proficiency in survey software and analytics tools (e.g. Qualtrics).</p>\n<p>Strong familiarity with effective SDLC and CI/CD principles.</p>\n<p>Proven ability to use clear judgment and organizational skills to manage complex, cross-functional products involving technical and non-technical teams.</p>\n<p>Intellectual curiosity and strong aptitude for prioritization.</p>\n<p>The estimated base salary range for this position is $175,000 to $250,000, which is specific to New York and may change in the future.</p>\n<p>Millennium pays a total compensation package which includes a base salary, discretionary performance bonus, and a comprehensive benefits package.</p>\n<p>When finalizing an offer, we take into consideration an individual’s experience level and the qualifications they bring to the role to formulate a competitive total compensation package.</p>","url":"https://yubhub.co/jobs/job_6722b21a-d69","directApply":true,"hiringOrganization":{"@type":"Organization","name":"IT Infrastructure","sameAs":"https://mlp.eightfold.ai","logo":"https://logos.yubhub.co/mlp.eightfold.ai.png"},"x-apply-url":"https://mlp.eightfold.ai/careers/job/755954012033","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$175,000 to $250,000","x-skills-required":["AI/ML/LLM 
products and APIs","Process automation","Survey design and data analytics","SQL and Python for data analysis and visualization","Survey software and analytics tools (e.g. Qualtrics)"],"x-skills-preferred":[],"datePosted":"2026-04-18T22:14:18.605Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"New York, New York, United States of America"}},"employmentType":"FULL_TIME","occupationalCategory":"IT","industry":"Technology","skills":"AI/ML/LLM products and APIs, Process automation, Survey design and data analytics, SQL and Python for data analysis and visualization, Survey software and analytics tools (e.g. Qualtrics)","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":175000,"maxValue":250000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_78270c8d-016"},"title":"Operations Data Governance & Controls Specialist","description":"<p>As an Operations Control Specialist – Data Governance &amp; Controls, you will design, implement, and support technical data governance solutions with a focus on the firm&#39;s Trader Master and related reference data domains.</p>\n<p>This role requires a strong technical background in Data Management, Data Architecture, Data Lineage, Data Quality, Master Data Management (MDM), and automation within Financial Services and/or Technology.</p>\n<p>You will contribute to and help lead the technical design of data governance controls, data models, and integration patterns, partnering closely with Technology and Operations teams.</p>\n<p>Key Responsibilities:</p>\n<ul>\n<li>Build/enhance data governance frameworks, controls, standards, and workflows (policies, definitions, entitlements).</li>\n<li>Create data quality rules and monitoring; automate exception detection, alerting, remediation, SLAs, and RCA.</li>\n<li>Develop Python/SQL/ETL-ELT automation for checks, 
controls, and reporting; deliver Tableau/Power BI dashboards and KPIs.</li>\n<li>Contribute to conceptual/logical/physical data modeling for Trader Master and core domains.</li>\n<li>Support MDM capabilities: golden record, matching/merging, survivorship, stewardship workflows; help shape MDM strategy.</li>\n<li>Implement access/entitlement governance (RBAC, row/column security) across DB/warehouse/BI with audit compliance.</li>\n<li>Maintain catalog, glossary, lineage, schema history, impact analysis; manage structured change workflows.</li>\n<li>Define integration patterns (batch/API/streaming) and build reconciliations/validations across systems.</li>\n<li>Manage historical/temporal data (validation, backfills, remediation) supporting regulatory/reporting/analytics.</li>\n<li>Produce technical documentation (designs, runbooks, data dictionaries), share knowledge, and mentor juniors.</li>\n</ul>\n<p>Qualifications:</p>\n<ul>\n<li>Bachelor’s degree in Computer Science, Engineering, Information Systems, Mathematics, Finance, or related field; advanced degree (MS, MBA, or equivalent) is a plus.</li>\n<li>5–8 years of experience in financial services or fintech with hands-on work in data engineering, data management, or data architecture roles; exposure to trading strategies, fund structures, and financial products strongly preferred.</li>\n</ul>\n<p>Technical Expertise (Required):</p>\n<ul>\n<li>Strong Python and SQL; experience with data warehousing + ETL/ELT.</li>\n<li>Familiarity with MDM/data governance tools (e.g., Collibra, Informatica, Alation) and Tableau/Power BI.</li>\n<li>Proven ability to lead delivery, solve complex data issues, and communicate with technical/non-technical stakeholders.</li>\n<li>Preferred certs: DAMA/CDMP, cloud (AWS/Azure/GCP), Scrum, BI/data engineering.</li>\n</ul>\n<p>Millennium pays a total compensation package which includes a base salary, discretionary performance bonus, and a comprehensive benefits package.</p>\n<p>The 
estimated base salary range for this position is $70,000 to $160,000, which is specific to New York and may change in the future.</p>\n<p>When finalizing an offer, we take into consideration an individual’s experience level and the qualifications they bring to the role to formulate a competitive total compensation package.</p>","url":"https://yubhub.co/jobs/job_78270c8d-016","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Ops & MO Control","sameAs":"https://mlp.eightfold.ai","logo":"https://logos.yubhub.co/mlp.eightfold.ai.png"},"x-apply-url":"https://mlp.eightfold.ai/careers/job/755954926796","x-work-arrangement":"onsite","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":"$70,000 to $160,000","x-skills-required":["Python","SQL","ETL/ELT","Data Warehousing","Tableau/Power BI","MDM/data governance tools","Collibra","Informatica","Alation"],"x-skills-preferred":["DAMA/CDMP","cloud (AWS/Azure/GCP)","Scrum","BI/data engineering"],"datePosted":"2026-04-18T22:14:17.909Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"New York, New York, United States of America"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Finance","skills":"Python, SQL, ETL/ELT, Data Warehousing, Tableau/Power BI, MDM/data governance tools, Collibra, Informatica, Alation, DAMA/CDMP, cloud (AWS/Azure/GCP), Scrum, BI/data engineering","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":70000,"maxValue":160000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_55cce8b6-8ff"},"title":"Quantitative Researcher, Systematic Macro","description":"<p>A fast-growing, collaborative, and entrepreneurial systematic investment team 
is seeking a highly skilled Quantitative Researcher with expertise in systematic macro strategies.</p>\n<p>The ideal candidate will contribute to alpha research, signal development, and strategy implementation in a dynamic and fast-paced environment. This role offers significant career growth.</p>\n<p>Principal Responsibilities:</p>\n<p>Work closely with the Senior Portfolio Manager to develop systematic macro strategies, focusing on alpha research, including idea generation, data preprocessing, statistical analysis, backtesting, and implementation.</p>\n<p>Contribute to and enhance the internal research platform, including data pipelines, statistical learning tools, alpha analytics, and backtesting frameworks.</p>\n<p>Independently explore and develop new alpha ideas while collaborating in a transparent and team-oriented environment.</p>\n<p>Preferred Technical Skillset:</p>\n<p>Strong research and programming skills, with proficiency in Python.</p>\n<p>Solid experience with data analytics libraries (e.g., Pandas, SciPy, NumPy, Polars); extensive library-building experience is a plus.</p>\n<p>Master&#39;s or PhD degree in a quantitative subject such as Applied Mathematics, Statistics, Physics, Engineering, Financial Engineering, Computer Science, or related field from a top-ranked university. 
Strong candidates with a Bachelor&#39;s degree will also be considered.</p>\n<p>Exceptional problem-solving abilities, intellectual curiosity (especially in alpha research), and a proactive research mindset.</p>\n<p>Creativity and out-of-the-box thinking, combined with rigorous quantitative analysis.</p>\n<p>Preferred Experience:</p>\n<p>2+ years of experience in quantitative research with a focus on systematic macro strategies.</p>\n<p>Preferred experience in hedge fund alpha research in commodities, FX, equity, and bond futures.</p>\n<p>Experience in macro intraday strategies is a strong plus.</p>\n<p>Experience in trading cost analysis is a plus.</p>\n<p>Experience in machine learning is a plus.</p>\n<p>Target Start Date:</p>\n<p>Up to 12 months (strong preference for candidates who can start sooner)</p>\n<p>Millennium pays a total compensation package which includes a base salary, discretionary performance bonus, and a comprehensive benefits package. The estimated base salary range for this position is $150,000 to $200,000, which is specific to New York and may change in the future.</p>","url":"https://yubhub.co/jobs/job_55cce8b6-8ff","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Quant Strategies","sameAs":"https://mlp.eightfold.ai","logo":"https://logos.yubhub.co/mlp.eightfold.ai.png"},"x-apply-url":"https://mlp.eightfold.ai/careers/job/755943671775","x-work-arrangement":"onsite","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":"$150,000 to $200,000","x-skills-required":["Python","Pandas","SciPy","NumPy","Polars","Masters or PhD degree in a quantitative subject"],"x-skills-preferred":["data analytics libraries","library-building experience","problem-solving abilities","intellectual curiosity","proactive research mindset","creativity","rigorous quantitative 
analysis"],"datePosted":"2026-04-18T22:14:14.142Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"New York, New York, United States of America"}},"employmentType":"FULL_TIME","occupationalCategory":"Finance","industry":"Finance","skills":"Python, Pandas, SciPy, NumPy, Polars, Masters or PhD degree in a quantitative subject, data analytics libraries, library-building experience, problem-solving abilities, intellectual curiosity, proactive research mindset, creativity, rigorous quantitative analysis","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":150000,"maxValue":200000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_af8ed06d-a9a"},"title":"Forward Deployed Software Engineer - Equities Technology","description":"<p>We are seeking a hands-on, business-facing engineer to join our team. In this role, you will partner directly with some of the most sophisticated quantitative researchers, developers, and portfolio managers in the industry.</p>\n<p>Our team is a specialized group of engineers operating at the intersection of technology and quantitative finance. We function as an internal centre of excellence, providing expert-level solutions, architecture, and hands-on development in AI, Cloud (AWS/GCP), DevOps, and high-performance computing.</p>\n<p>As a forward deployed software engineer, you will be responsible for translating complex research requirements into robust, scalable, and secure technical architectures across on-prem, hybrid, and cloud environments. 
You will write high-quality, production-ready code across the full stack, including Python libraries, infrastructure-as-code (Terraform), CI/CD pipelines, automation scripts, and ML/AI proof-of-concepts.</p>\n<p>You will also develop and maintain our suite of managed products, reusable patterns, and best practice guides to provide self-service options and accelerate onboarding for new and existing teams. Additionally, you will act as the primary technical point of contact for embedded engagements, owning projects from discovery and planning through to implementation, knowledge transfer, and support.</p>\n<p>To succeed in this role, you will need to have a deep understanding of computer science principles, including data structures, algorithms, and system design. You will also need to have experience working with cloud providers, such as AWS or GCP, and be familiar with infrastructure-as-code concepts. Excellent verbal and written communication skills are also essential, as you will need to build strong relationships with stakeholders and articulate complex ideas to diverse audiences.</p>\n<p>Innovative thinking and a passion for AI/ML and its practical applications are highly desirable. 
Experience designing systems and architectures from ambiguous business needs, as well as experience with scheduling or asynchronous workflow frameworks/services, is also preferred.</p>","url":"https://yubhub.co/jobs/job_af8ed06d-a9a","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Equity IT","sameAs":"https://mlp.eightfold.ai","logo":"https://logos.yubhub.co/mlp.eightfold.ai.png"},"x-apply-url":"https://mlp.eightfold.ai/careers/job/755953439247","x-work-arrangement":"onsite","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Python","Cloud computing (AWS/GCP)","DevOps","Infrastructure-as-code (Terraform)","CI/CD pipelines","Automation scripts","ML/AI proof-of-concepts","Data structures","Algorithms","System design"],"x-skills-preferred":["Experience in the financial services or fintech space","Experience building applications on top of LLMs using frameworks like LangChain or LlamaIndex","Experience with MLOps tooling and concepts","Cloud certifications (AWS or GCP)"],"datePosted":"2026-04-18T22:14:13.794Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Miami, Florida, United States of America"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Finance","skills":"Python, Cloud computing (AWS/GCP), DevOps, Infrastructure-as-code (Terraform), CI/CD pipelines, Automation scripts, ML/AI proof-of-concepts, Data structures, Algorithms, System design, Experience in the financial services or fintech space, Experience building applications on top of LLMs using frameworks like LangChain or LlamaIndex, Experience with MLOps tooling and concepts, Cloud certifications (AWS or 
GCP)"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_82f0539c-3ff"},"title":"Cross Asset Risk Research","description":"<p>The firm is looking for a quantitative researcher to join a new Cross Asset Risk team.</p>\n<p>The goal of the team is to build a unified set of risk data for decision-makers at the firm level to make informed decisions about the firm&#39;s complex set of positions. The team will be coordinating with multiple different asset-class risk teams to build the firm&#39;s high-level view, including building out individual asset-class risk analytics in cases where it is deemed necessary.</p>\n<p>This role involves research into using many different statistical and probabilistic techniques to evolve the firm&#39;s understanding of risk. Key to the role will be understanding the ways in which different market structures impact their individual asset classes, the behavior of large market participants, shared traits of popular trading strategies, and developing probabilistic methodologies to anticipate potential stress scenarios.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Build and validate cross-asset risk measures.</li>\n<li>Identify market factors across asset classes and identify common risk premia trades.</li>\n<li>Apply feature discovery and classification-style ML to identify and interpret portfolio/trade drivers with careful validation and robustness testing.</li>\n<li>Partner closely with asset class risk teams to test assumptions, interpret results, and drive adoption of the analytics.</li>\n<li>Develop forward-looking scenario models, identifying risks in the firm shared across asset classes.</li>\n</ul>\n<p>Requirements:</p>\n<ul>\n<li>1–5 years of hands-on experience in quantitative research, modeling, or applied ML</li>\n<li>Strong foundation in applied mathematics / statistics / machine learning (especially probability theory, linear algebra, calculus, and 
statistics)</li>\n<li>Demonstrated ability to design, implement, and validate models from scratch (not just apply off-the-shelf packages)</li>\n<li>Python proficiency for research prototyping and analysis</li>\n<li>Experience with deep learning frameworks (for example, PyTorch/TensorFlow)</li>\n<li>Strong research habits: hypothesis formation, experimentation, backtesting/validation, and clear communication</li>\n<li>Financial markets experience is helpful but not required</li>\n</ul>\n<p>The estimated base salary range for this position is $175,000 to $250,000, which is specific to New York and may change in the future.</p>","url":"https://yubhub.co/jobs/job_82f0539c-3ff","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Unknown","sameAs":"https://mlp.eightfold.ai","logo":"https://logos.yubhub.co/mlp.eightfold.ai.png"},"x-apply-url":"https://mlp.eightfold.ai/careers/job/755954949488","x-work-arrangement":"onsite","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":"$175,000 to $250,000","x-skills-required":["quantitative research","modeling","applied ML","probability theory","linear algebra","calculus","statistics","Python","deep learning frameworks"],"x-skills-preferred":[],"datePosted":"2026-04-18T22:14:07.581Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"New York, New York, United States of America"}},"employmentType":"FULL_TIME","occupationalCategory":"Finance","industry":"Finance","skills":"quantitative research, modeling, applied ML, probability theory, linear algebra, calculus, statistics, Python, deep learning 
frameworks","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":175000,"maxValue":250000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_6a75ea8b-5b4"},"title":"Application Security Engineer","description":"<p>We are seeking an experienced Application Security Engineer to join our team. As a subject matter expert with direct experience in a wide range of security technologies, tools, and methodologies, you will play a key role in building toolsets and processes to drive adoption of secure practices across the enterprise.</p>\n<p>The successful candidate will have a proven understanding in enterprise security and AI security and will focus on defining and implementing security guardrails for Generative AI, LLMs, and Agentic frameworks, ensuring safe enterprise adoption.</p>\n<p>Key responsibilities include:</p>\n<ul>\n<li>Defining and implementing security guardrails for Generative AI, LLMs, and Agentic frameworks</li>\n<li>Conducting specialized threat modeling, red teaming, and risk assessments for AI/ML models</li>\n<li>Leading risk management activities, including application risk assessments, design reviews, and mitigation strategies for IT projects</li>\n<li>Engaging throughout the SDLC to identify vulnerabilities, conduct code reviews/penetration testing, and enforce secure coding standards</li>\n<li>Evangelizing AppSec and AI security best practices through developer education, training materials, and outreach</li>\n</ul>\n<p>Qualifications include:</p>\n<ul>\n<li>Bachelor&#39;s degree or higher in Computer Science, Computer Engineering, IT Security or related field</li>\n<li>5+ years&#39; experience working as an Application Security Engineer, Software Engineer, or similar role</li>\n<li>Deep understanding of AI-specific risks (OWASP Top 10 for LLMs) and experience securing applications utilizing 
LLMs</li>\n<li>Experience working with AI models, Agentic frameworks and security risks associated with AI</li>\n<li>Experience in working with global teams, collaborating on code and presentations</li>\n</ul>\n<p>Preferred qualifications include:</p>\n<ul>\n<li>Demonstrated work experience in hybrid on-premise and Public Cloud environments (AWS/GCP/Azure)</li>\n<li>Strong understanding of security architectures, secure configuration principles/coding practices, cryptography fundamentals and encryption protocols</li>\n<li>Experience with common SCM &amp; CI/CD technologies like GitHub, Jenkins, Artifactory, etc. and integrating Security Scanning and Vulnerability Management into the CI/CD Pipelines</li>\n<li>Familiarity with static and dynamic security analysis tools, and SCA/SBOM solutions</li>\n<li>Hands-on experience with Secrets Management &amp; Password Vault technologies such as Delinea Secret Server and/or Hashicorp Vault, etc.</li>\n<li>Strong experience in secure programming in languages such as Python, Java, C++, C#, or similar</li>\n<li>Familiarity with Infrastructure as Code tools (CloudFormation, Terraform, Ansible, etc.)</li>\n<li>Familiarity with web application security testing tools and methodologies</li>\n<li>Knowledge of various security frameworks and standards such as ISO 27001, NIST, OWASP, etc.</li>\n<li>Knowledge of Linux, OS internals and containers is a plus</li>\n<li>Certifications like CISSP, CISM, CompTIA Security+, or CEH are advantageous</li>\n</ul>\n<p>We offer a competitive salary and benefits package, as well as opportunities for professional growth and development.</p>","url":"https://yubhub.co/jobs/job_6a75ea8b-5b4","directApply":true,"hiringOrganization":{"@type":"Organization","name":"IT 
Infrastructure","sameAs":"https://mlp.eightfold.ai","logo":"https://logos.yubhub.co/mlp.eightfold.ai.png"},"x-apply-url":"https://mlp.eightfold.ai/careers/job/755955629908","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["AI-specific risks","Generative AI","LLMs","Agentic frameworks","Security guardrails","Threat modeling","Red teaming","Risk assessments","Application risk assessments","Design reviews","Mitigation strategies","Secure coding standards","Developer education","Training materials","Outreach","Common SCM & CI/CD technologies","GitHub","Jenkins","Artifactory","Security Scanning","Vulnerability Management","Static and dynamic security analysis tools","SCA/SBOM solutions","Secrets Management & Password Vault technologies","Delinea Secret Server","Hashicorp Vault","Secure programming","Python","Java","C++","C#","Infrastructure as Code tools","CloudFormation","Terraform","Ansible","Web application security testing tools","Methodologies","Security frameworks","Standards","ISO 27001","NIST","OWASP","Linux","OS internals","Containers"],"x-skills-preferred":[],"datePosted":"2026-04-18T22:14:06.620Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"London, United Kingdom"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"AI-specific risks, Generative AI, LLMs, Agentic frameworks, Security guardrails, Threat modeling, Red teaming, Risk assessments, Application risk assessments, Design reviews, Mitigation strategies, Secure coding standards, Developer education, Training materials, Outreach, Common SCM & CI/CD technologies, GitHub, Jenkins, Artifactory, Security Scanning, Vulnerability Management, Static and dynamic security analysis tools, SCA/SBOM solutions, Secrets Management & Password Vault technologies, Delinea Secret Server, Hashicorp Vault, Secure programming, Python, Java, C++, C#, 
Infrastructure as Code tools, CloudFormation, Terraform, Ansible, Web application security testing tools, Methodologies, Security frameworks, Standards, ISO 27001, NIST, OWASP, Linux, OS internals, Containers"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_08e2a9b0-d54"},"title":"Software Engineer","description":"<p>As a Software Engineer at Equity IT, you will be part of the Latency-Critical Trading team, which is building a best-in-class systematic data platform to power the next generation of low-latency systematic strategies.</p>\n<p>The team includes low-latency Linux, network, datacenter, and C++ engineers focused on our end-to-end trading stack.</p>\n<p>Key responsibilities:</p>\n<ul>\n<li>Monitor and assess the quality of live and historical market data; detect, inventory, and remedy data gaps.</li>\n<li>Maintain and document exchange session times, holiday schedules, timestamp rules, and protocol/microstructure changes.</li>\n<li>Analyze latency, data rates, bursts, and message flows to understand microstructure behaviour and system performance.</li>\n<li>Clean, transform, and manage an inventory of large-scale datasets.</li>\n<li>Build and improve tools for market data capture.</li>\n<li>Work with vendors and brokers to assess and provision datasets.</li>\n<li>Build and improve tools for data analysis, visualization, and diagnostics on top of captured market and network data.</li>\n<li>Enhance and extend C++ analytics libraries and expose them within a Python environment for systematic research and alpha development.</li>\n<li>Collaborate closely with portfolio managers, quantitative researchers, and engineers to translate trading use cases into robust data and tooling solutions.</li>\n</ul>\n<p>Qualifications:</p>\n<ul>\n<li>Bachelor&#39;s or Master&#39;s in Computer Science, Mathematics, Statistics, Engineering, or another quantitative field, or equivalent experience.</li>\n<li>3+ years of experience in financial markets, electronic 
trading, or high-frequency/systematic environments.</li>\n<li>Strong programming skills in Python, C++, and SQL.</li>\n<li>Solid understanding of modern statistical testing methods and comfort working with large, noisy, real-world datasets.</li>\n<li>Experience with Linux and large-scale data processing; preferably network data (PCAP, timestamping, PTP) and low-latency systems.</li>\n<li>Strong problem-solving skills, attention to detail, and effective communication with both technical and non-technical stakeholders.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_08e2a9b0-d54","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Equity IT","sameAs":"https://mlp.eightfold.ai","logo":"https://logos.yubhub.co/mlp.eightfold.ai.png"},"x-apply-url":"https://mlp.eightfold.ai/careers/job/755955295716","x-work-arrangement":"onsite","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Python","C++","SQL","Linux","large-scale data processing","network data (PCAP, timestamping, PTP)","low-latency systems"],"x-skills-preferred":["R","MATLAB","SciPy stack","PyTorch"],"datePosted":"2026-04-18T22:14:02.001Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Bangalore, Karnataka, India"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Finance","skills":"Python, C++, SQL, Linux, large-scale data processing, network data (PCAP, timestamping, PTP), low-latency systems, R, MATLAB, SciPy stack, PyTorch"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_b717f25a-5b3"},"title":"Data Operations Analyst - Systematic Data Platform","description":"<p>We are building a world-class systematic data platform that will power the next generation of our systematic portfolio engines.</p>\n<p>The systematic data 
group is looking for a Data Operations Analyst to join our growing team. The team consists of content specialists, data scientists, analysts, and engineers who are responsible for discovering, maintaining, and analysing sources of alpha for our portfolio managers.</p>\n<p>This is an opportunity for individuals who have a strong background in quantitative investing and are passionate about working with data.</p>\n<p>The role builds on the individual&#39;s knowledge and skills in four key areas of quantitative investing: data, statistics, technology, and financial markets.</p>\n<p><strong>Principal Responsibilities</strong></p>\n<ul>\n<li>Efficiently monitor data flows across various systems, ensuring accuracy, completeness, and timeliness.</li>\n<li>Maintain and enhance the functionality and efficiency of our in-house data monitoring systems.</li>\n<li>Recommend and implement improvements to optimise data processing and quality.</li>\n<li>Design, build, and manage efficient and scalable data ingestion and ETL pipelines. 
Ensure smooth data flow from various sources into our core systems.</li>\n<li>Liaise with stakeholders across the organisation to understand their data requirements and support their initiatives.</li>\n<li>Actively engage with data issues in our production operations environment and provide high-quality support in resolving them, both internally and with vendors.</li>\n</ul>\n<p><strong>Qualifications/Skills Required</strong></p>\n<ul>\n<li>Master&#39;s or Bachelor&#39;s in computer science, mathematics, statistics, or another field, with good coding skills.</li>\n<li>2+ years of financial industry experience preferred.</li>\n<li>Programming expertise in Python, C++, Java, or C#.</li>\n<li>Programming skills in SQL, PL-SQL, or T-SQL.</li>\n<li>Strong problem-solving skills.</li>\n<li>Strong communication skills.</li>\n</ul>","url":"https://yubhub.co/jobs/job_b717f25a-5b3","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Equity IT","sameAs":"https://mlp.eightfold.ai","logo":"https://logos.yubhub.co/mlp.eightfold.ai.png"},"x-apply-url":"https://mlp.eightfold.ai/careers/job/755955700474","x-work-arrangement":null,"x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Python","C++","Java","SQL","PL-SQL","T-SQL"],"x-skills-preferred":[],"datePosted":"2026-04-18T22:13:54.703Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Hong Kong, Hong Kong"}},"employmentType":"FULL_TIME","occupationalCategory":"IT","industry":"Finance","skills":"Python, C++, Java, SQL, PL-SQL, T-SQL"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_5c70414d-4e6"},"title":"Full‑Stack Data Engineer","description":"<p>We are seeking a highly self-sufficient, motivated engineer with strong 
full-stack data engineering skills to join our team. This is a remote/offshore role that requires autonomy, excellent communication, and the ability to deliver high-quality work with limited supervision while collaborating with a predominantly US-based team.</p>\n<p>You will build reliable, scalable data products and user experiences that power AI/ML modeling, agentic workflows, and reporting, working end-to-end from data ingestion and transformation through to UI. Our Python-based data platform is undergoing a major evolution toward a modern, cloud-native ELT architecture. We are standardizing on Snowflake as our central data platform and dbt as our core transformation framework, implementing scalable, maintainable ELT practices that simplify ingestion, modeling, and deployment.</p>\n<p>This role will be pivotal in independently designing and building robust data pipelines and semantic layers that directly power our AI and machine learning initiatives, delivering clean, reliable, and well-modeled data assets to our data science team for feature engineering, model training, and production inference. 
You will collaborate closely (primarily via remote channels) with data scientists and ML engineers to ensure our data ecosystem is optimized for experimentation speed, model performance, and seamless integration into downstream products and services.</p>\n<p>Key Responsibilities</p>\n<ul>\n<li>Remote collaboration &amp; communication: Operate effectively as an offshore member of a distributed team, proactively communicating status, risks, and blockers across time zones and coordinating overlap with US working hours as needed.</li>\n<li>Full-stack data engineering: Build across the entire stack, including data ingestion/acquisition and transformation, APIs, front-end components, and automated test suites, delivering production-grade solutions with minimal hand-holding.</li>\n<li>Autonomous delivery &amp; ownership: Take end-to-end ownership of features and projects, clarifying requirements, breaking work into milestones, estimating timelines, and delivering high-quality, well-documented solutions.</li>\n<li>Specification and design: Translate short- and long-term business requirements, architectural considerations, and competing timelines into clear, actionable technical specifications and design documents.</li>\n<li>Code quality: Write clean, maintainable, efficient code that adheres to evolving standards and quality processes, including unit tests and isolated integration tests in containerized environments.</li>\n<li>Continuous improvement: Contribute to agile practices and provide input on technical strategy, architectural decisions, and process improvements, continuously suggesting better tools, patterns, and automation.</li>\n</ul>\n<p>Required Skills &amp; Experience</p>\n<ul>\n<li>Professional experience: 5+ years in software engineering, with a full-stack background building complex, scalable data-engineering pipelines using data warehouse technology, SQL with dbt, Python, AWS with Terraform, 
and modern UI technologies.</li>\n<li>Modern data engineering: Strong experience with medallion data architecture patterns using data warehouse technologies (e.g., Snowflake), data transformation tooling (e.g., dbt), BI tooling, and NoSQL data marts (e.g., Elasticsearch/OpenSearch).</li>\n<li>Testing and QA: Solid understanding of unit testing, CI/CD automation, and quality assurance processes for both data pipeline testing and operational data quality tests.</li>\n<li>Remote work &amp; autonomy: Proven track record working in a remote or distributed environment, demonstrating self-motivation, reliable execution, and the ability to make sound technical decisions independently.</li>\n<li>Agile methodology: Working knowledge of Agile development practices and workflows (e.g., sprint planning, stand-ups, retrospectives) in a distributed team setting.</li>\n<li>Education: Bachelor’s or Master’s degree in Computer Science, Statistics, Informatics, Information Systems, or a related quantitative field.</li>\n</ul>\n<p>Preferred Skills &amp; Experience</p>\n<ul>\n<li>Machine learning and AI: Hands-on experience with large language models (LLMs) and agentic frameworks/workflows.</li>\n<li>Search and analytics: Familiarity with the ELK stack (Elasticsearch, Logstash, Kibana) for search and analytics solutions.</li>\n<li>Cloud expertise: Experience with AWS cloud services, familiarity with SageMaker, and CI/CD tooling such as GitHub Actions or Jenkins.</li>\n<li>Front-end expertise: Experience building user interfaces with Angular or a modern UI stack.</li>\n<li>Financial domain knowledge: Broad understanding of equities, fixed income, derivatives, futures, FX, and other financial instruments.</li>\n</ul>
","url":"https://yubhub.co/jobs/job_5c70414d-4e6","directApply":true,"hiringOrganization":{"@type":"Organization","name":"FIC & Risk Technology","sameAs":"https://mlp.eightfold.ai","logo":"https://logos.yubhub.co/mlp.eightfold.ai.png"},"x-apply-url":"https://mlp.eightfold.ai/careers/job/755955321460","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Python","Snowflake","dbt","AWS","Terraform","modern UI technologies","data warehouse technology","SQL","unit testing","CI/CD automation","quality assurance processes"],"x-skills-preferred":["machine learning","AI","large language models","agentic frameworks","ELK stack","search and analytics solutions","cloud expertise","AWS cloud services","SageMaker","CI/CD tooling","front-end expertise","Angular","financial domain knowledge"],"datePosted":"2026-04-18T22:13:54.584Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Bangalore, Karnataka, India"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, Snowflake, dbt, AWS, Terraform, modern UI technologies, data warehouse technology, SQL, unit testing, CI/CD automation, quality assurance processes, machine learning, AI, large language models, agentic frameworks, ELK stack, search and analytics solutions, cloud expertise, AWS cloud services, SageMaker, CI/CD tooling, front-end expertise, Angular, financial domain knowledge"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_87749959-700"},"title":"Intern Data Engineering (all genders)","description":"<p>Join our Data Engineering team inside the Business Intelligence department, where you&#39;ll work with experienced engineers to build the data foundation that powers Holidu&#39;s growth.</p>\n<p>As an intern, 
you&#39;ll get hands-on experience with real problems and have the opportunity to make a meaningful impact. You&#39;ll work on building and supporting data pipelines, digging into data quality, getting hands-on with cloud infrastructure, and exploring AI-assisted development.</p>\n<p>Our team uses a range of technologies, including Redshift, Athena, DuckDB, Terraform, Docker, Jenkins, ELK, Grafana, Looker, OpsGenie, Kafka, Airbyte, and Fivetran. You&#39;ll have the chance to learn from experienced engineers and contribute to the development of our data systems.</p>\n<p>In this role, you&#39;ll be part of a team that genuinely loves what they do and is passionate about building a better data foundation for Holidu. You&#39;ll have the opportunity to take responsibility from day one and develop through regular feedback.</p>\n<p>We offer a fair salary, the chance to make a difference for hundreds of thousands of monthly users, and the opportunity to grow and develop through regular feedback. 
You&#39;ll also have access to a range of benefits, including a hybrid work policy, the chance to work from other local offices, and a corporate subscription to Urban Sports Club or a premium gym membership at a discounted rate.</p>","url":"https://yubhub.co/jobs/job_87749959-700","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Holidu Hosts GmbH","sameAs":"https://holidu.jobs.personio.com","logo":"https://logos.yubhub.co/holidu.jobs.personio.com.png"},"x-apply-url":"https://holidu.jobs.personio.com/job/2557398","x-work-arrangement":"hybrid","x-experience-level":"intern","x-job-type":"Internship","x-salary-range":null,"x-skills-required":["Python","SQL","Git","Airflow","dbt","Docker","Cloud platform (AWS, GCP, etc.)"],"x-skills-preferred":["LLM tools","AI-assisted coding"],"datePosted":"2026-04-18T22:13:52.778Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Munich, Germany"}},"employmentType":"INTERN","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, SQL, Git, Airflow, dbt, Docker, Cloud platform (AWS, GCP, etc.), LLM tools, AI-assisted coding"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_46a2bf10-599"},"title":"Quantitative Developer (KDB & Python) - Central Liquidity Strategies","description":"<p>We are seeking a highly driven, results-oriented Quantitative Developer to join a new and dynamic group tasked with developing our next-generation quantitative research platform.</p>\n<p>Based in New York, the successful candidate will have strong analytical and problem-solving skills, excellent attention to detail, and the ability to explain sophisticated technical concepts clearly and concisely.</p>\n<p>The role requires high autonomy, as much of the senior technical team 
is based in Dublin and the New York hire will be relied upon heavily in-region.</p>\n<p>Principal Responsibilities:</p>\n<ul>\n<li>Contribute to a wide range of projects and deliver quickly and iteratively.</li>\n<li>Write, support, maintain, and test code following best practices, including unit testing, documentation, and automation within standard CI/CD processes.</li>\n<li>Support key datasets (live and historical), ML models, and the supporting infrastructure spanning multiple technologies, languages, and systems.</li>\n<li>Partner with team members to set the overall direction, design, and architecture of the platform; collaborate with key stakeholders across the business.</li>\n</ul>\n<p>Qualifications / Skills Required:</p>\n<ul>\n<li>6+ years of kdb+ and Python experience in a quantitative finance setting, with a proven track record of deploying systems at scale.</li>\n<li>Bachelor’s degree in Mathematics, Computer Science, Financial Engineering, Operations Research, or similar.</li>\n<li>Fluency with enterprise-grade technology used for research and trading analytics; ability to operate independently.</li>\n<li>Strong communication skills and the ability to work effectively in a team environment.</li>\n</ul>\n<p>The estimated base salary range for this position is $160,000 to $250,000, which is specific to New York and may change in the future.</p>","url":"https://yubhub.co/jobs/job_46a2bf10-599","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Electronic Trading Solutions","sameAs":"https://mlp.eightfold.ai","logo":"https://logos.yubhub.co/mlp.eightfold.ai.png"},"x-apply-url":"https://mlp.eightfold.ai/careers/job/755954578859","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$160,000 to 
$250,000","x-skills-required":["kdb+","Python","enterprise-grade technology","research and trading analytics","unit testing","documentation","automation","CI/CD processes","key datasets","ML models","supporting infrastructure"],"x-skills-preferred":["PyKX","C++","cash equities","live analytics","cloud tooling"],"datePosted":"2026-04-18T22:13:50.318Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"New York, New York, United States of America"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Finance","skills":"kdb+, Python, enterprise-grade technology, research and trading analytics, unit testing, documentation, automation, CI/CD processes, key datasets, ML models, supporting infrastructure, PyKX, C++, cash equities, live analytics, cloud tooling","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":160000,"maxValue":250000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_7c16f4e7-af6"},"title":"AI Engineer","description":"<p>We are seeking an experienced AI Engineer to join our core AI engineering team. 
The successful candidate will be responsible for building and maintaining AI products that ingest unstructured contracts, extract key terms into structured data, and provide a front-end with monitoring and controls for day-to-day operations.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Build the core application and workflow agents for Market Data Operations in Python; integrate with AWS and internal systems like the Market Data Warehouse.</li>\n<li>Ingest and understand contracts at scale, using LLMs to extract costs, fee schedules, entitlements, renewal terms, and payment details.</li>\n<li>Connect the dots between contracts, entitlements, invoices, and payments so Ops, Legal, and Finance can see a single &#39;source of truth&#39; and catch issues early.</li>\n<li>Design and tune LLM workflows (prompt engineering, tool/MCP integration, structured outputs) for contract Q&amp;A, summarization, and exception flagging.</li>\n<li>Own monitoring and controls for the AI system: logging, metrics, guardrails, and human-in-the-loop review to keep performance, reliability, and quality high.</li>\n<li>Work directly with stakeholders (Market Data Ops, analysts, Legal, Finance/AP) to understand their workflows and quickly iterate on features that actually get used.</li>\n</ul>\n<p>Required Skills &amp; Experience:</p>\n<ul>\n<li>Bachelor&#39;s degree in Computer Science or a related field.</li>\n<li>5+ years of professional experience with Python, including building production services (Django, Flask, or FastAPI).</li>\n<li>Experience working with unstructured documents (contracts, PDFs, legal docs) and turning them into structured data.</li>\n<li>Experience with prompt engineering and structured JSON outputs.</li>\n<li>Comfort wiring models into real applications (tool/MCP-style integrations, APIs).</li>\n<li>Experience using a cloud platform, ideally AWS.</li>\n<li>Able to define and track quantitative metrics for AI features (accuracy, latency, cost, etc.).</li>\n<li>Strong 
communication skills and comfortable working directly with non-technical users.</li>\n<li>Enjoys a start-up-like environment inside a large firm: small team, high ownership, fast iteration.</li>\n</ul>\n<p>Nice to Have:</p>\n<ul>\n<li>Experience building AI solutions in financial services, especially around market data, vendor management, or legal/contract workflows.</li>\n<li>Familiarity with entitlements/governance and large internal data platforms (e.g., a Market Data Warehouse).</li>\n</ul>\n<p>The estimated base salary range for this position is $175,000 to $250,000, which is specific to New York and may change in the future. Millennium pays a total compensation package which includes a base salary, discretionary performance bonus, and a comprehensive benefits package.</p>","url":"https://yubhub.co/jobs/job_7c16f4e7-af6","directApply":true,"hiringOrganization":{"@type":"Organization","name":"IT - Artificial Intelligence","sameAs":"https://mlp.eightfold.ai","logo":"https://logos.yubhub.co/mlp.eightfold.ai.png"},"x-apply-url":"https://mlp.eightfold.ai/careers/job/755955349680","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$175,000 to $250,000","x-skills-required":["Python","AWS","LLMs","Structured JSON outputs","Cloud platform","Quantitative metrics","Strong communication skills"],"x-skills-preferred":[],"datePosted":"2026-04-18T22:13:47.828Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"New York, New York, United States of America"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, AWS, LLMs, Structured JSON outputs, Cloud platform, Quantitative metrics, Strong communication 
skills","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":175000,"maxValue":250000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_c3b63dd5-0f6"},"title":"Backend Developer (Backend utvecklare)","description":"<p>We are seeking an experienced backend developer to join our tech team. As a backend developer, you will be responsible for designing, developing, and maintaining the server-side of our applications and systems. You will work closely with our frontend developers, designers, and product owners to ensure a seamless integration between frontend and backend.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Design and develop scalable and efficient backend solutions for our digital platforms.</li>\n<li>Write clean, readable, and reusable code.</li>\n<li>Perform unit testing and debugging to ensure high quality and reliability.</li>\n<li>Participate in technical discussions and contribute ideas to improve the product&#39;s performance and functionality.</li>\n<li>Collaborate with frontend developers and other team members to ensure a smooth user experience.</li>\n</ul>\n<p>Qualifications:</p>\n<ul>\n<li>Experience in backend development with a focus on web applications.</li>\n<li>Good knowledge of programming languages such as Python, Java, or similar.</li>\n<li>Experience working with frameworks such as Django, Flask, Spring, or similar.</li>\n<li>Familiarity with database management systems such as MySQL, PostgreSQL, or similar.</li>\n<li>Knowledge of API design and implementation.</li>\n<li>Strong problem-solving skills and ability to work independently as well as in a team.</li>\n</ul>\n<p>Benefits:</p>\n<ul>\n<li>Attractive salary based on experience and competence.</li>\n<li>Opportunity to work with exciting projects and the latest technology.</li>\n<li>Flexible working hours and possibility of remote work.</li>\n<li>Continuous 
professional development and opportunities for career growth.</li>\n</ul>","url":"https://yubhub.co/jobs/job_c3b63dd5-0f6","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Scandinavian Airlines","sameAs":"https://scandinavianairlines.teamtailor.com","logo":"https://logos.yubhub.co/scandinavianairlines.teamtailor.com.png"},"x-apply-url":"https://scandinavianairlines.teamtailor.com/jobs/4882026-backend-utvecklare","x-work-arrangement":"On-site","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["backend development","web applications","Python","Java","Django","Flask","Spring","MySQL","PostgreSQL","API design","problem-solving"],"x-skills-preferred":["cloud services","AWS","Google Cloud","Azure"],"datePosted":"2026-04-18T22:13:45.980Z","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Transportation","skills":"backend development, web applications, Python, Java, Django, Flask, Spring, MySQL, PostgreSQL, API design, problem-solving, cloud services, AWS, Google Cloud, Azure"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_ad717304-da7"},"title":"Intern Data Analytics (all genders)","description":"<p>You will be part of the Business Intelligence department, which consists of the Data Science, Data Analytics, and Data Engineering teams.</p>\n<p>This internship provides a great opportunity to gain hands-on experience in Data Analytics. You will work alongside a team of highly skilled and dedicated professionals who are committed to offering strong mentorship and guidance to help you start your career in the field of data.</p>\n<p>Duration: 6 months. 
Location: Munich, 2-3 office days per week.</p>\n<p><strong>Our Tech Stack</strong></p>\n<ul>\n<li>Database: AWS Stack (Redshift, Athena, Glue, S3).</li>\n<li>Data Pipelines: Airflow, DBT.</li>\n<li>Data Visualization: Looker.</li>\n<li>Data Analytics: SQL, Python.</li>\n<li>Collaboration: Git, Atlassian.</li>\n</ul>\n<p><strong>Your role in this journey</strong></p>\n<p>As a Data Analytics Intern at Holidu, you’ll help our company make smarter, data-driven decisions, while being supported by a Senior Analyst.</p>\n<p>This role goes beyond building dashboards. We want curious, proactive people who want to become data advisors - not only delivering reports, but understanding the business context, which questions their reports answer, and why those questions matter.</p>\n<ul>\n<li>Collect, analyse, and interpret large datasets to help solve real business challenges.</li>\n<li>Build dashboards and reports using tools like SQL, Python, and Looker.</li>\n<li>Collaborate closely with teams such as Product, Marketing, or Finance to help them extract actionable insights from data.</li>\n<li>Build and improve data pipelines using cutting-edge technologies.</li>\n<li>We are an AI-first team. 
Rather than manually executing repetitive tasks, you will use AI to work smarter and automate workflows.</li>\n<li>You’ll collaborate with our Data Scientists and get exposure to:<ul>\n<li>Data preparation and exploratory data analysis.</li>\n<li>How ML models are built, evaluated, and deployed in real life.</li>\n</ul></li>\n</ul>\n<p><strong>Your backpack is filled with</strong></p>\n<ul>\n<li>Currently enrolled in or recently completed a Bachelor’s or Master’s degree in a quantitative field (e.g., Business Analytics, Data Science, Economics, Statistics, Mathematics, Engineering or similar).</li>\n<li>Understanding of SQL and Python, proficiency in Excel/Google Sheets and a desire to learn visualization tools like Looker.</li>\n<li>Knowledge of Machine Learning and Statistical models is a plus.</li>\n<li>Strong analytical and problem-solving skills, and attention to detail.</li>\n<li>Curiosity to learn and a passion for solving data problems.</li>\n<li>Good communication and presentation skills.</li>\n</ul>\n<p><strong>Our adventure includes</strong></p>\n<ul>\n<li>Compensation: Get a fair salary.</li>\n<li>Impact: Make a difference for hundreds of thousands of monthly users.</li>\n<li>Growth: Take responsibility from day one and develop through regular feedback.</li>\n<li>Community: Engage with international, diverse, yet like-minded colleagues through regular events and 2 office days per week with your team.</li>\n<li>Flexibility: Benefit from our hybrid work policy and the chance to work from other local offices for up to 8 weeks a year.</li>\n<li>Fitness: Get an Urban Sports Club corporate subscription or a premium gym membership at a discounted rate.</li>\n</ul>","url":"https://yubhub.co/jobs/job_ad717304-da7","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Holidu Hosts 
GmbH","sameAs":"https://holidu.jobs.personio.com","logo":"https://logos.yubhub.co/holidu.jobs.personio.com.png"},"x-apply-url":"https://holidu.jobs.personio.com/job/2556233","x-work-arrangement":"hybrid","x-experience-level":"intern","x-job-type":"Internship","x-salary-range":null,"x-skills-required":["SQL","Python","Looker","Git","Atlassian","Airflow","DBT","AWS Stack","Redshift","Athena","Glue","S3"],"x-skills-preferred":[],"datePosted":"2026-04-18T22:13:45.423Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Munich, Germany"}},"employmentType":"INTERN","occupationalCategory":"Engineering","industry":"Technology","skills":"SQL, Python, Looker, Git, Atlassian, Airflow, DBT, AWS Stack, Redshift, Athena, Glue, S3"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_cc9213ff-135"},"title":"(Senior) Team Lead Marketing Analytics (all genders)","description":"<p>Within the Marketing Technology department, we are building a new Marketing Analytics team and are looking for a Team Lead to shape it from the ground up.</p>\n<p>You&#39;ll work closely with a wide range of Marketing stakeholders, ensuring they have the data, tools, and insights they need to drive sustainable growth. 
You will also collaborate with data scientists and data engineers within the department to build best-in-class analytical solutions.</p>\n<p><strong>Our Tech Stack</strong></p>\n<ul>\n<li>Database: AWS Stack (Redshift, Athena, Glue, S3).</li>\n<li>Data Pipelines: Airflow, DBT.</li>\n<li>Data Visualization: Looker.</li>\n<li>Data Analytics: SQL, Python.</li>\n<li>Collaboration: Git, Jira, Confluence, Slack.</li>\n</ul>\n<p><strong>Your role in this journey</strong></p>\n<ul>\n<li>You&#39;ll be leading data analysts and collaborating cross-functionally with data engineers and data scientists - fostering collaboration, learning, and analytical excellence.</li>\n<li>Engage with senior marketing leadership on strategic projects, providing insights that influence channel strategy and budget decisions, and ultimately our revenue growth.</li>\n<li>Translate marketing logic for a diverse range of channels (e.g. Performance Marketing, SEO, CRM, affiliate) and use cases into analytical requirements and communicate complex findings clearly to both technical and commercial teams.</li>\n<li>Support and partner with Marketing Technology on tracking, event design, and data flows to ensure data quality and reliable reporting frameworks.</li>\n<li>Not shying away from hands-on work as an individual contributor (50% to start), while leading the team, diving deep into the details when needed.</li>\n<li>Shape the future of marketing analytics at Holidu by recruiting top talent, setting clear goals, and developing your team personally and professionally.</li>\n</ul>\n<p><strong>Your backpack is filled with</strong></p>\n<ul>\n<li>5+ years multi-channel marketing analytics experience in a B2B or B2C organisation where marketing is a core performance driver, with extensive hands-on expertise in at least one of the following: attribution, cost and revenue allocation, or bidding.</li>\n<li>People management experience - this should not be your first leadership role.</li>\n<li>A 
collaborative mindset with clear experience communicating with executive stakeholders and senior decision makers.</li>\n<li>You are mission-driven, with a working backwards mentality (i.e. starting with customer needs) and clear experience managing and delivering complex projects with multiple stakeholders. Ability to translate business goals into analytical solutions and break down complex topics into actionable insights.</li>\n<li>Excellent analytical and technical skills. Concretely: strong in SQL, Python (or similar), data visualisation skills as well as developing technical frameworks to serve a clear business need.</li>\n<li>A strong personal or team focus on AI enablement: you actively use AI tools to enhance your coding, planning, and workflows, and can enable your team to do the same.</li>\n</ul>\n<p><strong>Our adventure includes</strong></p>\n<ul>\n<li>Impact: Shape the future of travel with products used by millions of guests and thousands of hosts. At Holidu ideas become products, data drives decisions, and iteration fuels fast learning. Your work matters - and you’ll see the impact.</li>\n<li>Learning: Grow professionally in a culture that thrives on curiosity and feedback. You’ll learn from outstanding colleagues, collaborate across disciplines, and benefit from mentorship, and personal learning budgets - with a strong focus on AI.</li>\n<li>Great People: Join a team of smart, motivated and international colleagues who challenge and support each other. We celebrate wins and keep our culture fun, ambitious and human. Our customers are guests and hosts - people we can all relate to - making work meaningful and energizing.</li>\n<li>Technology: Work in a modern tech environment. 
You’ll experience the pace of a scale-up combined with the stability of a proven business model, enabling you to build, test, and improve continuously.</li>\n<li>Flexibility: Work a hybrid setup with 50% in-office time for collaboration, and spend up to 8 weeks a year from other inspiring locations. You’ll stay connected through regular events and meet-ups across our almost 30 offices.</li>\n<li>Perks on Top: Of course, we also offer travel benefits, gym discounts, and other perks to keep you energized - but what truly sets us apart is the chance to grow in a dynamic industry, alongside amazing people, while having fun along the way.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_cc9213ff-135","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Holidu Hosts GmbH","sameAs":"https://holidu.jobs.personio.com","logo":"https://logos.yubhub.co/holidu.jobs.personio.com.png"},"x-apply-url":"https://holidu.jobs.personio.com/job/2458940","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"Full-time","x-salary-range":null,"x-skills-required":["AWS Stack","Airflow","DBT","Looker","SQL","Python","Git","Jira","Confluence","Slack"],"x-skills-preferred":[],"datePosted":"2026-04-18T22:13:45.213Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Munich, Germany"}},"employmentType":"FULL_TIME","occupationalCategory":"Marketing","industry":"Technology","skills":"AWS Stack, Airflow, DBT, Looker, SQL, Python, Git, Jira, Confluence, Slack"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_178bbafc-a95"},"title":"Quantitative Trader - Central Liquidity Strategies","description":"<p>We are seeking a Quantitative Trader to join our Central Liquidity Strategies team. 
As a Quantitative Trader, you will lead a wide range of projects involving the design and implementation of strategies to reduce trading costs for delta-one and factor products. You will also monitor and manage risks within company guidelines and risk parameters, including operational, portfolio, financing, and basis risk.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Lead a wide range of projects involving the design and implementation of strategies to reduce trading costs for delta-one and factor products.</li>\n<li>Monitor and manage risks within company guidelines and risk parameters, including operational, portfolio, financing, and basis risk.</li>\n<li>Partner with team members to set the overall direction, design, and architecture of the platform; collaborate with key stakeholders across the business.</li>\n</ul>\n<p>Qualifications:</p>\n<ul>\n<li>6+ years in a trading or execution role.</li>\n<li>Bachelor&#39;s degree in Mathematics, Physics, Finance, Economics, Econometrics, Financial Engineering, Operations Research, or similar.</li>\n<li>Experience with factor modeling, transaction cost analysis (TCA) models, statistical modeling, and portfolio analytics.</li>\n<li>Strong operational and event risk management skills; experience managing systematic strategies; familiarity with equity markets/asset classes.</li>\n<li>Familiarity with ETFs, futures, swaps, and vanilla derivatives is a plus (and can be learned on the desk).</li>\n<li>Self-sufficient programming ability in Python and/or kdb+ for analysis and research, plus Git, Unix/Linux, Bash, etc.</li>\n<li>Strong communication skills and the ability to work effectively in a team environment.</li>\n</ul>\n<p>The estimated base salary range for this position is $160,000 to $250,000, which is specific to New York and may change in the future. 
Millennium pays a total compensation package which includes a base salary, discretionary performance bonus, and a comprehensive benefits package.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_178bbafc-a95","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Synthetic Products Book","sameAs":"https://mlp.eightfold.ai","logo":"https://logos.yubhub.co/mlp.eightfold.ai.png"},"x-apply-url":"https://mlp.eightfold.ai/careers/job/755942806171","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$160,000 to $250,000","x-skills-required":["factor modeling","transaction cost analysis (TCA) models","statistical modeling","portfolio analytics","operational risk management","event risk management","equity markets/asset classes","ETFs","futures","swaps","vanilla derivatives","Python","kdb+","Git","Unix/Linux","Bash"],"x-skills-preferred":[],"datePosted":"2026-04-18T22:13:42.537Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"New York, New York, United States of America"}},"employmentType":"FULL_TIME","occupationalCategory":"Finance","industry":"Finance","skills":"factor modeling, transaction cost analysis (TCA) models, statistical modeling, portfolio analytics, operational risk management, event risk management, equity markets/asset classes, ETFs, futures, swaps, vanilla derivatives, Python, kdb+, Git, Unix/Linux, Bash","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":160000,"maxValue":250000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_32932504-2b5"},"title":"Systematic Production Support Engineer","description":"<p>We are looking for an experienced professional to help us scale our systematic 
operations and support engineering capabilities.</p>\n<p>This role directly supports portfolio management teams across Millennium, with operational excellence at the core. Our efforts are focused on delivering the highest quality returns to our investors – providing a world-class and reliable trading and technology platform is essential to this mission.</p>\n<p>This is a unique opportunity to drive significant value creation for one of the world&#39;s leading investment managers.</p>\n<p>Principal Responsibilities:</p>\n<ul>\n<li>Build, develop, and maintain a reliable, scalable, and integrated platform for trading strategy monitoring, reporting, and operations.</li>\n<li>Work with portfolio managers and other internal customers to reduce operational risk through:</li>\n<li>Implementation of monitoring, reporting, and trade workflow solutions.</li>\n<li>Implementation of automated systems and processes focused on trading and operations.</li>\n<li>Streamlining development and deployment processes.</li>\n<li>Implementation of MCP servers focused on assisting the rest of the Support Engineering team, as well as proactively monitoring the production environment.</li>\n</ul>\n<p>Technical Qualifications:</p>\n<ul>\n<li>5+ years of development experience in Python.</li>\n<li>Experience working in a Linux / Unix environment.</li>\n<li>Experience working with PostgreSQL or other relational databases.</li>\n<li>Ability to understand and discuss requirements from portfolio managers.</li>\n</ul>\n<p>Preferred Skills and Experience:</p>\n<ul>\n<li>Understanding of NLP, supervised/unsupervised learning, and Generative AI models.</li>\n<li>Experience operating and monitoring low-latency trading environments.</li>\n<li>Familiarity with quantitative finance and electronic trading concepts.</li>\n<li>Familiarity with financial data.</li>\n<li>Broad understanding of equities, futures, FX, or other financial instruments.</li>\n<li>Experience designing and developing distributed systems with a 
focus on backend development in C/C++, Java, Scala, Go, or C#.</li>\n<li>Experience with Apache / Confluent Kafka.</li>\n<li>Experience automating SDLC pipelines (e.g., Jenkins, TeamCity, or AWS CodePipeline).</li>\n<li>Experience with containerization and orchestration technologies.</li>\n<li>Experience building and deploying systems that utilize services provided by AWS, GCP or Azure.</li>\n<li>Contributions to open-source projects.</li>\n</ul>\n<p>The estimated base salary range for this position is $100,000 to $175,000, which is specific to New York and may change in the future. Millennium pays a total compensation package which includes a base salary, discretionary performance bonus, and a comprehensive benefits package. When finalizing an offer, we take into consideration an individual&#39;s experience level and the qualifications they bring to the role to formulate a competitive total compensation package.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_32932504-2b5","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Equity IT","sameAs":"https://mlp.eightfold.ai","logo":"https://logos.yubhub.co/mlp.eightfold.ai.png"},"x-apply-url":"https://mlp.eightfold.ai/careers/job/755954627501","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$100,000 to $175,000","x-skills-required":["Python","Linux / Unix","PostgreSQL","NLP","supervised/non-supervised learning","Generative AI models"],"x-skills-preferred":["Apache / Confluent Kafka","C/C++","Java","Scala","Go","C#","containerization","orchestration technologies","AWS","GCP","Azure"],"datePosted":"2026-04-18T22:13:42.254Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"New York, New York, United States of America · Old Greenwich, Connecticut, United States of 
America"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Finance","skills":"Python, Linux / Unix, PostgreSQL, NLP, supervised/non-supervised learning, Generative AI models, Apache / Confluent Kafka, C/C++, Java, Scala, Go, C#, containerization, orchestration technologies, AWS, GCP, Azure","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":100000,"maxValue":175000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_64bb6566-575"},"title":"Senior ‘Developer Infrastructure’ Engineer","description":"<p>The GALAXY Platform Execution &amp; Exchange Data (SPEED) Team is a core part of Millennium&#39;s technology organisation, powering the firm&#39;s lowest-latency solutions for systematic and high-frequency trading.</p>\n<p>SPEED delivers the live trading and market-data platforms used by portfolio managers and risk systems, including Latency Critical Trading (LCT), DMA OMS (Client Direct), DMA market data feeds, packet capture (PCAPs), enterprise market data, and intraday data services across latency tiers from sub-100 nanoseconds to millisecond-sensitive workflows.</p>\n<p>As a Senior Developer Infrastructure Engineer on SPEED, you will own and evolve the build and CI/CD infrastructure that underpins these mission-critical systems.</p>\n<p>By designing scalable build pipelines, shared tooling, and reliable release workflows, you will directly enhance developer productivity and enable fast, safe iteration on some of the firm&#39;s most performance-sensitive code.</p>\n<p>This role offers the opportunity to shape core engineering practices while contributing to platforms that are central to Millennium&#39;s trading edge.</p>\n<p>Principal Responsibilities</p>\n<ul>\n<li>Design, build, and maintain a highly scalable, parallel, and cached build system for a large, performance-sensitive 
codebase.</li>\n<li>Own and continually optimise CI/CD pipelines to minimise build/test times, reduce flakiness, and improve developer productivity.</li>\n<li>Operate with an AI-first mindset across the SDLC, using automation by default to streamline build, test, and release workflows.</li>\n<li>Integrate and operationalise AI tools (e.g., copilots, workflow automation, AI-driven analytics) to eliminate manual toil, accelerate development, and codify reusable AI-enabled patterns for the broader engineering organisation.</li>\n<li>Design and operate containerised environments (e.g., Docker, Kubernetes) to maximise utilisation, reliability, and scalability across environments.</li>\n<li>Implement and manage artifact storage, dependency management, and versioning strategies for large, distributed systems.</li>\n<li>Develop and maintain shared libraries, CLIs, scripts, and internal platforms that reduce friction and enable self-service for engineers.</li>\n<li>Build and enhance test suites and environment provisioning, leveraging AI and automation where appropriate for smarter checks, triage, and observability.</li>\n<li>Monitor, instrument, and improve the reliability, observability, and performance of build and CI/CD systems using metrics, dashboards, and alerting.</li>\n<li>Partner with trading and engineering teams to understand requirements, remove friction, and champion best practices for building, testing, and releasing software.</li>\n</ul>\n<p>Qualifications/Skills Required</p>\n<ul>\n<li>5+ years of software engineering or DevInfra/Platform/DevOps experience, with significant focus on building systems and CI/CD.</li>\n<li>Strong programming skills in one or more languages (e.g., Python, Rust, Go, C++) for automation and tooling.</li>\n<li>Hands-on experience with at least one modern build system (e.g., Bazel, Buck2).</li>\n<li>Solid understanding of source control (Git), branching strategies, and release management.</li>\n<li>Experience with monorepos is a plus.</li>\n<li>Experience scaling build and test infrastructure for growing codebases and teams (parallelization, test sharding, remote execution, caching).</li>\n<li>Experience designing or participating in processes, systems, or playbooks that leverage AI to streamline work rather than needing to add more headcount to the team.</li>\n<li>Familiarity with containers and cloud infrastructure (Docker, Kubernetes, and major cloud providers such as AWS/GCP/Azure).</li>\n<li>Strong communication and collaboration skills; comfortable partnering with multiple teams and driving cross-cutting initiatives.</li>\n</ul>\n<p>The estimated base salary range for this position is $175,000 to $250,000, which is specific to New York and may change in the future. Millennium pays a total compensation package which includes a base salary, discretionary performance bonus, and a comprehensive benefits package. 
When finalising an offer, we take into consideration an individual&#39;s experience level and the qualifications they bring to the role to formulate a competitive total compensation package.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_64bb6566-575","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Unknown","sameAs":"https://mlp.eightfold.ai","logo":"https://logos.yubhub.co/mlp.eightfold.ai.png"},"x-apply-url":"https://mlp.eightfold.ai/careers/job/755954695574","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$175,000 to $250,000","x-skills-required":["Python","Rust","Go","C++","Bazel","Buck2","Git","Kubernetes","Docker","AWS","GCP","Azure"],"x-skills-preferred":[],"datePosted":"2026-04-18T22:13:29.006Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"New York, New York, United States of America"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, Rust, Go, C++, Bazel, Buck2, Git, Kubernetes, Docker, AWS, GCP, Azure","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":175000,"maxValue":250000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_6690b2fa-cab"},"title":"(Senior) Team Lead Data Analytics (all genders)","description":"<p>At Holidu, data isn&#39;t just a support function, it&#39;s how we make decisions. 
The Analytics team builds the products and foundations that keep the whole organisation sharp, from day-to-day operations to long-term strategy.</p>\n<p>This role is based in Munich, with two office days per week.</p>\n<p>As a Senior Team Lead Data Analytics, you will lead one of Holidu&#39;s core analytics teams, a function at the intersection of data, strategy, and real business impact. The role comes with four direct reports and entails collaborating cross-functionally with data engineers and data scientists.</p>\n<p>Engage with senior leadership on strategic projects, providing insights that influence product strategy, internal operations, and revenue growth.</p>\n<p>You and your team will support a range of stakeholders across the company (e.g. Customer Support, Host Experience, Sales and Account Management).</p>\n<p>As a member of the BI leadership team, you will help shape the department strategy and the future of AI-powered data products.</p>\n<p>Understand problems and identify opportunities across a diverse range of stakeholder use cases, translating them into analytical requirements and communicating complex findings clearly to both technical and commercial audiences.</p>\n<p>Lead from the front: this role carries meaningful individual contributor responsibility. You&#39;ll be expected to do real analytical work, diving deep into the data, building solutions, and setting the bar for quality in your team.</p>\n<p>Shape the future of analytics at Holidu by recruiting top talent, setting clear goals, and developing your team personally and professionally.</p>\n<p>The ideal candidate will have 5+ years of data analytics experience, people management experience, a collaborative mindset, a mission-driven mentality, excellent analytical and technical skills, and a genuine commitment to AI enablement.</p>\n<p>Impact: Shape the future of travel with products used by millions of guests and thousands of hosts. 
At Holidu ideas become products, data drives decisions, and iteration fuels fast learning. Your work matters - and you’ll see the impact.</p>\n<p>Learning: Grow professionally in a culture that thrives on curiosity and feedback. You’ll learn from outstanding colleagues, collaborate across disciplines, and benefit from mentorship, and personal learning budgets - with a strong focus on AI.</p>\n<p>Great People: Join a team of smart, motivated and international colleagues who challenge and support each other. We celebrate wins and keep our culture fun, ambitious and human. Our customers are guests and hosts - people we can all relate to - making work meaningful and energizing.</p>\n<p>Technology: Work in a modern tech environment. You’ll experience the pace of a scale-up combined with the stability of a proven business model, enabling you to build, test, and improve continuously.</p>\n<p>Flexibility: Work a hybrid setup with 50% in-office time for collaboration, and spend up to 8 weeks a year from other inspiring locations. 
You’ll stay connected through regular events and meet-ups across our almost 30 offices.</p>\n<p>Perks on Top: Of course, we also offer travel benefits, gym discounts, and other perks to keep you energized - but what truly sets us apart is the chance to grow in a dynamic industry, alongside amazing people, while having fun along the way.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_6690b2fa-cab","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Holidu Hosts GmbH","sameAs":"https://holidu.jobs.personio.com","logo":"https://logos.yubhub.co/holidu.jobs.personio.com.png"},"x-apply-url":"https://holidu.jobs.personio.com/job/2598226","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"Full-time","x-salary-range":null,"x-skills-required":["Database: AWS Stack (Redshift, Athena, Glue, S3)","Data Pipelines: Airflow, dbt","Data Visualisation: Looker","Data Analytics: SQL, Python","Collaboration: Git, Jira, Confluence, Slack"],"x-skills-preferred":[],"datePosted":"2026-04-18T22:13:28.264Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Munich, Germany"}},"employmentType":"FULL_TIME","occupationalCategory":"Technology","industry":"Travel Technology","skills":"Database: AWS Stack (Redshift, Athena, Glue, S3), Data Pipelines: Airflow, dbt, Data Visualisation: Looker, Data Analytics: SQL, Python, Collaboration: Git, Jira, Confluence, Slack"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_566c8778-7f9"},"title":"Quantitative Developer (Python) -  Central Liquidity Strategies","description":"<p>We are seeking a highly driven, results-oriented Senior Quantitative Developer to join a dynamic group tasked with developing our next-generation alpha research pipeline, encompassing data ingestion to model 
evaluation and reporting.</p>\n<p>The successful candidate will be expected to:</p>\n<ul>\n<li>Help design and contribute to the alpha research platform</li>\n<li>Support, maintain, and test their own code following best practices, including unit testing, regression testing, documentation, and automation within typical CI processes</li>\n<li>Provide leadership and vision to help determine the overall direction, design, and architecture of the alpha research pipeline</li>\n<li>Mentor junior resources</li>\n<li>Regularly interact with quantitative researchers and other stakeholders, and prioritise and implement features</li>\n</ul>\n<p>The ideal candidate will have:</p>\n<ul>\n<li>5+ years of Python experience in a quantitative finance setting</li>\n<li>Familiarity with linear models and basic statistics for creating model evaluation and reporting workflows</li>\n<li>Familiarity with the Python data science ecosystem, including dashboarding and popular ML libraries such as Plotly, Altair, JAX, TensorFlow, and PyTorch</li>\n<li>Prior experience building alpha research or machine learning pipelines</li>\n<li>Highly analytical with strong problem-solving skills and attention to detail</li>\n<li>Strong communication skills, with the ability to explain technical and sophisticated concepts clearly and concisely</li>\n<li>Ability to tune and debug runtime performance of data applications</li>\n<li>Familiarity with C++/Rust/CUDA to debug and profile underlying native code in ML libraries (Nice to have)</li>\n</ul>\n<p>The estimated base salary range for this position is $160,000 to $250,000, which is specific to New York and may change in the future.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_566c8778-7f9","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Central Execution 
Book","sameAs":"https://mlp.eightfold.ai","logo":"https://logos.yubhub.co/mlp.eightfold.ai.png"},"x-apply-url":"https://mlp.eightfold.ai/careers/job/755954183338","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$160,000 to $250,000","x-skills-required":["Python","linear models","basic statistics","Plotly","Altair","JAX","TensorFlow","PyTorch","C++/Rust/CUDA"],"x-skills-preferred":[],"datePosted":"2026-04-18T22:13:25.204Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"New York, New York, United States of America"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Finance","skills":"Python, linear models, basic statistics, Plotly, Altair, JAX, TensorFlow, PyTorch, C++/Rust/CUDA","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":160000,"maxValue":250000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_c2995faa-123"},"title":"Software Engineer – Equity Derivatives Pricing & Risk System","description":"<p>We are seeking a highly skilled Java Developer with a strong background in Equity Derivatives to join our team in London.</p>\n<p>In this role, you will play a pivotal part in building and enhancing Equity Volatility Risk and P&amp;L system that supports our Equity Volatility Managers.</p>\n<p>This is an exciting opportunity to work in a fast-paced hedge fund environment, where your contributions will directly impact trading performance and risk management capabilities.</p>\n<p>The ideal candidate will bring a combination of technical expertise and business domain knowledge for developing robust, scalable systems.</p>\n<p>Principal Responsibilities:</p>\n<ul>\n<li>Design, develop, and implement a robust risk system for Equity Volatility trading strategies.</li>\n<li>Build and maintain scalable, 
high-performance server-side application using Java and Spring Boot frameworks.</li>\n<li>Build and integrate exotic pricing models to handle pricing and lifecycle of the product.</li>\n<li>Provide level-3 support, troubleshooting, and performance tuning for production systems.</li>\n<li>Proactively address system bottlenecks and implement solutions to ensure the platform remains robust.</li>\n<li>Conduct code reviews and implement automated testing to ensure the reliability and quality of the system.</li>\n<li>Write clean, maintainable, and testable code, adhering to best practices in software engineering.</li>\n</ul>\n<p>Qualifications/Skills Required:</p>\n<ul>\n<li>Proficiency in Java development with experience in building scalable, high-performance systems.</li>\n<li>Strong knowledge of Spring Boot and its ecosystem for developing microservices.</li>\n<li>Experience with Python for scripting and automation.</li>\n<li>Experience in distributed caching technologies (e.g. Ignite, or similar).</li>\n<li>Familiarity with containerization technologies (e.g. Podman, Kubernetes) and cloud computing platforms (e.g. AWS).</li>\n<li>Solid understanding of software development best practices, including version control (e.g. 
Git), CI/CD pipelines, and automated testing frameworks.</li>\n<li>Previous experience working with Equity Derivatives in a sell-side or buy-side firm.</li>\n<li>Strong understanding of equity derivative products such as options and futures.</li>\n<li>Some understanding of structured products in terms of pricing, lifecycle, and risk characteristics.</li>\n<li>Strong problem-solving skills and the ability to work effectively in a fast-paced, high-pressure environment.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_c2995faa-123","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Equity IT","sameAs":"https://mlp.eightfold.ai","logo":"https://logos.yubhub.co/mlp.eightfold.ai.png"},"x-apply-url":"https://mlp.eightfold.ai/careers/job/755955392398","x-work-arrangement":"onsite","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Java","Spring Boot","Python","Distributed caching technologies","Containerization technologies","Cloud computing platforms","Version control","CI/CD pipelines","Automated testing frameworks","Equity Derivatives"],"x-skills-preferred":[],"datePosted":"2026-04-18T22:13:24.304Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"London, United Kingdom"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Finance","skills":"Java, Spring Boot, Python, Distributed caching technologies, Containerization technologies, Cloud computing platforms, Version control, CI/CD pipelines, Automated testing frameworks, Equity Derivatives"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_d6e7c226-e8c"},"title":"Technical Lead, MFT MDE Analytics Engineering","description":"<p>The SPEED Market Data team at Equity IT is seeking a hands-on 
Technical Lead to own and drive a critical workstream focused on architecting, implementing, monitoring, and supporting low-latency C++ systems. As a Technical Lead, you will shape the future of the industry by working alongside exceptional engineers and strategists to solve significant engineering problems.</p>\n<p>We are looking for a strong technical leader with financial markets technology experience and real-time market data expertise to design, build, and support our global real-time market data platform. This role emphasizes technical leadership, architectural ownership, and cross-team coordination rather than people management.</p>\n<p>Principal Responsibilities:</p>\n<ul>\n<li>Act as the technical owner for a major market data workstream, setting technical direction, defining architecture, and driving execution across the full lifecycle.</li>\n<li>Collaborate with hardware and software teams across divisions to design and build real-time market data processing and distribution systems.</li>\n<li>Lead and drive new technical initiatives for the team, including evaluating technologies, defining standards, and establishing best practices.</li>\n<li>Design and develop systems, interfaces, and tools for historical market data and trading simulations that increase research productivity.</li>\n<li>Architect and implement components of an enterprise market data platform, including components for caching, aggregation, conflation and value-added data enrichment.</li>\n<li>Optimise platform performance using network and systems programming, and advanced low-latency techniques (CPU, NIC, kernel, and application-level tuning).</li>\n<li>Lead the design and maintenance of automated test and benchmark frameworks, and tools for risk management, performance tracking, and system validation.</li>\n<li>Provide technical leadership for the support and operation of both enterprise real-time market data environments, including coordinating internal, vendor, and exchange-driven 
changes.</li>\n<li>Design and engineer components to automate support and management of the market data platform, including monitoring, real-time and historical metrics collection/visualisation, and self-service administrative/user tools.</li>\n<li>Serve as a primary technical liaison for users of the market data environment (Portfolio Managers, trading desks, and core technology teams), translating requirements into robust technical solutions.</li>\n<li>Lead the enhancement of processes and workflows for operating the market data platform (release/deployment, incident management and remediation, exchange notification handling, defining and enforcing SLAs).</li>\n<li>Mentor and influence other engineers through code reviews, design reviews, and hands-on guidance, fostering a culture of technical excellence and accountability.</li>\n</ul>\n<p>Qualifications / Skills Required:</p>\n<ul>\n<li>Degree in Computer Science or a related field with a strong background in data structures, algorithms, and object-oriented programming in modern C++.</li>\n<li>Deep understanding of Linux system internals and networking, especially in low-latency and high-throughput environments.</li>\n<li>Strong knowledge of CPU architecture and the ability to leverage CPU capabilities for performance optimisation.</li>\n<li>Demonstrated experience acting as a technical lead or senior engineer owning complex systems or workstreams end-to-end (design, delivery, and operations).</li>\n<li>Able to prioritise and make trade-offs in a fast-moving, high-pressure, constantly changing environment; strong sense of urgency, ownership, and follow-through.</li>\n<li>Strong belief in and practice of extreme ownership, with a track record of taking accountability for systems in production.</li>\n<li>Effective communication and stakeholder management skills: able to work closely with business and technology users, understand their needs, and drive appropriate technical solutions.</li>\n<li>Experience building 
solutions on cloud environments such as GCP and AWS.</li>\n<li>Knowledge of additional programming languages such as Java, Python, or scripting (Perl, shell).</li>\n<li>Technical background in application development on complex market data systems (e.g., Bloomberg, Thomson Reuters).</li>\n<li>Experience supporting market data environments within a global organisation, including internally developed DMA feed handlers and distribution infrastructure.</li>\n<li>Strong understanding of market data concepts and functionality, including data models (fields/messages), protocols (e.g., snapshot + delta), order book representations (L1/L2/L3), recovery, and reliability.</li>\n<li>Hands-on Site Reliability Engineering or DevOps experience, including system administration, automation, measurement, and release/deployment management.</li>\n<li>Experience with monitoring, metrics, and command/control tooling for distributed market data platforms, with the ability to evaluate existing solutions and drive enhancements across development and operations.</li>\n<li>Ability to operate with a high level of thoroughness and attention to detail, demonstrating strong ownership of deliverables and production systems.</li>\n</ul>\n<p>Millennium offers a total compensation package that includes a base salary, discretionary performance bonus, and a comprehensive benefits package. The estimated base salary range for this position is $175,000 to $250,000, which is specific to New York and may change in the future. 
When finalising an offer, we take into consideration an individual&#39;s experience level and the qualifications they bring to the role to formulate a competitive total compensation package.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_d6e7c226-e8c","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Equity IT","sameAs":"https://mlp.eightfold.ai","logo":"https://logos.yubhub.co/mlp.eightfold.ai.png"},"x-apply-url":"https://mlp.eightfold.ai/careers/job/755954905529","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$175,000 to $250,000","x-skills-required":["C++","Linux system internals","Networking","CPU architecture","Object-oriented programming","Cloud environments","Java","Python","Scripting","Market data systems","Site Reliability Engineering","DevOps","Monitoring","Metrics","Command/control tooling"],"x-skills-preferred":[],"datePosted":"2026-04-18T22:13:18.645Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"New York, New York, United States of America"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Finance","skills":"C++, Linux system internals, Networking, CPU architecture, Object-oriented programming, Cloud environments, Java, Python, Scripting, Market data systems, Site Reliability Engineering, DevOps, Monitoring, Metrics, Command/control tooling","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":175000,"maxValue":250000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_326f90c8-11f"},"title":"Senior High Frequency C++ Engineer","description":"<p>The Systematic Platform Execution &amp; Exchange Data (SPEED) Team is at the core of our 
organisation, powering our lowest-latency solutions for systematic and high-frequency trading. We deliver the live trading and market-data platforms used by portfolio managers and risk systems, including Latency Critical Trading (LCT), DMA OMS (Client Direct), DMA market data feeds, packet capture (PCAPs), enterprise market data, and intraday data services across latency tiers from sub-100 nanoseconds to millisecond-sensitive workflows.</p>\n<p>As a Senior HFT Developer on SPEED, you will design and build core low-latency components for order entry, market data, exchange simulation, feature extraction, and strategy containers, initially focused on delivering the full set of capabilities required for trading and research infrastructure. You will collaborate closely with system architects and quantitative researchers, operate and optimise these systems in production, and have clear opportunities to grow into technical and team leadership as the effort scales.</p>\n<p>Principal Responsibilities:</p>\n<ul>\n<li>Build low-latency infrastructure for order entry, market data, exchange simulators, feature extraction, strategy containers, and other systems.</li>\n<li>Build convenience-layer tools and services to facilitate trading teams&#39; onboarding at MLP.</li>\n<li>Provide level 2 support for the systems in production.</li>\n<li>Work closely with the SPEED architect, quantitative researchers, and the business to provide high-ROI solutions that are aligned with both the business and the platform strategy.</li>\n<li>Opportunities for growth in terms of leadership as effort expands.</li>\n<li>Will liaise with many other MLP teams depending on project focus.</li>\n</ul>\n<p>Qualifications/Skills Required:</p>\n<ul>\n<li>5+ years with a well-regarded HFT group, delivering production-grade, low-latency systems.</li>\n<li>Demonstrated expertise in C++ and Python for production, low-latency systems.</li>\n<li>Deep familiarity with low-level systems: OS tuning, networking stack, 
user-space drivers, and kernel-bypass patterns.</li>\n<li>Strong understanding of the HFT quantitative research pipeline.</li>\n<li>Experience with HPC grids (scheduling, storage, job management) for research and production workloads.</li>\n<li>Cloud experience (AWS, GCP) is a plus.</li>\n<li>Proven ability to navigate large organisations, create cross-team synergies, and influence outcomes.</li>\n<li>High accountability and ownership; able to self-manage time, set priorities, and meet deadlines.</li>\n<li>Potential to provide technical leadership and manage a small team.</li>\n</ul>\n<p>The estimated base salary range for this position is $175,000 to $250,000, which is specific to New York and may change in the future. We offer a total compensation package that includes a base salary, discretionary performance bonus, and a comprehensive benefits package.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_326f90c8-11f","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Unknown","sameAs":"https://mlp.eightfold.ai","logo":"https://logos.yubhub.co/mlp.eightfold.ai.png"},"x-apply-url":"https://mlp.eightfold.ai/careers/job/755954694645","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$175,000 to $250,000","x-skills-required":["C++","Python","low-level Systems","OS tuning","networking stack","user-space drivers","kernel-bypass patterns","HFT quantitative research pipeline","HPC grids","scheduling","storage","job management"],"x-skills-preferred":[],"datePosted":"2026-04-18T22:13:18.115Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"New York, New York, United States of America"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"C++, Python, low-level Systems, OS tuning, networking stack, 
user-space drivers, kernel-bypass patterns, HFT quantitative research pipeline, HPC grids, scheduling, storage, job management","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":175000,"maxValue":250000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_107cbb3f-b6c"},"title":"Production Support Engineer","description":"<p>The Production Support Engineer role is a hands-on, business-facing position that requires understanding how applications support the business, investigating functional and data-related issues, and communicating clearly with users under pressure.</p>\n<p>The Core Technology Production Support team supports a suite of business-critical financial applications used by Middle Office, Operations, Treasury, and Trading. These platforms are central to the firm&#39;s PnL, risk, cash, trade processing, and regulatory reporting workflows.</p>\n<p>Principal Responsibilities:</p>\n<ul>\n<li>End to end ownership of the production environment</li>\n<li>Infrastructure management</li>\n<li>Release planning and deployment</li>\n<li>Incident and problem management, including root cause analysis</li>\n<li>Capacity Planning / BCP Testing</li>\n<li>Build strong relationships with development and end-users/clients</li>\n<li>Foster the DevOps culture</li>\n<li>Focus on client service and delivery</li>\n<li>Become the go-to person for your area of responsibility</li>\n<li>Build subject matter expertise</li>\n<li>Create and maintain high quality documentation and runbooks</li>\n<li>Cross train other Support team members</li>\n</ul>\n<p>Qualifications/Skills Required:</p>\n<ul>\n<li>Bachelor’s degree in Computer Science, Electrical Engineering, or a related field.</li>\n<li>Minimum 2+ years’ experience supporting an enterprise environment</li>\n<li>Must have previous experience supporting business facing 
applications</li>\n<li>Strong scripting skills in one of the following: Python (preferred), PowerShell, Perl, etc.</li>\n<li>Excellent SQL skills and knowledge of various database systems</li>\n<li>Must be able to run and understand complex queries</li>\n<li>Ability to support both Windows and Unix/Linux environments</li>\n</ul>\n<p>Preferred Skills:</p>\n<ul>\n<li>Experience working in a trading environment</li>\n<li>Exposure to the following:</li>\n</ul>\n<ul>\n<li>CI/CD (Jenkins/Octopus/Artifactory)</li>\n<li>Metrics/KPIs (Datadog/Influx/Tableau)</li>\n<li>Kafka</li>\n<li>Kubernetes</li>\n<li>AI (MCP/Agents)</li>\n</ul>\n<p>The estimated base salary range for this position is $100,000 to $175,000, which is specific to New York and may change in the future.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_107cbb3f-b6c","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Equity IT","sameAs":"https://mlp.eightfold.ai","logo":"https://logos.yubhub.co/mlp.eightfold.ai.png"},"x-apply-url":"https://mlp.eightfold.ai/careers/job/755943534669","x-work-arrangement":"onsite","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":"$100,000 to $175,000","x-skills-required":["Python","PowerShell","Perl","SQL","Windows","Unix/Linux"],"x-skills-preferred":["CI/CD","Metrics/KPIs","Kafka","Kubernetes","AI"],"datePosted":"2026-04-18T22:13:14.556Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"New York, New York, United States of America"}},"employmentType":"FULL_TIME","occupationalCategory":"IT","industry":"Finance","skills":"Python, PowerShell, Perl, SQL, Windows, Unix/Linux, CI/CD, Metrics/KPIs, Kafka, Kubernetes, 
AI","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":100000,"maxValue":175000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_c7e58f60-5fa"},"title":"Software Engineer - Learning Engineering and Data (LEaD) Program","description":"<p>As a member of our Miami-based Learning Engineering and Data (LEaD) program, you will work alongside technology mentors and leaders to develop and maintain applications and tools spanning front-office, middle-office, and back-office functions in a dynamic and fast-paced environment.</p>\n<p>Our technology teams are looking for Software Engineers with C++, Python, or Java to design, implement, and maintain systems supporting our technology business functions.</p>\n<p>The candidate is expected to:</p>\n<ul>\n<li>Work closely with technology teams to develop requirements and specifications for varying projects</li>\n<li>Take part in the development and enhancement of the backend distributed system</li>\n<li>Apply AI/ML (deep learning, natural language processing, large language models) to practical and comprehensive technology solutions</li>\n</ul>\n<p>Qualifications/Skills Required:</p>\n<ul>\n<li>2-5 years of experience working with C++, Python, or Java</li>\n<li>Experience with ML libraries, Pandas, NumPy, FastAPI (Python), Boost (C++), Spring Boot (Java)</li>\n<li>Must be comfortable working in both Unix/Linux and Windows environments</li>\n<li>Good understanding of various design patterns</li>\n<li>Strong analytical and mathematical skills along with an interest/ability to quickly learn additional languages and quantitative concepts</li>\n<li>Solid communication skills</li>\n<li>Able to work collaboratively in a fast-paced environment with a passion for solving complex problems</li>\n<li>Detail-oriented, organized, demonstrating thoroughness and strong ownership of 
work</li>\n</ul>\n<p>Desirable Skills/Knowledge:</p>\n<ul>\n<li>Bachelor&#39;s or Master&#39;s degree in Computer Science, Applied Mathematics, Statistics, Data Science/ML/AI, or a related technical or engineering field</li>\n<li>Demonstrable passion for developing LLM-powered products, whether through commercial experience or open source/academic projects you have worked on in your own time</li>\n<li>Hands-on experience building ML and data pipeline architectures</li>\n<li>Understanding of distributed messaging systems</li>\n<li>Experience with Docker/Kubernetes, microservices architecture in a cloud environment (AWS, GCP preferred)</li>\n<li>Experience with relational and non-relational database platforms</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_c7e58f60-5fa","directApply":true,"hiringOrganization":{"@type":"Organization","name":"IT LEad Program","sameAs":"https://mlp.eightfold.ai","logo":"https://logos.yubhub.co/mlp.eightfold.ai.png"},"x-apply-url":"https://mlp.eightfold.ai/careers/job/755953879362","x-work-arrangement":"onsite","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["C++","Python","Java","ML libraries","Pandas","NumPy","FastAPI","Boost","Spring Boot"],"x-skills-preferred":["Bachelor or Master's degree in Computer Science, Applied Mathematics, Statistics, Data Science/ML/AI, or a related technical or engineering field","Demonstrable passion for developing LLM-powered products","Hands-on experience building ML and data pipeline architectures","Understanding of distributed messaging systems","Experience with Docker/Kubernetes, microservices architecture in a cloud environment (AWS, GCP preferred)","Experience with relational and non-relational database 
platforms"],"datePosted":"2026-04-18T22:13:11.242Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Miami, Florida, United States of America"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Finance","skills":"C++, Python, Java, ML libraries, Pandas, NumPy, FastAPI, Boost, Spring Boot, Bachelor or Master's degree in Computer Science, Applied Mathematics, Statistics, Data Science/ML/AI, or a related technical or engineering field, Demonstrable passion for developing LLM-powered products, Hands-on experience building ML and data pipeline architectures, Understanding of distributed messaging systems, Experience with Docker/Kubernetes, microservices architecture in a cloud environment (AWS, GCP preferred), Experience with relational and non-relational database platforms"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_25fd58ed-3c0"},"title":"(Senior) Data Scientist (all genders)","description":"<p>You will be part of the Business Intelligence department, which consists of the Data Science, Data Analytics, and Data Engineering teams.</p>\n<p>As a Senior Data Scientist, you will work on various topics such as rankings, recommendations, user segmentation, user lifetime value, business forecasts, etc. You will have access to our huge dataset and work in collaboration with stakeholders from various departments.</p>\n<p>Your objective is to build the best internal and external products for our customers. 
Holidu highly values a diverse and open environment with people from all over the world.</p>\n<p>This role is based in Munich with a hybrid setup.</p>\n<p><strong>Our Tech Stack</strong></p>\n<ul>\n<li>Flexible data science environment (Python, Sagemaker)</li>\n<li>Database: AWS Stack (Redshift, Athena, Glue, S3).</li>\n<li>Data Pipelines: Airflow, DBT.</li>\n<li>Data Visualization: Looker.</li>\n<li>Data Analytics: SQL, Python.</li>\n<li>Collaboration: Git.</li>\n</ul>\n<p><strong>Your role in this journey</strong></p>\n<p>You will play a pivotal role in the Business Intelligence team alongside data scientists, analysts, and engineers. Together, you will lead the development and enhancement of our company-wide machine learning strategy.</p>\n<ul>\n<li>Collaborate across various business departments to identify opportunities and solve critical business challenges using data science solutions.</li>\n<li>Build and optimize predictive models such as booking cancellation forecasts, churn predictions, pricing optimization, revenue forecasting and marketing channel allocation.</li>\n<li>Take models from conception to production, continuously monitor their performance, and iterate to enhance accuracy and efficiency.</li>\n<li>Interface with diverse business stakeholders, ensuring alignment between data science initiatives and company goals.</li>\n<li>Demonstrate leadership in data science projects, leveraging your expertise to drive measurable business impact.</li>\n</ul>\n<p><strong>Your backpack is filled with</strong></p>\n<ul>\n<li>3+ years of experience as a Data Scientist, with a proven track record of applying data science methodologies to solve complex business problems.</li>\n<li>A degree in Machine Learning, Computer Science, Mathematics, Physics, or a related field.</li>\n<li>Expertise in statistics, predictive analytics, machine learning techniques, and proficiency in tools like Python and SQL.</li>\n<li>Experience with Airflow and dbt is a 
plus.</li>\n<li>Strong understanding of business operations and experience collaborating with diverse stakeholders.</li>\n<li>Enthusiasm for data science and a drive to deliver world-class products that make a difference.</li>\n</ul>\n<p><strong>Our adventure includes</strong></p>\n<ul>\n<li>Impact: Shape the future of travel with products used by millions of guests and thousands of hosts.</li>\n<li>Learning: Grow professionally in a culture that thrives on curiosity and feedback.</li>\n<li>Great People: Join a team of smart, motivated and international colleagues who challenge and support each other.</li>\n<li>Technology: Work in a modern tech environment.</li>\n<li>Flexibility: Work a hybrid setup with 50% in-office time for collaboration, and spend up to 8 weeks a year from other inspiring locations.</li>\n<li>Perks on Top: Of course, we also offer travel benefits, gym discounts, and other perks to keep you energized.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_25fd58ed-3c0","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Holidu Hosts GmbH","sameAs":"https://holidu.jobs.personio.com","logo":"https://logos.yubhub.co/holidu.jobs.personio.com.png"},"x-apply-url":"https://holidu.jobs.personio.com/job/2555141","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"Full-time","x-salary-range":null,"x-skills-required":["Python","Sagemaker","AWS Stack","Airflow","DBT","Looker","SQL","Git"],"x-skills-preferred":[],"datePosted":"2026-04-18T22:13:07.588Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Munich, Germany"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, Sagemaker, AWS Stack, Airflow, DBT, Looker, SQL, 
Git"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_5dfa9c86-5c0"},"title":"Director, US Forecasting & Analytics – Vaccines & Immune Therapies","description":"<p>Director, US Forecasting &amp; Analytics – Vaccines &amp; Immune Therapies</p>\n<p>Global Insights, Analytics &amp; Forecasting, BBU</p>\n<p>Hybrid work: on average 3 days a week from the office</p>\n<p>The Director, US Forecasting &amp; Analytics – Vaccines &amp; Immune Therapies is a senior commercial insights leader responsible for US demand forecasting and analytics across the V&amp;I portfolio. The role is predominantly forecast-focused, serving as the US forecasting lead and strategic thought partner to Marketing, Finance, Market Access, and US teams.</p>\n<p>Responsibilities:</p>\n<p>US Forecasting Leadership (Core Accountability)</p>\n<ul>\n<li>Lead US short-term and long-term demand forecasts (TRx, NBRx, volume, patients, revenue) for V&amp;I assets using robust, patient-based and market-based models</li>\n<li>Own forecast methodology, assumptions, and governance, ensuring objectivity, transparency, and consistency with enterprise standards</li>\n<li>Integrate primary market research, epidemiology, competitive intelligence, access dynamics, and real-world data into forecast models</li>\n<li>Proactively identify and quantify key risks and opportunities through scenario and sensitivity analyses</li>\n<li>Partner closely with Finance, Market Access &amp; Pricing, Marketing, Sales, Medical, and Global Forecasting to ensure alignment on assumptions and implications</li>\n<li>Support business planning, governance reviews, and opportunity assessments with clear, executive-ready narratives</li>\n<li>Serve as a trusted advisor to senior marketing and finance leadership, clearly articulating forecast drivers and changes</li>\n</ul>\n<p>Analytics &amp; Resource Leadership (Enablement)</p>\n<ul>\n<li>Provide leadership over forecasting-adjacent analytics, 
ensuring advanced analytics and insights are embedded into forecasting and business planning</li>\n<li>Manage and prioritize internal analysts, contractors, and external vendors supporting forecasting and analytics deliverables</li>\n<li>Partner with data analytics resources, Global IA&amp;F, and GIBEX capability teams to deploy new tools, data sources, and modeling approaches</li>\n<li>Champion and identify new ways to embed AI and advanced automation into the practice of data analytics and forecasting to drive efficiency, scalability, and decision quality</li>\n<li>Champion continuous improvement in forecasting processes, AI-enabled modeling, and automation</li>\n<li>Contribute to the development and sharing of best practices across the V&amp;I forecasting community</li>\n</ul>\n<p>Essential for the role</p>\n<ul>\n<li>Bachelor’s degree in a quantitative, scientific, or business-related field required (e.g., Statistics, Economics, Mathematics, Engineering, Computer/Data Science).</li>\n<li>8+ years’ experience in US pharmaceutical commercial forecasting, including in-market and late-stage pipeline assets</li>\n<li>Hands-on model ownership experience (build, refresh, and performance tracking) across short- and long-term horizons</li>\n<li>Expertise in scenario-based forecasting, sensitivity analysis, and driver-based narratives to support senior decision-making</li>\n<li>Strong capability integrating multiple data types (e.g., IQVIA, claims, epidemiology, RWD/RWE, primary research) into coherent, decision-grade forecasts</li>\n<li>Working knowledge of advanced analytics/ML approaches (e.g., time series, causal inference, ensembles) and where they add value vs. 
traditional methods</li>\n<li>Fluency in modern analytics tooling and automation (e.g., Python/R/SQL, BI/visualization), with ability to partner effectively with data engineering and analytics teams</li>\n<li>Demonstrated forecast governance and model risk discipline (traceable assumptions, documentation, and clear explanations)</li>\n<li>Strong understanding of US market access and payer dynamics and how they impact demand (coverage, contracting, channel, policy)</li>\n<li>Exceptional communication: translates complex analysis into clear, executive-ready insights, options, and recommendations</li>\n<li>Strong commercial competence across key demand levers (positioning, adoption, competitive dynamics, lifecycle events)</li>\n</ul>\n<p>Desirable for the role</p>\n<ul>\n<li>Advanced degree preferred (e.g., MBA, MS, PhD in Statistics, Economics, Decision Sciences, Data Science, or related discipline).</li>\n<li>Vaccines and/or Rare Disease experience, including familiarity with immunization dynamics, patient-based forecasting, and lifecycle management in preventive or immune-mediated therapies</li>\n<li>Change leadership: builds adoption for new tools, processes, and ways of working across cross-functional stakeholders</li>\n<li>Product mindset for forecasting: defines user needs, success metrics, and a roadmap for portfolio forecasting capabilities</li>\n<li>Model lifecycle practices (e.g., reproducibility, versioning, monitoring/drift awareness); familiarity with MLOps concepts</li>\n</ul>\n<p>Office Working Requirements</p>\n<p>When we put unexpected teams in the same room, we unleash bold thinking with the power to inspire life-changing medicines. In-person working gives us the platform we need to connect, work at pace and challenge perceptions. That’s why we work, on average, a minimum of three days per week from the office. But that doesn’t mean we’re not flexible. We balance the expectation of being in the office while respecting individual flexibility. 
Join us in our unique and ambitious world.</p>\n<p>#LI-Hybrid</p>\n<p>Date Posted: 10-Apr-2026. Closing Date: 23-Apr-2026.</p>\n<p>Our mission is to build an inclusive environment where equal employment opportunities are available to all applicants and employees. In furtherance of that mission, we welcome and consider applications from all qualified candidates, regardless of their protected characteristics. If you have a disability or special need that requires accommodation, please complete the corresponding section in the application form.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_5dfa9c86-5c0","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Global Insights, Analytics & Forecasting - V&I","sameAs":"https://astrazeneca.eightfold.ai","logo":"https://logos.yubhub.co/astrazeneca.eightfold.ai.png"},"x-apply-url":"https://astrazeneca.eightfold.ai/careers/job/563877689756206","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["forecasting","analytics","model ownership","scenario-based forecasting","sensitivity analysis","driver-based narratives","advanced analytics","machine learning","Python","R","SQL","BI/visualization","data engineering","forecast governance","model risk discipline","US market access","payer dynamics","exceptional communication","commercial competence"],"x-skills-preferred":["vaccines","rare disease","change leadership","product mindset","model lifecycle practices","MLOps"],"datePosted":"2026-04-18T22:13:06.502Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Wilmington, Delaware, United States of America"}},"employmentType":"FULL_TIME","occupationalCategory":"Finance","industry":"Healthcare","skills":"forecasting, analytics, model ownership, scenario-based forecasting, sensitivity analysis, 
driver-based narratives, advanced analytics, machine learning, Python, R, SQL, BI/visualization, data engineering, forecast governance, model risk discipline, US market access, payer dynamics, exceptional communication, commercial competence, vaccines, rare disease, change leadership, product mindset, model lifecycle practices, MLOps"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_7275ef33-009"},"title":"Staff Data Engineer","description":"<p>At Bayer, we&#39;re seeking a Staff Data Engineer to join our team. As a Staff Data Engineer, you will design and lead the implementation of data flows that connect operational systems with data analytics and business intelligence (BI) systems. You will recognize opportunities to reuse existing data flows, lead the build of data streaming systems, optimize the code to ensure processes perform optimally, and lead work on database management.</p>\n<p>Communicating Between Technical and Non-Technical Colleagues</p>\n<p>As a Staff Data Engineer, you will communicate effectively with technical and non-technical stakeholders, support and host discussions within a multidisciplinary team, and be an advocate for the team externally.</p>\n<p>Data Analysis and Synthesis</p>\n<p>You will undertake data profiling and source system analysis, and present clear insights to colleagues to support the end use of the data.</p>\n<p>Data Development Process</p>\n<p>You will design, build and test data products that are complex or large scale, and build teams to complete data integration services.</p>\n<p>Data Innovation</p>\n<p>You will understand the impact on the organization of emerging trends in data tools, analysis techniques and data usage.</p>\n<p>Data Integration Design</p>\n<p>You will select and implement the appropriate technologies to deliver resilient, scalable and future-proofed data solutions and integration pipelines.</p>\n<p>Data Modeling</p>\n<p>You will produce 
relevant data models across multiple subject areas, explain which models to use for which purpose, understand industry-recognised data modelling patterns and standards and when to apply them, and compare and align different data models.</p>\n<p>Metadata Management</p>\n<p>You will design an appropriate metadata repository and present changes to existing metadata repositories, understand a range of tools for storing and working with metadata, and provide oversight and advice to less experienced members of the team.</p>\n<p>Problem Resolution</p>\n<p>You will respond to problems in databases, data processes, data products and services as they occur, initiate actions, monitor services and identify trends to resolve problems, determine the appropriate remedy and assist with its implementation, and with preventative measures.</p>\n<p>Programming and Build</p>\n<p>You will use agreed standards and tools to design, code, test, correct and document moderate-to-complex programs and scripts from agreed specifications and subsequent iterations, and collaborate with others to review specifications where appropriate.</p>\n<p>Technical Understanding</p>\n<p>You will understand the core technical concepts related to the role, and apply them with guidance.</p>\n<p>Testing</p>\n<p>You will review requirements and specifications, and define test conditions, identify issues and risks associated with work, analyse and report test activities and results.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_7275ef33-009","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Bayer","sameAs":"https://talent.bayer.com","logo":"https://logos.yubhub.co/talent.bayer.com.png"},"x-apply-url":"https://talent.bayer.com/careers/job/562949976928777","x-work-arrangement":"remote","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"$114,400 to
$171,600","x-skills-required":["Proficiency in programming language such as Python or Java","Experience with Big Data technologies such as Hadoop, Spark, and Kafka","Familiarity with ETL processes and tools","Knowledge of SQL and NoSQL databases","Strong understanding of relational databases","Experience with data warehousing solutions","Proficiency with cloud platforms","Expertise in data modeling and design","Experience in designing and building scalable data pipelines","Experience with RESTful APIs and data integration"],"x-skills-preferred":["Relevant certifications (e.g., GCP Certified, AWS Certified, Azure Certified)","Bachelor's degree in Computer Science, Data Engineering, Information Technology, or a related field","Strong analytical and communication skills","Ability to work collaboratively in a team environment","High level of accuracy and attention to detail"],"datePosted":"2026-04-18T22:12:56.654Z","jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Healthcare","skills":"Proficiency in programming language such as Python or Java, Experience with Big Data technologies such as Hadoop, Spark, and Kafka, Familiarity with ETL processes and tools, Knowledge of SQL and NoSQL databases, Strong understanding of relational databases, Experience with data warehousing solutions, Proficiency with cloud platforms, Expertise in data modeling and design, Experience in designing and building scalable data pipelines, Experience with RESTful APIs and data integration, Relevant certifications (e.g., GCP Certified, AWS Certified, Azure Certified), Bachelor's degree in Computer Science, Data Engineering, Information Technology, or a related field, Strong analytical and communication skills, Ability to work collaboratively in a team environment, High level of accuracy and attention to 
detail","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":114400,"maxValue":171600,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_80dbb0f6-e54"},"title":"Senior Security Engineer","description":"<p>We are seeking a subject matter expert with direct experience in a wide range of security technologies, tools, and methodologies. This role is suited for an experienced Windows Engineer with a proven understanding of enterprise security and will focus on building toolsets and processes to support the Information Security Program (ISP).</p>\n<p>The team fosters a collaborative environment and is building a best-in-class program to partner with the business to protect the Firm&#39;s information and computer systems.</p>\n<p>Principal Responsibilities:</p>\n<ul>\n<li>Provide a high level of security consultancy and engineering support for Windows/Active Directory/Azure security solutions, including analysis and development of Windows security solutions.</li>\n<li>Strong understanding of modern authentication protocols, e.g., OIDC / OAuth 2.0.</li>\n<li>Contribute to the vision and strategy, and drive design and implementation for authentication platforms both on premises and in the cloud.</li>\n<li>Provide security consultancy and engineering support for SAML, OIDC and Kerberos authentication across different Identity providers, including analysis and development of SSO, PKI, and other authentication solutions.</li>\n<li>Demonstrate a clear understanding of current risks and threats related to Identity Management at technical and managerial levels.</li>\n<li>Actively monitor new and emerging security- and privacy-related technologies, trends, issues, and solutions and assess their applicability to key business initiatives and strategies.</li>\n<li>Participate in Information Security Incident Response activities for the Firm&#39;s
environment.</li>\n<li>Liaise with key stakeholders to create and enforce policy, including the Technology organization, Trading units, Legal, Internal Audit, and Compliance.</li>\n<li>Provide support to Security and other technical operations staff to ensure smooth turnover from Engineering to Production, and provide mentoring to junior-level security professionals.</li>\n<li>Develop and maintain documentation of all Security products, including specific tools, technologies, and processes.</li>\n</ul>\n<p>Qualifications/Skills Required:</p>\n<ul>\n<li>Bachelor&#39;s degree in computer science or engineering preferred.</li>\n<li>7+ years&#39; experience working in a technical role, with a minimum of 2+ years&#39; experience focusing on information security in the financial industry (preferred).</li>\n<li>Excellent understanding and experience of engineering Microsoft security solutions – including desktop and server operating systems, EntraID, Active Directory, Group Policy, Desired State Configuration, DNS, Messaging.</li>\n<li>Ability to understand code in C#/.NET and/or Python and strong scripting experience in PowerShell.</li>\n<li>Experience managing IaaS and SaaS solutions and services using CI/CD pipelines.
Jenkins and Terraform experience is a strong plus.</li>\n<li>Solid understanding of SAML, OIDC and Kerberos authentication and related technology controls and best practices.</li>\n<li>Experience with Office 365 security controls including usage of Azure Active Directory, Conditional Access, o365 logging APIs, Microsoft CAS, and Microsoft Authenticator.</li>\n<li>Understanding and experience with implementing Data Loss Prevention (DLP) solutions, policies, and technologies.</li>\n<li>Understanding of Azure Information Protection (AIP) and its components, including labeling, classification, and encryption.</li>\n<li>Ability to develop and implement strategies to ensure compliance with data protection regulations, such as GDPR or HIPAA, utilizing DLP and AIP solutions.</li>\n<li>Strong knowledge and experience in a variety of security technologies including EDR, SIEM, and Vulnerability Management is a plus.</li>\n<li>Relevant security certification (CISSP, GCIA, CISM, etc.) and/or product certifications (PingFederate, Azure, Windows, AD, etc.) is a plus.</li>\n</ul>\n<p>The estimated base salary range for this position is $175,000 to $250,000, which is specific to New York and may change in the future.
Millennium pays a total compensation package which includes a base salary, discretionary performance bonus, and a comprehensive benefits package.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_80dbb0f6-e54","directApply":true,"hiringOrganization":{"@type":"Organization","name":"IT Infrastructure","sameAs":"https://mlp.eightfold.ai","logo":"https://logos.yubhub.co/mlp.eightfold.ai.png"},"x-apply-url":"https://mlp.eightfold.ai/careers/job/755944784476","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$175,000 to $250,000","x-skills-required":["security technologies","tools","methodologies","Windows security solutions","OIDC / OAUTH 2","SAML","Kerberos authentication","Identity providers","SSO","PKI","EDR","SIEM","Vulnerability Management"],"x-skills-preferred":["C#/.NET","Python","PowerShell","Jenkins","Terraform","Azure Active Directory","Conditional Access","o365 logging APIs","Microsoft CAS","Microsoft Authenticator","Data Loss Prevention (DLP)","Azure Information Protection (AIP)"],"datePosted":"2026-04-18T22:12:55.408Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"New York, New York, United States of America"}},"employmentType":"FULL_TIME","occupationalCategory":"IT","industry":"Finance","skills":"security technologies, tools, methodologies, Windows security solutions, OIDC / OAUTH 2, SAML, Kerberos authentication, Identity providers, SSO, PKI, EDR, SIEM, Vulnerability Management, C#/.NET, Python, PowerShell, Jenkins, Terraform, Azure Active Directory, Conditional Access, o365 logging APIs, Microsoft CAS, Microsoft Authenticator, Data Loss Prevention (DLP), Azure Information Protection 
(AIP)","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":175000,"maxValue":250000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_8610ea3d-93b"},"title":"Cloud Platform Engineer","description":"<p>The Business Development/Management Technology team at FIC &amp; Risk Technology is building and operating platforms that support recruiting, hiring, and onboarding of investment professionals. We are currently integrating multiple legacy and new systems into a unified, cloud-native platform to standardize processes, workflows, and data models across the organisation.</p>\n<p>This integration will enable seamless collaboration between teams and provide reliable, scalable data for analytics and reporting. We are looking for a Cloud Platform Engineer to design, build, and operate our AWS-based infrastructure and data platforms, using modern DevOps practices, infrastructure as code, and secure, well-engineered services in Python and C#.</p>\n<p>The successful candidate will collaborate with global technology and business teams to design cloud-native solutions that support business development and onboarding workflows. 
They will partner with global stakeholders to understand requirements and translate them into secure, scalable AWS architectures and platform capabilities.</p>\n<p>Key responsibilities include leading the end-to-end delivery of cloud and platform features, including design, implementation (Python/C#), infrastructure as code, testing, and deployment using DevOps practices.</p>\n<p>We are looking for a highly skilled engineer with 6+ years of experience in software or platform engineering, with significant time spent building and operating solutions in cloud environments (AWS preferred).</p>\n<p>The ideal candidate will have strong hands-on programming experience in Python and C#, with a solid understanding of object-oriented design, design patterns, service-oriented/microservices architectures, concurrency, and SOLID principles.</p>\n<p>They will also have proven experience designing and operating AWS-based platforms (e.g., EC2, ECS/EKS, Lambda, S3, RDS, IAM) using infrastructure as code (Terraform, CloudFormation, or CDK).</p>\n<p>In addition, the successful candidate will have practical experience implementing DevOps practices and CI/CD pipelines (e.g., Jenkins, GitHub Actions, Azure DevOps), including automated testing, security scanning, and deployment.</p>\n<p>Experience supporting data science and analytics platforms, including orchestration tools such as Airflow, distributed processing engines such as Spark, and cloud-native data pipelines, is also required.</p>\n<p>A good understanding of SQL and core database concepts is required; familiarity with AWS analytics services (e.g., Glue, EMR, Redshift, Athena) is a plus.</p>\n<p>Awareness of cloud security best practices, including IAM, network security, data encryption, and secure configuration management, is also necessary.</p>\n<p>Strong problem-solving and analytical skills, along with a demonstrated ability to take ownership, deliver in a fast-paced environment, and collaborate effectively with global teams, are essential.</p>\n<p>Excellent communication skills, with the ability to work closely with both technical and non-technical stakeholders, are also required.</p>\n<p>Experience estimating, monitoring, and optimizing AWS infrastructure costs, including use of tools such as AWS Cost Explorer, AWS Budgets, and cost-allocation tagging strategies, is desirable.</p>\n<p>Experience designing and operating workloads across multiple cloud environments and on-premises, using centralized policies, governance, and controls to support business-aligned teams, is also beneficial.</p>\n<p>Working knowledge of networking across on-premises and cloud environments, including VPC design, subnets, routing, VPNs/Direct Connect, load balancing, DNS, and network security controls, is necessary.</p>\n<p>Experience with additional big data tools or platforms (e.g., Kafka, Databricks, Snowflake, Flink) is nice to have.</p>\n<p>Familiarity with Capital Markets concepts and operating models is also beneficial.</p>\n<p>The estimated base salary range for this position is $175,000 to $250,000, which is specific to New York and may change in the future.</p>\n<p>Millennium pays a total compensation package which includes a base salary, discretionary performance bonus, and a comprehensive benefits package.</p>\n<p>When finalising an offer, we take into consideration an individual&#39;s experience level and the qualifications they bring to the role to formulate a competitive total compensation package.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_8610ea3d-93b","directApply":true,"hiringOrganization":{"@type":"Organization","name":"FIC & Risk
Technology","sameAs":"https://mlp.eightfold.ai","logo":"https://logos.yubhub.co/mlp.eightfold.ai.png"},"x-apply-url":"https://mlp.eightfold.ai/careers/job/755955139979","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$175,000 to $250,000","x-skills-required":["AWS","Python","C#","DevOps","Infrastructure as Code","Cloud Security","SQL","Database Concepts","Networking"],"x-skills-preferred":["Airflow","Spark","Kafka","Databricks","Snowflake","Flink","Capital Markets"],"datePosted":"2026-04-18T22:12:50.548Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"New York, New York, United States of America"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Finance","skills":"AWS, Python, C#, DevOps, Infrastructure as Code, Cloud Security, SQL, Database Concepts, Networking, Airflow, Spark, Kafka, Databricks, Snowflake, Flink, Capital Markets","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":175000,"maxValue":250000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_78c099b8-238"},"title":"PnL Attribution Analytics - Fixed Income","description":"<p>We are seeking a skilled PnL Attribution Analyst to join our Operations &amp; Middle Office team in Bangalore. 
As a PnL Attribution Analyst, you will be responsible for reviewing, adjusting, and signing off daily firmwide PnL attribution reports, ensuring completeness, accuracy, and consistency across portfolios.</p>\n<p>Your primary responsibilities will include:</p>\n<ul>\n<li>Preparing performance attribution reports for senior management and portfolio managers, highlighting primary PnL drivers and providing ad-hoc deep-dive analysis as required.</li>\n<li>Investigating and explaining material PnL moves on a Trade Date and T+1 basis, acting as a key point of contact for traders, risk, and finance on all PnL-related queries.</li>\n<li>Developing systematic controls to validate and enhance PnL attribution processes, including automated reconciliations, threshold-based alerts, and exception reporting.</li>\n</ul>\n<p>In addition, you will be responsible for monitoring and validating real-time and end-of-day pricing for all fixed income instruments across Rates, Credit, and FX, including derivatives and structured products.</p>\n<p>You will also maintain a strong working knowledge of Greeks-based risk sensitivities and their application to PnL attribution across fixed income derivatives, collaborate with quants and risk teams to ensure risk factor decompositions used in PnL attribution are accurate and aligned with the firm&#39;s pricing and risk models, and support the testing and validation of new pricing models and their impact on PnL and risk reporting.</p>\n<p>To succeed in this role, you will need to have an advanced degree in a quantitative discipline such as Engineering, Mathematics, Physics, Financial Engineering, or a related field, experience in PnL attribution, derivatives pricing/valuations, quantitative risk, or a closely related function within a front-office, risk, or portfolio analytics environment, and knowledge of fixed income products and their risk profiles across Rates, Credit, and FX, including derivatives, structured products, and asset-backed 
securities.</p>\n<p>You will also need to have solid coding skills in Python, with the ability to work efficiently with large datasets, build automation, and develop analytical tools, and excellent communication skills, with the ability to interact effectively with portfolio managers, quants, risk, and technology teams across the firm.</p>\n<p>If you are a collaborative team player with a strong willingness to support others, adapt quickly, and thrive in a fast-moving, high-pressure environment, we encourage you to apply for this exciting opportunity.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_78c099b8-238","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Unknown","sameAs":"https://mlp.eightfold.ai","logo":"https://logos.yubhub.co/mlp.eightfold.ai.png"},"x-apply-url":"https://mlp.eightfold.ai/careers/job/755955631131","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["PnL attribution","derivatives pricing/valuations","quantitative risk","Python","fixed income products","Greeks-based risk sensitivities"],"x-skills-preferred":[],"datePosted":"2026-04-18T22:12:45.488Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Bangalore, Karnataka, India"}},"employmentType":"FULL_TIME","occupationalCategory":"Finance","industry":"Finance","skills":"PnL attribution, derivatives pricing/valuations, quantitative risk, Python, fixed income products, Greeks-based risk sensitivities"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_9ca997fb-218"},"title":"Quantitative Developer","description":"<p>We are building a world-class systematic data platform that will power the next generation of our systematic portfolio engines.</p>\n<p>The systematic data 
group is looking for a Quantitative Developer to join our growing team. The team consists of content specialists, data scientists, engineers, and quant developers who are responsible for discovering, maintaining, and analysing sources of alpha for our portfolio managers.</p>\n<p>The role builds on the individual&#39;s knowledge and skills in four key areas of quantitative investing: data, statistics, technology, and financial markets.</p>\n<p>Principal Responsibilities:</p>\n<ul>\n<li>Use financial and statistical knowledge to analyse potential alpha sources and present findings to portfolio managers and quantitative analysts.</li>\n<li>Build quant tools to help portfolio managers research, evaluate, and combine alphas, and understand risks.</li>\n<li>Design and maintain tools to evaluate and monitor data quality and integrity for a wide variety of data sources.</li>\n<li>Engage with vendors and brokers, and perform analytics to understand the characteristics of datasets.</li>\n<li>Interact with portfolio managers and quantitative analysts to understand their use cases and recommend datasets to help maximise their profitability.</li>\n</ul>\n<p>Skills Required:</p>\n<ul>\n<li>3+ years of work experience as a financial engineer, data scientist, or quant developer.</li>\n<li>Strong knowledge of Python and/or C++, Java, or C#.</li>\n<li>Familiarity with data pipeline engineering, ETL for large datasets, and scheduling tools like Airflow.</li>\n<li>Strong SQL and database experience, including PL/SQL or T-SQL.</li>\n<li>Understanding of a typical software development lifecycle and familiarity with Linux, GitHub, and CI/CD.</li>\n<li>Ph.D.
or Masters in computer science, mathematics, statistics, or other field requiring quantitative analysis.</li>\n</ul>\n<p>Beneficial Skills and Experience:</p>\n<ul>\n<li>Understanding of risk models and performance attribution.</li>\n<li>Experience with financial markets such as equities and futures.</li>\n<li>Knowledge of statistical techniques and their usage.</li>\n</ul>\n<p>The estimated base salary range for this position is $165,000 to $250,000, which is specific to New York and may change in the future. Millennium pays a total compensation package which includes a base salary, discretionary performance bonus, and a comprehensive benefits package.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_9ca997fb-218","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Equity IT","sameAs":"https://mlp.eightfold.ai","logo":"https://logos.yubhub.co/mlp.eightfold.ai.png"},"x-apply-url":"https://mlp.eightfold.ai/careers/job/755952876477","x-work-arrangement":"onsite","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":"$165,000 to $250,000","x-skills-required":["Python","C++","Java","C#","data pipeline engineering","ETL","Airflow","SQL","database","Linux","GitHub","CI/CD","Ph.D.","Masters"],"x-skills-preferred":[],"datePosted":"2026-04-18T22:12:44.538Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"New York, New York, United States of America"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Finance","skills":"Python, C++, Java, C#, data pipeline engineering, ETL, Airflow, SQL, database, Linux, GitHub, CI/CD, Ph.D., 
Masters","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":165000,"maxValue":250000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_4db63d33-c03"},"title":"Senior Technical Business Analyst - Algorithmic Execution","description":"<p>We are seeking a Senior Technical Business Analyst to join our team in New York. As a key member of our Execution Services and Central Liquidity Strategies teams, you will work closely with Portfolio Managers to build next-generation algorithmic solutions.</p>\n<p>You will play a critical role at the intersection of technology and trading, delivering order routing and internal liquidity products that optimize execution and drive savings for the firm.</p>\n<p>Principal Responsibilities:</p>\n<ul>\n<li>Work with Execution Services and other internal stakeholders to gather and synthesise requirements across our algorithmic trading strategies.</li>\n<li>Drive the strategic expansion of the internal algo platform into new markets and asset classes.</li>\n<li>Analyse upstream/downstream data dependencies and create the necessary documented requirements, JIRAs, and project plans for core execution components.</li>\n<li>Create user guides and technical documentation to support the onboarding of new PMs and desks across our platforms.</li>\n<li>Create and maintain product roadmaps and artefacts required to manage stakeholder expectations across Trading and Technology.</li>\n<li>Manage day-to-day project deliverables; highlight, escalate, and resolve issues, conflicts, and roadblocks in a fast-paced trading environment.</li>\n</ul>\n<p>Qualifications:</p>\n<ul>\n<li>10 years of experience as a Technical Business Analyst or Project Manager in an enterprise-level FinTech environment.</li>\n<li>5+ years of relevant trading technology experience, ideally on the buyside, and comfortable interacting with front-office, non-technical personnel.</li>\n<li>Subject matter expertise in Electronic Execution and Market Microstructure (Equities required, Futures highly preferred).</li>\n<li>Impactful individual contributor: able to lead a wide range of projects front to back.</li>\n<li>Technical skills: Self-sufficient with SQL for trade data analysis and troubleshooting. Experience with KDB/Q and/or Python for data analysis preferred. Experience with sequencer-based platforms is also desired.</li>\n<li>Communication: Strong communication skills and the ability to work effectively in a team environment.</li>\n</ul>\n<p>The estimated base salary range for this position is $175,000 to $250,000, which is specific to New York and may change in the future. Millennium pays a total compensation package which includes a base salary, discretionary performance bonus, and a comprehensive benefits package.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_4db63d33-c03","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Equity IT","sameAs":"https://mlp.eightfold.ai","logo":"https://logos.yubhub.co/mlp.eightfold.ai.png"},"x-apply-url":"https://mlp.eightfold.ai/careers/job/755938267676","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$175,000 to $250,000","x-skills-required":["SQL","KDB/Q","Python","Electronic Execution","Market Microstructure"],"x-skills-preferred":["Sequencer-based platforms"],"datePosted":"2026-04-18T22:12:44.525Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"New York, New York, United States of
America"}},"employmentType":"FULL_TIME","occupationalCategory":"IT","industry":"Finance","skills":"SQL, KDB/Q, Python, Electronic Execution, Market Microstructure, Sequencer-based platforms","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":175000,"maxValue":250000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_1963e2d1-add"},"title":"Cloud DevOps Engineer","description":"<p>We are seeking a skilled Cloud DevOps Engineer to join our Commodities Technology team. As a Cloud DevOps Engineer, you will work closely with quants, portfolio managers, risk managers, and other engineers to develop data-intensive and multi-asset analytics for our Commodities platform.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Collaborate with cross-functional teams to gather requirements and user feedback</li>\n<li>Design, build, and refactor robust software applications with clean and concise code following Agile and continuous delivery practices</li>\n<li>Automate system maintenance tasks, end-of-day processing jobs, data integrity checks, and bulk data loads/extracts</li>\n<li>Stay up-to-date with industry trends, new platforms, and tools, and develop a business case to adopt new technologies</li>\n<li>Develop new tools and infrastructure using Python (Flask/FastAPI) or Java (Spring Boot) and a relational data backend (AWS – Aurora/Redshift/Athena/S3)</li>\n<li>Support users and operational flows for quantitative risk, senior management, and portfolio management teams using the tools developed</li>\n</ul>\n<p>Qualifications:</p>\n<ul>\n<li>Advanced degree in computer science or any other scientific field</li>\n<li>3+ years of experience with CI/CD tools such as TeamCity, Jenkins, Octopus Deploy, and ArgoCD</li>\n<li>AWS Cloud infrastructure design, implementation, and support</li>\n<li>Experience with multiple AWS services</li>\n<li>Infrastructure as
Code deploying cloud infrastructure using Terraform or CloudFormation</li>\n<li>Knowledge of Python (Flask/FastAPI/Django)</li>\n<li>Demonstrated expertise in the process of containerization for applications and their subsequent orchestration within Kubernetes environments</li>\n<li>Experience working on at least one monitoring/observability stack (Datadog, ELK, Splunk, Loki, Grafana)</li>\n<li>Strong knowledge of Unix or Linux</li>\n<li>Strong communication skills to collaborate with various stakeholders</li>\n<li>Able to work independently in a fast-paced environment</li>\n<li>Detail-oriented, organized, demonstrating thoroughness and strong ownership of work</li>\n<li>Experience working in a production environment</li>\n<li>Some experience with relational and non-relational databases</li>\n</ul>\n<p>Nice to have:</p>\n<ul>\n<li>Experience with a messaging middleware platform like Solace, Kafka, or RabbitMQ</li>\n<li>Experience with Snowflake and distributed processing technologies (e.g., Hadoop, Flink, Spark)</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_1963e2d1-add","directApply":true,"hiringOrganization":{"@type":"Organization","name":"FIC & Risk Technology","sameAs":"https://mlp.eightfold.ai","logo":"https://logos.yubhub.co/mlp.eightfold.ai.png"},"x-apply-url":"https://mlp.eightfold.ai/careers/job/755955154859","x-work-arrangement":"onsite","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["CI/CD tools like TeamCity, Jenkins, Octopus Deploy, and ArgoCD","AWS Cloud infrastructure design, implementation, and support","Infrastructure as Code deploying cloud infrastructure using Terraform or CloudFormation","Python (Flask/FastAPI/Django)","Containerization for applications and their subsequent orchestration within Kubernetes 
environments"],"x-skills-preferred":[],"datePosted":"2026-04-18T22:12:31.979Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Miami, Florida, United States of America"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Finance","skills":"CI/CD tools like TeamCity, Jenkins, Octopus Deploy, and ArgoCD, AWS Cloud infrastructure design, implementation, and support, Infrastructure as Code deploying cloud infrastructure using Terraform or CloudFormation, Python (Flask/FastAPI/Django), Containerization for applications and their subsequent orchestration within Kubernetes environments"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_aa5f286d-ad4"},"title":"Senior Genome Editing Digital Pipeline Scientist","description":"<p>At Bayer, we&#39;re seeking a Senior Genome Editing Digital Pipeline Scientist to drive the data vision that powers next-generation gene-edited products. As a Data Strategy &amp; Pipeline Leader in Gene Editing, you will coordinate a holistic data strategy across the editing pipeline so that diverse genomic and biological datasets are connected, accessible, and ready for advanced analytics. You will work closely with multi-functional teams to ensure that data, models, and decision tools are seamlessly integrated into product development workflows, enabling faster, more informed decisions and impactful innovation in gene-edited germplasm.</p>\n<p>Your primary responsibilities will include providing leadership to define and coordinate the data strategy that enables data-driven, model-based analytics for improved gene-edited germplasm, including accelerating data connectivity across the editing pipeline with multi-functional teams. 
You will also lead cross-functional projects with partners across Crop Science to automate decision making and connect data assets that accelerate development of gene-edited products.</p>\n<p>In addition, you will translate complex business data knowledge, scientific workflows, and product needs into clear technical implementation plans that can be executed by data scientists, data engineers, and developers. You will design and guide the development of robust data systems and analytics pipelines that support a wide variety of genomic and computational biology use cases and can scale with future business needs.</p>\n<p>As a key communicator and integrator between scientific, technical, and business stakeholders, you will align roadmaps, prioritize initiatives, and ensure that data and analytics solutions deliver measurable value. You will also attract, mentor, and develop talent, serving as a coach for peers and colleagues in key areas of expertise to support their professional growth and build a strong data and analytics community.</p>\n<p>Finally, you will champion and support Health, Safety &amp; Environment, Compliance, Business Conduct, and Human Rights policies and culture in all activities and collaborations.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_aa5f286d-ad4","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Bayer","sameAs":"https://talent.bayer.com","logo":"https://logos.yubhub.co/talent.bayer.com.png"},"x-apply-url":"https://talent.bayer.com/careers/job/562949976715204","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$114,400.00 - $171,600.00","x-skills-required":["PhD in Genomics, Computational Biology, Evolution, Quantitative Genetics, or a related scientific field","Minimum of 6 years of relevant experience, or MS with 10+ years of 
experience","Experience in the analysis of large biological datasets and in developing analytical pipelines using Python, R, or similar software and programming languages","Ability to design and implement data systems and analytical pipelines that can support a broad range of scientific and business use cases","Strong collaboration skills, demonstrated through building cross-functional partnerships and influencing others to drive results and solve complex business problems"],"x-skills-preferred":["Strong understanding of the genomic control of physiological and biochemical pathways in plants or animals","Experience developing data systems and analytical pipelines that leverage genome-wide association (GWA) data, QTL analysis, candidate gene analysis, gene expression analysis, molecular marker development, and pedigree data"],"datePosted":"2026-04-18T22:12:21.373Z","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Life Sciences","skills":"PhD in Genomics, Computational Biology, Evolution, Quantitative Genetics, or a related scientific field, Minimum of 6 years of relevant experience, or MS with 10+ years of experience, Experience in the analysis of large biological datasets and in developing analytical pipelines using Python, R, or similar software and programming languages, Ability to design and implement data systems and analytical pipelines that can support a broad range of scientific and business use cases, Strong collaboration skills, demonstrated through building cross-functional partnerships and influencing others to drive results and solve complex business problems, Strong understanding of the genomic control of physiological and biochemical pathways in plants or animals, Experience developing data systems and analytical pipelines that leverage genome-wide association (GWA) data, QTL analysis, candidate gene analysis, gene expression analysis, molecular marker development, and pedigree 
data","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":114400,"maxValue":171600,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_5def5eb0-d2e"},"title":"Portfolio Pricing and Valuations Analyst","description":"<p>We are seeking a skilled and detail-oriented professional to join our team as a Portfolio Pricing and Valuations Analyst. The successful candidate will play a critical role in maintaining and enhancing the firm&#39;s pricing and valuation infrastructure for equity derivative products, with a focus on volatility, vanilla, and exotic products.</p>\n<p>The candidate will be a key contributor in the review and validation of pricing models and methodologies, the analysis and explanation of P&amp;L, and the onboarding of new and complex products.</p>\n<p>Principal Responsibilities:</p>\n<ul>\n<li>Valuations: Oversee and enhance the configuration of internal systems and calibration of pricing models for equity derivative products. Source, validate, and analyze market data, including external volatility surfaces and curves, to ensure accuracy and integrity of marks.</li>\n</ul>\n<ul>\n<li>Pricing &amp; Mark Sign-Off: Maintain intraday and end-of-day pricing procedures and controls. Responsible for publishing and signing off on the firm&#39;s official end-of-day marks, surfaces, and curves across equity volatility and exotic products.</li>\n</ul>\n<ul>\n<li>P&amp;L Explanation &amp; Attribution: Analyze and explain daily P&amp;L, decomposing performance into its key drivers: Greeks-based attribution, idiosyncratic events, and trading activity. Identify, investigate, and resolve breaks in collaboration with trading, risk, and finance. 
Prepare attribution reports for senior management and run ad-hoc analysis when required.</li>\n</ul>\n<ul>\n<li>Methodology Review &amp; Testing: Evaluate and test pricing and marking methodologies proposed by quant or vendor teams. Provide practitioner-level feedback on model assumptions, calibration approaches, and operational applicability.</li>\n</ul>\n<ul>\n<li>New Product Onboarding: Oversee the setup and integration of new equity derivative products into the firm&#39;s infrastructure, coordinating across technology, quant, risk, and portfolio management.</li>\n</ul>\n<ul>\n<li>Primary Interface: Act as a senior point of contact for portfolio managers on pricing, valuation, risk, and product setup matters. Coordinate across departments to resolve issues efficiently.</li>\n</ul>\n<p>Qualifications/Skills:</p>\n<ul>\n<li>5+ years of professional experience in a relevant role such as equity derivatives trading, trader assistant, risk management, product control, or valuations</li>\n</ul>\n<ul>\n<li>Advanced degree in a quantitative discipline preferred</li>\n</ul>\n<ul>\n<li>Deep knowledge of Equity Derivative products including vanilla options, variance/volatility swaps, TRF/TRS, dividend swaps, and exotic products, with particular emphasis on volatility products</li>\n</ul>\n<ul>\n<li>Strong familiarity with P&amp;L explanation and attribution in an equity derivatives context</li>\n</ul>\n<ul>\n<li>Demonstrated ability to understand, evaluate, and test complex pricing methodologies</li>\n</ul>\n<ul>\n<li>Programming experience (Python, VBA, etc.) needed with a focus on data analysis.</li>\n</ul>\n<ul>\n<li>Proficiency with Bloomberg and Reuters and other market data sources</li>\n</ul>\n<ul>\n<li>Experience engaging and collaborating with technology and quant teams to drive system enhancements</li>\n</ul>\n<ul>\n<li>Highly detail-oriented with strong ownership, sound judgment, and the ability to prioritize in a high-pressure 
environment</li>\n</ul>\n<p>Millennium pays a total compensation package which includes a base salary, discretionary performance bonus, and a comprehensive benefits package. The estimated base salary range for this position is $160,000 to $250,000, which is specific to New York and may change in the future. When finalizing an offer, we take into consideration an individual’s experience level and the qualifications they bring to the role to formulate a competitive total compensation package.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_5def5eb0-d2e","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Middle Office","sameAs":"https://mlp.eightfold.ai","logo":"https://logos.yubhub.co/mlp.eightfold.ai.png"},"x-apply-url":"https://mlp.eightfold.ai/careers/job/755955504767","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$160,000 to $250,000","x-skills-required":["equity derivatives trading","risk management","product control","valuations","programming experience (Python, VBA, etc)","proficiency with Bloomberg and Reuters and other market data sources","experience engaging and collaborating with technology and quant teams"],"x-skills-preferred":[],"datePosted":"2026-04-18T22:12:17.122Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"New York, New York, United States of America"}},"employmentType":"FULL_TIME","occupationalCategory":"Finance","industry":"Finance","skills":"equity derivatives trading, risk management, product control, valuations, programming experience (Python, VBA, etc), proficiency with Bloomberg and Reuters and other market data sources, experience engaging and collaborating with technology and quant 
teams","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":160000,"maxValue":250000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_528bf454-d13"},"title":"Data Analytics Engineer","description":"<p>We are seeking a Senior Analytics Engineer to join our team. As a key member of our data organization, you will be responsible for transforming raw data into a strategic asset by designing high-performance data models that power our financial reporting, product forecasting, and GTM strategy.</p>\n<p>Your 12-Month Journey</p>\n<p>During the first 3 months, you will learn about our existing stack (GCP, BigQuery, Airbyte, dbt), core business data models, and understand the current pain points in our data flow. You will deliver and optimize your first high-priority models for product usage and financial reporting. You will partner with the Data Engineer to align on the new infrastructure roadmap.</p>\n<p>Within 6 months, you will implement a robust semantic layer to standardize KPIs across the company and enable AI-readiness and advanced natural language querying.</p>\n<p>After 1 year, you will fully own the company&#39;s data modeling architecture, ensuring it is prepared for AI and machine learning applications. You will act as a strategic advisor to department heads, using data to help shape the company&#39;s long-term growth and forecasting strategies.</p>\n<p>What You&#39;ll Be Doing</p>\n<p>Strategic Data Product Ownership: Manage the end-to-end lifecycle of our internal data products. You will partner with stakeholders to translate complex business questions into technical requirements, selecting the right tools to ensure our reporting is scalable, accessible, and high-impact.</p>\n<p>Advanced Analytics Engineering: Design, build, and maintain our core data models using dbt Labs. 
You will own the logic for mission-critical datasets, including financial reporting, churn forecasting, and reverse-ETL flows that sync warehouse data back into our business tools (e.g., Planhat, HubSpot).</p>\n<p>Data Governance &amp; Semantic Layering: Act as the guardian of &#39;The Truth.&#39; You will implement data governance standards and build our semantic layer to ensure metrics are consistent across the company.</p>\n<p>Data Democratization &amp; Enablement: In collaboration with RevOps, you will design and deliver training programs and documentation. Your goal is to empower users across Finance, Product, and GTM to independently navigate data products and derive their own insights.</p>\n<p>Collaboration: You will be the central hub of our data organization. You will work daily with the Data Engineer to align on the roadmap, while frequently consulting with Finance, GTM, and Product leaders to ensure our data products solve their most pressing problems.</p>\n<p>What You Bring</p>\n<p>Solid experience in Analytics Engineering, Data Analysis, or Data Engineering, with a track record of independently delivering data products that enable reporting, decision-making, and CDP use cases.</p>\n<p>You are an expert in SQL and understand how to write performant, modular code. Familiarity with Python and Git for optimizing and versioning data transformations is a significant advantage.</p>\n<p>Deep, hands-on experience with dbt and BigQuery is a must. You should also be comfortable navigating ELT tools like Airbyte or Fivetran.</p>\n<p>Commercially savvy: you understand the business. You can spot opportunities where data can improve ARR, reduce churn, or optimize spend.</p>\n<p>You thrive in fast-paced environments and are comfortable creating structure out of the uncertainty of a scaling company.</p>\n<p>Strong project management and stakeholder management skills. 
You are a &#39;bilingual&#39; communicator who can discuss warehouse schemas with an engineer and ARR growth with a CFO.</p>\n<p>Fluency in English, both written and spoken, at a minimum C1 level</p>\n<p>What We Offer</p>\n<p>Flexibility to work from home in the Netherlands and from our beautiful canal-side office in Amsterdam</p>\n<p>A chance to be part of and shape one of the most ambitious scale-ups in Europe</p>\n<p>Work in a diverse and multicultural team</p>\n<p>€1,500 annual training budget plus internal training</p>\n<p>Pension plan, travel reimbursement, and wellness perks</p>\n<p>28 paid holiday days + 2 additional days to relax in 2026</p>\n<p>Work from anywhere for 4 weeks/year</p>\n<p>An inclusive and international work environment with a whole lot of fun thrown in!</p>\n<p>Apple MacBook and tools</p>\n<p>€200 Home Office budget</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_528bf454-d13","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Tellent","sameAs":"https://careers.tellent.com","logo":"https://logos.yubhub.co/careers.tellent.com.png"},"x-apply-url":"https://careers.tellent.com/o/data-analytics-engineer","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"EUR 70000–90000 / year","x-skills-required":["SQL","dbt","BigQuery","Airbyte","Python","Git","ELT tools","Data governance","Semantic layering","Data democratization","Enablement"],"x-skills-preferred":[],"datePosted":"2026-04-18T22:12:13.210Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Amsterdam"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"SQL, dbt, BigQuery, Airbyte, Python, Git, ELT tools, Data governance, Semantic layering, Data democratization, 
Enablement","baseSalary":{"@type":"MonetaryAmount","currency":"EUR","value":{"@type":"QuantitativeValue","minValue":70000,"maxValue":90000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_21f5f6c3-734"},"title":"Data Engineer","description":"<p>About the Role</p>\n<p>We are at a pivotal scaling point where our data ambitions have outpaced our current setup, and we need a Data Engineer to architect the professional-grade foundations of our platform.</p>\n<p>This role exists to bridge the gap between &quot;getting data&quot; and &quot;engineering data,&quot; moving us from manual syncs to a fully automated ecosystem. By building custom pipelines and implementing a robust orchestration layer, you will directly enable our Operations teams and leadership to transition from basic reporting to sophisticated, AI-ready data products.</p>\n<p>Your primary focus will be on Infrastructure-as-Code, orchestration, and building a resilient &quot;plumbing&quot; system that serves as the backbone for our entire Product and GTM strategy.</p>\n<p>Your 12-Month Journey</p>\n<p>During the first 3 months: you will learn about our existing stack (GCP, BigQuery, Airbyte, dbt) and understand the current pain points in our data flow. You will identify and execute &quot;low-hanging fruit&quot; improvements to our product usage analytics, providing immediate value to the Product and GTM teams. You’ll begin designing the blueprint for our custom data pipelines and the migration strategy for moving our infrastructure into Terraform.</p>\n<p>Within 6 months: You will have deployed our new orchestration layer (e.g., Airflow or Dagster) and successfully transitioned our first set of custom pipelines to production. Collaborating with the Analytics Engineer, you will enable a unified view of our customer journey by successfully merging product usage data with CRM and billing data. 
At this point, a significant portion of our data infrastructure will be defined as code, reducing manual overhead and increasing deployment reliability.</p>\n<p>After 1 year: You will take full strategic ownership of the data platform and its long-term architecture. You will act as the go-to technical expert for the leadership team, advising on the scalability of new data-driven features. You will lay the groundwork for AI and Machine Learning initiatives by ensuring our data warehouse has the right quality controls, governance, and low-latency access patterns in place.</p>\n<p>What You’ll Be Doing</p>\n<p>Architect Scalable Infrastructure-as-Code: Take our existing foundations to the next level by migrating all GCP and BigQuery resources into Terraform. You will establish automated CI/CD patterns to ensure our entire data environment is reproducible, version-controlled, and enterprise-ready.</p>\n<p>Deploy State-of-the-Art Pipelines: Design, deploy, and operate high-quality production ELT pipelines. You will implement a modern orchestration layer (e.g., Airflow or Dagster) to build custom Python-based integrations while maintaining and optimizing our existing syncs.</p>\n<p>Champion Data Quality &amp; Performance: Act as the guardian of our data platform. You will implement rigorous testing and monitoring protocols to ensure data is accurate and timely. You will proactively identify BigQuery bottlenecks, optimizing query performance and resource utilization.</p>\n<p>Technical Roadmap &amp; Ownership: Scope and architect end-to-end data flows from production source to warehouse. Manage your own technical backlog, prioritizing infrastructure stability over technical debt. You will ensure platform security and SOC2 compliance through PII masking, data contracts, and robust access controls.</p>\n<p>Collaboration: You will work in a tight loop with the Analytics Engineer to turn raw data into actionable products. 
You will partner daily with DataOps and RevOps to understand business requirements, with occasional strategic syncs with DevOps and R&amp;D to align on production schema changes and global infrastructure standards.</p>\n<p>What You Bring</p>\n<ul>\n<li>Solid experience in Data Engineering, with a track record of building and evolving data ingestion infrastructure in cloud environments.</li>\n<li>The Modern Data Stack: Familiarity with dbt and Airbyte/Fivetran. You understand how these tools fit into a broader ecosystem.</li>\n<li>Expertise in BigQuery (partitioning, clustering, IAM) and the broader GCP ecosystem; Infrastructure-as-Code (Terraform).</li>\n<li>Hands-on experience with Airflow, Dagster, or similar orchestration tools. You know how to design DAGs that are resilient and easy to debug.</li>\n<li>DevOps practices in the data context: familiarity with CI/CD best practices as they apply to data (data testing, automated deployments).</li>\n<li>Programming: Expert-level Python and advanced SQL. You are comfortable writing clean, testable, and modular code.</li>\n<li>Comfortable in a fast-paced environment.</li>\n<li>Project management skills: capable of managing stakeholders, explaining complicated technical trade-offs to non-technical users, and taking care of own project scoping and backlog management.</li>\n<li>Fluency in English, both written and spoken, at a minimum C1 level.</li>\n</ul>\n<p>What We Offer</p>\n<ul>\n<li>Flexibility to work from home in the Netherlands and from our beautiful canal-side office in Amsterdam</li>\n<li>A chance to be part of and shape one of the most ambitious scale-ups in Europe</li>\n<li>Work in a diverse and multicultural team</li>\n<li>€1,500 annual training budget plus internal training</li>\n<li>Pension plan, travel reimbursement, and wellness perks</li>\n<li>28 paid holiday days + 2 additional days to relax in 2026</li>\n<li>Work from anywhere for 4 weeks/year</li>\n<li>An inclusive and international work environment with a whole lot of fun thrown in!</li>\n<li>Apple MacBook and tools</li>\n<li>€200 Home Office budget</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_21f5f6c3-734","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Tellent","sameAs":"https://careers.tellent.com","logo":"https://logos.yubhub.co/careers.tellent.com.png"},"x-apply-url":"https://careers.tellent.com/o/data-engineer","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":"EUR 70000–90000 / year","x-skills-required":["Data Engineering","Cloud environments","dbt","Airbyte/Fivetran","BigQuery","GCP ecosystem","Infrastructure-as-Code","Terraform","Airflow","Dagster","Python","SQL","CI/CD best practices","DevOps practices"],"x-skills-preferred":[],"datePosted":"2026-04-18T22:12:06.548Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Amsterdam"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Data Engineering, Cloud environments, dbt, Airbyte/Fivetran, BigQuery, GCP ecosystem, Infrastructure-as-Code, Terraform, Airflow, Dagster, Python, SQL, CI/CD best practices, DevOps practices","baseSalary":{"@type":"MonetaryAmount","currency":"EUR","value":{"@type":"QuantitativeValue","minValue":70000,"maxValue":90000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_e8aabc91-c80"},"title":"Assistant Manager of Data Analytics","description":"<p>We are seeking an experienced professional to join our team in Shanghai. 
As Assistant Manager of Data Analytics, you will focus on using data and analytics to drive business activities and outcomes that improve or transform customer strategy, customer segmentation, predictive models, and marketing campaigns.</p>\n<p>Principal Responsibilities: The role holder will conduct customer strategy analysis focusing on acquisition, activation, retention, conversion, and LTV, and deliver actionable insights. Build and maintain customer segmentation frameworks to support targeted and personalized marketing and operations. Leverage advanced data analytics tools and methodologies to develop, validate, and optimize predictive models, contributing to the generation of high-quality leads. Analyze customer journeys, conversion funnels, and drop-off points to identify bottlenecks and recommend experience improvements. Evaluate the performance of marketing campaigns, membership programs, loyalty initiatives, and promotional strategies by measuring ROI, conversion rate, and engagement metrics. Partner with product, marketing, operations, and customer teams to translate data insights into executable strategies and drive business decisions. Support the business team&#39;s campaign needs, including RM lead generation and manual SMS outreach. Develop and maintain customer-focused dashboards, KPIs, and reporting systems.</p>\n<p>To be successful in the role, you should meet the following requirements: Minimum of 5 years&#39; experience in one or more areas of data/business analytics in the financial or digital domains. Demonstrated experience in processing and analyzing large amounts of data using one of these: Python, R, SQL, or SAS; on environments such as AWS, Google Cloud, or Hadoop. Knowledge and experience in AI, big data, machine learning, or predictive algorithms, statistics modeling, and data mining. Excellent communication and teamwork skills, able to collaborate effectively with different departments and stakeholders. 
Strong problem-solving skills and innovative thinking, able to translate complex business problems into data analytics solutions. Proven experience in one or more of: customer segmentation, digital marketing, data science, portfolio analytics, use of open-source data in analyses. Good English communication skills, able to collaborate effectively with domestic and international teams.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_e8aabc91-c80","directApply":true,"hiringOrganization":{"@type":"Organization","name":"HSBC International Wealth and Premier Banking","sameAs":"https://portal.careers.hsbc.com","logo":"https://logos.yubhub.co/portal.careers.hsbc.com.png"},"x-apply-url":"https://portal.careers.hsbc.com/careers/job/563774610677890","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Python","R","SQL","SAS","AWS","Google Cloud","Hadoop","AI","big data","machine learning","predictive algorithms","statistics modeling","data mining"],"x-skills-preferred":[],"datePosted":"2026-04-18T22:11:33.642Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Shanghai"}},"employmentType":"FULL_TIME","occupationalCategory":"Finance","industry":"Finance","skills":"Python, R, SQL, SAS, AWS, Google Cloud, Hadoop, AI, big data, machine learning, predictive algorithms, statistics modeling, data mining"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_62c461dc-a98"},"title":"Lead Cloud Engineer","description":"<p>For Digital Hub Warsaw, we&#39;re looking for a Lead Cloud Engineer to join our team. 
As a visionary company, we&#39;re driven to solve the world&#39;s toughest challenges and strive for a world where &#39;Health for all, Hunger for none&#39; is no longer a dream, but a real possibility.</p>\n<p>We&#39;re building an enterprise-grade Infrastructure Operations Platform named VOPs to facilitate the most complex IT infrastructure operations for all IT teams at Bayer globally. Your responsibilities will include:</p>\n<p>Planning and Design: Join the team responsible for planning and running our VOPs platform. Leadership: Mentor a team of engineers, providing guidance and support in the implementation of cloud solutions. Collaboration with Stakeholders: Work closely with Squad Leads and other stakeholders to understand requirements and align integration strategies with business goals. Technical Oversight: Ensure that solutions are scalable, reliable, maintainable, and secure, adhering to best practices in IT architecture and in line with Bayer&#39;s strategy. Documentation and Standards: Create, maintain, and review comprehensive documentation for processes, standards, and best practices. Intercultural Communication: Foster an environment of open communication and collaboration among diverse teams across different geographical locations.</p>\n<p>Our requirements include: Degree in Computer Science, Information Technology, or a related field, or equivalent practical experience as an IT engineer. At least 6 years of experience in Azure (other clouds will be a plus). Proficiency in IT Architecture &amp; design, specifically in infrastructure automation, provisioning, and maintenance. Strong analytical skills with the ability to troubleshoot and resolve technical issues effectively, even under pressure. Familiarity with IaC (e.g., Terraform) and strong proficiency in Python. Linux command line tools and shell scripting. Experience with building IT systems in regulated environments. 
Integration and Automation Expertise: Knowledge of CI/CD processes and experience in building and deploying integration solutions (Azure DevOps, GitHub Repos, and GitHub Actions). Excellent verbal and written communication skills, with the ability to present complex technical information to non-technical stakeholders. Experience with API management and/or design will be appreciated. Intercultural Competence: Ability to work collaboratively in a multicultural environment, respecting diverse perspectives and fostering teamwork, establishing and maintaining a robust professional network. Language Proficiency: Fluent in English, both spoken and written.</p>\n<p>What we offer includes: A flexible, hybrid work model. Great workplace in a new modern office in Warsaw. Career development, 360° Feedback &amp; Mentoring programme. Wide access to professional development tools, trainings, &amp; conferences. Company Bonus &amp; Reward Structure. VIP Medical Care Package (including Dental &amp; Mental health). Holiday allowance (&#39;Wczasy pod gruszą&#39;). Life &amp; Travel Insurance. Pension plan. Co-financed sport card. FitProfit. Meals Subsidy in Office. Additional days off. Budget for Home Office Setup &amp; Maintenance. Access to Company Game Room equipped with table tennis, soccer table, Sony PlayStation 5, and Xbox Series X consoles setup with premium game passes, and massage chairs. 
Tailor-made support in relocation to Warsaw when needed.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_62c461dc-a98","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Bayer","sameAs":"https://talent.bayer.com","logo":"https://logos.yubhub.co/talent.bayer.com.png"},"x-apply-url":"https://talent.bayer.com/careers/job/562949973780545","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Azure","IT Architecture & design","Infrastructure automation","Provisioning","Maintenance","IaC (Terraform)","Python","Linux command line tools","Shell scripting","CI/CD processes","Azure DevOps","GitHub Repos","GitHub Actions","API management","API design"],"x-skills-preferred":[],"datePosted":"2026-04-18T22:11:27.474Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Warsaw"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Azure, IT Architecture & design, Infrastructure automation, Provisioning, Maintenance, IaC (Terraform), Python, Linux command line tools, Shell scripting, CI/CD processes, Azure DevOps, GitHub Repos, GitHub Actions, API management, API design"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_efa9d52e-e4e"},"title":"Consultant Specialist","description":"<p>Join HSBC and discover how valued you&#39;ll be in a career where you can make a real impression. We are currently seeking an experienced professional to join our team in the role of Consultant Specialist.</p>\n<p>As a Consultant Specialist, you will be responsible for understanding release cycles within GPE and aligned testing associated with it. 
You will also:</p>\n<ul>\n<li>Manage release deployments across various environments.</li>\n<li>Manage automation initiatives required for the stable functioning of the environments, such as health checks and monitoring.</li>\n<li>Triage and escalate high-priority incidents to get focused resolutions.</li>\n<li>Reduce overall downtime for end-to-end testing by identifying opportunities in environment upgrades.</li>\n<li>Communicate with regional/country project teams, technical leads, and Asset teams.</li>\n<li>Drive root cause analysis with the involved teams.</li>\n<li>Analyse issues monthly and run continuous improvement cycles for incidents.</li>\n<li>Raise engagement with partner applications and other teams to facilitate issue resolution, engaging and driving resolution where necessary and supporting SMEs from application teams.</li>\n<li>Report updates on ongoing issues to stakeholders, including executive-level management.</li>\n</ul>\n<p>You will also be responsible for infrastructure management activities, which comprise critical vulnerability fixing, OS/DB/MQ patching, certificate renewals, etc. 
Additionally, you will lead and mentor the team to achieve the above responsibilities successfully.</p>\n<p>Knowledge &amp; Experience / Qualifications:</p>\n<ul>\n<li>Must have strong UNIX and shell scripting experience.</li>\n<li>Basic knowledge of middleware products like WAS/MQ.</li>\n<li>Experience with DevOps tools like Jenkins, GitHub, etc.</li>\n<li>Experience with Control-M and Connect Direct (C:D).</li>\n<li>Experience with deployment / change pipelines such as CI/CD.</li>\n<li>Flexibility to work in shifts, on weekends, and after work hours, and to provide on-call support as per the needs of the project.</li>\n<li>Good understanding of payment schemes and e2e flows for US scheme payments.</li>\n<li>Strong communication skills (verbal, written, and presentation of complex information and data).</li>\n<li>Stakeholder management and the ability to work in a dynamic environment.</li>\n</ul>\n<p>Time management - the ability to prioritize project criticality based on requirements and business needs. Strong analytical skills supported by good decision-making and problem-solving skills and attitude. Ability to work independently with a hands-on approach. Good project management skills. Knowledge of a programming language such as Java or Python. Knowledge of automation tools. Knowledge of multiple clearing systems. 
Hands-on experience with CI/CD pipelines /51/LP/WX</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_efa9d52e-e4e","directApply":true,"hiringOrganization":{"@type":"Organization","name":"HSBC Software Development (GuangDong) Limited","sameAs":"https://portal.careers.hsbc.com","logo":"https://logos.yubhub.co/portal.careers.hsbc.com.png"},"x-apply-url":"https://portal.careers.hsbc.com/careers/job/563774610678275","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["UNIX","shell scripting","middleware products","DevOps tools","Control-M","Connect Direct","deployment / change pipelines","CI / CD","Payment schemes","e2e flows","US scheme payments","communication skills","stakeholder Management","dynamic environment","project criticality","analytical skills","decision making","problem solving skills","hands-on approach","project management skills","programming language","Java","Python","Automation tools","multiple clearing systems","CI/CD pipelines"],"x-skills-preferred":[],"datePosted":"2026-04-18T22:11:25.401Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Guangzhou"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Finance","skills":"UNIX, shell scripting, middleware products, DevOps tools, Control-M, Connect Direct, deployment / change pipelines, CI / CD, Payment schemes, e2e flows, US scheme payments, communication skills, stakeholder Management, dynamic environment, project criticality, analytical skills, decision making, problem solving skills, hands-on approach, project management skills, programming language, Java, Python, Automation tools, multiple clearing systems, CI/CD 
pipelines"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_b2aae11e-f20"},"title":"Sr Genome Editing Operations Scientist","description":"<p>As a Genome Editing Operations Scientist at Bayer Crop Science, you will guide the development of an increasingly efficient gene editing pipeline by building connected data systems that drive decisions. You will connect disparate data sources and leverage key advancement data to group projects, reagents, and samples, using this connected data system to deliver models that optimize resource use and pipeline capacity by integrating data awareness across lab, greenhouse, and field operations.</p>\n<p>Your primary responsibilities will be to:</p>\n<ul>\n<li>Guide the development of highly connected data systems that enable data-driven, model-based analytics to improve pipeline effectiveness and efficiency;</li>\n<li>Work with multifunctional teams to enable data connectivity across the editing pipeline, integrating information from lab, greenhouse, and field operations;</li>\n<li>Collaborate with partner teams across Crop Science (Gene Editing, IT Enterprise, Data and Engineering) to automate decision making and improve operational efficiency to accelerate development of gene-edited products;</li>\n<li>Serve as a key communicator translating business data knowledge and operational workflows into clear technical implementation plans for data scientists, data engineers, and developers;</li>\n<li>Demonstrate autonomy in building relationships and networks within your unit and across functions, most often with members of the Crop Genome Editing team and closely aligned partner teams;</li>\n<li>Act as a consultant to leadership and colleagues on digital strategy and data-driven operations through clear, organized, and influential communication;</li>\n<li>Actively build your own acumen in biology, genome design, and digital operations while sharing best practices 
and learnings with the broader Biology and Genome Design community.</li>\n</ul>\n<p>We seek an incumbent who possesses the following qualifications:</p>\n<ul>\n<li>PhD in Computational Biology, Computer Science and Engineering, or another relevant scientific field with a minimum of 6 years of relevant experience, or MS with 10+ years of relevant experience;</li>\n<li>Demonstrated track record developing data systems and pipelines that enable efficient product delivery and operational modeling;</li>\n<li>Demonstrated experience working collaboratively in cross-functional and cross-cultural teams to achieve common goals;</li>\n<li>Demonstrated experience leading and influencing activities of cross-functional teams without direct reporting relationships;</li>\n<li>Ability to lead and influence key stakeholders through challenges and opportunities and to facilitate solutions.</li>\n</ul>\n<p>Preferred qualifications include experience building data pipelines as a ML DevOps Engineer or Data Engineer, experience with Operations Research, and experience analyzing large biological datasets and developing analytical pipelines using Python, R, or similar software and languages.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_b2aae11e-f20","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Bayer Crop Science","sameAs":"https://talent.bayer.com","logo":"https://logos.yubhub.co/talent.bayer.com.png"},"x-apply-url":"https://talent.bayer.com/careers/job/562949976597728","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$114,400.00 - $171,600.00","x-skills-required":["Computational Biology","Computer Science and Engineering","Data Systems","Pipeline Development","Collaboration","Communication","Digital Strategy","Data-Driven Operations"],"x-skills-preferred":["ML DevOps Engineer","Data 
Engineer","Operations Research","Python","R","Cloud Development Environments"],"datePosted":"2026-04-18T22:11:11.496Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Chesterfield"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Manufacturing","skills":"Computational Biology, Computer Science and Engineering, Data Systems, Pipeline Development, Collaboration, Communication, Digital Strategy, Data-Driven Operations, ML DevOps Engineer, Data Engineer, Operations Research, Python, R, Cloud Development Environments","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":114400,"maxValue":171600,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_6d7fadcc-6fa"},"title":"Data Scientist Computer Vision","description":"<p>At Bayer, we&#39;re seeking a talented Data Scientist with deep learning and machine learning expertise focused on image-based data to help shape the future of agriculture. 
In this role, you&#39;ll join a dynamic team that supports the development of Bayer Crop Science next-generation products by applying computer vision to automate critical processes across the Plant Biotechnology organisation.</p>\n<p>The primary responsibilities of this role are to:</p>\n<p>Solve real agricultural problems using deep learning and AI across image and other data modalities, translating complex models into tangible business and scientific impact.</p>\n<p>Design and implement end-to-end machine learning pipelines for computer vision use cases, including segmentation, classification, detection, and multi-task learning.</p>\n<p>Prototype, evaluate, and iterate on cutting-edge architectures such as CNNs, Vision Transformers, foundational and large-scale vision models, ensuring state-of-the-art performance.</p>\n<p>Optimize models for accuracy, robustness, and inference efficiency, including experimentation with hyperparameters, compression, and deployment-oriented optimisations.</p>\n<p>Independently build scalable data pipelines for training, validation, and evaluation, including data ingestion, augmentation strategies, and active learning loops.</p>\n<p>Collaborate cross-functionally with product, data, and software engineering teams to integrate models into production systems and deliver reliable, maintainable solutions.</p>\n<p>Contribute to MLOps practices, including model versioning, deployment, monitoring, and retraining workflows using modern tooling and cloud-based platforms.</p>\n<p>Build strong cross-functional relationships and actively engage with the broader Data Science Community to share best practices, align on standards, and co-create innovative solutions.</p>\n<p>Present clear, compelling, and validated stories about experiments, results, and recommendations to peers, senior management, and internal customers to drive strategic and operational decisions.</p>\n<p>We seek an incumbent who possesses the following:</p>\n<p>M.S. 
with 2+ years of experience or Ph.D. in Computer Science, Electrical Engineering, or a related field with a focus on machine learning or computer vision.</p>\n<p>Proficiency in Python and experience with deep learning frameworks such as PyTorch or TensorFlow.</p>\n<p>Hands-on experience with modern computer vision architectures including models such as ResNet, UNet, DeepLab, YOLO, SegFormer, SAM, and Vision Transformers.</p>\n<p>Strong background in handling large-scale datasets and creating custom datasets, for example using frameworks such as Hugging Face Datasets.</p>\n<p>Solid understanding of core machine learning concepts including loss functions, regularization, optimisation, and learning rate scheduling.</p>\n<p>Experience developing and deploying models using cloud-based ML platforms such as AWS SageMaker.</p>\n<p>Familiarity with Unix environments, including bash, file systems, and core utilities.</p>\n<p>Strong engineering practices including use of Git, Docker, CI/CD pipelines, modular codebase design, and unit testing.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_6d7fadcc-6fa","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Bayer","sameAs":"https://talent.bayer.com","logo":"https://logos.yubhub.co/talent.bayer.com.png"},"x-apply-url":"https://talent.bayer.com/careers/job/562949976908666","x-work-arrangement":"onsite","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":"$109,370.40 - $164,055.60","x-skills-required":["Python","PyTorch","TensorFlow","ResNet","UNet","DeepLab","YOLO","SegFormer","SAM","Vision Transformers","Hugging Face Datasets","AWS SageMaker","Git","Docker","CI/CD pipelines","modular codebase design","unit 
testing"],"x-skills-preferred":[],"datePosted":"2026-04-18T22:11:10.602Z","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Manufacturing","skills":"Python, PyTorch, TensorFlow, ResNet, UNet, DeepLab, YOLO, SegFormer, SAM, Vision Transformers, Hugging Face Datasets, AWS SageMaker, Git, Docker, CI/CD pipelines, modular codebase design, unit testing","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":109370.4,"maxValue":164055.6,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_3f2cb60f-80a"},"title":"Senior Genome Editing Digital Enablement","description":"<p>At Bayer, we&#39;re seeking a Senior Genome Editing Digital Enablement Scientist to join our team. As a key partner and enabler of multi-disciplinary teams, you will design large-scale data systems and analytical pipelines that power our gene editing efforts. You will develop analytical tools that connect biological and operations data to support more efficient and accurate decisions across the gene editing pipeline. Your expertise in both computational biology and genetics will be essential in driving and coordinating multi-functional teams to enable robust data connectivity and interoperability across the editing pipeline.</p>\n<p>In this role, you will lead cross-functional projects with IT, Data Engineering, Genome Editing, and other partner teams to automate decision making and connect data to accelerate development of gene-edited products. You will translate complex biological processes into scalable digital workflows that support decision making, advancement, and prioritization within the gene editing program. 
Your strong ability to collaborate and lead in cross-functional, multi-disciplinary teams will be crucial in influencing without authority and aligning diverse stakeholders around shared digital solutions.</p>\n<p>As a member of the Biology and Genome Design community, you will actively build your own acumen and capabilities while sharing best practices with others. You will serve as a key communicator and thought partner on digital enablement strategy, clearly articulating requirements, trade-offs, and opportunities to scientific and non-scientific stakeholders.</p>\n<p>We seek an incumbent who possesses a PhD in Genomics, Computational Biology, Evolution, Quantitative Genetics, or another relevant scientific field with a minimum of 6 years of relevant experience, or an MS with 10+ years of experience developing data systems and analytics pipelines that enable product delivery using genetic and computational biology datasets.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_3f2cb60f-80a","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Bayer Crop Science","sameAs":"https://talent.bayer.com","logo":"https://logos.yubhub.co/talent.bayer.com.png"},"x-apply-url":"https://talent.bayer.com/careers/job/562949976613783","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$114,400.00 - $171,600.00","x-skills-required":["computational biology","genetics","data systems","analytical pipelines","Python","R","large-scale biological datasets"],"x-skills-preferred":["genome-wide association GWAs data","QTL analysis","candidate gene analysis","gene expression analysis","molecular marker development","pedigree 
data"],"datePosted":"2026-04-18T22:11:02.858Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Chesterfield"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Manufacturing","skills":"computational biology, genetics, data systems, analytical pipelines, Python, R, large-scale biological datasets, genome-wide association GWAs data, QTL analysis, candidate gene analysis, gene expression analysis, molecular marker development, pedigree data","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":114400,"maxValue":171600,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_c9e9064e-d23"},"title":"AVP (Full Stack) - CRM","description":"<p>Some careers have more impact than others. If you’re looking for a career where you can make a real impression, join HSBC and discover how valued you’ll be.</p>\n<p>We are currently seeking an experienced professional to join our team in the role of AVP - CRM. 
As a key member of our team, you will be responsible for designing, formulating, implementing, and maintaining relevant web applications using Java/Python/ReactJS for customer relationship management and business insight.</p>\n<p>Key Responsibilities:</p>\n<ul>\n<li>Design and develop automation solutions using Python to streamline and increase productivity of existing operation processes.</li>\n<li>Manage the timely delivery of output, and effectively communicate with all stakeholders.</li>\n<li>Open-minded, be ready to be challenged in a dynamic environment.</li>\n<li>Possess leadership skills to prepare a data analyst to complete decision-making and problem-solving tasks.</li>\n</ul>\n<p>Requirements:</p>\n<ul>\n<li>University degree in Business Administration, Computer Science, Mathematics, Statistics or other related discipline with minimum 8 years&#39; working experience on data analytics.</li>\n<li>With solid experience of web application development, process automation, campaign leads / event triggers deployment, customer data analysis to drive omni-channel customer life cycle management.</li>\n<li>Strong knowledge on backend programming such as Java Springboot, Python FastAPI, restful API design and implementation, and React framework for frontend SPA, as well as database manipulation is a must.</li>\n<li>Experience in banking industry, customer relationship management, batch/real-time lead generation and decisioning, enterprise solution is a plus.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_c9e9064e-d23","directApply":true,"hiringOrganization":{"@type":"Organization","name":"HSBC","sameAs":"https://portal.careers.hsbc.com","logo":"https://logos.yubhub.co/portal.careers.hsbc.com.png"},"x-apply-url":"https://portal.careers.hsbc.com/careers/job/563774607947805","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Java","Python","ReactJS","Springboot","FastAPI","restful API design","database manipulation"],"x-skills-preferred":[],"datePosted":"2026-04-18T22:10:55.224Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Guangzhou"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Finance","skills":"Java, Python, ReactJS, Springboot, FastAPI, restful API design, database manipulation"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_79072a0c-85b"},"title":"Behavioral Data Science Intern - Agentic AI & People Analytics","description":"<p>Where do you want to go? What do you want to achieve? How would you like to get involved? At Bayer, we bring together multi-talents and specialists to feed the world, slow climate change, and create healthier, more sustainable lives for all.</p>\n<p>This is the opportunity to start your career with a global leader committed to HealthForAll and HungerForNone. Bring your ideas, skills, and passion with you. Your career starts here.</p>\n<p>Are you passionate about AI, data science, and behavioural insights? Join our Talent Impact team and apply your technical skills to projects that combine machine learning, generative AI, and behavioural science to improve how people work and develop. 
This internship offers hands-on experience in a supportive environment where you’ll learn, contribute, and make an impact.</p>\n<p>Your tasks and educational objectives:</p>\n<ul>\n<li>Work with HR and behavioural data to create structured, analysis-ready datasets for people analytics.</li>\n<li>Support the development and testing of agentic AI workflows (including LLM-based tools) that support HR decision-making.</li>\n<li>Help to build and evaluate machine learning models to explore workforce trends, learning behaviours, and engagement.</li>\n<li>Together with team members, create dashboards and visualisations that turn complex data into actionable insights for HR and business partners.</li>\n<li>Apply modern data workflows using Databricks, GitHub Spaces, and cloud platforms (Azure or AWS).</li>\n<li>Collaborate with experienced mentors and participate in small experiments to measure impact and share findings.</li>\n</ul>\n<p>Who you are:</p>\n<ul>\n<li>Python programming skills for data processing, modelling, and AI workflows.</li>\n<li>Hands-on experience with Generative AI (GenAI) or LLM-based systems (academic projects or internships count).</li>\n<li>Familiarity with cloud platforms (Azure or AWS), with a focus on Databricks and GitHub Spaces for collaborative development.</li>\n<li>Solid foundation in data science and machine learning.</li>\n<li>Strong interest in behavioural science, people analytics, and HR.</li>\n<li>Currently enrolled in a Master’s or advanced Bachelor’s program in data science, computer science, cognitive science, psychology, behavioural economics, neuroscience, or a related field.</li>\n<li>Curiosity, willingness to learn, and ability to work on-site in Leverkusen.</li>\n<li>Fluent English, written and spoken.</li>\n</ul>\n<p>What we offer:</p>\n<p>Our benefits package is flexible, appreciative, and tailored to your lifestyle, because what matters to you, matters to us!</p>\n<ul>\n<li>For a full-time position, you can expect an attractive 
salary of € 2,214 gross per month.</li>\n<li>Depending on the nature of your job, flexible work arrangements can be made in alignment with your manager.</li>\n<li>We support your growth through access to professional development and learning opportunities, such as LinkedIn Learning and our language learning platform Education First.</li>\n<li>As one of our perks, our Corporate Benefits program grants you access to sales discounts from more than 150 brands.</li>\n<li>We embrace diversity by providing an inclusive work environment in which you are welcomed, supported, and encouraged to bring your whole self to work.</li>\n</ul>\n<p>Ever feel burnt out by bureaucracy? Us too. That’s why we’re changing the way we work, for higher productivity, faster innovation, and better results. We call it Dynamic Shared Ownership (DSO). Learn more about what DSO will mean for you in your new role here https://www.bayer.com/en/strategy/strategy</p>\n<p>Our Mission &amp; Strategy:</p>\n<p>Through Dynamic Shared Ownership, we’re putting an end to the hierarchical model and putting more power in the hands of the innovators and creators at Bayer. Ready to join us? 
Apply now and start your 6-month learning journey in Leverkusen!</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_79072a0c-85b","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Bayer","sameAs":"https://talent.bayer.com","logo":"https://logos.yubhub.co/talent.bayer.com.png"},"x-apply-url":"https://talent.bayer.com/careers/job/562949975182354","x-work-arrangement":"onsite","x-experience-level":"entry","x-job-type":"internship","x-salary-range":null,"x-skills-required":["Python","Generative AI","LLM-based systems","Cloud platforms (Azure or AWS)","Databricks","GitHub Spaces","Data science","Machine learning"],"x-skills-preferred":[],"datePosted":"2026-04-18T22:10:44.663Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Leverkusen"}},"employmentType":"INTERN","occupationalCategory":"Engineering","industry":"Manufacturing","skills":"Python, Generative AI, LLM-based systems, Cloud platforms (Azure or AWS), Databricks, GitHub Spaces, Data science, Machine learning"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_c6bfc6b4-74f"},"title":"Senior Data Scientist - Marketing (all genders)","description":"<p>Join our Business Intelligence Department, a multidisciplinary group of Data Scientists, Analysts, and Data Engineers. Together, we build machine learning and analytics products that directly influence GMV, conversion, and retention.</p>\n<p>Within the department, we’re building a new Marketing Analytics team and are looking for a Senior Data Scientist to drive its data science initiatives. 
In this role, you’ll work closely with Analysts, Engineers, and Marketing stakeholders to develop and productionize advanced machine learning, statistical, and predictive models that improve marketing performance and drive measurable company growth.</p>\n<p>As a Senior Data Scientist – Marketing, you’ll take strong ownership of data science initiatives that directly shape our marketing strategy and growth. You will:</p>\n<p>Partner closely with Marketing, Marketing Analytics, and Marketing Technology to identify opportunities and translate business questions into scalable data science solutions.</p>\n<p>Lead the development of high-impact machine learning and statistical models for marketing use cases such as channel allocation, ad bidding, churn prediction, lifetime value, revenue attribution, and business metrics forecasting.</p>\n<p>Work end-to-end - from translating business questions into hypotheses to researching, building, validating, and deploying models.</p>\n<p>Run experiments and iterate in production: design A/B tests, monitor model performance, and continuously improve based on measured impact.</p>\n<p>Advance our MLOps practices with CI/CD pipelines, retraining workflows, lineage tracking, and documentation.</p>\n<p>Help define the team&#39;s roadmap and ways of working as a founding member of Marketing Analytics - your input will help shape this function.</p>\n<p>Act as a senior role model in the team, sharing best practices and helping raise the bar for data science at Holidu.</p>\n<p>We&#39;re looking for someone with 5+ years of experience as a Data Scientist, with clear ownership of projects that delivered measurable business impact. 
You should have a degree in Machine Learning, Computer Science, Mathematics, Physics, or a related field, and strong expertise in machine learning, statistics, and predictive analytics, with hands-on experience using Python and SQL.</p>\n<p>Experience with marketing data science use cases such as attribution modeling, customer lifetime value prediction, churn modeling, or bid optimization is also required. You should have a solid understanding of marketing concepts across channels (e.g. Performance Marketing, SEO, CRM, Affiliate) and how data science can improve them.</p>\n<p>Additionally, you should have experience working with modern data stacks, ideally including AWS (Redshift, Athena, S3), Airflow, dbt, and Git. A collaborative mindset paired with great communication skills is essential, as you&#39;ll need to work with diverse stakeholders and explain complex topics in a simple way.</p>\n<p>AI proficiency is also a plus, as you&#39;ll be comfortable using AI to enhance coding, planning, and monitoring, and successfully integrating AI tools (such as Claude code, Codex, Copilot, etc.) 
into your workflow and teaching others to use them efficiently.</p>\n<p>If you&#39;re excited about the opportunity to shape the future of travel with products used by millions of guests and thousands of hosts, apply now!</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_c6bfc6b4-74f","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Holidu Hosts GmbH","sameAs":"https://holidu.jobs.personio.com","logo":"https://logos.yubhub.co/holidu.jobs.personio.com.png"},"x-apply-url":"https://holidu.jobs.personio.com/job/2510157","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"Full-time","x-salary-range":null,"x-skills-required":["Machine Learning","Statistics","Predictive Analytics","Python","SQL","Marketing Data Science","Attribution Modeling","Customer Lifetime Value Prediction","Churn Modeling","Bid Optimization"],"x-skills-preferred":["AI","CI/CD Pipelines","Retraining Workflows","Lineage Tracking","Documentation","Airflow","dbt","Git"],"datePosted":"2026-04-18T22:10:24.739Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Munich, Germany"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Machine Learning, Statistics, Predictive Analytics, Python, SQL, Marketing Data Science, Attribution Modeling, Customer Lifetime Value Prediction, Churn Modeling, Bid Optimization, AI, CI/CD Pipelines, Retraining Workflows, Lineage Tracking, Documentation, Airflow, dbt, Git"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_52261e57-a37"},"title":"Senior Software Engineer - Revenue Management (all genders)","description":"<p>You&#39;ll be part of our new Dynamic Pricing &amp; Revenue Management team, working alongside a Data Scientist and a Data Analyst. 
Together, you will work towards one core goal: helping hosts improve occupancy and earnings through a smart, dynamic, and data-driven pricing strategy.</p>\n<p>You&#39;ll work with modern tooling, a cross-functional team, and teammates who care deeply about impact, collaboration, and learning together.</p>\n<p>As a Senior Software Engineer - Revenue Management, you&#39;ll be the engineering backbone that enables our Data Scientists to move from experimentation to production. You bridge the gap between data science models and reliable, scalable production systems.</p>\n<p>Your key responsibilities will include:</p>\n<ul>\n<li>Supporting model deployment and serving: help deploy pricing and demand models into production, building and maintaining APIs and serving infrastructure.</li>\n</ul>\n<ul>\n<li>Building and operating production pipelines: ensure data flows reliably from source to model to output, with proper monitoring and alerting.</li>\n</ul>\n<ul>\n<li>Collaborating cross-functionally: work closely with Data Scientists, Analysts, and Engineering teams to turn prototypes into production-ready solutions.</li>\n</ul>\n<ul>\n<li>Owning infrastructure and tooling: set up and maintain the environments, CI/CD pipelines, and infrastructure that the team depends on.</li>\n</ul>\n<ul>\n<li>Ensuring operational excellence by implementing monitoring, automated testing, and observability across the team&#39;s production systems.</li>\n</ul>\n<ul>\n<li>Migrating and productionizing POC: turn experimental code into robust, maintainable Python applications.</li>\n</ul>\n<ul>\n<li>Ensuring data quality, consistency, and documentation across revenue management metrics and datasets.</li>\n</ul>\n<p>You don&#39;t need to meet every requirement , we&#39;re looking for strong fundamentals, ownership, and the motivation to grow.</p>\n<ul>\n<li>4+ years of experience in Software Engineering, Data Engineering, DevOps, or MLOps.</li>\n</ul>\n<ul>\n<li>Strong hands-on skills in Python 
, you write clean, production-quality code.</li>\n</ul>\n<ul>\n<li>Experience with CI/CD, Docker, and infrastructure-as-code (e.g., Terraform).</li>\n</ul>\n<ul>\n<li>Familiarity with cloud platforms (AWS preferred) and deploying services in production.</li>\n</ul>\n<ul>\n<li>Exposure to or interest in ML model deployment (MLflow, SageMaker, or similar) is a strong plus.</li>\n</ul>\n<ul>\n<li>Desire to learn and use cutting-edge LLM tools and agents to improve your and the entire team&#39;s productivity.</li>\n</ul>\n<ul>\n<li>A proactive, hands-on mindset: you take ownership, spot problems, and drive solutions forward.</li>\n</ul>\n<p>Impact: Shape the future of travel with products used by millions of guests and thousands of hosts. At Holidu ideas become products, data drives decisions, and iteration fuels fast learning. Your work matters - and you’ll see the impact.</p>\n<p>Learning: Grow professionally in a culture that thrives on curiosity and feedback. You’ll learn from outstanding colleagues, collaborate across disciplines, and benefit from mentorship, and personal learning budgets - with a strong focus on AI.</p>\n<p>Great People: Join a team of smart, motivated and international colleagues who challenge and support each other. We celebrate wins and keep our culture fun, ambitious and human. Our customers are guests and hosts - people we can all relate to - making work meaningful and energizing.</p>\n<p>Technology: Work in a modern tech environment. You’ll experience the pace of a scale-up combined with the stability of a proven business model, enabling you to build, test, and improve continuously.</p>\n<p>Flexibility: Work a hybrid setup with 50% in-office time for collaboration, and spend up to 8 weeks a year from other inspiring locations. 
You’ll stay connected through regular events and meet-ups across our almost 30 offices.</p>\n<p>Perks on Top: Of course, we also offer travel benefits, gym discounts, and other perks to keep you energized - but what truly sets us apart is the chance to grow in a dynamic industry, alongside amazing people, while having fun along the way.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_52261e57-a37","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Holidu Hosts GmbH","sameAs":"https://holidu.jobs.personio.com","logo":"https://logos.yubhub.co/holidu.jobs.personio.com.png"},"x-apply-url":"https://holidu.jobs.personio.com/job/2597551","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"Full-time","x-salary-range":null,"x-skills-required":["Python","CI/CD","Docker","Infrastructure-as-code","Cloud platforms","ML model deployment"],"x-skills-preferred":["LLM tools and agents","Data science models","Reliable and scalable production systems"],"datePosted":"2026-04-18T22:10:23.434Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Munich, Germany"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, CI/CD, Docker, Infrastructure-as-code, Cloud platforms, ML model deployment, LLM tools and agents, Data science models, Reliable and scalable production systems"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_02c944ab-f9e"},"title":"Senior Data Scientist - Dynamic Pricing & Revenue Management (all genders)","description":"<p>You&#39;ll be part of our new Dynamic Pricing &amp; Revenue Management team, working alongside a Data Analyst and Data Engineer. 
Together, you will work towards one core goal: helping hosts improve occupancy and earnings through a smart, dynamic, and data-driven pricing strategy.</p>\n<p>You&#39;ll work with a large and rich dataset, modern tooling, and teammates who care deeply about impact, collaboration, and learning together. This role is based in Munich with 3 office days per week.</p>\n<p>As a Senior Data Scientist, you&#39;ll take ownership of complex pricing and forecasting models and help us turn analytical ideas into real-world impact for hosts and Holidu. You will:</p>\n<ul>\n<li>Translate business questions into scientific, testable models and clear recommendations.</li>\n<li>Design, build and own machine learning, forecasting and predictive models for revenue management topics such as demand forecasting, price sensitivity, and conversion probability.</li>\n<li>Explore and develop dynamic pricing strategies (e.g. weekend pricing, early discounts, regional similarities) using data and experimentation.</li>\n<li>Collaborate closely with Data Analysts and Data Engineers to define datasets, features, and model requirements.</li>\n<li>Drive discussions around model choice, assumptions, and trade-offs, always keeping business impact in mind.</li>\n<li>Monitor model performance, iterate on results, and continuously improve accuracy and relevance.</li>\n<li>Act as a senior sparring partner in the team, sharing knowledge and raising the bar for data science practices.</li>\n</ul>\n<p>You&#39;ll have 5+ years of experience as a Data Scientist, solving a variety of business problems. You&#39;ll have a strong background in statistics, forecasting, and machine learning. You&#39;ll be hands-on with Python and SQL, and confident working with large datasets.
You&#39;ll have a strong interest in pricing, revenue optimization, or marketplace dynamics (prior revenue management experience is a plus, not a must).</p>\n<p>You&#39;ll be a self-starter: proactive, hungry to learn, and eager to make an impact. You&#39;ll be able to communicate complex ideas clearly and collaborate with technical and non-technical partners.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_02c944ab-f9e","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Holidu Hosts GmbH","sameAs":"https://holidu.jobs.personio.com","logo":"https://logos.yubhub.co/holidu.jobs.personio.com.png"},"x-apply-url":"https://holidu.jobs.personio.com/job/2518625","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"Full-time","x-salary-range":null,"x-skills-required":["Python","SQL","Machine Learning","Forecasting","Predictive Modeling","Data Science","Data Analysis","Data Engineering"],"x-skills-preferred":["Dynamic Pricing","Revenue Optimization","Marketplace Dynamics","Cloud Computing","Big Data","Data Visualization"],"datePosted":"2026-04-18T22:10:08.998Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Munich, Germany"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, SQL, Machine Learning, Forecasting, Predictive Modeling, Data Science, Data Analysis, Data Engineering, Dynamic Pricing, Revenue Optimization, Marketplace Dynamics, Cloud Computing, Big Data, Data Visualization"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_f156ea4b-6a3"},"title":"Senior DataOps Engineer / Software Engineer - Revenue Management (all genders)","description":"<p>Join our Dynamic Pricing &amp; Revenue Management team as a Senior DataOps Engineer / Software 
Engineer. You&#39;ll work alongside a Data Scientist and a Data Analyst to develop a smart, dynamic, and data-driven pricing strategy. Our team uses modern tooling, including S3, Redshift, Athena, DuckDB, MLflow, SageMaker, Terraform, Docker, Jenkins, and AWS EKS.</p>\n<p>As a Senior DataOps Engineer / Software Engineer, you&#39;ll be the engineering backbone that enables our Data Scientists to move from experimentation to production. You&#39;ll bridge the gap between data science models and reliable, scalable production systems.</p>\n<p>Key responsibilities include:</p>\n<ul>\n<li>Supporting model deployment and serving: help deploy pricing and demand models into production, building and maintaining APIs and serving infrastructure.</li>\n<li>Building and operating production pipelines: ensure data flows reliably from source to model to output, with proper monitoring and alerting.</li>\n<li>Collaborating cross-functionally: work closely with Data Scientists, Analysts, and Engineering teams to turn prototypes into production-ready solutions.</li>\n<li>Owning infrastructure and tooling: set up and maintain the environments, CI/CD pipelines, and infrastructure that the team depends on.</li>\n<li>Ensuring operational excellence by implementing monitoring, automated testing, and observability across the team&#39;s production systems.</li>\n<li>Migrating and productionizing POCs: turn experimental code into robust, maintainable Python applications.</li>\n<li>Ensuring data quality, consistency, and documentation across revenue management metrics and datasets.</li>\n</ul>\n<p>We&#39;re looking for someone with 4+ years of experience in Software Engineering, Data Engineering, DevOps, or MLOps. You should have strong hands-on skills in Python, experience with CI/CD, Docker, and infrastructure-as-code (e.g., Terraform), familiarity with cloud platforms (AWS preferred), and experience deploying services in production.
Exposure to or interest in ML model deployment (MLflow, SageMaker, or similar) is a strong plus.</p>\n<p>Our team is passionate about using cutting-edge LLM tools and agents to improve productivity. We&#39;re looking for someone who is proactive and hands-on, takes ownership of problems, and drives solutions forward.</p>\n<p>Benefits include:</p>\n<ul>\n<li>Impact: Shape the future of travel with products used by millions of guests and thousands of hosts.</li>\n<li>Learning: Grow professionally in a culture that thrives on curiosity and feedback.</li>\n<li>Great People: Join a team of smart, motivated, and international colleagues who challenge and support each other.</li>\n<li>Technology: Work in a modern tech environment with the pace of a scale-up combined with the stability of a proven business model.</li>\n<li>Flexibility: Work a hybrid setup with 50% in-office time for collaboration, and spend up to 8 weeks a year from other inspiring locations.</li>\n<li>Perks on Top: Travel benefits, gym discounts, and other perks to keep you energized.</li>\n</ul>\n<p>If you&#39;re interested in joining our team, apply online on our careers page!
Your first travel contact will be Katharina from HR.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_f156ea4b-6a3","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Holidu Hosts GmbH","sameAs":"https://holidu.jobs.personio.com","logo":"https://logos.yubhub.co/holidu.jobs.personio.com.png"},"x-apply-url":"https://holidu.jobs.personio.com/job/2523360","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"Full-time","x-salary-range":null,"x-skills-required":["Python","CI/CD","Docker","Infrastructure-as-code","Cloud platforms","Deploying services in production"],"x-skills-preferred":["ML model deployment","LLM tools and agents"],"datePosted":"2026-04-18T22:10:00.244Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Munich, Germany"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, CI/CD, Docker, Infrastructure-as-code, Cloud platforms, Deploying services in production, ML model deployment, LLM tools and agents"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_52ea5e8c-da4"},"title":"Corporate Sales Associate","description":"<p>In compliance with applicable laws, HSBC is committed to employing only those who are authorised to work in the US. As a Corporate Sales Associate, you will support the Corporate Sales Chief Operating Officer (COO) function and Corporate Sales Team globally in Business Development, Sales Support, Client Service, Booking and execution, Market Research and Insights, Automation etc.</p>\n<p>The position holder will be part of the Corporate Sales US team supporting Front Office sales team, Management, COO through the provision of key client insights and services. 
You will work closely with the COO office, Corporate Sales regional/country heads and sales leads in onshore locations.</p>\n<p>Your responsibilities will include executing market research and market commentary write-ups, building actionable intelligence across the corporate client base, maintaining the Corporate Sales Marketing content hub, handling global stakeholders, incorporating external market information into the analytics function, preparing pre-meeting client packs for sales team members, and providing commentary on industry trends.</p>\n<p>You will also support Corporate Sales regional heads on both Business as Usual (BAU) data requests and other ad-hoc requests.</p>\n<p>As an HSBC employee, you will have access to tailored professional development opportunities to ensure you have the right skills for today and tomorrow. We offer a competitive pay and benefits package including a robust Wellness Hub, all in a welcoming and inclusive work environment.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_52ea5e8c-da4","directApply":true,"hiringOrganization":{"@type":"Organization","name":"HSBC","sameAs":"https://portal.careers.hsbc.com","logo":"https://logos.yubhub.co/portal.careers.hsbc.com.png"},"x-apply-url":"https://portal.careers.hsbc.com/careers/job/563774610372838","x-work-arrangement":"onsite","x-experience-level":"entry","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Direct experience in Banking and Financial Services/Research Companies/Banking Information Technology (IT), Business Analytics, Business Intelligence (BI) Reporting","Well versed on what is currently happening globally about regulations, FX spot, FX forwards, FX options, money market products, interest rates, Swaps and Non-Deliverable Forward (NDF)","Hands on use of Tableau, Alteryx, Qlik Sense or any other visualization tool","Knowledge in VBA, 
SQL, Python, automation tools","Able to write market commentaries, CCY Pair movement Summaries & Impact of key announcements to currency markets"],"x-skills-preferred":["Expert in Microsoft Office especially in Excel and PowerPoint","Flexibility to adapt to support Asia, Europe, Middle East, and Africa (EMEA), as well as US stakeholders, across different time zones","Able to work independently, proactively and against multiple deadlines"],"datePosted":"2026-04-18T22:09:54.946Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"New York"}},"employmentType":"FULL_TIME","occupationalCategory":"Finance","industry":"Finance","skills":"Direct experience in Banking and Financial Services/Research Companies/Banking Information Technology (IT), Business Analytics, Business Intelligence (BI) Reporting, Well versed on what is currently happening globally about regulations, FX spot, FX forwards, FX options, money market products, interest rates, Swaps and Non-Deliverable Forward (NDF), Hands on use of Tableau, Alteryx, Qlik Sense or any other visualization tool, Knowledge in VBA, SQL, Python, automation tools, Able to write market commentaries, CCY Pair movement Summaries & Impact of key announcements to currency markets, Expert in Microsoft Office especially in Excel and PowerPoint, Flexibility to adapt to support Asia, Europe, Middle East, and Africa (EMEA), as well as US stakeholders, across different time zones, Able to work independently, proactively and against multiple deadlines"
Together, you will work towards one core goal: helping hosts improve occupancy and earnings through a smart, dynamic, and data-driven pricing strategy.</p>\n<p><strong>Our Tech Stack</strong></p>\n<ul>\n<li>Data Storage &amp; Querying: S3, Redshift (with decentralized data sharing), Athena, and DuckDB.</li>\n<li>ML &amp; Model Serving: MLflow, SageMaker, and deployment APIs for model lifecycle management.</li>\n<li>Cloud &amp; DevOps: Terraform, Docker, Jenkins, and AWS EKS (Kubernetes) for scalable, resilient systems.</li>\n<li>Monitoring: ELK, Grafana, Looker, OpsGenie, and in-house tools for full visibility.</li>\n<li>Ingestion: Kafka-based event systems and tools like Airbyte and Fivetran for smooth third-party integrations.</li>\n<li>Automation &amp; AI: Extensive use of AI tools like Claude, Copilot, and Codex.</li>\n</ul>\n<p><strong>Your role in this journey</strong></p>\n<p>As a Data Ops Engineer – Revenue Management, you&#39;ll be the engineering backbone that enables our Data Scientists to move from experimentation to production. 
You bridge the gap between data science models and reliable, scalable production systems.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Support model deployment and serving: help deploy pricing and demand models into production, building and maintaining APIs and serving infrastructure.</li>\n<li>Build and operate production pipelines: ensure data flows reliably from source to model to output, with proper monitoring and alerting.</li>\n<li>Collaborate cross-functionally: work closely with Data Scientists, Analysts, and Engineering teams to turn prototypes into production-ready solutions.</li>\n<li>Own infrastructure and tooling: set up and maintain the environments, CI/CD pipelines, and infrastructure that the team depends on.</li>\n<li>Ensure operational excellence by implementing monitoring, automated testing, and observability across the team&#39;s production systems.</li>\n<li>Migrate and productionize POCs: turn experimental code into robust, maintainable Python applications.</li>\n<li>Ensure data quality, consistency, and documentation across revenue management metrics and datasets.</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<ul>\n<li>Impact: Shape the future of travel with products used by millions of guests and thousands of hosts.</li>\n<li>Learning: Grow professionally in a culture that thrives on curiosity and feedback.</li>\n<li>Great People: Join a team of smart, motivated, and international colleagues who challenge and support each other.</li>\n<li>Technology: Work in a modern tech environment.</li>\n<li>Flexibility: Work a hybrid setup with 50% in-office time for collaboration, and spend up to 8 weeks a year from other inspiring locations.</li>\n<li>Perks on Top: Of course, we also offer travel benefits, gym discounts, and other perks to keep you energized.</li>\n</ul>\n<p><strong>Experience</strong></p>\n<ul>\n<li>4+ years of experience in Software Engineering, Data Engineering, DevOps, or MLOps.</li>\n<li>Strong hands-on skills in 
Python; you write clean, production-quality code.</li>\n<li>Experience with CI/CD, Docker, and infrastructure-as-code (e.g., Terraform).</li>\n<li>Familiarity with cloud platforms (AWS preferred) and deploying services in production.</li>\n<li>Exposure to or interest in ML model deployment (MLflow, SageMaker, or similar) is a strong plus.</li>\n<li>Desire to learn and use cutting-edge LLM tools and agents to improve your and the entire team&#39;s productivity.</li>\n<li>A proactive, hands-on mindset: you take ownership, spot problems, and drive solutions forward.</li>\n</ul>\n<p><strong>How to apply</strong></p>\n<p>If you&#39;re excited about this opportunity, please submit your application on our careers page!</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_8b447835-74a","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Holidu Hosts GmbH","sameAs":"https://holidu.jobs.personio.com","logo":"https://logos.yubhub.co/holidu.jobs.personio.com.png"},"x-apply-url":"https://holidu.jobs.personio.com/job/2597559","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"Full-time","x-salary-range":null,"x-skills-required":["Python","CI/CD","Docker","Terraform","Cloud platforms (AWS preferred)","ML model deployment (MLflow, SageMaker, or similar)"],"x-skills-preferred":["AI tools like Claude, Copilot, and Codex","Data Storage & Querying (S3, Redshift, Athena, DuckDB)","ML & Model Serving (MLflow, SageMaker, deployment APIs)","Cloud & DevOps (Terraform, Docker, Jenkins, AWS EKS)","Monitoring (ELK, Grafana, Looker, OpsGenie, in-house tools)","Ingestion (Kafka-based event systems, Airbyte, Fivetran)"],"datePosted":"2026-04-18T22:09:42.352Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Munich, 
Germany"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, CI/CD, Docker, Terraform, Cloud platforms (AWS preferred), ML model deployment (MLflow, SageMaker, or similar), AI tools like Claude, Copilot, and Codex, Data Storage & Querying (S3, Redshift, Athena, DuckDB), ML & Model Serving (MLflow, SageMaker, deployment APIs), Cloud & DevOps (Terraform, Docker, Jenkins, AWS EKS), Monitoring (ELK, Grafana, Looker, OpsGenie, in-house tools), Ingestion (Kafka-based event systems, Airbyte, Fivetran)"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_39c55814-2f7"},"title":"Manager, Fraud Analytics","description":"<p>In this role, you will drive fraud analytics capability for United States IWPB by owning end-to-end analytic processes. You&#39;ll develop deep expertise in fraud data, detection performance, and client impact, turning insights into measurable reductions in fraud losses and friction while improving controls and decisioning.</p>\n<p>As our Manager, Fraud Analytics, you will:</p>\n<ul>\n<li>Define requirements, manage delivery cadence, document methodologies, and ensure controls/traceability across the analytics lifecycle</li>\n<li>Analyze fraud trends, typologies, and emerging threats using internal/external data to identify root causes and actionable interventions</li>\n<li>Monitor and improve key metrics (e.g., fraud loss rate, detection/true positive rate, false positives, client friction, alert volumes, operational capacity impacts)</li>\n<li>Recommend and test changes to rules, thresholds, segmentation, and model features</li>\n<li>Produce clear recommendations for Fraud Management, Operations, Product, Digital, and Technology, support implementation and post-change validation</li>\n<li>Create concise dashboards and narratives that connect fraud decisions to client experience and business outcomes</li>\n<li>Ensure analyses and 
changes are well-controlled, auditable, and aligned to relevant policies, model risk expectations, and regulatory considerations</li>\n</ul>\n<p>You&#39;ll likely have the following qualifications to succeed in this role:</p>\n<ul>\n<li>Analytics experience in fraud, financial crime, risk analytics, or payments (banking preferred)</li>\n<li>Advanced capability in SAS/SQL and Python or R</li>\n<li>Proven ability to translate complex analysis into business decisions and influence cross-functional partners without formal authority</li>\n<li>Experience with card, digital payments, digital authentication, account takeover, application fraud, or transaction monitoring</li>\n<li>Dashboarding experience (e.g., QlikSense, Tableau, Power BI)</li>\n<li>Analytical depth, curiosity, and structured problem-solving</li>\n<li>Data storytelling and stakeholder management</li>\n<li>Pragmatic, outcome-driven mindset</li>\n<li>Strong attention to detail and control discipline</li>\n<li>Collaborative working style</li>\n</ul>\n<p>As an HSBC employee, you will have access to tailored professional development opportunities to ensure you have the right skills for today and tomorrow. 
We offer a competitive pay and benefits package including a robust Wellness Hub, all in a welcoming and inclusive work environment.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_39c55814-2f7","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Intl Wealth & Premier Banking","sameAs":"https://portal.careers.hsbc.com","logo":"https://logos.yubhub.co/portal.careers.hsbc.com.png"},"x-apply-url":"https://portal.careers.hsbc.com/careers/job/563774610398923","x-work-arrangement":"onsite","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Analytics experience in fraud, financial crime, risk analytics, or payments","Advanced capability in SAS/SQL and Python or R","Proven ability to translate complex analysis into business decisions and influence cross-functional partners without formal authority","Experience with card, digital payments, digital authentication, account takeover, application fraud, or transaction monitoring","Dashboarding experience (e.g., QlikSense, Tableau, Power BI)"],"x-skills-preferred":[],"datePosted":"2026-04-18T22:09:35.474Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"New York"}},"employmentType":"FULL_TIME","occupationalCategory":"Finance","industry":"Finance","skills":"Analytics experience in fraud, financial crime, risk analytics, or payments, Advanced capability in SAS/SQL and Python or R, Proven ability to translate complex analysis into business decisions and influence cross-functional partners without formal authority, Experience with card, digital payments, digital authentication, account takeover, application fraud, or transaction monitoring, Dashboarding experience (e.g., QlikSense, Tableau, Power 
BI)"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_df410cc0-50b"},"title":"Equity Derivatives Structurer","description":"<p>As an Equities Structurer, you&#39;ll play a pivotal role in driving innovation and delivering tailored solutions for our clients. You&#39;ll collaborate closely with trading, quantitative, and sales teams to design, price, and implement structured equity products, ensuring we remain at the forefront of the market.</p>\n<p>Key Responsibilities:</p>\n<ul>\n<li>Drive digital transformation by leading digitalisation and automation initiatives to streamline business processes and enhance operational efficiency</li>\n<li>Enhance pricing infrastructure by developing and optimising pricing tools and systems, ensuring robust and scalable solutions for both indicative and live pricing of structured transactions</li>\n<li>Innovate products by designing and launching new structured products, including Structured Solutions, Risk Recycling, and Quantitative Investment Strategies (QIS)</li>\n<li>Deliver client solutions by partnering with sales and structuring teams to assess client needs and provide bespoke solutions aligned with their objectives</li>\n<li>Generate market analysis by creating thematic investment ideas based on evolving market conditions and trends</li>\n<li>Prepare pitchbooks by creating compelling marketing materials to promote proprietary indices and structured products</li>\n<li>Manage secondary market activities by overseeing secondary market pricing, including add-ons and unwind transactions</li>\n<li>Collaborate cross-functionally by working with traders and quantitative analysts to enhance pricing and back-testing infrastructure (Python and VBA)</li>\n<li>Provide business insight by delivering analytics and reporting to support business decision-making and strategic planning</li>\n</ul>\n<p>Your Qualifications:</p>\n<ul>\n<li>Market Knowledge: Strong 
understanding of equity derivative fundamentals</li>\n<li>Analytical Excellence: Exceptional analytical, problem-solving, and decision-making skills</li>\n<li>Communication Skills: Outstanding verbal and written communication abilities, with a talent for explaining complex concepts clearly</li>\n<li>Agility Under Pressure: Ability to manage multiple priorities and perform effectively in a fast-paced, high-pressure environment</li>\n<li>Technical Proficiency: Experience with Python and Excel/VBA is highly desirable</li>\n</ul>\n<p>As an HSBC employee, you will have access to tailored professional development opportunities to ensure you have the right skills for today and tomorrow. We offer a competitive pay and benefits package including a robust Wellness Hub, all in a welcoming and inclusive work environment.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_df410cc0-50b","directApply":true,"hiringOrganization":{"@type":"Organization","name":"HSBC","sameAs":"https://portal.careers.hsbc.com","logo":"https://logos.yubhub.co/portal.careers.hsbc.com.png"},"x-apply-url":"https://portal.careers.hsbc.com/careers/job/563774609816845","x-work-arrangement":"onsite","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["equity derivatives","digital transformation","pricing infrastructure","structured products","market analysis","pitchbooks","secondary market activities","collaboration","business insight","Python","Excel/VBA"],"x-skills-preferred":["data analysis","financial modeling","risk management"],"datePosted":"2026-04-18T22:09:29.168Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"New York"}},"employmentType":"FULL_TIME","occupationalCategory":"Finance","industry":"Finance","skills":"equity derivatives, digital transformation, pricing infrastructure, structured products, 
market analysis, pitchbooks, secondary market activities, collaboration, business insight, Python, Excel/VBA, data analysis, financial modeling, risk management"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_80d15de9-aa7"},"title":"Senior Data Scientist - Rankings & Recommendations (all genders)","description":"<p>Join our Business Intelligence Department, a multidisciplinary group of Data Scientists, Analysts, and Data Engineers.</p>\n<p>You will join a cross-functional Product team, Search Intelligence, which is responsible for optimizing ranking and recommendations for users visiting our website.</p>\n<p>You&#39;ll be part of the broader Data Science team, which operates across cross-functional domain teams - giving you access to shared knowledge, best practices, and collaboration opportunities beyond your domain.</p>\n<p>You’ll collaborate daily with Data Engineers, Analysts, Product Managers, and Back-end Engineers.</p>\n<p>You’ll report to the Team Lead, Data Science.</p>\n<p>Together, we turn data into actionable insights and innovative technology that powers how millions of guests find and book their perfect holiday home.</p>\n<p><strong>Our Tech Stack</strong></p>\n<ul>\n<li>Python • Airflow • dbt • AWS (SageMaker, Redshift, Athena) • MLflow</li>\n</ul>\n<p>The Ranking challenge at Holidu</p>\n<p>Holidu lists over 4 million vacation rental properties. 
Our ranking and personalization systems determine which of them our 70+ million annual users see, directly impacting search conversion and business results.</p>\n<p>What&#39;s live today:</p>\n<ul>\n<li>Multi-stage ranking pipeline: Reinforcement-learning-based cold ranking, contextual re-ranking, and personalized recommendations.</li>\n<li>Cold-start models for new properties with limited behavioral data.</li>\n<li>Personalized recommendations based on user browsing patterns.</li>\n</ul>\n<p>Some of the hard problems we&#39;re solving:</p>\n<ul>\n<li>Multi-objective optimization: Balancing user relevance, conversion probability, and business value.</li>\n<li>Personalization without history: Most users are anonymous or first-time visitors.</li>\n<li>Cold-start: A significant share of our inventory is new each quarter. How do we surface quality properties before we have behavioral data?</li>\n</ul>\n<p><strong>Your role in this journey</strong></p>\n<p>You&#39;ll shape the ranking and recommendation systems that millions of guests rely on to find their holiday home. 
With access to extensive datasets and modern ML infrastructure, you&#39;ll work end-to-end - from identifying opportunities and prototyping new approaches to shipping models to production and measuring their impact.</p>\n<ul>\n<li>Develop high-impact models and improvements for our ranking, recommendation, and personalization systems - with the freedom to explore new, creative approaches.</li>\n<li>Take models from conception to production, continuously monitor their performance, and iterate to enhance accuracy and efficiency.</li>\n<li>Design and run A/B tests as a core part of ranking development; success is measured by successful experiments per quarter and time-to-decision.</li>\n<li>Collaborate closely with Product Managers and Software Engineers to identify, prioritize, and ship ranking improvements.</li>\n<li>Ensure model reliability in production, measured by online/offline agreement, model and data drift KPIs, latency and uptime SLAs, and automated monitoring coverage.</li>\n<li>Advance our MLOps practices with CI/CD pipelines, retraining workflows, lineage tracking, and documentation.</li>\n<li>Demonstrate leadership in data science projects by driving technical direction, scoping initiatives, and guiding the team&#39;s prioritization and project execution.</li>\n</ul>\n<p><strong>Your backpack is filled with</strong></p>\n<ul>\n<li>5+ years of experience as a Data Scientist, with a proven track record of applying ML models to solve real business problems.</li>\n<li>Experience working on ranking models or recommender systems is a strong advantage.</li>\n<li>A degree in Machine Learning, Computer Science, Mathematics, Physics, or a related field.</li>\n<li>Strong foundations in statistics, predictive modeling, and machine learning techniques, with hands-on experience using Python and SQL.</li>\n<li>Experience with Airflow and 
dbt is a plus.</li>\n</ul>\n<ul>\n<li>Solid understanding of business operations and the ability to translate data insights into clear, actionable outcomes.</li>\n</ul>\n<ul>\n<li>A collaborative mindset and enthusiasm for using data to build world-class products that make a real impact.</li>\n</ul>\n<ul>\n<li>AI Proficiency: You are comfortable using AI to enhance coding, planning, and monitoring. This includes successfully integrating AI tools (such as Claude code, Codex, Copilot, etc.) into your workflow and teaching others to use them efficiently.</li>\n</ul>\n<p><strong>Our adventure includes</strong></p>\n<ul>\n<li>Impact: Shape the future of travel with products used by millions of guests and thousands of hosts. At Holidu ideas become products, data drives decisions, and iteration fuels fast learning. Your work matters - and you’ll see the impact.</li>\n</ul>\n<ul>\n<li>Learning: Grow professionally in a culture that thrives on curiosity and feedback. You’ll learn from outstanding colleagues, collaborate across disciplines, and benefit from mentorship, and personal learning budgets - with a strong focus on AI.</li>\n</ul>\n<ul>\n<li>Great People: Join a team of smart, motivated and international colleagues who challenge and support each other. We celebrate wins and keep our culture fun, ambitious and human. Our customers are guests and hosts - people we can all relate to - making work meaningful and energizing.</li>\n</ul>\n<ul>\n<li>Technology: Work in a modern tech environment. You’ll experience the pace of a scale-up combined with the stability of a proven business model, enabling you to build, test, and improve continuously.</li>\n</ul>\n<ul>\n<li>Flexibility: Work a hybrid setup with 50% in-office time for collaboration, and spend up to 8 weeks a year from other inspiring locations. 
You’ll stay connected through regular events and meet-ups across our almost 30 offices.</li>\n</ul>\n<ul>\n<li>Perks on Top: Of course, we also offer travel benefits, gym discounts, and other perks to keep you energized - but what truly sets us apart is the chance to grow in a dynamic industry, alongside amazing people, while having fun along the way.</li>\n</ul>\n<p>Need a sneak peek? Check out the adventure that awaits you on Instagram @lifeatholidu and dive straight into the world of Tech at Holidu for more insights!</p>\n<p><strong>Want to travel with us?</strong></p>\n<p>Apply online on our careers page! Your first travel contact will be Lucia from HR.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_80d15de9-aa7","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Holidu Hosts GmbH","sameAs":"https://holidu.jobs.personio.com","logo":"https://logos.yubhub.co/holidu.jobs.personio.com.png"},"x-apply-url":"https://holidu.jobs.personio.com/job/2413808","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"Full-time","x-salary-range":null,"x-skills-required":["Python","Airflow","dbt","AWS","MLflow","Machine Learning","Statistics","Predictive Modeling","SQL"],"x-skills-preferred":["AI","Data Science","Ranking Models","Recommender Systems","Collaboration","Communication"],"datePosted":"2026-04-18T22:09:15.403Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Munich, Germany"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, Airflow, dbt, AWS, MLflow, Machine Learning, Statistics, Predictive Modeling, SQL, AI, Data Science, Ranking Models, Recommender Systems, Collaboration, 
Communication"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_748cd251-fda"},"title":"Global Inventory Management Analyst","description":"<p>The opportunity We are seeking a Global Inventory Management Analyst to join our Global Sales Operations team. This role will involve building transparency around our global inventory and supporting data-driven decisions across the organisation.</p>\n<p>The responsibilities In this role, you will analyse global inventory performance and provide data-driven insights to key stakeholders. You will also support sales decision-making through inventory sell-down analysis and provide data for C-level reporting and presentations.</p>\n<p>The ideal candidate To succeed in this role, you will have a strong analytical ability, excellent attention to detail, and excellent communication skills. You will also have experience with data analysis tools and reporting solutions such as Power BI or Qlik, and excellent Excel skills.</p>\n<p>Benefits We offer a hybrid way of working that balances onsite collaboration with individual focus time. 
Occasional travel may occur.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_748cd251-fda","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Polestar","sameAs":"https://polestar.teamtailor.com","logo":"https://logos.yubhub.co/polestar.teamtailor.com.png"},"x-apply-url":"https://polestar.teamtailor.com/jobs/7500553-global-inventory-management-analyst","x-work-arrangement":"Hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["data analysis","Power BI","Qlik","Excel","Python"],"x-skills-preferred":[],"datePosted":"2026-04-18T22:06:00.572Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Göteborg, Sweden"}},"employmentType":"FULL_TIME","occupationalCategory":"Operations","industry":"Automotive","skills":"data analysis, Power BI, Qlik, Excel, Python"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_5d48ddb1-b45"},"title":"Mission Software Engineering Manager, Public Sector","description":"<p>We are looking for a Mission Software Engineering Manager to join our dynamic Federal Engineering team. As a part of this team, you will play a critical role in supporting Scale&#39;s government customers by scoping and developing onsite solutions.</p>\n<p>Our scalable, high-performance platform is the foundation for these customer solutions, and your expertise will be instrumental in designing and implementing systems that can handle interactions with existing customer systems to help our products integrate into existing customer workflows.</p>\n<p>Key Responsibilities:</p>\n<ul>\n<li>Recruit a high-performing engineering team.</li>\n<li>Drive engineering productivity. 
Provide guidance, mentorship, and technical leadership to a team of engineers working on Generative AI projects.</li>\n<li>Collaborate with cross-functional teams to define, design, and execute the strategic roadmap.</li>\n<li>Work directly with customers to understand their problems and translate those into features in Scale’s platform.</li>\n<li>Be open to ~25% travel or relocation to a key customer geographic location.</li>\n<li>Collaborate with cross-functional teams to define and execute the vision for backend solutions, ensuring they meet the unique needs of government agencies operating in secure environments.</li>\n<li>Implement end-to-end data integrations, syncing customer’s data to Scale’s platform and back.</li>\n<li>Deploy and maintain Scale software at customer sites.</li>\n<li>Develop customer-requested features and work closely with customers to ensure those features win customer love.</li>\n<li>Build robust and reliable backend systems that can serve as standalone products, empowering customers to accelerate their own AI ambitions.</li>\n<li>Participate actively in customer engagements, working closely with stakeholders to understand requirements and deliver innovative solutions.</li>\n</ul>\n<p>Requirements:</p>\n<ul>\n<li>5+ years of full-time engineering experience, post-graduation</li>\n<li>2+ years of prior engineering management or equivalent experience, including having managed an engineering team.</li>\n<li>Track record of success as a hybrid customer-facing engineer and forward-deployed software engineer, with the ability to quickly adapt to different roles.</li>\n<li>Prior experience developing with Python and JavaScript, or other modern software languages. Familiarity with Node and React is a plus.</li>\n<li>Cloud-Native Technologies: Familiarity with cloud platforms (e.g., AWS, Azure, GCP) and experience in developing and deploying applications in a cloud-native environment. 
Understanding of containerization (e.g., Docker) and container orchestration (e.g., Kubernetes) is a plus</li>\n<li>Linux experience: Understanding of shell scripting, operating systems, etc</li>\n<li>Networking experience: Understanding of networking technologies, configuration (ports, protocols, etc)</li>\n<li>Data Engineering: Knowledge of ETL (Extract, Transform, Load) processes and experience in building data pipelines to integrate and process diverse data sources. Understanding of data modeling, data warehousing, and data governance principles</li>\n<li>Problem Solving: Strong analytical and problem-solving skills to understand complex challenges and devise effective solutions. Ability to think critically, identify root causes, and propose innovative approaches to overcome technical obstacles</li>\n<li>Understand unique DoD and USG constraints when it comes to technology</li>\n</ul>\n<p>Compensation packages at Scale for eligible roles include base salary, equity, and benefits. The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position, determined by work location and additional factors, including job-related skills, experience, interview performance, and relevant education or training. Scale employees in eligible roles are also granted equity based compensation, subject to Board of Director approval. Your recruiter can share more about the specific salary range for your preferred location during the hiring process, and confirm whether the hired role will be eligible for equity grant. You’ll also receive benefits including, but not limited to: Comprehensive health, dental and vision coverage, retirement benefits, a learning and development stipend, and generous PTO. 
Additionally, this role may be eligible for additional benefits such as a commuter stipend.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_5d48ddb1-b45","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Scale","sameAs":"https://www.scale.com/","logo":"https://logos.yubhub.co/scale.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/scaleai/jobs/4631039005","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$273,700-$341,550 USD","x-skills-required":["Python","JavaScript","Cloud-Native Technologies","Linux","Networking","Data Engineering","Problem Solving"],"x-skills-preferred":["Node","React","Docker","Kubernetes"],"datePosted":"2026-04-18T16:01:54.249Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA; St. Louis, MO; New York, NY; Washington, DC"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, JavaScript, Cloud-Native Technologies, Linux, Networking, Data Engineering, Problem Solving, Node, React, Docker, Kubernetes","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":273700,"maxValue":341550,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_38e51e8f-9b2"},"title":"Lead Technical Program Manager, Trust & Safety","description":"<p>About the Role</p>\n<p>Scale is at the frontier of GenAI and human-AI collaboration. The Gen AI Ops Trust and Safety team is focused on safeguarding human authenticity and genuineness in AI training.</p>\n<p>We are looking for a highly analytical Technical Program Manager (TPM) who leans heavily into fraud analytics and data-driven strategy to protect our ecosystem. 
This isn&#39;t a project management role. You will act as the lead investigative analyst and program owner for our fraud defense portfolio.</p>\n<p>Your day-to-day will involve diving deep into complex datasets to uncover hidden fraud vectors, and then translating those analytical insights into scalable rules, policies, and operational programs. By utilizing AI coding tools at high velocity, you will build out analytics pipelines, dashboards, and detection logic to shift Trust and Safety from a reactive function to a strategic one that balances safety and growth.</p>\n<p>Responsibilities</p>\n<ul>\n<li>Analyze large, messy behavioral event data to identify ambiguous and constantly evolving fraud patterns across the contributor lifecycle.</li>\n</ul>\n<ul>\n<li>Translate your analytical findings into actionable detection logic. You will redesign rules, optimize thresholds, and refine decision flows to catch bad actors while minimizing friction for high-quality contributors.</li>\n</ul>\n<ul>\n<li>Establish robust KPIs, build tracking dashboards, and define offline evaluation frameworks (e.g., false positive monitoring, precision/recall analysis) to continuously measure the health of our risk strategy.</li>\n</ul>\n<ul>\n<li>Act as the connective tissue between data, operations, and engineering. You will carry your analytical findings through technical execution, taking new detection capabilities from data prototype to production deployment.</li>\n</ul>\n<ul>\n<li>Leverage AI-assisted IDEs daily to rapidly write complex SQL queries, automate data pulls, and streamline your analytical workflows.</li>\n</ul>\n<ul>\n<li>Connect signals, data, and operations to see the full picture. Provide clear, direct communication regarding fraud trends and strategy shifts to both technical and non-technical partners.</li>\n</ul>\n<p>Requirements</p>\n<ul>\n<li>5-8 years of experience in risk strategy and fraud analytics. 
You have a battle-tested track record of reverse-engineering adversarial patterns, dismantling complex fraud vectors, and driving highly analytical Trust &amp; Safety or TPM programs.</li>\n</ul>\n<ul>\n<li>Expert proficiency in SQL. You must be highly comfortable extracting insights from large datasets in noisy, adversarial environments.</li>\n</ul>\n<ul>\n<li>An execution-driven mindset focused on delivering measurable results, not just theoretical analysis. You are comfortable working in ambiguity and taking ownership from data problems to operational solutions.</li>\n</ul>\n<ul>\n<li>Strong proficiency with AI coding assistants to accelerate data exploration and query writing.</li>\n</ul>\n<ul>\n<li>Deep understanding of how to balance aggressive fraud detection with marketplace growth. You make decisions based on what&#39;s right for the business, not what&#39;s convenient.</li>\n</ul>\n<p>Nice to haves:</p>\n<ul>\n<li>Solid Python skills (e.g., Pandas, NumPy) for advanced data manipulation and scripting</li>\n</ul>\n<ul>\n<li>Experience working in marketplace or gig-economy platforms is a plus</li>\n</ul>\n<p>Compensation</p>\n<p>Compensation packages at Scale for eligible roles include base salary, equity, and benefits. The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position, determined by work location and additional factors, including job-related skills, experience, interview performance, and relevant education or training.</p>\n<p>Salary Range</p>\n<p>The base salary range for this full-time position in the location of San Francisco is: $180,800-$226,000 USD</p>\n<p>Benefits</p>\n<p>You&#39;ll also receive benefits including, but not limited to: Comprehensive health, dental and vision coverage, retirement benefits, a learning and development stipend, and generous PTO. 
Additionally, this role may be eligible for additional benefits such as a commuter stipend.</p>\n<p>About Us</p>\n<p>At Scale, our mission is to develop reliable AI systems for the world&#39;s most important decisions. Our products provide the high-quality data and full-stack technologies that power the world&#39;s leading models, and help enterprises and governments build, deploy, and oversee AI applications that deliver real impact.</p>\n<p>We work closely with industry leaders like Meta, Cisco, DLA Piper, Mayo Clinic, Time Inc., the Government of Qatar, and U.S. government agencies including the Army and Air Force. We are expanding our team to accelerate the development of AI applications.</p>\n<p>We believe that everyone should be able to bring their whole selves to work, which is why we are proud to be an inclusive and equal opportunity workplace. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability status, gender identity or Veteran status.</p>\n<p>We are committed to working with and providing reasonable accommodations to applicants with physical and mental disabilities. If you need assistance and/or a reasonable accommodation in the application or recruiting process due to a disability, please contact us at accommodations@scale.com.</p>\n<p>Please see the United States Department of Labor&#39;s Know Your Rights poster for additional information.</p>\n<p>We comply with the United States Department of Labor&#39;s Pay Transparency provision.</p>\n<p>PLEASE NOTE: We collect, retain and use personal data for our professional business purposes, including notifying you of job opportunities that may be of interest and sharing with our affiliates. We limit the personal data we collect to that which we believe is appropriate and necessary to manage applicants’ needs, provide our services, and comply with applicable laws. 
Any information we collect in connection with your application will be treated in accordance with our internal policies and programs designed to protect personal data. Please see our privacy policy for additional information.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_38e51e8f-9b2","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Scale","sameAs":"https://www.scale.com/","logo":"https://logos.yubhub.co/scale.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/scaleai/jobs/4674924005","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$180,800-$226,000 USD","x-skills-required":["fraud analytics","data-driven strategy","SQL skills","AI coding assistants","Python skills"],"x-skills-preferred":["Pandas","NumPy","experience working in marketplace or gig-economy platforms"],"datePosted":"2026-04-18T16:01:32.183Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"fraud analytics, data-driven strategy, SQL skills, AI coding assistants, Python skills, Pandas, NumPy, experience working in marketplace or gig-economy platforms","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":180800,"maxValue":226000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_978310df-422"},"title":"Staff FullStack Software Engineer, (Forward Deployed), GPS","description":"<p>We&#39;re seeking a Full Stack Software Engineer to join our International Public Sector team. 
As a Full Stack Software Engineer, you&#39;ll collaborate directly with public sector counterparts to quickly build full-stack AI applications to solve their most pressing challenges and achieve meaningful impact for citizens.</p>\n<p>Our core work consists of creating custom AI applications that will impact millions of citizens, generating high-quality training data for custom LLMs, and providing upskilling and advisory services to spread the impact of AI.</p>\n<p>You will serve as the lead technical strategist for public sector engagements, converting ambiguous mission requirements into robust architectural roadmaps and guiding onsite implementation.</p>\n<p>Architect the fundamental frameworks for production-grade AI applications, setting the gold standard for how interactive UIs, backend systems, and AI models are integrated at scale to deliver reliable outcomes.</p>\n<p>Guide the evolution of cloud infrastructure, ensuring security, global scalability, and long-term system integrity across all environments.</p>\n<p>Direct the development of core platforms and shared services, ensuring they solve cross-cutting needs for diverse global client use cases.</p>\n<p>Partner with cross-functional leadership to steer the technical roadmap, mentoring senior and junior staff and ensuring all products align with a cohesive, future-proof technical architecture.</p>\n<p>Bridge the gap between the field and the core platform by turning real-world client lessons into the reusable patterns that power the entire engineering team.</p>\n<p>Ideally, you&#39;d have a Master&#39;s or PhD in Computer Science or equivalent deep industry experience in architecting complex, distributed systems.</p>\n<p>10+ years of full-stack expertise across Python, Node.js, and React, with a proven track record of designing high-scale architectures on Kubernetes and global cloud infrastructures (AWS/Azure/GCP).</p>\n<p>Expert ability to design and oversee production-grade ecosystems, ensuring world-class 
standards for system integrity, security, and long-term scalability.</p>\n<p>Extensive experience deploying and troubleshooting sophisticated end-to-end solutions directly within complex, high-security client environments.</p>\n<p>A self-driven leader capable of resolving extreme ambiguity, mentoring senior staff, and setting the technical vision for the organization.</p>\n<p>A driver of asynchronous workflows and documentation-first cultures to streamline global engineering velocity and reduce friction.</p>\n<p>Proficient in Arabic.</p>\n<p>Nice to haves include past experience working at a startup as a CTO or founding engineer or in a forward deployed engineer / dedicated customer engineer role, experience working cross functionally with operations, and a proven track record of building LLM-driven solutions with the strategic foresight to anticipate landscape shifts and architect future-proof systems.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_978310df-422","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Scale","sameAs":"https://scale.com/","logo":"https://logos.yubhub.co/scale.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/scaleai/jobs/4673314005","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Python","Node.js","React","Kubernetes","Cloud infrastructure","AI","LLMs","Cloud computing","Security","Scalability","Distributed systems"],"x-skills-preferred":["Arabic","Startup experience","CTO experience","Founding engineer experience","Forward deployed engineer experience","Customer engineer experience","Operations experience","LLM-driven solutions"],"datePosted":"2026-04-18T16:01:27.211Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Doha, 
Qatar"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, Node.js, React, Kubernetes, Cloud infrastructure, AI, LLMs, Cloud computing, Security, Scalability, Distributed systems, Arabic, Startup experience, CTO experience, Founding engineer experience, Forward deployed engineer experience, Customer engineer experience, Operations experience, LLM-driven solutions"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_9a42f26c-511"},"title":"Evals Engineer, Applied AI","description":"<p>We are seeking a technically rigorous and driven AI Research Engineer to join our Enterprise Evaluations team. This high-impact role is critical to our mission of delivering the industry&#39;s leading GenAI Evaluation Suite.</p>\n<p>As a hands-on contributor to the core systems that ensure the safety, reliability, and continuous improvement of LLM-powered workflows and agents for the enterprise, you will partner with Scale&#39;s Operations team and enterprise customers to translate ambiguity into structured evaluation data. This involves guiding the creation and maintenance of gold-standard human-rated datasets and expert rubrics that anchor AI evaluation systems.</p>\n<p>Your responsibilities will also include analysing feedback and collected data to identify patterns, refine evaluation frameworks, and establish iterative improvement loops that enhance the quality and relevance of human-curated assessments. You will design, research, and develop LLM-as-a-Judge autorater frameworks and AI-assisted evaluation systems, including creating models that critique, grade, and explain agent outputs.</p>\n<p>To succeed in this role, you will need a strong foundational knowledge of large language models, a passion for tackling complex evaluation challenges, and the ability to thrive in a dynamic, fast-paced research environment. 
You should be able to think outside the box, stay current with the latest literature in AI evaluation, and be passionate about integrating novel research ideas into our workflows to build best-in-class evaluation systems.</p>\n<p>In addition to your technical expertise, you will need excellent communication and collaboration skills, as you will work closely with cross-functional teams to drive project success.</p>\n<p>If you are a motivated and detail-oriented individual with a passion for AI research and evaluation, we encourage you to apply for this exciting opportunity.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_9a42f26c-511","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Scale AI","sameAs":"https://scale.com/","logo":"https://logos.yubhub.co/scale.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/scaleai/jobs/4629589005","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":"$216,000-$270,000 USD","x-skills-required":["Python","PyTorch","TensorFlow","Large Language Models","Generative AI","Machine Learning","Applied Research","Evaluation Infrastructure"],"x-skills-preferred":["Advanced degree in Computer Science, Machine Learning, or a related quantitative field","Published research in leading ML or AI conferences","Experience designing, building, or deploying LLM-as-a-Judge frameworks or other automated evaluation systems","Experience collaborating with operations or external teams to define high-quality human annotator guidelines","Expertise in ML research engineering, stochastic systems, observability, or LLM-powered applications for model evaluation and analysis"],"datePosted":"2026-04-18T16:01:26.736Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA; New York, 
NY"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, PyTorch, TensorFlow, Large Language Models, Generative AI, Machine Learning, Applied Research, Evaluation Infrastructure, Advanced degree in Computer Science, Machine Learning, or a related quantitative field, Published research in leading ML or AI conferences, Experience designing, building, or deploying LLM-as-a-Judge frameworks or other automated evaluation systems, Experience collaborating with operations or external teams to define high-quality human annotator guidelines, Expertise in ML research engineering, stochastic systems, observability, or LLM-powered applications for model evaluation and analysis","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":216000,"maxValue":270000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_c19e39af-feb"},"title":"Full-Stack Software Engineer, (Forward Deployed), GPS","description":"<p>Scale&#39;s rapidly growing Global Public Sector team is focused on using AI to address critical challenges facing the public sector around the world.</p>\n<p>Our core work consists of creating custom AI applications that will impact millions of citizens, generating high-quality training data for custom LLMs, and providing upskilling and advisory services to spread the impact of AI.</p>\n<p>As a Full Stack Software Engineer (Forward Deployed), you&#39;ll collaborate directly with public sector counterparts to quickly build full-stack AI applications to solve their most pressing challenges and achieve meaningful impact for citizens.</p>\n<p>At Scale, we&#39;re not just building AI solutions, we&#39;re enabling the public sector to transform their operations and better serve citizens through cutting-edge technology.</p>\n<p><strong>Responsibilities:</strong></p>\n<ul>\n<li>Collaborate with senior 
engineers to implement features for public sector clients, including spending time with the client to understand user feedback and assist with delivery.</li>\n<li>Develop and maintain full-stack components that integrate with AI models, focusing on building responsive UIs and reliable backend APIs.</li>\n<li>Assist in deploying and monitoring applications within cloud environments, ensuring basic system stability and security.</li>\n<li>Help build and refine reusable features that support diverse international client use cases.</li>\n<li>Work within a multi-disciplinary team of design, product, and data specialists to build robust features that follow established technical architectures.</li>\n</ul>\n<p><strong>Ideal Candidate:</strong></p>\n<ul>\n<li>Bachelor&#39;s degree in Computer Science or a related quantitative field</li>\n<li>Professional full-stack experience with a focus on React, TypeScript, and Python/Node.js. Familiarity with Next.js and NoSQL/Relational databases, along with exposure to containerization (Docker) and cloud deployments.</li>\n<li>Experience building and deploying web applications with a good understanding of cloud fundamentals and scalable coding practices.</li>\n<li>A self-starting approach to navigate ambiguous requirements and deliver reliable software.</li>\n</ul>\n<p><strong>Nice to Have:</strong></p>\n<ul>\n<li>Proficient in Arabic</li>\n<li>Experience working cross functionally with operations</li>\n<li>Experience building solutions with LLMs</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_c19e39af-feb","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Scale","sameAs":"https://scale.com/","logo":"https://logos.yubhub.co/scale.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/scaleai/jobs/4676602005","x-work-arrangement":"remote","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["React","TypeScript","Python","Node.js","Next.js","NoSQL/Relational databases","containerization (Docker)","cloud deployments"],"x-skills-preferred":["Arabic","experience working cross functionally with operations","experience building solutions with LLMs"],"datePosted":"2026-04-18T16:01:21.167Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Dubai, UAE"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"React, TypeScript, Python, Node.js, Next.js, NoSQL/Relational databases, containerization (Docker), cloud deployments, Arabic, experience working cross functionally with operations, experience building solutions with LLMs"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_b40b693d-a0d"},"title":"Senior Software Engineer, Agentic Data Products","description":"<p>We&#39;re forming a new Agentic Data Products team focused on building the next generation of agent-powered tools that ground AI in real operational workflows. Our goal is to help enterprises demystify their data layers and deploy intelligent, agentic systems that can reason over data, take action, and deliver measurable outcomes.</p>\n<p>This is a 0→1 build team. We’re looking for a sharp, product-minded Senior Engineer who thrives in ambiguity, moves quickly, and enjoys building new systems from scratch alongside customers and cross-functional partners. 
You’ll work closely with product, forward-deployed engineers, data scientists, and applied AI teams to turn real-world problems into scalable, production solutions.</p>\n<p>If you like shipping fast, owning outcomes, and working across the stack, from polished frontends to distributed backends to LLM integrations, this role is for you.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Own major full-stack product areas, driving features from concept and design through production deployment</li>\n<li>Build intuitive, high-performance frontend experiences using React + TypeScript</li>\n<li>Develop reliable backend services in Python, working with distributed systems, data pipelines, and AI/ML infrastructure</li>\n<li>Integrate LLMs, vector databases, and agentic frameworks to power intelligent workflows and decision-making systems</li>\n<li>Ship quickly through tight experimentation loops while maintaining high quality and reliability</li>\n<li>Help define the technical direction and architecture of a brand-new team and product surface</li>\n<li>Adapt across the stack and learn new tools as needed to solve real problems end-to-end</li>\n</ul>\n<p><strong>Ideal Experience</strong></p>\n<ul>\n<li>5+ years of full-time software engineering experience</li>\n<li>0-1 product build experience</li>\n<li>Familiarity with LLMs, embeddings, vector databases, or modern AI data products/tools</li>\n<li>Experience with distributed systems and cloud-based architectures</li>\n<li>Prior experience mentoring or leading a team</li>\n</ul>\n<p><strong>What We Value</strong></p>\n<ul>\n<li>Strong product intuition and customer empathy</li>\n<li>Bias toward action and rapid iteration</li>\n<li>Ownership mentality: you see problems through to outcomes</li>\n<li>Comfort collaborating across engineering, product, data science, and applied AI</li>\n<li>Excitement about building agentic systems that make AI genuinely useful in the real world</li>\n</ul>\n<p 
style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_b40b693d-a0d","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Scale","sameAs":"https://scale.com/","logo":"https://logos.yubhub.co/scale.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/scaleai/jobs/4653827005","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$216,000-$270,000 USD","x-skills-required":["React","TypeScript","Python","Distributed systems","Data pipelines","AI/ML infrastructure","LLMs","Vector databases","Agentic frameworks"],"x-skills-preferred":[],"datePosted":"2026-04-18T16:01:14.176Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"React, TypeScript, Python, Distributed systems, Data pipelines, AI/ML infrastructure, LLMs, Vector databases, Agentic frameworks","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":216000,"maxValue":270000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_2d16873c-e17"},"title":"Full-Stack Software Engineer, (Forward Deployed), GPS","description":"<p>Scale&#39;s rapidly growing Global Public Sector team is focused on using AI to address critical challenges facing the public sector around the world.</p>\n<p>Our core work consists of creating custom AI applications that will impact millions of citizens, generating high-quality training data for custom LLMs, and upskilling and advisory services to spread the impact of AI.</p>\n<p>As a Full Stack Software Engineer (Forward Deployed), you&#39;ll collaborate directly with public sector counterparts to quickly build full-stack, AI 
applications to solve their most pressing challenges and achieve meaningful impact for citizens.</p>\n<p>At Scale, we&#39;re not just building AI solutions, we&#39;re enabling the public sector to transform their operations and better serve citizens through cutting-edge technology.</p>\n<p>If you&#39;re ready to shape the future of AI in the public sector and be a founding member of our team, we&#39;d love to hear from you.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Collaborate with senior engineers to implement features for public sector clients, including spending time with the client to understand user feedback and assist with delivery.</li>\n<li>Develop and maintain full-stack components that integrate with AI models, focusing on building responsive UIs and reliable backend APIs.</li>\n<li>Assist in deploying and monitoring applications within cloud environments, ensuring basic system stability and security.</li>\n<li>Help build and refine reusable features that support diverse international client use cases.</li>\n<li>Work within a multi-disciplinary team of design, product, and data specialists to build robust features that follow established technical architectures.</li>\n</ul>\n<p><strong>Ideal Candidate</strong></p>\n<ul>\n<li>Bachelor&#39;s degree in Computer Science or a related quantitative field</li>\n<li>Professional full-stack experience with a focus on React, TypeScript, and Python/Node.js. 
Familiarity with Next.js and NoSQL/Relational databases, along with exposure to containerization (Docker) and cloud deployments.</li>\n<li>Experience building and deploying web applications with a good understanding of cloud fundamentals and scalable coding practices.</li>\n<li>A self-starting approach to navigate ambiguous requirements and deliver reliable software.</li>\n</ul>\n<p><strong>Nice to Haves</strong></p>\n<ul>\n<li>Proficient in Arabic</li>\n<li>Experience working cross functionally with operations</li>\n<li>Experience building solutions with LLMs</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_2d16873c-e17","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Scale","sameAs":"https://scale.com/","logo":"https://logos.yubhub.co/scale.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/scaleai/jobs/4676600005","x-work-arrangement":"onsite","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["React","TypeScript","Python","Node.js","Next.js","NoSQL/Relational databases","containerization (Docker)","cloud deployments"],"x-skills-preferred":["Arabic","cross functional collaboration","LLM solutions"],"datePosted":"2026-04-18T16:01:13.044Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Doha, Qatar"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"React, TypeScript, Python, Node.js, Next.js, NoSQL/Relational databases, containerization (Docker), cloud deployments, Arabic, cross functional collaboration, LLM solutions"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_098d7159-ae8"},"title":"Software Engineer - New Grad","description":"<p>At Scale, we&#39;re looking for a talented Software Engineer to 
join our team. As a member of our team, you will play a key role in accelerating the development of AI applications. You will work closely with our team to design, develop, and deploy scalable software solutions that meet the needs of our customers.</p>\n<p>Our team is responsible for building and maintaining a range of software systems, including our labeling platform, fraud-detection systems, and customer service RAG application. You will have the opportunity to work on a variety of projects, from building methodical fraud-detection systems to devising advanced matching algorithms that match labelers to customers for optimal turnaround and accuracy.</p>\n<p>We&#39;re looking for someone with a strong background in software engineering, excellent problem-solving skills, and a passion for working with AI technologies. You should be comfortable working in a fast-paced environment and be able to collaborate effectively with our team.</p>\n<p>In this role, you will have the opportunity to work on a range of exciting projects, including:</p>\n<ul>\n<li>Building methodical fraud-detection systems to remove bad actors and keep Scale&#39;s contributor base safe and trusted.</li>\n<li>Using models to estimate the quality of tasks and labelers, and guarantee quality on requests at large scale.</li>\n<li>Devising advanced matching algorithms to match labelers to customers for optimal turnaround and accuracy.</li>\n<li>Building methods to automatically measure, train, and optimally match labelers to tasks based on performance.</li>\n<li>Creating optimized and efficient UI/UX tooling, in combination with ML algorithms, for 100k+ labelers to complete billions of complex tasks.</li>\n</ul>\n<p>If you&#39;re passionate about software engineering and AI, and want to be part of a dynamic team that&#39;s making a real impact, we&#39;d love to hear from you.</p>\n<p>Requirements:</p>\n<ul>\n<li>A graduation date in Fall 2025 or Spring 2026 with a Bachelor’s degree (or equivalent) in a relevant 
field (Computer Science, EECS, Computer Engineering, Statistics).</li>\n<li>Product engineering experience such as building web apps full-stack, integrating with relevant APIs and services, talking to customers, figuring out ‘what’ to build and then iterating.</li>\n<li>Previous Product/Software Engineering Internship experience.</li>\n<li>Track record of shipping high-quality products and features at scale.</li>\n<li>Experience building systems that process large volumes of data.</li>\n<li>Experience with Python, Typescript, React, and/or MongoDB.</li>\n</ul>\n<p>Compensation packages at Scale for eligible roles include base salary, equity, and benefits. The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position, determined by work location and additional factors, including job-related skills, experience, interview performance, and relevant education or training.</p>\n<p>You’ll also receive benefits including, but not limited to: Comprehensive health, dental and vision coverage, retirement benefits, a learning and development stipend, and generous PTO. 
Additionally, this role may be eligible for additional benefits such as a commuter stipend.</p>\n<p>The base salary range for this full-time position in the location of San Francisco is $124,000-$155,000 USD</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_098d7159-ae8","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Scale","sameAs":"https://scale.com/","logo":"https://logos.yubhub.co/scale.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/scaleai/jobs/4605996005","x-work-arrangement":"onsite","x-experience-level":"entry","x-job-type":"full-time","x-salary-range":"$124,000-$155,000 USD","x-skills-required":["Python","Typescript","React","MongoDB","Product engineering","Full-stack development","API integration","Customer-facing communication","Problem-solving"],"x-skills-preferred":[],"datePosted":"2026-04-18T16:01:08.866Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, Typescript, React, MongoDB, Product engineering, Full-stack development, API integration, Customer-facing communication, Problem-solving","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":124000,"maxValue":155000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_a8d34aff-3e5"},"title":"Applied AI Engineer, Global Public Sector","description":"<p>We&#39;re hiring Applied AI Engineers to build custom end-to-end AI applications for our public sector clients using the latest developments in the field of AI.</p>\n<p>You will partner with public sector clients to deeply understand their challenges and define AI-driven solutions.</p>\n<p>Key 
responsibilities include:</p>\n<ul>\n<li>Building and deploying end-to-end AI applications into production, leveraging the latest developments from the biggest AI labs and open-source models</li>\n<li>Collaborating with cross-functional teams, including data annotation specialists, to create high-quality training datasets</li>\n<li>Designing and maintaining robust evaluation frameworks to ensure the reliability and effectiveness of AI models</li>\n<li>Participating in customer engagements, including occasional travel (approximately two weeks per quarter)</li>\n</ul>\n<p>Ideally you&#39;d have:</p>\n<ul>\n<li>A strong engineering background, with a Bachelor’s degree in Computer Science, Mathematics, or a related quantitative field (or equivalent practical experience)</li>\n<li>7+ years of post-graduation engineering experience, with demonstrated proficiency in languages such as Python, TypeScript/JavaScript, Java, or C++</li>\n<li>2+ years of experience applying AI/ML in production environments, such as deploying deep learning solutions, building generative/agentic AI applications, or setting up evaluation pipelines</li>\n<li>Familiarity with cloud-based machine learning tools and platforms (e.g. 
AWS, GCP, Azure)</li>\n<li>Strong problem-solving skills, with a data-driven approach to iterating on machine learning models and datasets</li>\n<li>Excellent written and verbal communication skills to collaborate effectively in a cross-functional environment</li>\n</ul>\n<p>Nice to haves:</p>\n<ul>\n<li>Experience working at a startup, particularly as founding engineer</li>\n<li>Experience building and deploying large-scale AI solutions</li>\n<li>Strong written and verbal communication skills to operate in a cross-functional team environment</li>\n<li>Proficiency in Arabic (if focused on language models)</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_a8d34aff-3e5","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Scale","sameAs":"https://scale.com/","logo":"https://logos.yubhub.co/scale.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/scaleai/jobs/4413992005","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Python","TypeScript/JavaScript","Java","C++","Cloud-based machine learning tools and platforms (e.g. AWS, GCP, Azure)"],"x-skills-preferred":["Experience working at a startup, particularly as founding engineer","Experience building and deploying large-scale AI solutions","Strong written and verbal communication skills to operate in a cross-functional team environment","Proficiency in Arabic (if focused on language models)"],"datePosted":"2026-04-18T16:00:59.864Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Doha, Qatar; London, UK"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, TypeScript/JavaScript, Java, C++, Cloud-based machine learning tools and platforms (e.g. 
AWS, GCP, Azure), Experience working at a startup, particularly as founding engineer, Experience building and deploying large-scale AI solutions, Strong written and verbal communication skills to operate in a cross-functional team environment, Proficiency in Arabic (if focused on language models)"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_ff773aa0-c22"},"title":"Software Engineer, Gen AI","description":"<p>We&#39;re looking for a skilled Software Engineer to join our Gen AI Engineering organisation. As a member of our team, you&#39;ll design and implement features end-to-end across front-end, back-end, and infrastructure. You&#39;ll own high-impact technical systems that are critical to Scale&#39;s revenue delivery and collaborate with model teams, Forward Deployed Engineers, and cross-functional stakeholders.</p>\n<p>Our approach is to empower our engineers to take ownership of their work and make decisions that drive results. We believe in a fast-paced, high-ownership environment where you&#39;ll have the opportunity to build and iterate on complex systems that scale to millions of tasks per week.</p>\n<p>In this role, you&#39;ll help shape the engineering culture, values, and best practices of a fast-growing team. You&#39;ll work at the intersection of ML, operations, and analytics to ensure we deliver the highest-quality data at scale.</p>\n<p>Compensation packages at Scale for eligible roles include base salary, equity, and benefits. 
The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position, determined by work location and additional factors, including job-related skills, experience, interview performance, and relevant education or training.</p>\n<p>You&#39;ll also receive benefits including, but not limited to: Comprehensive health, dental and vision coverage, retirement benefits, a learning and development stipend, and generous PTO. Additionally, this role may be eligible for additional benefits such as a commuter stipend.</p>\n<p>The base salary range for this full-time position in the locations of San Francisco, New York, Seattle is: $180,000-$225,000 USD</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_ff773aa0-c22","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Scale AI","sameAs":"https://scale.com/","logo":"https://logos.yubhub.co/scale.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/scaleai/jobs/4591300005","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":"$180,000-$225,000 USD","x-skills-required":["React","Typescript","Node","Python","ML","operations","analytics"],"x-skills-preferred":[],"datePosted":"2026-04-18T16:00:59.279Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA; New York, NY"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"React, Typescript, Node, Python, ML, operations, analytics","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":180000,"maxValue":225000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_1d67909d-97e"},"title":"Senior Machine Learning 
Engineer - Model Evaluations, Public Sector","description":"<p>The Public Sector ML team at Scale deploys advanced AI systems, including LLMs, agentic models, and multimodal pipelines, into mission-critical government environments. We build evaluation frameworks that ensure these models operate reliably, safely, and effectively under real-world constraints.</p>\n<p>As an ML Engineer, you will design, implement, and scale automated evaluation pipelines that help customers trust and operationalize advanced AI systems across defense, intelligence, and federal missions.</p>\n<p>Key responsibilities include:</p>\n<ul>\n<li>Developing and maintaining automated evaluation pipelines for ML models across functional, performance, robustness, and safety metrics, including LLM-judge–based evaluations.</li>\n<li>Designing test datasets and benchmarks to measure generalization, bias, explainability, and failure modes.</li>\n<li>Building evaluation frameworks for LLM agents, including infrastructure for scenario-based and environment-based testing.</li>\n<li>Conducting comparative analyses of model architectures, training procedures, and evaluation outcomes.</li>\n<li>Implementing tools for continuous monitoring, regression testing, and quality assurance for ML systems.</li>\n<li>Designing and executing stress tests and red-teaming workflows to uncover vulnerabilities and edge cases.</li>\n<li>Collaborating with operations teams and subject matter experts to produce high-quality evaluation datasets.</li>\n</ul>\n<p>This role requires an active security clearance or the ability to obtain a security clearance.</p>\n<p>Ideal candidates will have experience in computer vision, deep learning, reinforcement learning, or NLP in production settings, strong programming skills in Python, and background in algorithms, data structures, and object-oriented programming.</p>\n<p>Nice to have qualifications include 
graduate degree in CS, ML, or AI, cloud experience (AWS, GCP), and model deployment experience.</p>\n<p>Compensation packages at Scale for eligible roles include base salary, equity, and benefits. The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position, determined by work location and additional factors, including job-related skills, experience, interview performance, and relevant education or training.</p>\n<p>Scale employees in eligible roles are also granted equity-based compensation, subject to Board of Director approval. Your recruiter can share more about the specific salary range for your preferred location during the hiring process, and confirm whether the hired role will be eligible for equity grant.</p>\n<p>You’ll also receive benefits including, but not limited to: Comprehensive health, dental and vision coverage, retirement benefits, a learning and development stipend, and generous PTO.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_1d67909d-97e","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Scale","sameAs":"https://scale.com/","logo":"https://logos.yubhub.co/scale.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/scaleai/jobs/4631848005","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$240,450-$300,300 USD (San Francisco, New York, Seattle) $216,300-$269,850 USD (Washington DC, Texas, Colorado, Hawaii)","x-skills-required":["Python","TensorFlow","PyTorch","Computer Vision","Deep Learning","Reinforcement Learning","NLP","Algorithms","Data Structures","Object-Oriented Programming"],"x-skills-preferred":["Graduate Degree in CS, ML, or AI","Cloud Experience (AWS, GCP)","Model Deployment 
Experience"],"datePosted":"2026-04-18T16:00:58.976Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA; St. Louis, MO; New York, NY; Washington, DC"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, TensorFlow, PyTorch, Computer Vision, Deep Learning, Reinforcement Learning, NLP, Algorithms, Data Structures, Object-Oriented Programming, Graduate Degree in CS, ML, or AI, Cloud Experience (AWS, GCP), Model Deployment Experience","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":216300,"maxValue":300300,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_d30384aa-64a"},"title":"Strategic Projects Lead - Coding","description":"<p>Scale&#39;s Generative AI business unit is experiencing historic levels of growth. As a Strategic Projects Lead, you will lead initiatives that drive $XXM+ in new revenue for the business. 
This is a demanding role, requiring a strong entrepreneurial mindset, comfort with getting into the weeds, and excitement about intense, impactful work that leads to accelerated career progression.</p>\n<p>Key Responsibilities:</p>\n<ul>\n<li>Serve as the full owner of our most visible and high-impact customer pipelines, making decisions that directly impact data quality, operational efficiency, revenue, and margins</li>\n<li>Understand customer requirements and design data taxonomy best suited to improving model performance based on customer needs</li>\n<li>Build out pipeline infrastructure to ensure quality and efficiency</li>\n<li>Train, coach, and manage dynamic and global teams</li>\n<li>Build analytics to make data-driven decisions</li>\n<li>Partner with diverse stakeholders (Engineering + Product + Ops + Go-to-Market) to work on problems that will drive advancements for the largest LLMs in the world</li>\n<li>Give regular progress updates to Scale&#39;s executive team</li>\n</ul>\n<p>Requirements:</p>\n<ul>\n<li>A strong technical background is required for this position. For example, a degree in Machine Learning Engineering, Computer Science, or Software Engineering</li>\n<li>3+ years of experience leading a team/projects, managing operational processes, or 3+ years of experience as a SWE</li>\n<li>Strong problem-solving capabilities in technical environments</li>\n<li>Ability to come up with creative solutions to complex, ambiguous, operational, and technical problems</li>\n<li>Entrepreneurial experience and mindset</li>\n</ul>\n<p>Compensation and Benefits: The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position, determined by work location and additional factors, including job-related skills, experience, interview performance, and relevant education or training. Scale employees in eligible roles are also granted equity-based compensation, subject to Board of Director approval. 
You&#39;ll also receive benefits including, but not limited to: Comprehensive health, dental, and vision coverage, retirement benefits, a learning and development stipend, and generous PTO.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_d30384aa-64a","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Scale","sameAs":"https://scale.com/","logo":"https://logos.yubhub.co/scale.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/scaleai/jobs/4666036005","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$112,000-$190,000 USD","x-skills-required":["Machine Learning Engineering","Computer Science","Software Engineering","SQL","Python","Data Analytics"],"x-skills-preferred":[],"datePosted":"2026-04-18T16:00:51.409Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA; New York, NY"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Machine Learning Engineering, Computer Science, Software Engineering, SQL, Python, Data Analytics","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":112000,"maxValue":190000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_3cc878fa-5d1"},"title":"Infrastructure Software Engineer, Enterprise GenAI","description":"<p>We are seeking a strong engineer to join our team and help us build and scale our core infrastructure in a fast-paced environment. 
The ideal candidate will have a strong understanding of software engineering principles and practices, as well as experience with large-scale distributed systems.</p>\n<p>You will implement solutions across multiple cloud providers (GCP, Azure, AWS) for customers in diverse, highly regulated industries like healthcare, telecom, finance, and retail.</p>\n<p>Key responsibilities include:</p>\n<ul>\n<li>Architecting multi-cloud systems and abstractions to allow the SGP platform to run on top of existing cloud providers</li>\n<li>Implementing custom integrations between Scale AI&#39;s platform and customer data environments (cloud platforms, data warehouses, internal APIs)</li>\n<li>Collaborating with platform and product teams, and directly with our customers, to develop and implement innovative infrastructure that scales to meet evolving needs</li>\n<li>Delivering experiments at a high velocity and level of quality to engage our customers</li>\n</ul>\n<p>Requirements include:</p>\n<ul>\n<li>4+ years of full-time engineering experience, post-graduation</li>\n<li>Experience scaling products at hyper-growth startups</li>\n<li>Experience tinkering with or productizing LLMs, vector databases, and other cutting-edge AI technologies</li>\n<li>Proficient in Python or Javascript/Typescript, and SQL</li>\n<li>Experience with Kubernetes</li>\n<li>Experience with major cloud providers (AWS, Azure, GCP)</li>\n<li>Excellent communication skills with the ability to explain technical concepts to both technical and non-technical audiences</li>\n</ul>\n<p>Compensation packages at Scale for eligible roles include base salary, equity, and benefits.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_3cc878fa-5d1","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Scale","sameAs":"https://www.scale.com/","logo":"https://logos.yubhub.co/scale.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/scaleai/jobs/4665557005","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$179,400-$224,250 USD","x-skills-required":["Python","Javascript/Typescript","SQL","Kubernetes","GCP","Azure","AWS"],"x-skills-preferred":[],"datePosted":"2026-04-18T16:00:45.380Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA; New York, NY"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, Javascript/Typescript, SQL, Kubernetes, GCP, Azure, AWS","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":179400,"maxValue":224250,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_e8abf445-c26"},"title":"Staff Applied AI Engineer, Enterprise GenAI","description":"<p>We&#39;re looking for a Staff Applied AI Engineer to join our Enterprise Engineering team. As an Applied AI Engineer, you&#39;ll work with clients to create ML solutions to satisfy their business needs. Your work will range from building next-generation AI cybersecurity firewalls to creating transformative AI experiences in journalism to applying foundation genomic models making predictions about life-saving drug proteins.</p>\n<p>Daily data-driven experiments will provide key insights around model strengths and inefficiencies which you&#39;ll use to improve your product&#39;s performance. 
If you are excited about shaping the future of the modern AI movement, we would love to hear from you!</p>\n<p>You will:</p>\n<ul>\n<li>Own, plan, and optimize the AI behind our Enterprise customer&#39;s deepest technical problems</li>\n<li>Leverage SGP to build the most advanced AI agents across the industry including multimodal functionality, tool-calling, and more</li>\n<li>Gather business requirements and translate them into technical solutions</li>\n<li>Meet regularly with customer teams onsite and virtually, collaborating cross-functionally with all teams responsible for their data and ML needs</li>\n<li>Push production code in multiple development environments, writing and debugging code directly in both our customer&#39;s and Scale&#39;s codebases.</li>\n</ul>\n<p>Ideally you&#39;d have:</p>\n<ul>\n<li>A love for solving deeply complex, ambiguous technical problems using state-of-the-art research and AI to accomplish your client’s business goals</li>\n<li>7+ years of full-time engineering experience, post-graduation</li>\n<li>Strong engineering background: a Bachelor’s degree in Computer Science, Mathematics, or another quantitative field, or equivalent practical experience.</li>\n<li>Deep familiarity with a data-driven approach when iterating on machine learning models and how changes in datasets can influence model results</li>\n<li>Experience working with cloud technology stack (e.g. 
AWS or GCP) and developing machine learning models in a cloud environment</li>\n<li>Proficiency in Python to write, test and debug code using common libraries (e.g., numpy, pandas)</li>\n</ul>\n<p>Nice to haves:</p>\n<ul>\n<li>Strong knowledge of software engineering best practices</li>\n<li>Have built applications taking advantage of Generative AI in real, production use cases</li>\n<li>Familiarity with state of the art LLMs and their strengths/weaknesses</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_e8abf445-c26","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Scale","sameAs":"https://scale.com/","logo":"https://logos.yubhub.co/scale.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/scaleai/jobs/4683689005","x-work-arrangement":"hybrid","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"$216,000-$270,000 USD","x-skills-required":["Python","Machine Learning","Cloud Technology Stack","Data-Driven Approach","Software Engineering Best Practices"],"x-skills-preferred":["Generative AI","State of the Art LLMs"],"datePosted":"2026-04-18T16:00:44.071Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA; Seattle, WA; New York, NY"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, Machine Learning, Cloud Technology Stack, Data-Driven Approach, Software Engineering Best Practices, Generative AI, State of the Art LLMs","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":216000,"maxValue":270000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_01e7c642-a6c"},"title":"Engagement Manager, Public Sector","description":"<p>We&#39;re hiring an 
engagement manager to lead and coordinate delivery of agentic workflows for a national security customer. This role is ideal for someone who blends program leadership, customer relationship building, technical fluency, and contract awareness.</p>\n<p>Key responsibilities include:</p>\n<ul>\n<li>Managing customer relationships from the executive to the end user</li>\n<li>Working alongside customers to scope agentic workflow use cases that Scale&#39;s engineering team will build and refine</li>\n<li>Leading a cross-functional project team to deliver on and exceed the customer&#39;s AI/ML objectives</li>\n<li>Overseeing onboarding and successful implementation of customer accounts</li>\n</ul>\n<p>Must haves:</p>\n<ul>\n<li>Active TS/SCI clearance</li>\n<li>5+ years of work experience succeeding in stakeholder management or a customer-facing role delivering enterprise-scale applications/solutions</li>\n<li>A track record of structured, analytics-driven problem solving</li>\n<li>Excellent verbal and written communication skills</li>\n<li>Willingness to be onsite with the customer in the Colorado Springs area 4 days per week and able to travel at least 25% of the time</li>\n</ul>\n<p>Compensation packages at Scale include base salary, equity, and benefits. 
The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position, determined by work location and additional factors, including job-related skills, experience, interview performance, and relevant education or training.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_01e7c642-a6c","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Scale AI","sameAs":"https://scale.com/","logo":"https://logos.yubhub.co/scale.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/scaleai/jobs/4667833005","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$195,800-$279,400 USD","x-skills-required":["customer relationship building","technical fluency","contract awareness","structured problem solving","excellent communication skills"],"x-skills-preferred":["Python","SQL","API technology","domain expertise"],"datePosted":"2026-04-18T16:00:39.776Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Colorado Springs, CO"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"customer relationship building, technical fluency, contract awareness, structured problem solving, excellent communication skills, Python, SQL, API technology, domain expertise","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":195800,"maxValue":279400,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_3380b529-407"},"title":"Solutions Engineer, Robotics","description":"<p>The next frontier for AI is the physical world. At Scale, we&#39;re pioneering this shift, moving artificial intelligence from digital spaces into robotics and autonomous systems. 
Our Robotics team is building the data platform that will power the future of Physical AI.</p>\n<p>We are looking for a pivotal Solutions Engineer to join this team. As a Solutions Engineer, you&#39;ll be a trusted technical partner, building deep relationships with some of the world&#39;s most innovative model builders and renowned robotics companies. You will partner closely with Product, Sales and Machine Learning Engineers to guide prospective customers through the pre-sales process, delivering customized demos and pilots that secure the &#39;technical win.&#39;</p>\n<p>You&#39;ll define customer technical requirements, develop actionable Statements of Work, and collaborate with the delivery team on initial implementation. Your relentless curiosity about customer needs, combined with your expert knowledge of Scale&#39;s products will allow you to design creative and impactful solutions. This is a critical role that directly influences multi-million dollar contracts and initiatives.</p>\n<p>You&#39;ll travel globally to conduct on-site technical workshops and scope new projects, while also leading demos and pilots for new prospects. You&#39;ll be part of a tight-knit, specialized team, influencing a rapidly growing business that is expanding into new product areas.</p>\n<p>In this role, you will:</p>\n<ul>\n<li>Partner with Scale Account Executives and Engagement Managers to deliver new customer pilots and grow technical relationships with existing clients.</li>\n</ul>\n<ul>\n<li>Work with Product Engineering and Product Management to influence our product roadmap based on your frontline insights.</li>\n</ul>\n<ul>\n<li>Become a domain expert in next-generation Robotics and physical AI (e.g. 
VLMs, VLAs, World Models).</li>\n</ul>\n<ul>\n<li>Be accountable for the technical customer experience and commercial growth, expanding relationships and use cases with existing customers.</li>\n</ul>\n<ul>\n<li>Collaborate with highly technical engineers at our customer sites to ensure satisfaction with our data, software platforms, and workflows.</li>\n</ul>\n<ul>\n<li>Design and develop playbooks, demos, and other tools to ensure efficient and successful pilots and customer expansions.</li>\n</ul>\n<ul>\n<li>Pioneer the development of a global Robotics Data Marketplace, actively seeking out and engaging with key international partners to build a comprehensive data ecosystem.</li>\n</ul>\n<ul>\n<li>Evangelize Scale by interacting with customers at major industry events and academic conferences.</li>\n</ul>\n<p>You have:</p>\n<ul>\n<li>A strong engineering background, preferably in Robotics, Mechatronics, Computer Science, Mathematics, or other Engineering fields.</li>\n</ul>\n<ul>\n<li>3+ years of experience developing with Python, C++, Java, and/or other scripting languages.</li>\n</ul>\n<ul>\n<li>Hands-on experience in Robotics and Physical AI.</li>\n</ul>\n<ul>\n<li>Exceptional project management and interpersonal skills, strong attention to detail, and a strong sense of ownership.</li>\n</ul>\n<ul>\n<li>The presentation skills and technical credibility to speak confidently with a variety of stakeholders, from executives to front-line engineers.</li>\n</ul>\n<ul>\n<li>A high level of comfort communicating effectively across internal and external organizations.</li>\n</ul>\n<ul>\n<li>Regular travel within the Bay Area.</li>\n</ul>\n<ul>\n<li>International travel approximately once every two months.</li>\n</ul>\n<ul>\n<li>Intellectual curiosity, empathy, and the ability to operate with a high degree of autonomy.</li>\n</ul>\n<p>Bonus points if you have:</p>\n<ul>\n<li>Prior sales, solutions engineering, or partnership experience with a track record of 
successfully achieving quota.</li>\n</ul>\n<ul>\n<li>Ideally would have experience selling complex technical solutions to enterprises with deal sizes of $500K to $5M+.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_3380b529-407","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Scale","sameAs":"https://www.scale.com/","logo":"https://logos.yubhub.co/scale.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/scaleai/jobs/4640096005","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$151,800-$189,750 USD","x-skills-required":["Python","C++","Java","Robotics","Physical AI","Project management","Interpersonal skills","Presentation skills","Technical credibility"],"x-skills-preferred":[],"datePosted":"2026-04-18T16:00:31.745Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, C++, Java, Robotics, Physical AI, Project management, Interpersonal skills, Presentation skills, Technical credibility","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":151800,"maxValue":189750,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_94999453-111"},"title":"Senior Full-Stack Software Engineer, (Forward Deployed), GPS","description":"<p>Scale&#39;s rapidly growing Global Public Sector team is focused on using AI to address critical challenges facing the public sector around the world.</p>\n<p>Our core work consists of creating custom AI applications that will impact millions of citizens, generating high-quality training data for custom LLMs, and upskilling and advisory 
services to spread the impact of AI.</p>\n<p>As a Full Stack Software Engineer (Forward Deployed), you&#39;ll collaborate directly with public sector counterparts to quickly build full-stack AI applications to solve their most pressing challenges and achieve meaningful impact for citizens.</p>\n<p>At Scale, we&#39;re not just building AI solutions, we&#39;re enabling the public sector to transform their operations and better serve citizens through cutting-edge technology.</p>\n<p>If you&#39;re ready to shape the future of AI in the public sector and be a founding member of our team, we&#39;d love to hear from you.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Partner with public sector clients to scope, collect feedback and implement solutions for complex problems, including spending up to two weeks per month in client offices for feedback and delivery.</li>\n<li>Architect production-grade applications that integrate AI models with full-stack frameworks, managing everything from interactive UIs to backend APIs and systems.</li>\n<li>Deploy and manage infrastructure within cloud environments, ensuring the highest levels of system integrity, security, scalability, and long-term reliability.</li>\n<li>Contribute to core platform features designed to be reused across diverse international client use cases.</li>\n<li>Partner with design, product, and data teams to build robust applications aligned with the broader technical architecture.</li>\n</ul>\n<p><strong>Ideal Candidate</strong></p>\n<ul>\n<li>Bachelor&#39;s degree in Computer Science or a related quantitative field</li>\n<li>5+ years of post-graduation, full-stack engineering experience with demonstrated proficiency in React (required), TypeScript, Next.js, Python, Node.js, PostgreSQL or MongoDB, plus hands-on experience with Docker, Kubernetes, and Azure/AWS/GCP.</li>\n<li>Proven ability to architect scalable, production-grade applications with a strong handle on cloud environments and infrastructure 
health.</li>\n<li>Experience working directly within customer infrastructure to deploy, maintain, and troubleshoot complex, end-to-end solutions.</li>\n<li>A self-starting approach with the technical maturity to navigate ambiguous requirements and deliver reliable software.</li>\n<li>Strong async communication practices that reduce communication friction</li>\n</ul>\n<p><strong>Nice to Haves</strong></p>\n<ul>\n<li>Proficient in Arabic</li>\n<li>Past experience working in a forward deployed engineer / dedicated customer engineer role</li>\n<li>Experience working cross-functionally with operations</li>\n<li>Experience building solutions with LLMs and a deep understanding of the overall Gen AI landscape</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_94999453-111","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Scale","sameAs":"https://scale.com/","logo":"https://logos.yubhub.co/scale.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/scaleai/jobs/4676608005","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["React","TypeScript","Next.js","Python","Node.js","PostgreSQL","MongoDB","Docker","Kubernetes","Azure","AWS","GCP"],"x-skills-preferred":[],"datePosted":"2026-04-18T16:00:24.081Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Dubai, UAE"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"React, TypeScript, Next.js, Python, Node.js, PostgreSQL, MongoDB, Docker, Kubernetes, Azure, AWS, GCP"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_61e346b2-915"},"title":"Sr. 
Software Engineer, Inference","description":"<p>Our Inference team is responsible for building and maintaining the critical systems that serve Claude to millions of users worldwide. We bring Claude to life by serving our models via the industry&#39;s largest compute-agnostic inference deployments. We are responsible for the entire stack from intelligent request routing to fleet-wide orchestration across diverse AI accelerators.</p>\n<p>The team has a dual mandate: maximizing compute efficiency to serve our explosive customer growth, while enabling breakthrough research by giving our scientists the high-performance inference infrastructure they need to develop next-generation models. We tackle complex, distributed systems challenges across multiple accelerator families and emerging AI hardware running in multiple cloud platforms.</p>\n<p>Strong candidates may also have experience with:</p>\n<ul>\n<li>High-performance, large-scale distributed systems</li>\n<li>Implementing and deploying machine learning systems at scale</li>\n<li>Load balancing, request routing, or traffic management systems</li>\n<li>LLM inference optimization, batching, and caching strategies</li>\n<li>Kubernetes and cloud infrastructure (AWS, GCP)</li>\n<li>Python or Rust</li>\n</ul>\n<p>You may be a good fit if you:</p>\n<ul>\n<li>Have significant software engineering experience, particularly with distributed systems</li>\n<li>Are results-oriented, with a bias towards flexibility and impact</li>\n<li>Pick up slack, even if it goes outside your job description</li>\n<li>Want to learn more about machine learning systems and infrastructure</li>\n<li>Thrive in environments where technical excellence directly drives both business results and research breakthroughs</li>\n<li>Care about the societal impacts of your work</li>\n</ul>\n<p>Representative projects across the org:</p>\n<ul>\n<li>Designing intelligent routing algorithms that optimize request distribution across thousands of 
accelerators</li>\n<li>Autoscaling our compute fleet to dynamically match supply with demand across production, research, and experimental workloads</li>\n<li>Building production-grade deployment pipelines for releasing new models to millions of users</li>\n<li>Integrating new AI accelerator platforms to maintain our hardware-agnostic competitive advantage</li>\n<li>Contributing to new inference features (e.g., structured sampling, prompt caching)</li>\n<li>Supporting inference for new model architectures</li>\n<li>Analyzing observability data to tune performance based on real-world production workloads</li>\n<li>Managing multi-region deployments and geographic routing for global customers</li>\n</ul>\n<p>Deadline to apply: None. Applications will be reviewed on a rolling basis.</p>\n<p>The annual compensation range for this role is £225,000-£325,000 GBP.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_61e346b2-915","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com/","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5152348008","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"£225,000-£325,000 GBP","x-skills-required":["High-performance, large-scale distributed systems","Implementing and deploying machine learning systems at scale","Load balancing, request routing, or traffic management systems","LLM inference optimization, batching, and caching strategies","Kubernetes and cloud infrastructure (AWS, GCP)","Python or Rust"],"x-skills-preferred":[],"datePosted":"2026-04-18T16:00:17.377Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"London, 
UK"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"High-performance, large-scale distributed systems, Implementing and deploying machine learning systems at scale, Load balancing, request routing, or traffic management systems, LLM inference optimization, batching, and caching strategies, Kubernetes and cloud infrastructure (AWS, GCP), Python or Rust","baseSalary":{"@type":"MonetaryAmount","currency":"GBP","value":{"@type":"QuantitativeValue","minValue":225000,"maxValue":325000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_1be1fd1e-8f3"},"title":"Principal Architect","description":"<p>We are seeking a Principal Architect to drive the design, development, and deployment of our agentic AI products in a fast-paced, collaborative environment. In this role, you will lead a team of 50+ engineers, providing both strategic and technical guidance. You’ll be responsible for high-impact architectural decisions, cross-company collaboration, and executive level engagements.</p>\n<p>Key Responsibilities: Lead and mentor a high-performing engineering team of 50+, fostering a culture of technical excellence and ownership. Guide your team through complex challenges involving LLMs, AI agents, and large-scale distributed systems. Represent Scale AI in high-stakes negotiations and strategic discussions with senior external partners, demonstrating strong technical competence and credibility. Develop and communicate a compelling vision for Scale AI’s technology applied to your program. Provide regular updates to senior leadership and key stakeholders on progress, risks, and opportunities. Foster a culture of speed, unity of purpose, resilience, and teamwork.</p>\n<p>Requirements: 10+ years of software engineering experience, including 5+ years in a technical leadership or staff role. 
Deep understanding of modern AI/ML technologies, including experience working with LLMs and AI agents. Proficient in one or more modern programming languages (Python, JavaScript/TypeScript). Hands-on experience with Kubernetes and cloud infrastructure (AWS, GCP, or Azure). Strong product and business sense, with a track record of aligning engineering efforts with company goals. Ability to operate effectively in ambiguous, fast-changing environments and guide your team to do the same. Experience in executive level engagement with industry partners and Public Sector customers</p>\n<p>Success Metrics:</p>\n<p>Within 6 months:</p>\n<ul>\n<li>Successful demonstration of agentic AI’s mission value in high-stakes customer demonstrations</li>\n<li>Establish Scale AI as the preferred agentic AI partner for the PEO</li>\n<li>Establish high velocity, agile engineering cadence both internally and with our industry partners</li>\n</ul>\n<p>Within 12–18 months:</p>\n<ul>\n<li>Secure follow-on contract award with expanded scope for Scale</li>\n<li>Position Scale AI as the global AI leader in this mission area</li>\n<li>Establish developed solutions as Scale product offerings to deliver on future contracts</li>\n</ul>\n<p>Compensation packages at Scale for eligible roles include base salary, equity, and benefits. 
The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position, determined by work location and additional factors, including job-related skills, experience, interview performance, and relevant education or training.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_1be1fd1e-8f3","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Scale","sameAs":"https://scale.com/","logo":"https://logos.yubhub.co/scale.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/scaleai/jobs/4599202005","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$257,000-$321,000 USD","x-skills-required":["software engineering","technical leadership","AI/ML technologies","LLMs","AI agents","Kubernetes","cloud infrastructure","Python","JavaScript/TypeScript"],"x-skills-preferred":[],"datePosted":"2026-04-18T16:00:16.791Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Washington, DC"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"software engineering, technical leadership, AI/ML technologies, LLMs, AI agents, Kubernetes, cloud infrastructure, Python, JavaScript/TypeScript","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":257000,"maxValue":321000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_740da2af-174"},"title":"Security Engineer, Detection & Response","description":"<p>We are seeking a Senior Security Engineer with a specialty in Detection and Incident Response to join our Security Engineering team. 
This role sits at the intersection of security operations and software engineering, requiring you to investigate incidents and build the systems that detect, contain, and prevent them.</p>\n<p>You will design and ship high-precision detections across cloud services and enterprise SaaS, develop automation that shortens response timelines, and mature the telemetry pipelines that make it all possible. Your ability to write production-quality code is just as important as your ability to triage an alert.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Engineer, test, and deploy detection logic across cloud and enterprise environments, treating detections as software with version control, peer review, and measurable performance.</li>\n</ul>\n<ul>\n<li>Build and maintain incident response automation, runbooks, and tooling that reduce containment timelines without sacrificing developer velocity.</li>\n</ul>\n<ul>\n<li>Mature telemetry pipelines through improved schema design, normalization, enrichment, and quality checks that reduce false positives and increase signal fidelity.</li>\n</ul>\n<ul>\n<li>Perform digital incident investigations to identify and contain potential security breaches.</li>\n</ul>\n<ul>\n<li>Conduct digital forensics and malware analysis to understand attack vectors and adversary methodologies.</li>\n</ul>\n<ul>\n<li>Integrate alerting with messaging and ticketing systems to enable fast, traceable response workflows.</li>\n</ul>\n<ul>\n<li>Partner cross-functionally with IT, security, and engineering teams to harden identity and access patterns, close logging and forensics gaps, and implement maintainable guardrails that scale with the organisation.</li>\n</ul>\n<ul>\n<li>Utilize threat intelligence platforms to improve hunting, detection, and response workflows.</li>\n</ul>\n<ul>\n<li>Clearly explain the significance and impact of incidents, providing actionable recommendations to both technical and non-technical stakeholders.</li>\n</ul>\n<p>Ideal 
Candidate:</p>\n<ul>\n<li>5+ years of experience in Detection Engineering, Incident Response, or Security Operations, with a strong emphasis on building and shipping security tooling and automation.</li>\n<li>Proficiency in at least one programming language (e.g., Python, Go) and comfort writing production-grade code, not just scripts.</li>\n<li>Hands-on experience designing or improving detection pipelines, SIEM content, and alerting workflows in cloud-native environments.</li>\n<li>Practical experience with SIEM, EDR, and SOAR tools, with a preference for candidates who have built integrations or extended these platforms programmatically.</li>\n<li>Strong understanding of modern cyber threats, common attack techniques, and adversary TTPs.</li>\n<li>Familiarity with digital forensics tools and malware analysis techniques.</li>\n<li>Experience with cloud-native environments (e.g., AWS, GCP, Azure) and the security telemetry those environments generate.</li>\n<li>Exposure to threat intelligence platforms and integrating intel into detection and investigation workflows.</li>\n<li>Strong communication skills, with the ability to translate complex security findings into clear business impact.</li>\n<li>Relevant security certifications (e.g., GCIH, GCFA, GCIA, CISSP, GDSA) are a plus.</li>\n</ul>\n<p>Compensation packages at Scale for eligible roles include base salary, equity, and benefits. The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position, determined by work location and additional factors, including job-related skills, experience, interview performance, and relevant education or training. Scale employees in eligible roles are also granted equity-based compensation, subject to Board of Director approval. 
Your recruiter can share more about the specific salary range for your preferred location during the hiring process, and confirm whether the hired role will be eligible for equity grant. You’ll also receive benefits including, but not limited to: Comprehensive health, dental and vision coverage, retirement benefits, a learning and development stipend, and generous PTO. Additionally, this role may be eligible for additional benefits such as a commuter stipend.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_740da2af-174","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Scale","sameAs":"https://scale.com/","logo":"https://logos.yubhub.co/scale.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/scaleai/jobs/4684073005","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$237,600-$297,000 USD","x-skills-required":["Detection Engineering","Incident Response","Security Operations","Cloud Services","Enterprise SaaS","Automation","Telemetry Pipelines","Digital Forensics","Malware Analysis","Threat Intelligence Platforms","SIEM","EDR","SOAR","Cloud-Native Environments","Programming Languages","Python","Go"],"x-skills-preferred":["Hands-on experience designing or improving detection pipelines, SIEM content, and alerting workflows in cloud-native environments","Practical experience with SIEM, EDR, and SOAR tools, with a preference for candidates who have built integrations or extended these platforms programmatically","Strong understanding of modern cyber threats, common attack techniques, and adversary TTPs","Familiarity with digital forensics tools and malware analysis techniques","Experience with cloud-native environments (e.g., AWS, GCP, Azure) and the security telemetry those environments generate","Exposure to threat intelligence platforms and integrating intel into 
detection and investigation workflows","Strong communication skills, with the ability to translate complex security findings into clear business impact","Relevant security certifications (e.g., GCIH, GCFA, GCIA, CISSP, GDSA)"],"datePosted":"2026-04-18T16:00:14.303Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"New York, NY; San Francisco, CA; Seattle, WA; Washington, DC"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Detection Engineering, Incident Response, Security Operations, Cloud Services, Enterprise SaaS, Automation, Telemetry Pipelines, Digital Forensics, Malware Analysis, Threat Intelligence Platforms, SIEM, EDR, SOAR, Cloud-Native Environments, Programming Languages, Python, Go, Hands-on experience designing or improving detection pipelines, SIEM content, and alerting workflows in cloud-native environments, Practical experience with SIEM, EDR, and SOAR tools, with a preference for candidates who have built integrations or extended these platforms programmatically, Strong understanding of modern cyber threats, common attack techniques, and adversary TTPs, Familiarity with digital forensics tools and malware analysis techniques, Experience with cloud-native environments (e.g., AWS, GCP, Azure) and the security telemetry those environments generate, Exposure to threat intelligence platforms and integrating intel into detection and investigation workflows, Strong communication skills, with the ability to translate complex security findings into clear business impact, Relevant security certifications (e.g., GCIH, GCFA, GCIA, CISSP, GDSA)","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":237600,"maxValue":297000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_a400e696-2d2"},"title":"Staff Software Engineer, Enterprise 
GenAI","description":"<p>We&#39;re seeking a strong engineer to join our team and help us build and scale our product in a fast-paced environment. As a Staff Software Engineer, you will own large new areas within our product, working across backend, frontend, and interacting with LLMs and ML models. You will solve hard engineering problems in scalability and reliability.</p>\n<p>Key responsibilities include:</p>\n<ul>\n<li>Delivering experiments at a high velocity and level of quality to engage our customers</li>\n<li>Working across the entire product lifecycle from conceptualization through production</li>\n<li>Being able, and willing, to multi-task and learn new technologies quickly</li>\n</ul>\n<p>Ideally, you&#39;d have:</p>\n<ul>\n<li>7+ years of full-time engineering experience, post-graduation</li>\n<li>Experience scaling products at hyper growth startups</li>\n<li>Experience tinkering with or productizing LLMs, vector databases, and the other latest AI technologies</li>\n<li>Proficient in Python or Javascript/Typescript, and SQL</li>\n<li>Experience with Kubernetes</li>\n<li>Experience with major cloud providers (AWS, Azure, GCP)</li>\n</ul>\n<p>Compensation packages at Scale for eligible roles include base salary, equity, and benefits. The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position, determined by work location and additional factors, including job-related skills, experience, interview performance, and relevant education or training.</p>\n<p>You’ll also receive benefits including, but not limited to: Comprehensive health, dental and vision coverage, retirement benefits, a learning and development stipend, and generous PTO. 
Additionally, this role may be eligible for additional benefits such as a commuter stipend.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_a400e696-2d2","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Scale","sameAs":"https://scale.com/","logo":"https://logos.yubhub.co/scale.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/scaleai/jobs/4569678005","x-work-arrangement":"hybrid","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"$248,400-$310,500 USD","x-skills-required":["Python","Javascript/Typescript","SQL","Kubernetes","AWS","Azure","GCP"],"x-skills-preferred":["LLMs","vector databases"],"datePosted":"2026-04-18T16:00:11.482Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA; New York, NY"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, Javascript/Typescript, SQL, Kubernetes, AWS, Azure, GCP, LLMs, vector databases","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":248400,"maxValue":310500,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_0cd9dcc6-813"},"title":"Solutions Engineer, Enterprise","description":"<p>We&#39;re looking for a Solutions Engineer to join our Enterprise team. As a Solutions Engineer, you will play a vital role in the development of AI applications. You will partner closely with Account Executives, Product, and Machine Learning Engineers to lead prospective customers through pre-sales, delivering customized demos and pilots to secure the &#39;technical win&#39;. You will scope customer technical requirements and develop an actionable Statement of Work. 
You will work closely with the delivery team to help with initial implementation.</p>\n<p>Key responsibilities include: Partner with Scale AEs on the customer journey, delivering tailored demos and prototypes according to the customer&#39;s requirements. Develop technical domain expertise in Generative AI / large language model applications for Enterprise use cases, including customers in financial services, insurance, SaaS, and similar enterprises. Be accountable for securing the &#39;technical win&#39; by unblocking technical challenges. Interact with customers daily to understand their needs and design solutions to better serve them. Design and develop &#39;Scopes of Work&#39; by breaking down customer challenges into a project plan. Work closely with forward-deployed Software and Machine Learning Engineers to develop agents in the initial post-sales stage. Work with AEs and PMs to identify customer-specific feature requests. Drive strategic initiatives to improve the efficiency and effectiveness of the Solution Engineering team.</p>\n<p>The ideal candidate will have: A strong engineering background with prior experience working with clients in a pre- or post-sales capacity to realise business goals. Prior experience developing with Python, Java, and/or other web development languages. Experience working in enterprise SaaS, cloud tech, finance, fintech, or similar industries in a technical capacity with end-customer engagement. A track record as a self-starter, motivated to independently unblock technical issues in the field with the customer, away from the mothership. Presentation skills with a high degree of technical credibility when speaking with executives and front-line engineers. A high level of comfort communicating effectively across internal and external organisations. 
Intellectual curiosity, empathy, and the ability to operate with high velocity.</p>\n<p>Nice to have: GenAI experience, forward-deployed engineering experience, and machine learning experience.</p>","url":"https://yubhub.co/jobs/job_0cd9dcc6-813","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Scale","sameAs":"https://www.scale.com/","logo":"https://logos.yubhub.co/scale.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/scaleai/jobs/4642876005","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Generative AI","large language model applications","Python","Java","web development languages","enterprise SaaS","cloud tech","finance","fintech"],"x-skills-preferred":["GenAI Experience","Forward deployed engineering experience","Machine Learning Experience"],"datePosted":"2026-04-18T16:00:07.677Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"London, UK"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Generative AI, large language model applications, Python, Java, web development languages, enterprise SaaS, cloud tech, finance, fintech, GenAI Experience, Forward deployed engineering experience, Machine Learning Experience"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_f7c042aa-57b"},"title":"Program Manager (Homeland Layered Defense), Public Sector","description":"<p>We&#39;re hiring a Program Manager to lead and own execution for Scale&#39;s portfolio of clients charged with a layered defense for the United States.</p>\n<p>This role is ideal for someone who blends program leadership, systems-building obsession, and technical fluency, and who thrives in fast-moving, ambiguous, and 
mission-driven environments.</p>\n<p>As a Program Manager, you will:</p>\n<ul>\n<li>Serve as the contractual and internal program lead across a large-scale public sector engagement, integrating multiple sub-efforts under one umbrella (e.g., data pipelines for computer vision, software deployments for GenAI solutions)</li>\n<li>Own relationships with senior government stakeholders and ensure that we meet contractual obligations, performance metrics, and reporting requirements</li>\n<li>Own relationships with subcontractors to ensure successful delivery of the prime contract</li>\n<li>Lead formal engagements such as contract reviews, customer program syncs, performance assessments, and reporting cycles</li>\n<li>Lead and supervise a team of 3-4 delivery managers who run day-to-day customer engagement efforts</li>\n<li>Oversee a technical delivery team that will drive product roadmap alignment that adapts to evolving customer needs</li>\n</ul>\n<p>Must-haves:</p>\n<ul>\n<li>An active TS/SCI clearance</li>\n<li>10+ years of professional experience, ideally managing complex technical programs for DoD or national security customers</li>\n<li>A proven ability to lead large, cross-functional efforts across delivery, contracts, and customer teams</li>\n<li>A healthy obsession with establishing and driving adoption of programmatic systems and processes, plus the ability to exert influence and manage associated change</li>\n<li>Familiarity with FAR-based contracts and program structures like IDIQs, task orders, and OTAs</li>\n<li>Strong business acumen with the ability to manage performance metrics, contract deliverables, and program budgets</li>\n<li>A track record of high ownership in ambiguous environments</li>\n<li>Excellent written and verbal communication skills, especially with senior government stakeholders</li>\n<li>Deep curiosity about agentic AI and its application to homeland layered defense</li>\n<li>A willingness to travel up to 25% of the time</li>\n</ul>\n<p>We have a diverse team with a variety of skill sets; many have:</p>\n<p>Proficiency in Python, SQL, or other programming languages; a proven track record in B2B + B2G client 
facing roles and expanding client relationships; prior experience delivering technical solutions to government customers; and domain expertise in a relevant field (e.g. modeling and simulation, joint planning processes, intelligence workflows, etc.)</p>","url":"https://yubhub.co/jobs/job_f7c042aa-57b","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Scale","sameAs":"https://www.scale.com/","logo":"https://logos.yubhub.co/scale.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/scaleai/jobs/4667857005","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$232,800-$291,000 USD","x-skills-required":["TS/SCI clearance","10+ years professional experience","program management","technical leadership","business acumen","communication skills","agentic AI","homeland layered defense"],"x-skills-preferred":["Python","SQL","programming languages","B2B + B2G client facing roles","technical solutions delivery","domain expertise"],"datePosted":"2026-04-18T16:00:07.619Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Washington, DC"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"TS/SCI clearance, 10+ years professional experience, program management, technical leadership, business acumen, communication skills, agentic AI, homeland layered defense, Python, SQL, programming languages, B2B + B2G client facing roles, technical solutions delivery, domain expertise","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":232800,"maxValue":291000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_56e29c57-cd1"},"title":"Robotics 
Technician","description":"<p>We&#39;re seeking a Robotics Technician to join our team in Mexico City. As a key contributor, you will partner with cross-functional stakeholders to bring up new robots and productionize the maintenance of robots and collection hardware. You will play a critical role in supporting the day-to-day operations of the factory by bringing up and maintaining robots and collection hardware. You will also provide technical support for data collection operations, manage physical inventory, maintain equipment, and coordinate logistics.</p>\n<p>You will become a subject matter expert on all capabilities of the robotics platforms deployed in the factory. You will develop technical domain expertise in areas of 2D and 3D imaging and annotation, multi-sensor fusion and calibration, GPS/INS navigation systems, computer vision, and other autonomy-adjacent concepts.</p>\n<p>You have a Bachelor&#39;s degree or industry experience, an engineering background, preferably in Computer Science, Mathematics, or other Engineering fields. You have 2+ years of experience developing with Python, C++, Java, and/or other scripting languages. You have 1-3 years of experience in hardware labs or a manufacturing environment. You have experience managing risk and operating robots safely. You have strong project management and interpersonal skills, high attention to detail, and a strong sense of ownership. 
You have a high level of comfort communicating effectively across internal and external organizations.</p>\n<p>Nice to have: hands-on experience in Robotics, AI, and/or Computer Vision; intellectual curiosity, empathy, and the ability to operate with a high degree of autonomy; experience building and/or maintaining lab networks and data pipelines; experience running large-scale data collection and controlled experiments; experience building out facilities; and experience in logistics.</p>","url":"https://yubhub.co/jobs/job_56e29c57-cd1","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Scale","sameAs":"https://scale.com/","logo":"https://logos.yubhub.co/scale.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/scaleai/jobs/4635128005","x-work-arrangement":"onsite","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Python","C++","Java","Robotics","AI","Computer Vision","Multi-sensor fusion and calibration","GPS/INS navigation systems"],"x-skills-preferred":["hands-on experience in Robotics, AI, and/or Computer Vision","intellectual curiosity","empathy","ability to operate with a high degree of autonomy","experience building and/or maintaining lab networks and data pipelines","experience running large-scale data collection and controlled experiments","experience building out facilities","experience in logistics"],"datePosted":"2026-04-18T16:00:01.904Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Mexico City, MX"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, C++, Java, Robotics, AI, Computer Vision, Multi-sensor fusion and calibration, GPS/INS navigation systems, hands-on experience in Robotics, AI, and/or Computer Vision, intellectual curiosity, empathy, ability to 
operate with a high degree of autonomy, experience building and/or maintaining lab networks and data pipelines, experience running large-scale data collection and controlled experiments, experience building out facilities, experience in logistics"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_44975b06-cb1"},"title":"Senior Full-Stack Software Engineer, (Forward Deployed), GPS","description":"<p>We&#39;re seeking a Senior Full-Stack Software Engineer to join our Global Public Sector team. As a forward-deployed engineer, you&#39;ll collaborate directly with public sector counterparts to build full-stack AI applications that solve critical challenges and achieve meaningful impact for citizens.</p>\n<p>Our core work consists of creating custom AI applications, generating high-quality training data for custom LLMs, and providing upskilling and advisory services to spread the impact of AI.</p>\n<p>You&#39;ll partner with public sector clients to scope, collect feedback, and implement solutions for complex problems. You&#39;ll also architect production-grade applications that integrate AI models with full-stack frameworks, manage infrastructure within cloud environments, and contribute to core platform features.</p>\n<p>Ideally, you&#39;ll have a Bachelor&#39;s degree in Computer Science or a related quantitative field, 5+ years of full-stack engineering experience, proficiency in React, TypeScript, Next.js, Python, Node.js, and PostgreSQL or MongoDB, and hands-on experience with Docker, Kubernetes, and Azure/AWS/GCP.</p>\n<p>We&#39;re looking for a self-starting approach with technical maturity to navigate ambiguous requirements and deliver reliable software. 
You&#39;ll also need to drive asynchronous communication practices to reduce communication friction.</p>\n<p>If you&#39;re ready to shape the future of AI in the public sector and be a founding member of our team, we&#39;d love to hear from you.</p>","url":"https://yubhub.co/jobs/job_44975b06-cb1","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Scale","sameAs":"https://scale.com/","logo":"https://logos.yubhub.co/scale.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/scaleai/jobs/4673310005","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["React","TypeScript","Next.js","Python","Node.js","PostgreSQL","MongoDB","Docker","Kubernetes","Azure","AWS","GCP"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:59:59.289Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"London, UK"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"React, TypeScript, Next.js, Python, Node.js, PostgreSQL, MongoDB, Docker, Kubernetes, Azure, AWS, GCP"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_bd00b53a-6fa"},"title":"Software Engineer, Enterprise AI","description":"<p>We are seeking a strong engineer to join our team and help us build and scale our product in a fast-paced environment. The ideal candidate will have a strong understanding of software engineering principles and practices, as well as experience with large-scale distributed systems.</p>\n<p>You will be responsible for owning large new areas within our product, working across the backend and frontend and interacting with LLMs and ML models. 
You will solve hard engineering problems in scalability and reliability.</p>\n<p>Key responsibilities include:</p>\n<ul>\n<li>Owning large new areas within our product</li>\n<li>Working across the backend and frontend, and interacting with LLMs and ML models</li>\n<li>Delivering experiments at a high velocity and level of quality to engage our customers</li>\n<li>Working across the entire product lifecycle from conceptualization through production</li>\n</ul>\n<p>Ideally, you&#39;d have:</p>\n<ul>\n<li>4+ years of full-time engineering experience, post-graduation</li>\n<li>Experience scaling products at hyper-growth startups</li>\n<li>Experience tinkering with or productizing LLMs, vector databases, and other cutting-edge AI technologies</li>\n<li>Proficiency in Python or Javascript/Typescript, and SQL</li>\n<li>Experience with Kubernetes</li>\n<li>Experience with major cloud providers (AWS, Azure, GCP)</li>\n</ul>\n<p>Compensation packages at Scale for eligible roles include base salary, equity, and benefits. The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position, determined by work location and additional factors, including job-related skills, experience, interview performance, and relevant education or training.</p>\n<p>You’ll also receive benefits including, but not limited to: Comprehensive health, dental and vision coverage, retirement benefits, a learning and development stipend, and generous PTO. 
Additionally, this role may be eligible for additional benefits such as a commuter stipend.</p>","url":"https://yubhub.co/jobs/job_bd00b53a-6fa","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Scale","sameAs":"https://scale.com","logo":"https://logos.yubhub.co/scale.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/scaleai/jobs/4513943005","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$179,400-$224,250 USD","x-skills-required":["Python","Javascript/Typescript","SQL","Kubernetes","AWS","Azure","GCP"],"x-skills-preferred":["LLMs","vector databases","AI technologies"],"datePosted":"2026-04-18T15:59:58.329Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"New York, NY; San Francisco, CA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, Javascript/Typescript, SQL, Kubernetes, AWS, Azure, GCP, LLMs, vector databases, AI technologies","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":179400,"maxValue":224250,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_901202b0-bfa"},"title":"Product Security Engineer - Public Sector","description":"<p>We are seeking a highly technical Security Engineer to join our Product Security team. This role is integral to ensuring the security and integrity of our products and services.</p>\n<p>You will conduct in-depth code reviews, implement security best practices, and influence the overall security strategy. 
Your expertise in TypeScript, Python, Kubernetes, CI/CD, SAST, DAST, and Terraform orchestration will be crucial in identifying and mitigating potential security vulnerabilities.</p>\n<p>You will:</p>\n<ul>\n<li>Conduct in-depth code reviews to identify and remediate security vulnerabilities.</li>\n<li>Evaluate and enhance the security of our product offerings, through RFC and service review.</li>\n<li>Implement and maintain CI/CD pipelines with a strong focus on security.</li>\n<li>Perform Static Application Security Testing (SAST) and Dynamic Application Security Testing (DAST) to identify vulnerabilities in production code.</li>\n<li>Utilize Terraform orchestration to ensure secure and efficient infrastructure management.</li>\n<li>Guide engineering teams to build robust long-term solutions that consider security and privacy.</li>\n<li>Clearly explain the mechanics and significance of security vulnerabilities, including their exploitability and potential impact.</li>\n<li>Influence the security strategy and direction of the team, advocating for best practices and continuous improvement.</li>\n</ul>\n<p>Ideally, you’d have:</p>\n<ul>\n<li>Proven experience as a Security Engineer with a focus on product security.</li>\n<li>Proficiency in NodeJS, TypeScript, Python, and/or Kubernetes.</li>\n<li>Strong understanding of modern Javascript application design.</li>\n<li>Production experience with Kubernetes-backed services.</li>\n<li>Hands-on experience with SAST and DAST tools and methodologies.</li>\n<li>Familiarity with Terraform orchestration for infrastructure management.</li>\n<li>You can structure complex problems and diagnose root causes independently, providing actionable insights without requiring manager input.</li>\n<li>Excellent communication skills, with the ability to clearly present technical concepts and their implications to both technical and non-technical stakeholders.</li>\n<li>Demonstrated ability to influence security strategies and drive 
improvements within a team.</li>\n<li>Relevant security certifications (e.g., CISSP, CEH, OSCP) are a plus.</li>\n</ul>\n<p>Compensation packages at Scale for eligible roles include base salary, equity, and benefits. The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position, determined by work location and additional factors, including job-related skills, experience, interview performance, and relevant education or training.</p>\n<p>You’ll also receive benefits including, but not limited to: Comprehensive health, dental and vision coverage, retirement benefits, a learning and development stipend, and generous PTO. Additionally, this role may be eligible for additional benefits such as a commuter stipend.</p>\n<p>The base salary range for this full-time position in the location of Washington DC/Hawaii is: $205,700-$257,400 USD</p>\n<p>The base salary range for this full-time position in the location of St. Louis/Suffolk is: $171,600-$214,500 USD</p>","url":"https://yubhub.co/jobs/job_901202b0-bfa","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Scale","sameAs":"https://www.scale.com/","logo":"https://logos.yubhub.co/scale.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/scaleai/jobs/4651559005","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$205,700-$257,400 USD (Washington DC/Hawaii), $171,600-$214,500 USD (St. 
Louis/Suffolk)","x-skills-required":["TypeScript","Python","Kubernetes","CI/CD","SAST","DAST","terraform orchestration"],"x-skills-preferred":["NodeJS","modern Javascript application design","Kubernetes backed services","SAST and DAST tools and methodologies","terraform orchestration for infrastructure management"],"datePosted":"2026-04-18T15:59:56.896Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"St. Louis, MO; Washington, DC"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"TypeScript, Python, Kubernetes, CI/CD, SAST, DAST, terraform orchestration, NodeJS, modern Javascript application design, Kubernetes backed services, SAST and DAST tools and methodologies, terraform orchestration for infrastructure management","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":171600,"maxValue":257400,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_94b058de-e59"},"title":"Solutions Engineer, Enterprise","description":"<p>We are seeking a Solutions Engineer to join our team. As a Solutions Engineer, you will play a vital role in the development of AI applications. You will partner closely with Account Executives, Product, and Machine Learning Engineers to lead prospective customers through pre-sales, delivering customized demos and pilots to secure the “technical win”. You will scope customer technical requirements and develop an actionable Statement of Work. You will work closely with the delivery team to help with initial implementation.</p>\n<p>You will be accountable for securing the “technical win” by unblocking technical challenges. You will interact with customers daily to understand their needs and design solutions to better serve them. 
You will design and develop “Scopes of Work” by breaking down customer challenges into a project plan. You will work closely with forward-deployed Software and Machine Learning Engineers to develop agents in the initial post-sales stage. You will work with Account Executives and Project Managers to identify customer-specific feature requests. You will drive strategic initiatives to improve the efficiency and effectiveness of the Solution Engineering team.</p>\n<p>Ideally, you&#39;d have a strong engineering background with prior experience working with clients in a pre- or post-sales capacity to realise business goals. You should have prior experience developing with Python, Java, and/or other web development languages. You should have experience working in enterprise SaaS, cloud tech, finance, fintech, or similar industries in a technical capacity with end-customer engagement. You should have a track record as a self-starter, motivated to independently unblock technical issues in the field with the customer, away from the mothership. You should have presentation skills with a high degree of technical credibility when speaking with executives and front-line engineers. You should have a high level of comfort communicating effectively across internal and external organisations. 
You should have intellectual curiosity, empathy, and the ability to operate with high velocity.</p>\n<p>Nice-to-haves include GenAI experience, forward-deployed engineering experience, and machine learning experience.</p>","url":"https://yubhub.co/jobs/job_94b058de-e59","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Scale","sameAs":"https://www.scale.com/","logo":"https://logos.yubhub.co/scale.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/scaleai/jobs/4554440005","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$180,000-$225,000 USD","x-skills-required":["Python","Java","web development languages","GenAI","Machine Learning","enterprise SaaS","cloud tech","finance","fintech"],"x-skills-preferred":["GenAI Experience","Forward deployed engineering experience","Machine Learning Experience"],"datePosted":"2026-04-18T15:59:54.301Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA; New York, NY"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, Java, web development languages, GenAI, Machine Learning, enterprise SaaS, cloud tech, finance, fintech, GenAI Experience, Forward deployed engineering experience, Machine Learning Experience","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":180000,"maxValue":225000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_45fc6ed2-285"},"title":"Senior Full-Stack Software Engineer, (Forward Deployed), GPS","description":"<p>We&#39;re seeking a Senior Full-Stack Software Engineer to join our Global Public Sector team. 
As a forward-deployed engineer, you&#39;ll collaborate directly with public sector counterparts to build full-stack AI applications that solve their most pressing challenges.</p>\n<p>Our core work consists of creating custom AI applications, generating high-quality training data for custom LLMs, and upskilling and advisory services to spread the impact of AI.</p>\n<p>You&#39;ll partner with public sector clients to scope, collect feedback, and implement solutions for complex problems. You&#39;ll also architect production-grade applications that integrate AI models with full-stack frameworks, manage infrastructure within cloud environments, and contribute to core platform features.</p>\n<p>Ideally, you&#39;ll have a Bachelor&#39;s degree in Computer Science or a related quantitative field, 5+ years of full-stack engineering experience, and proficiency in React, TypeScript, Next.js, Python, Node.js, PostgreSQL or MongoDB, Docker, Kubernetes, and Azure/AWS/GCP.</p>\n<p>You&#39;ll be a self-starting individual with technical maturity to navigate ambiguous requirements and deliver reliable software. 
You&#39;ll also have experience working directly within customer infrastructure to deploy, maintain, and troubleshoot complex, end-to-end solutions.</p>\n<p>Nice to have: proficiency in Arabic; past experience working in a forward-deployed engineer/dedicated customer engineer role; experience working cross-functionally with operations; and experience building solutions with LLMs, with a deep understanding of the overall GenAI landscape.</p>\n<p>Please note that our policy requires a 90-day waiting period before reconsidering candidates for the same role.</p>","url":"https://yubhub.co/jobs/job_45fc6ed2-285","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Scale","sameAs":"https://scale.com/","logo":"https://logos.yubhub.co/scale.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/scaleai/jobs/4676606005","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["React","TypeScript","Next.js","Python","Node.js","PostgreSQL","MongoDB","Docker","Kubernetes","Azure","AWS","GCP"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:59:52.395Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Doha, Qatar"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"React, TypeScript, Next.js, Python, Node.js, PostgreSQL, MongoDB, Docker, Kubernetes, Azure, AWS, GCP"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_14499a71-fa9"},"title":"Software Engineer, Enterprise","description":"<p>At Scale AI, we&#39;re pioneering the next era of enterprise AI. 
As businesses race to harness the power of Generative AI, Scale is at the forefront, delivering cutting-edge solutions that transform workflows, automate complex processes, and drive unparalleled efficiency for the largest enterprises.</p>\n<p>We&#39;re looking for a Backend Engineer to help bring large-scale GenAI systems to production. In this role, you&#39;ll build the core infrastructure that powers AI products for some of the world&#39;s largest enterprises: designing scalable APIs, distributed data systems, and robust deployment pipelines that enable production-grade reliability and performance.</p>\n<p>This is a rare opportunity to be at the center of the GenAI revolution, solving hard backend and infrastructure challenges that make AI truly work at enterprise scale. If you&#39;re excited about shaping how AI systems are deployed and scaled in the real world, we want to hear from you.</p>\n<p>At Scale, we don&#39;t just follow AI advancements; we lead them. Backed by deep expertise in data, infrastructure, and model deployment, we are uniquely positioned to solve the hardest problems in AI adoption. 
Join us in shaping the future of enterprise AI, where your work will directly impact how businesses operate, innovate, and grow in the age of GenAI.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Design, build, and scale backend systems that power enterprise GenAI products, focusing on reliability, performance, and deployment across both Scale&#39;s and customers&#39; infrastructure.</li>\n</ul>\n<ul>\n<li>Develop core services and APIs that integrate AI models and enterprise data sources securely and efficiently, enabling production-scale AI adoption.</li>\n</ul>\n<ul>\n<li>Architect scalable distributed systems for data processing, inference, and orchestration of large-scale GenAI workloads.</li>\n</ul>\n<ul>\n<li>Optimize backend performance for latency, throughput, and cost, ensuring AI applications can operate at enterprise scale across hybrid and multi-cloud environments.</li>\n</ul>\n<ul>\n<li>Manage and evolve cloud infrastructure (AWS, Azure, or GCP), driving automation, observability, and security for large-scale AI deployments.</li>\n</ul>\n<ul>\n<li>Collaborate with ML and product teams to bring cutting-edge GenAI models into production through efficient APIs, model serving systems, and evaluation frameworks.</li>\n</ul>\n<ul>\n<li>Continuously improve reliability and scalability, applying strong engineering practices to make AI systems robust, maintainable, and enterprise-ready.</li>\n</ul>\n<p><strong>Ideal Candidate</strong></p>\n<ul>\n<li>4+ years of experience developing large-scale backend or infrastructure systems, with a strong emphasis on distributed services, reliability, and scalability.</li>\n</ul>\n<ul>\n<li>Proficiency in Python or TypeScript, with experience designing high-performance APIs and backend architectures using frameworks such as FastAPI, Flask, Express, or NestJS.</li>\n</ul>\n<ul>\n<li>Deep familiarity with cloud infrastructure (AWS and Azure preferred), including container orchestration (Kubernetes, Docker) and 
Infrastructure-as-Code tools like Terraform.</li>\n</ul>\n<ul>\n<li>Experience managing data systems such as relational and NoSQL databases (PostgreSQL, DynamoDB, etc.) and building pipelines for data-intensive applications.</li>\n</ul>\n<ul>\n<li>Hands-on experience with GenAI applications, model integration, or AI agent systems, understanding how to deploy, evaluate, and scale AI workloads in production.</li>\n</ul>\n<ul>\n<li>Strong understanding of observability, CI/CD, and security best practices for running services in enterprise or multi-tenant environments.</li>\n</ul>\n<ul>\n<li>Ability to balance rapid iteration with production-grade quality, shipping reliable backend systems in fast-paced environments.</li>\n</ul>\n<p>Collaborative mindset, working closely with ML, infra, and product teams to bring complex GenAI systems into production at enterprise scale.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_14499a71-fa9","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Scale AI","sameAs":"https://scale.com/","logo":"https://logos.yubhub.co/scale.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/scaleai/jobs/4536653005","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Python","TypeScript","FastAPI","Flask","Express","NestJS","AWS","Azure","Kubernetes","Docker","Terraform","PostgreSQL","DynamoDB","GenAI","Model Integration","AI Agent Systems"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:59:48.948Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"London, UK"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, TypeScript, FastAPI, Flask, Express, NestJS, AWS, Azure, Kubernetes, Docker, Terraform, PostgreSQL, DynamoDB, GenAI, Model 
Integration, AI Agent Systems"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_b88c5c28-661"},"title":"Solutions Engineer (Clearance Required)","description":"<p>Our customer base is growing exponentially, and you will be on the front lines of ensuring that the world&#39;s most innovative companies become Scale customers.</p>\n<p>As a Solutions Engineer, you will be part of helping shape our early-stage federal business by re-envisioning our commercial product offerings for our federal clients.</p>\n<p>Key responsibilities include:</p>\n<ul>\n<li>Becoming an expert on the end-to-end of Scale Products</li>\n<li>Creating tailored demonstrations and collateral for federal stakeholders at both the executive and analyst level</li>\n<li>Partnering with Scale Account Executives to deliver customer pilots according to requirements agreed by the customer</li>\n<li>Integrating and ingesting a variety of external datasets to solve government use cases</li>\n<li>Interacting with customers on a day-to-day basis to understand their pain points and design solutions</li>\n<li>Working with internal product and engineering teams to turn customer requirements into Scale capabilities</li>\n<li>Understanding public sector mission sets and strategic objectives to better showcase Scale&#39;s products</li>\n</ul>\n<p>Ideal candidates will have:</p>\n<ul>\n<li>A strong engineering background, preferably in computer science, mathematics, or other quantitative fields</li>\n<li>Strong communication skills, ability to interact with both technical and non-technical customers at all levels</li>\n<li>At ease with technology, able to quickly pick up new tech stacks and troubleshoot</li>\n<li>Previous experience working with Public Sector customers</li>\n<li>Proficiency in scripting languages such as Python, Javascript/Typescript, Bash scripts, or programming languages</li>\n<li>A strong desire to roll up your sleeves and 
help build a business in an extremely fast-paced environment</li>\n<li>Active US Government Security Clearance (TS / SCI required)</li>\n<li>Based in the Washington, DC area or willing to relocate</li>\n<li>Background working in AI/ML, particularly Generative AI and Large Language Models</li>\n</ul>\n<p>Compensation packages at Scale for eligible roles include base salary, equity, and benefits. The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position, determined by work location and additional factors, including job-related skills, experience, interview performance, and relevant education or training.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_b88c5c28-661","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Scale","sameAs":"https://www.scale.com/","logo":"https://logos.yubhub.co/scale.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/scaleai/jobs/4663481005","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$186,400-$233,000 USD","x-skills-required":["engineering background","computer science","mathematics","quantitative fields","scripting languages","Python","Javascript/Typescript","Bash scripts","programming languages","US Government Security Clearance","AI/ML","Generative AI","Large Language Models"],"x-skills-preferred":["communication skills","ability to interact with technical and non-technical customers","at ease with technology","previous experience working with Public Sector customers"],"datePosted":"2026-04-18T15:59:46.105Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Washington, DC"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"engineering background, computer science, mathematics, 
quantitative fields, scripting languages, Python, Javascript/Typescript, Bash scripts, programming languages, US Government Security Clearance, AI/ML, Generative AI, Large Language Models, communication skills, ability to interact with technical and non-technical customers, at ease with technology, previous experience working with Public Sector customers","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":186400,"maxValue":233000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_6365e7d7-511"},"title":"Senior Forward Deployed Data Scientist/Engineer","description":"<p>We&#39;re hiring a Senior Forward Deployed Data Scientist / Engineer to work directly with customers on ambiguous, high-impact problems at the intersection of data science, product development, and AI deployment.</p>\n<p>This is not a traditional analytics role. On this team, data scientists do the core statistical and modeling work, but they also build real tools and products: evaluation explorers, operator workflows, decision-support systems, experimentation surfaces, and customer-specific AI/data applications that get used in production.</p>\n<p>The right candidate is strong in first-principles problem solving, rigorous measurement, and technical execution. They know how to define metrics, design experiments, diagnose failures, and build systems that people actually use. They are also comfortable using modern AI-assisted development tools to prototype and iterate quickly without sacrificing reliability, observability, or judgment. 
Python and SQL matter in this role, but as execution fluency in service of building better products and making better decisions.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Partner directly with enterprise customers to understand workflows, operational pain points, constraints, and success criteria</li>\n<li>Turn ambiguous business and product problems into measurable solutions with clear metrics, technical designs, and deployment plans</li>\n<li>Design and build internal and customer-facing data products, including evaluation tools, workflow applications, decision-support systems, and thin product layers on top of data/ML systems</li>\n<li>Build end-to-end solutions across data ingestion, transformation, experimentation, statistical modeling, deployment, monitoring, and iteration</li>\n<li>Design evaluation frameworks, benchmarks, and feedback loops for ML/LLM systems, human-in-the-loop workflows, and model-assisted operations</li>\n<li>Apply rigorous statistical thinking to experimentation, causal inference, metric design, forecasting, segmentation, diagnostics, and performance measurement</li>\n<li>Use AI-assisted development workflows to accelerate prototyping and product iteration, while maintaining strong engineering discipline</li>\n<li>Diagnose failure modes across data quality, model behavior, retrieval, workflow design, and user experience, and drive fixes into production</li>\n<li>Act as the voice of the customer to Product, Engineering, and Data Science, using field learnings to shape roadmap and platform capabilities</li>\n</ul>\n<p>Requirements:</p>\n<ul>\n<li>5+ years of experience in data science, machine learning, quantitative engineering, or another highly analytical technical role</li>\n<li>Proven track record of shipping data, ML, or AI systems that delivered measurable business or product impact</li>\n<li>Exceptional ability to structure ambiguous problems, define the right success metrics, and translate them into executable technical plans</li>\n<li>Strong foundation in statistics, experimentation, causal reasoning, and measurement</li>\n<li>Experience building tools or products, not just analyses; for example internal workflow tools, evaluation systems, operator-facing products, experimentation platforms, or customer-specific applications</li>\n<li>Hands-on fluency in Python, SQL, and modern data/AI tooling; able to inspect data, prototype quickly, debug deeply, and productionize solutions that work</li>\n<li>Comfort using AI-assisted coding and development workflows to move from idea to usable product quickly</li>\n<li>Strong communication and stakeholder management skills; able to work effectively with customers, engineers, product teams, and executives</li>\n<li>High ownership and bias toward shipping in fast-moving environments with incomplete information</li>\n</ul>\n<p>Preferred qualifications:</p>\n<ul>\n<li>Experience in a forward deployed, solutions, consulting, or other client-facing technical role</li>\n<li>Experience designing evaluation frameworks for LLMs, retrieval systems, agentic workflows, or other AI-enabled products</li>\n<li>Experience with large-scale data processing and distributed systems such as Spark, Ray, or Airflow</li>\n<li>Experience with cloud infrastructure and modern data platforms such as AWS, GCP, Snowflake, or BigQuery</li>\n<li>Experience building lightweight applications, APIs, internal tools, or workflow software on top of data/ML systems</li>\n<li>Familiarity with marketplace experimentation, causal inference, forecasting, optimization, or advanced statistical modeling</li>\n<li>Strong product instinct and the judgment to know when the right answer is a model, an experiment, a tool, or a workflow redesign</li>\n</ul>\n<p>What success looks like: Success in this role means taking a messy, high-stakes customer problem and turning it into a deployed system that is actually used. Sometimes that system is a model. Sometimes it is an evaluation framework. Sometimes it is an operator-facing tool or a lightweight data product that changes how decisions get made. 
In all cases, success is defined by measurable impact, rigorous evaluation, and reliable execution.</p>\n<p>Compensation packages at Scale for eligible roles include base salary, equity, and benefits. The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position, determined by work location and additional factors, including job-related skills, experience, interview performance, and relevant education or training. Scale employees in eligible roles are also granted equity based compensation, subject to Board of Director approval. Your recruiter can share more about the specific salary range for your preferred location during the hiring process, and confirm whether the hired role will be eligible for equity grant. You’ll also receive benefits including, but not limited to: Comprehensive health, dental and vision coverage, retirement benefits, a learning and development stipend, and generous PTO. Additionally, this role may be eligible for additional benefits such as a commuter stipend.</p>\n<p>Salary Range: $167,200-$209,000 USD</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_6365e7d7-511","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Scale AI","sameAs":"https://scale.com/","logo":"https://logos.yubhub.co/scale.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/scaleai/jobs/4636227005","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$167,200-$209,000 USD","x-skills-required":["Python","SQL","Modern data/AI tooling","Statistics","Experimentation","Causal reasoning","Measurement","Data science","Machine learning","Quantitative engineering"],"x-skills-preferred":["Experience in a forward deployed, solutions, consulting, or other client-facing technical role","Experience designing evaluation frameworks for 
LLMs, retrieval systems, agentic workflows, or other AI-enabled products","Experience with large-scale data processing and distributed systems such as Spark, Ray, or Airflow","Experience with cloud infrastructure and modern data platforms such as AWS, GCP, Snowflake, or BigQuery","Experience building lightweight applications, APIs, internal tools, or workflow software on top of data/ML systems","Familiarity with marketplace experimentation, causal inference, forecasting, optimization, or advanced statistical modeling","Strong product instinct and the judgment to know when the right answer is a model, an experiment, a tool, or a workflow redesign"],"datePosted":"2026-04-18T15:59:44.618Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA; New York, NY"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, SQL, Modern data/AI tooling, Statistics, Experimentation, Causal reasoning, Measurement, Data science, Machine learning, Quantitative engineering, Experience in a forward deployed, solutions, consulting, or other client-facing technical role, Experience designing evaluation frameworks for LLMs, retrieval systems, agentic workflows, or other AI-enabled products, Experience with large-scale data processing and distributed systems such as Spark, Ray, or Airflow, Experience with cloud infrastructure and modern data platforms such as AWS, GCP, Snowflake, or BigQuery, Experience building lightweight applications, APIs, internal tools, or workflow software on top of data/ML systems, Familiarity with marketplace experimentation, causal inference, forecasting, optimization, or advanced statistical modeling, Strong product instinct and the judgment to know when the right answer is a model, an experiment, a tool, or a workflow 
redesign","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":167200,"maxValue":209000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_6f598f99-758"},"title":"Senior+ Software Engineer, Research Tools","description":"<p>We&#39;re looking for a Senior+ Software Engineer to join our Research Tools team. As a member of this team, you&#39;ll build the infrastructure and applications that enable our researchers to iterate quickly, run complex experiments, and extract insights from frontier AI systems.</p>\n<p>This role sits at the intersection of product thinking and full-stack engineering. You&#39;ll work directly with researchers and engineers to deeply understand their workflows, identify bottlenecks, and rapidly ship solutions that multiply their productivity. Whether you&#39;re building human feedback interfaces for model evaluation, creating platforms for experiment orchestration, or developing novel visualization tools for understanding model behavior, your work will directly accelerate our mission to build safe, reliable AI systems.</p>\n<p>We&#39;re looking for someone who can operate with high agency in an ambiguous environment: someone who can be dropped into a research team, quickly develop domain expertise, and independently drive impactful projects from conception to delivery.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Build and maintain full-stack applications and infrastructure that researchers use daily to conduct experiments, collect feedback, and analyze results</li>\n<li>Partner closely with research teams to understand their workflows, pain points, and requirements, translating these into technical solutions</li>\n<li>Design intuitive interfaces and abstractions that make complex research tasks accessible and efficient</li>\n<li>Create reusable platforms and tools that accelerate the development of new 
research applications</li>\n<li>Rapidly prototype and iterate on solutions, gathering feedback from users and refining based on real-world usage</li>\n<li>Take ownership of complete product areas, from understanding user needs through design, implementation, and ongoing iteration</li>\n<li>Contribute to technical strategy and architectural decisions for research tooling</li>\n<li>Mentor other engineers and help establish best practices for research application development</li>\n</ul>\n<p>Requirements:</p>\n<ul>\n<li>5+ years of software engineering experience with a strong focus on full-stack development</li>\n<li>Excel at rapid iteration and shipping; you can move from concept to working prototype quickly</li>\n<li>Have experience building tools, platforms, or infrastructure for technical users (engineers, researchers, data scientists, analysts, etc.)</li>\n<li>Demonstrate high agency and ability to operate independently in ambiguous environments</li>\n<li>Can quickly develop deep understanding of complex technical domains</li>\n<li>Have strong product instincts and can identify the right problems to solve</li>\n<li>Are proficient with modern web technologies (React, TypeScript, Python, etc.)</li>\n<li>Have a track record of building user-facing applications that are actually used and loved by their target audience</li>\n<li>Communicate effectively with both technical and non-technical stakeholders</li>\n<li>Care about the societal impacts of your work and are motivated by Anthropic&#39;s mission</li>\n</ul>\n<p>Strong candidates may also have:</p>\n<ul>\n<li>Experience building research tools, scientific software, or experimentation platforms</li>\n<li>Background in machine learning, AI research, or working closely with ML researchers</li>\n<li>Founded or been an early engineer at a startup, particularly one focused on developer or researcher tools</li>\n<li>Built open-source tools or platforms with active user communities</li>\n<li>Experience with data 
visualization, interactive interfaces, or novel interaction paradigms</li>\n<li>Contributed to engineering platforms or internal tooling at scale (similar to Heroku, Vercel, or other platform-as-a-service products)</li>\n<li>Experience leveraging AI/LLMs to build more powerful or efficient tools</li>\n<li>Previous work in creative tools, artist tools, or other domains requiring deep user empathy</li>\n<li>Domain knowledge in areas like human-computer interaction, systems safety, or AI alignment</li>\n</ul>\n<p>Annual compensation range for this role is $300,000-$405,000 USD.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_6f598f99-758","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com/","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/4981828008","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$300,000-$405,000 USD","x-skills-required":["full-stack development","modern web technologies","React","TypeScript","Python","rapid iteration","shipping","product instincts","problem-solving","communication","societal impacts"],"x-skills-preferred":["research tools","scientific software","experimentation platforms","machine learning","AI research","open-source tools","data visualization","interactive interfaces","novel interaction paradigms","engineering platforms","internal tooling","AI/LLMs","creative tools","artist tools","human-computer interaction","systems safety","AI alignment"],"datePosted":"2026-04-18T15:59:42.608Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA | New York City, NY"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"full-stack 
development, modern web technologies, React, TypeScript, Python, rapid iteration, shipping, product instincts, problem-solving, communication, societal impacts, research tools, scientific software, experimentation platforms, machine learning, AI research, open-source tools, data visualization, interactive interfaces, novel interaction paradigms, engineering platforms, internal tooling, AI/LLMs, creative tools, artist tools, human-computer interaction, systems safety, AI alignment","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":300000,"maxValue":405000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_853e1417-019"},"title":"Solutions Architect, Applied AI (National Security)","description":"<p>As a Solutions Architect, Applied AI (National Security), you will be a Pre-Sales architect focused on becoming a trusted technical advisor helping national security and defense agencies understand the value of Claude and paint the vision on how they can successfully integrate and deploy Claude into their technology stack.</p>\n<p>You will combine your deep technical expertise with customer-facing skills to architect innovative LLM solutions that address complex mission challenges while maintaining our high standards for safety and reliability.</p>\n<p>Working closely with our Sales, Product, and Engineering teams, you&#39;ll guide customers from initial technical discovery through successful deployment. 
You&#39;ll leverage your expertise to help customers understand Claude&#39;s capabilities, develop evals, and design scalable architectures that maximize the value of our AI systems.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Partner with account executives to deeply understand customer requirements and translate them into technical solutions, ensuring alignment between business objectives and technical implementation</li>\n</ul>\n<ul>\n<li>Serve as the primary technical advisor to enterprise customers throughout their Claude adoption journey, from discovery to initial evaluation through deployment. You will need to coordinate internally across multiple teams &amp; stakeholders to drive customer success</li>\n</ul>\n<ul>\n<li>Support customers building with Claude Code, the Claude API, and Claude for Enterprise</li>\n</ul>\n<ul>\n<li>Create and deliver compelling technical content tailored to different audiences. You will need to be able to run the gamut from technical deep dives for engineering &amp; development teams up to business value focused conversations with executives</li>\n</ul>\n<ul>\n<li>Guide technical architecture decisions and help customers integrate Claude effectively into their existing technology stack</li>\n</ul>\n<ul>\n<li>Help customers develop evaluation frameworks to measure Claude&#39;s performance for their specific use cases</li>\n</ul>\n<ul>\n<li>Identify common integration patterns and contribute insights back to our Product and Engineering teams</li>\n</ul>\n<ul>\n<li>Travel frequently to customer sites for workshops, technical deep dives, and relationship building</li>\n</ul>\n<ul>\n<li>Maintain strong knowledge of the latest developments in LLM capabilities and implementation patterns</li>\n</ul>\n<p>You may be a good fit if you have:</p>\n<ul>\n<li>TS/SCI clearance required</li>\n</ul>\n<ul>\n<li>Must have prior experience working with US national security (defense and/or intelligence) agencies</li>\n</ul>\n<ul>\n<li>5+ years of 
experience in technical customer-facing roles such as Solutions Architect, Sales Engineer, or Technical Account Manager</li>\n</ul>\n<ul>\n<li>Experience navigating complex buying cycles involving multiple stakeholders</li>\n</ul>\n<ul>\n<li>Exceptional ability to build relationships with and communicate technical concepts to diverse stakeholders to include C-suite executives, engineering &amp; IT teams, and more</li>\n</ul>\n<ul>\n<li>Strong technical communication skills with the ability to translate customer requirements between technical and business stakeholders</li>\n</ul>\n<ul>\n<li>Experience designing scalable cloud architectures and integrating with enterprise systems</li>\n</ul>\n<ul>\n<li>Familiar with Python</li>\n</ul>\n<ul>\n<li>Familiarity with common LLM frameworks and tools or a background in machine learning or data science</li>\n</ul>\n<ul>\n<li>Excitement for engaging in cross-organizational collaboration, working through trade-offs, and balancing competing priorities</li>\n</ul>\n<ul>\n<li>A love of teaching, mentoring, and helping others succeed</li>\n</ul>\n<ul>\n<li>Excellent communication and interpersonal skills, able to convey complicated topics in easily understandable terms to a diverse set of external and internal stakeholders. 
You enjoy engaging in cross-organizational collaboration, working through trade-offs, and balancing competing priorities</li>\n</ul>\n<ul>\n<li>Passion for thinking creatively about how to use technology in a way that is safe and beneficial, and ultimately furthers the goal of advancing safe AI systems</li>\n</ul>\n<p>The annual compensation range for this role is $240,000-$270,000 USD.</p>\n<p>Logistics:</p>\n<ul>\n<li>Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience</li>\n</ul>\n<ul>\n<li>Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience</li>\n</ul>\n<ul>\n<li>Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position</li>\n</ul>\n<p>Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</p>\n<p>Visa sponsorship: We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</p>\n<p>We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you&#39;re interested in this work.</p>\n<p>Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. 
In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you&#39;re ever unsure about a communication, don&#39;t click any links; visit anthropic.com/careers directly for confirmed position openings.</p>\n<p>How we&#39;re different:</p>\n<p>We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact, advancing our long-term goals of steerable, trustworthy AI, rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We&#39;re an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.</p>\n<p>The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI &amp; Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.</p>\n<p>Come work with us!</p>\n<p>Anthropic is a public benefit corporation headquartered in San Francisco. 
We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_853e1417-019","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5079511008","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$240,000-$270,000 USD","x-skills-required":["TS/SCI clearance","Prior experience working with US national security (defense and/or intelligence) agencies","Technical customer-facing roles such as Solutions Architect, Sales Engineer, or Technical Account Manager","Experience navigating complex buying cycles involving multiple stakeholders","Strong technical communication skills with the ability to translate customer requirements between technical and business stakeholders","Experience designing scalable cloud architectures and integrating with enterprise systems","Familiar with Python","Familiarity with common LLM frameworks and tools or a background in machine learning or data science"],"x-skills-preferred":["Exceptional ability to build relationships with and communicate technical concepts to diverse stakeholders to include C-suite executives, engineering & IT teams, and more","Excitement for engaging in cross-organizational collaboration, working through trade-offs, and balancing competing priorities","A love of teaching, mentoring, and helping others succeed","Excellent communication and interpersonal skills, able to convey complicated topics in easily understandable terms to a diverse set of external and internal stakeholders","Passion for thinking 
creatively about how to use technology in a way that is safe and beneficial, and ultimately furthers the goal of advancing safe AI systems"],"datePosted":"2026-04-18T15:59:41.597Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Washington, DC"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"TS/SCI clearance, Prior experience working with US national security (defense and/or intelligence) agencies, Technical customer-facing roles such as Solutions Architect, Sales Engineer, or Technical Account Manager, Experience navigating complex buying cycles involving multiple stakeholders, Strong technical communication skills with the ability to translate customer requirements between technical and business stakeholders, Experience designing scalable cloud architectures and integrating with enterprise systems, Familiar with Python, Familiarity with common LLM frameworks and tools or a background in machine learning or data science, Exceptional ability to build relationships with and communicate technical concepts to diverse stakeholders to include C-suite executives, engineering & IT teams, and more, Excitement for engaging in cross-organizational collaboration, working through trade-offs, and balancing competing priorities, A love of teaching, mentoring, and helping others succeed, Excellent communication and interpersonal skills, able to convey complicated topics in easily understandable terms to a diverse set of external and internal stakeholders, Passion for thinking creatively about how to use technology in a way that is safe and beneficial, and ultimately furthers the goal of advancing safe AI 
systems","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":240000,"maxValue":270000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_cc75c6b0-4db"},"title":"Machine Learning Fellow - Human Frontier Collective (Canada)","description":"<p>This is a fully remote, 1099 independent contractor opportunity with an estimated duration of six months and the potential for extension.</p>\n<p>As an HFC Fellow, you&#39;ll apply your academic and professional expertise to help design, evaluate, and interpret advanced generative AI systems.</p>\n<p>Key responsibilities include:</p>\n<ul>\n<li>Engaging in high-impact projects with partnered AI labs and platforms</li>\n<li>Designing, reviewing, and optimising PyTorch models</li>\n<li>Evaluating complex ML code and AI-generated implementations for efficiency and correctness</li>\n<li>Advising on GPU optimisation, scaling, and trade-offs</li>\n</ul>\n<p>You&#39;ll also become part of a supportive, interdisciplinary network of innovators and thought leaders committed to advancing frontier AI across domains.</p>\n<p>Collaboration with Scale&#39;s research team to co-author technical reports and research papers is also expected.</p>\n<p>To be eligible, candidates must be authorised to work in Canada and have a PhD or postdoctoral degree in Computer Science, Computer Engineering, or a related field.</p>\n<p>Professional background as a Machine Learning Engineer or Data Scientist with 1-3+ years of experience is also required.</p>\n<p>Strong proficiency in Python and modern ML frameworks (PyTorch, TensorFlow) is essential, along with experience with cloud infrastructure (AWS) and MLOps tools (Docker, Langchain).</p>\n<p>A detail-oriented, innovative thinker with a passion for applied AI research and a commitment to collaboration is ideal.</p>\n<p>A flexible schedule of 10–40 hour weeks that fit 
around your life and other commitments is offered.</p>\n<p>Project pay rates vary across platforms and depend on a number of factors, including but not limited to: projects, scope, skillset, and location.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_cc75c6b0-4db","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Human Frontier Collective","sameAs":"https://humanfrontiercollective.com/","logo":"https://logos.yubhub.co/humanfrontiercollective.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/scaleai/jobs/4661650005","x-work-arrangement":"remote","x-experience-level":"mid","x-job-type":"contract","x-salary-range":null,"x-skills-required":["Python","PyTorch","TensorFlow","AWS","Docker","Langchain"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:59:39.412Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Canada"}},"jobLocationType":"TELECOMMUTE","employmentType":"CONTRACTOR","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, PyTorch, TensorFlow, AWS, Docker, Langchain"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_b68ff4cc-e74"},"title":"Data Engineer, Safeguards","description":"<p><strong>About the role</strong></p>\n<p>Anthropic is looking for a Data Engineer to join the Safeguards team and build the data foundations that keep our AI systems safe. The Safeguards team works to monitor models, prevent misuse, and ensure user well-being.</p>\n<p>You&#39;ll design and build the data pipelines, warehousing solutions, and analytical tooling that power our safety and trust efforts at scale. 
You&#39;ll work closely with engineers, data scientists, and policy teams to ensure the Safeguards organization has the data it needs to detect abuse patterns, measure the effectiveness of safety interventions, and make informed decisions about model behavior and enforcement.</p>\n<p>This is a high-impact role where your work will directly support Anthropic&#39;s mission to develop AI that is safe and beneficial.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Design, build, and maintain scalable data pipelines that support safety monitoring, abuse detection, and enforcement workflows</li>\n<li>Develop and optimize data models and warehousing solutions to enable efficient analysis of large-scale usage and safety data</li>\n<li>Build and maintain dashboards and reporting infrastructure that give Safeguards teams visibility into model behavior, misuse patterns, and enforcement outcomes</li>\n<li>Collaborate with engineers to integrate data from multiple sources, including model outputs, user reports, and automated classifiers, into a unified analytical layer</li>\n<li>Implement data quality frameworks, monitoring, and alerting to ensure the reliability of safety-critical data</li>\n<li>Partner with research teams to surface data insights that inform model improvements and safety interventions</li>\n<li>Develop self-service data tooling that enables stakeholders to explore safety data and generate reports independently</li>\n<li>Contribute to data governance practices, including access controls, retention policies, and privacy-compliant data handling</li>\n</ul>\n<p><strong>You may be a good fit if you:</strong></p>\n<ul>\n<li>Have 3+ years of experience in data engineering, analytics engineering, or a related role</li>\n<li>Are proficient in SQL and Python, with experience building and maintaining ETL/ELT pipelines</li>\n<li>Have hands-on experience with modern data stack tools such as dbt, Airflow, Spark, or similar orchestration and transformation 
frameworks</li>\n<li>Have worked with cloud data platforms (BigQuery, Redshift, Snowflake, or similar)</li>\n<li>Are comfortable building dashboards and data visualizations using tools like Looker, Tableau, or Metabase</li>\n<li>Communicate clearly and can translate complex data concepts for both technical and non-technical audiences</li>\n<li>Are results-oriented, flexible, and willing to pick up slack even when it falls outside your job description</li>\n<li>Care about the societal impacts of AI and are motivated by safety work</li>\n</ul>\n<p><strong>Strong candidates may have:</strong></p>\n<ul>\n<li>Experience with trust &amp; safety, integrity, fraud, or abuse detection data systems</li>\n<li>Experience with large-scale event streaming systems (Kafka, Pub/Sub, Kinesis)</li>\n<li>Built data infrastructure that supports ML model monitoring or evaluation</li>\n<li>A background in statistical analysis, or experience collaborating closely with data scientists</li>\n<li>Developed internal tooling or self-service analytics platforms</li>\n</ul>\n<p><strong>Strong candidates need not have:</strong></p>\n<ul>\n<li>A formal degree in Computer Science or a related field; we value practical experience and demonstrated ability over credentials</li>\n<li>Prior experience in AI or machine learning; you&#39;ll learn the domain-specific context on the job</li>\n<li>Previous experience at an AI safety or research organization</li>\n<li>Deep expertise across every tool listed above; familiarity with a subset and a willingness to learn is enough</li>\n</ul>\n<p><strong>Logistics</strong></p>\n<p>Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience. Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience. Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position. Location-based hybrid 
policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices. Visa sponsorship: We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</p>\n<p><strong>How we&#39;re different</strong></p>\n<p>We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact, advancing our long-term goals of steerable, trustworthy AI, rather than working on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We&#39;re an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills. The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI &amp; Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.</p>\n<p>Come work with us!</p>\n<p>Anthropic is a public benefit corporation headquartered in San Francisco. 
We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_b68ff4cc-e74","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com/","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5156057008","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":"£170,000-£220,000 GBP","x-skills-required":["SQL","Python","ETL/ELT pipelines","dbt","Airflow","Spark","cloud data platforms","BigQuery","Redshift","Snowflake","Looker","Tableau","Metabase"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:59:33.960Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"London, UK"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"SQL, Python, ETL/ELT pipelines, dbt, Airflow, Spark, cloud data platforms, BigQuery, Redshift, Snowflake, Looker, Tableau, Metabase","baseSalary":{"@type":"MonetaryAmount","currency":"GBP","value":{"@type":"QuantitativeValue","minValue":170000,"maxValue":220000,"unitText":"YEAR"}}}]}