<?xml version="1.0" encoding="UTF-8"?>
<source>
  <jobs>
    <job>
      <externalid>9753936d-967</externalid>
      <Title>Lead Security Engineer</Title>
      <Description><![CDATA[<p>At bunq, we&#39;re not just building a banking app; we&#39;re reshaping how people around the world experience financial freedom.</p>
<p>As our Lead Security Engineer, you are the digital guardian of our bank. You&#39;ll lead the charge in protecting our users and our data from an ever-evolving landscape of cyber threats, ensuring our platform remains a fortress of trust.</p>
<p>You will play a critical role in strengthening and defending our digital environment. You will lead a team of highly skilled security professionals, making bunq safer for users and employees globally.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Leading the SecOps team responsible for detecting, investigating, and resolving security events, owning the end-to-end security posture of bunq.</li>
<li>Working together with our CISO to define our security roadmap by identifying gaps and risks, then driving the implementation of new tools and measures to mitigate those threats.</li>
<li>Managing and hardening our core corporate infrastructure, including G Suite, AWS, Okta, and our fleet of Apple endpoints.</li>
</ul>
<p>Requirements include:</p>
<ul>
<li>Experience leading a small, hands-on team of Security Engineers, and you aren&#39;t afraid to get your hands dirty.</li>
<li>Extensive, practical experience with SOC processes, incident response, and SIEM software.</li>
<li>Deep knowledge of security best practices for both cloud and corporate IT environments.</li>
<li>Hands-on experience managing and securing G Suite, Okta, AWS, Apple endpoints, and device management software (preferably Kandji).</li>
<li>Fluency in English - able to communicate effectively in a global team, ensuring collaboration and clarity across all project stages.</li>
</ul>
<p>We give you the space and the tools you need to succeed. Our benefits include:</p>
<ul>
<li>Great, international colleagues who share your mindset</li>
<li>Hybrid setup: after 3 months in-office, work 2 days remote, 3 days in-office weekly.</li>
<li>Digital Nomad Program: After your first year, enjoy up to 20 days per year to work while traveling, combining flexibility with strong team collaboration</li>
<li>We reward tenure with a dedicated travel budget: €1.5k after 2 years and €3k after 4 years to visit another core office.</li>
<li>We support growth with bunq Academy and a €1,500 annual learning budget</li>
<li>Massive discount with Urban Sports Club</li>
<li>Travel expenses are covered whether you come walking or by bike, bus or car (though we prefer green choices)</li>
<li>A MacBook so you can Get Shit Done with us</li>
<li>Delicious lunches from our fabulous in-house chefs with vegan and vegetarian options</li>
<li>An optional pension plan with monthly contribution from bunq</li>
<li>Monthly contribution to your phone and internet bills</li>
<li>Friday drinks and other celebrations - bunq style</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>SOC processes, incident response, SIEM software, G Suite, AWS, Okta, Apple endpoints, device management software, Kandji</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>bunq</Employername>
      <Employerlogo>https://logos.yubhub.co/careers.bunq.com.png</Employerlogo>
      <Employerdescription>bunq is a digital banking app that provides financial services to individuals and businesses.</Employerdescription>
      <Employerwebsite>https://careers.bunq.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://careers.bunq.com/o/lead-security-engineer</Applyto>
      <Location></Location>
      <Country></Country>
      <Postedate>2026-04-19</Postedate>
    </job>
    <job>
      <externalid>61234903-9fa</externalid>
      <Title>Engineering Manager (Java or Typescript) - Guest Experience (all genders)</Title>
      <Description><![CDATA[<p>Join our Guest Experience department as an Engineering Manager, leading a dynamic team focused on enhancing the search experience of our users.</p>
<p>As an Engineering Manager, you will be part of the Discovery team in the Guest Experience department. The team is responsible for designing and maintaining the list page of our website, ensuring users can easily find the best vacation rental from our search results.</p>
<p>Your contributions will help create a seamless and joyful journey for travellers, which will result in increasing conversion rates and customer satisfaction.</p>
<p>Your team will consist of frontend &amp; backend engineers (direct reports), a project manager and a QA engineer.</p>
<p>You&#39;ll work closely with the Ranking, Conqueror, and Marketing teams, which manage the machine learning models for property ranking on the list page, booking systems, and Holidu&#39;s marketing efforts. Together, you&#39;ll ensure a seamless and cohesive user experience.</p>
<p><strong>Our Tech Stack</strong></p>
<ul>
<li>Frontend: Typescript and NodeJS processes in Kubernetes. We use ReactJS, Zustand and TailwindCSS on the client and Express on the server.</li>
<li>Backend: Java 17/21, Kotlin (Spring Boot).</li>
<li>Infrastructure: Microservices architecture deployed on AWS Kubernetes (EKS).</li>
<li>Data Management: PostgreSQL, Redis, Elasticsearch 7, Redshift (part of a data lake structure).</li>
<li>DevOps Tools: AWS, Docker, Jenkins, Git, Terraform.</li>
<li>Monitoring &amp; Analytics: ELK, Grafana, Looker, Opsgenie, and in-house solutions.</li>
</ul>
<p><strong>Your role in this journey</strong></p>
<ul>
<li>Lead a high-performing cross-functional team, focusing on product innovation, infrastructure reliability, delivery speed, quality, engineering culture, and team growth.</li>
<li>Ensure your team delivers applications that are highly scalable, highly available, and capable of handling high traffic of up to 1 million unique users per day.</li>
<li>Support team growth through regular feedback, mentorship, and by recruiting exceptional engineers.</li>
<li>Work closely with product management, product design, and stakeholders to define the team&#39;s goals (OKRs) and roadmap.</li>
<li>Collaborate with peers, staff engineers, and other stakeholders to drive strategic technology decisions.</li>
<li>Lead strategic team-driven projects, identify opportunities, and define and uphold quality standards.</li>
<li>Foster a great team culture aligned with the company values of ownership, autonomy, and inclusivity within your team and the entire department.</li>
<li>Take full responsibility for delivering impactful features to millions of users annually.</li>
</ul>
<p>The role includes dedicating approximately 40-50% of the time as an individual contributor focused on feature implementation.</p>
<p><strong>Your backpack is filled with</strong></p>
<ul>
<li>A bachelor&#39;s degree in Computer Science, a related technical field, or equivalent practical experience.</li>
<li>Experience building and implementing backend services and/or frontend applications.</li>
<li>Experience providing technical leadership (e.g., setting goals and priorities, architecture design, task planning and code reviews).</li>
<li>Experience as a people manager with the ability to build an excellent team culture based on mutual respect, empathy, learning and support for each other.</li>
<li>Love for building world-class products with a great user experience.</li>
</ul>
<p><strong>Our adventure includes</strong></p>
<ul>
<li>Impact: Shape the future of travel with products used by millions of guests and thousands of hosts. At Holidu ideas become products, data drives decisions, and iteration fuels fast learning. Your work matters, and you’ll see the impact.</li>
<li>Learning: Grow professionally in a culture that thrives on curiosity and feedback. You’ll learn from outstanding colleagues, collaborate across disciplines, and benefit from mentorship and personal learning budgets, with a strong focus on AI.</li>
<li>Great People: Join a team of smart, motivated and international colleagues who challenge and support each other. We celebrate wins and keep our culture fun, ambitious and human. Our customers are guests and hosts, people we can all relate to, making work meaningful and energizing.</li>
<li>Technology: Work in a modern tech environment. You’ll experience the pace of a scale-up combined with the stability of a proven business model, enabling you to build, test, and improve continuously.</li>
<li>Flexibility: Work in a hybrid setup with 50% in-office time for collaboration, and spend up to 8 weeks a year working from other inspiring locations. You’ll stay connected through regular events and meet-ups across our almost 30 offices.</li>
<li>Competitive Package: 95.000-125.000€ + VSOPs, based on relevant experience and seniority.</li>
<li>Perks on Top: Of course, we also offer travel benefits, gym discounts, and other perks to keep you energized, but what truly sets us apart is the chance to grow in a dynamic industry, alongside amazing people, while having fun along the way.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>95.000-125.000€ + VSOPs based on relevant experience and seniority</Salaryrange>
      <Skills>Typescript, NodeJS, ReactJS, Zustand, TailwindCSS, Express, Java, Kotlin, Spring Boot, AWS, Docker, Jenkins, Git, Terraform, PostgreSQL, Redis, Elasticsearch, Redshift, ELK, Grafana, Looker, Opsgenie</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Holidu Hosts GmbH</Employername>
      <Employerlogo>https://logos.yubhub.co/holidu.jobs.personio.com.png</Employerlogo>
      <Employerdescription>Holidu is a travel technology company that provides search and booking services for vacation rentals.</Employerdescription>
      <Employerwebsite>https://holidu.jobs.personio.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://holidu.jobs.personio.com/job/1558189</Applyto>
      <Location>Munich, Germany</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>b33cbd91-bc9</externalid>
      <Title>Systematic Production Support Engineer</Title>
      <Description><![CDATA[<p>We are seeking an experienced Systematic Production Support Engineer to help us scale our systematic operations and support engineering capabilities. This role directly supports portfolio management teams across Millennium, with operational excellence at the core. Our efforts are focused on delivering the highest quality returns to our investors – providing a world-class and reliable trading and technology platform is essential to this mission.</p>
<p>As a Systematic Production Support Engineer, you will partner with portfolio managers and other internal customers to keep our trading platform reliable, scalable, and well integrated, and to reduce operational risk across trading and operations.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Building, developing, and maintaining a reliable, scalable, and integrated platform for trading strategy monitoring, reporting, and operations</li>
<li>Working with portfolio managers and other internal customers to reduce operational risk through the implementation of monitoring, reporting, and trade workflow solutions</li>
<li>Implementing automated systems and processes focused on trading and operations</li>
<li>Streamlining development and deployment processes</li>
</ul>
<p>Technical qualifications include:</p>
<ul>
<li>5+ years of development experience in Python</li>
<li>Experience working in a Linux/Unix environment</li>
<li>Experience working with PostgreSQL or other relational databases</li>
</ul>
<p>Preferred skills and experience include:</p>
<ul>
<li>Understanding of NLP, supervised/non-supervised learning, and Generative AI models</li>
<li>Experience operating and monitoring low-latency trading environments</li>
<li>Familiarity with quantitative finance and electronic trading concepts</li>
<li>Familiarity with financial data</li>
<li>Broad understanding of equities, futures, FX, or other financial instruments</li>
<li>Experience designing and developing distributed systems with a focus on backend development in C/C++, Java, Scala, Go, or C#</li>
<li>Experience with Apache/Confluent Kafka</li>
<li>Experience automating SDLC pipelines (e.g., Jenkins, TeamCity, or AWS CodePipeline)</li>
<li>Experience with containerization and orchestration technologies</li>
<li>Experience building and deploying systems that utilize services provided by AWS, GCP, or Azure</li>
<li>Contributions to open-source projects</li>
</ul>
<p>This is a unique opportunity to drive significant value creation for one of the world&#39;s leading investment managers.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, Linux/Unix, PostgreSQL, NLP, supervised/non-supervised learning, Generative AI models, low-latency trading environments, quantitative finance, electronic trading concepts, financial data, equities, futures, FX, distributed systems, backend development, C/C++, Java, Scala, Go, C#, Apache/Confluent Kafka, SDLC pipelines (Jenkins, TeamCity, AWS CodePipeline), containerization, orchestration technologies, AWS, GCP, Azure, open-source contributions</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>Millennium</Employername>
      <Employerlogo>https://logos.yubhub.co/mlp.eightfold.ai.png</Employerlogo>
      <Employerdescription>The company is a leading investment manager with a focus on delivering high-quality returns to its investors.</Employerdescription>
      <Employerwebsite>https://mlp.eightfold.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://mlp.eightfold.ai/careers/job/755954716155</Applyto>
      <Location>Miami, Florida, United States of America</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>b93800dd-3d2</externalid>
      <Title>Production Engineering Support Manager – Liquidity Provision Technology</Title>
      <Description><![CDATA[<p>We are seeking a Production Engineering Support Manager to join our team. In this role, you will provide leadership and guidance to coach, motivate, and lead team members to their optimum performance levels and career development. You will solve technical trading-related issues, independently where possible or by leveraging teammates as necessary, escalating to application and/or infrastructure subject matter experts (internally or at vendors) when appropriate. You will manage communications about issues and their resolution to trading staff and internal stakeholders, primarily our execution services team.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Collaborating with other technical support engineers who need assistance on an issue, using your area of expertise to quickly facilitate solutions for the customer.</li>
<li>Building and fostering working relationships with trading groups, with a focus on the execution services team, and working with global counterparts to provide seamless 24/7 global coverage.</li>
<li>Trading infrastructure / platform status communications: disseminating messages to the appropriate trading staff regarding trading infrastructure and platform issues, exchange updates, etc.</li>
<li>Uplifting environment management tools to reduce risk and streamline the efficiency of the support team, and assisting with automating processes to achieve efficient, streamlined trade support.</li>
<li>Documenting and creating a new knowledge base to provide the most effective solutions to trading issues.</li>
<li>Deploying, supporting, and monitoring the firm&#39;s internal trading systems, and coordinating with vendors, internal application owners, infrastructure owners, and tech support to ensure trading platforms are correctly installed, configured, and tested.</li>
<li>Liaising with development and infrastructure teams to prioritize tool enhancements and to coordinate and participate in software/new version releases.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$175,000 to $250,000</Salaryrange>
      <Skills>Linux, shell scripting, python, SQL, financial technology, FIX protocols, AI technologies, version control systems, SDLC processes, columnar database, AWS</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>Equity IT</Employername>
      <Employerlogo>https://logos.yubhub.co/mlp.eightfold.ai.png</Employerlogo>
      <Employerdescription>Equity IT provides liquidity provision technology. It operates globally.</Employerdescription>
      <Employerwebsite>https://mlp.eightfold.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://mlp.eightfold.ai/careers/job/755953129734</Applyto>
      <Location>New York, New York, United States of America</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>1bd2d1b2-84f</externalid>
      <Title>Senior Machine Learning Researcher</Title>
      <Description><![CDATA[<p>We are seeking a senior machine learning researcher to join our Core AI team.</p>
<p>As part of the team, you will help solve complex business problems by developing viable cutting-edge AI/ML solutions.</p>
<p>You will develop and implement creative solutions that fundamentally transform business processes, delivering breakthrough improvements rather than incremental changes.</p>
<p>You will work closely with other AI/ML researchers and engineers, SWEs, product owners/managers, and business stakeholders, and participate in the full lifecycle of solution development, including requirements gathering with business, experimentation and algorithmic exploration, development, and assistance with productization.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Work independently or as part of a team to help design and implement solutions with high accuracy and a delightful user experience, utilizing ML, NLP, GenAI, and Agentic technologies.</li>
<li>Participate in all aspects of solution development, including ideation and requirements gathering with business stakeholders, experimentation and exploration to identify strong solution approaches, and solution development itself.</li>
<li>Prototype, test, and iterate on novel AI models and approaches to solve complex business challenges.</li>
<li>Collaborate with cross-functional teams to identify opportunities where AI can create significant business value, and transition solutions into production systems.</li>
<li>Research and stay updated with the latest advancements in machine learning and AI technologies.</li>
<li>Participate in code reviews, technical discussions, and knowledge-sharing sessions.</li>
<li>Communicate technical concepts and transformative ideas effectively to both technical and non-technical stakeholders.</li>
</ul>
<p>Required Skills &amp; Qualifications:</p>
<ul>
<li>Bachelor&#39;s with 10+ years, Master&#39;s with 7+ years, or PhD with 5+ years in Computer Science, Data Science, Machine Learning, or a related field.</li>
<li>Deep expertise and proven ability in developing high-accuracy, high-value solutions to business problems in the NLP, Generative AI, Agentic AI, and/or ML space.</li>
<li>Hands-on experience with data processing, experimentation, and exploration.</li>
<li>Strong programming skills in Python.</li>
<li>Experience with cloud platforms (AWS, Azure, GCP) for deploying ML solutions.</li>
<li>Excellent problem-solving skills and attention to detail.</li>
<li>Strong communication skills to collaborate with technical and non-technical stakeholders.</li>
<li>Ability to work independently and collaboratively.</li>
</ul>
<p>Additional Preferred Skills &amp; Qualifications:</p>
<ul>
<li>Understanding of the financial markets, including experience with financial datasets, is strongly preferred.</li>
<li>Experience with ML frameworks such as PyTorch and TensorFlow.</li>
<li>Familiarity with MLOps practices and tools such as SageMaker, MLflow, or Airflow.</li>
<li>Previous experience working in an Agile environment.</li>
</ul>
<p>Millennium pays a total compensation package which includes a base salary, discretionary performance bonus, and a comprehensive benefits package. The estimated base salary range for this position is $175,000 to $250,000, which is specific to New York and may change in the future.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$175,000 to $250,000</Salaryrange>
      <Skills>Python, Machine Learning, NLP, GenAI, Agentic technologies, Data processing, Experimentation, Exploration, Cloud platforms (AWS, Azure, GCP), Problem-solving skills, Communication skills, PyTorch, TensorFlow, MLOps practices and tools (SageMaker, MLflow, Airflow), Agile environment</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Millennium</Employername>
      <Employerlogo>https://logos.yubhub.co/mlp.eightfold.ai.png</Employerlogo>
      <Employerdescription>The company focuses on artificial intelligence research and development.</Employerdescription>
      <Employerwebsite>https://mlp.eightfold.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://mlp.eightfold.ai/careers/job/755954012324</Applyto>
      <Location>New York, New York, United States of America</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>78270c8d-016</externalid>
      <Title>Operations Data Governance &amp; Controls Specialist</Title>
      <Description><![CDATA[<p>As an Operations Control Specialist – Data Governance &amp; Controls, you will design, implement, and support technical data governance solutions with a focus on the firm&#39;s Trader Master and related reference data domains.</p>
<p>This role requires a strong technical background in Data Management, Data Architecture, Data Lineage, Data Quality, Master Data Management (MDM), and automation within Financial Services and/or Technology.</p>
<p>You will contribute to and help lead the technical design of data governance controls, data models, and integration patterns, partnering closely with Technology and Operations teams.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Build/enhance data governance frameworks, controls, standards, and workflows (policies, definitions, entitlements).</li>
<li>Create data quality rules and monitoring; automate exception detection, alerting, remediation, SLAs, and RCA.</li>
<li>Develop Python/SQL/ETL-ELT automation for checks, controls, and reporting; deliver Tableau/Power BI dashboards and KPIs.</li>
<li>Contribute to conceptual/logical/physical data modeling for Trader Master and core domains.</li>
<li>Support MDM capabilities: golden record, matching/merging, survivorship, stewardship workflows; help shape MDM strategy.</li>
<li>Implement access/entitlement governance (RBAC, row/column security) across DB/warehouse/BI with audit compliance.</li>
<li>Maintain catalog, glossary, lineage, schema history, impact analysis; manage structured change workflows.</li>
<li>Define integration patterns (batch/API/streaming) and build reconciliations/validations across systems.</li>
<li>Manage historical/temporal data (validation, backfills, remediation) supporting regulatory/reporting/analytics.</li>
<li>Produce technical documentation (designs, runbooks, data dictionaries), share knowledge, and mentor juniors.</li>
</ul>
<p>Qualifications:</p>
<ul>
<li>Bachelor’s degree in Computer Science, Engineering, Information Systems, Mathematics, Finance, or related field; advanced degree (MS, MBA, or equivalent) is a plus.</li>
<li>5–8 years of experience in financial services or fintech with hands-on work in data engineering, data management, or data architecture roles; exposure to trading strategies, fund structures, and financial products strongly preferred.</li>
</ul>
<p>Technical Expertise (Required):</p>
<ul>
<li>Strong Python and SQL; experience with data warehousing + ETL/ELT.</li>
<li>Familiarity with MDM/data governance tools (e.g., Collibra, Informatica, Alation) and Tableau/Power BI.</li>
<li>Proven ability to lead delivery, solve complex data issues, and communicate with technical/non-technical stakeholders.</li>
<li>Preferred certs: DAMA/CDMP, cloud (AWS/Azure/GCP), Scrum, BI/data engineering.</li>
</ul>
<p>Millennium pays a total compensation package which includes a base salary, discretionary performance bonus, and a comprehensive benefits package.</p>
<p>The estimated base salary range for this position is $70,000 to $160,000, which is specific to New York and may change in the future.</p>
<p>When finalizing an offer, we take into consideration an individual’s experience level and the qualifications they bring to the role to formulate a competitive total compensation package.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$70,000 to $160,000</Salaryrange>
      <Skills>Python, SQL, ETL/ELT, Data Warehousing, Tableau/Power BI, MDM/data governance tools, Collibra, Informatica, Alation, DAMA/CDMP, cloud (AWS/Azure/GCP), Scrum, BI/data engineering</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>Ops &amp; MO Control</Employername>
      <Employerlogo>https://logos.yubhub.co/mlp.eightfold.ai.png</Employerlogo>
      <Employerdescription>Ops &amp; MO Control provides data governance and control services.</Employerdescription>
      <Employerwebsite>https://mlp.eightfold.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://mlp.eightfold.ai/careers/job/755954926796</Applyto>
      <Location>New York, New York, United States of America</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>af8ed06d-a9a</externalid>
      <Title>Forward Deployed Software Engineer - Equities Technology</Title>
      <Description><![CDATA[<p>We are seeking a hands-on, business-facing engineer to join our team. In this role, you will partner directly with some of the most sophisticated quantitative researchers, developers, and portfolio managers in the industry.</p>
<p>Our team is a specialized group of engineers operating at the intersection of technology and quantitative finance. We function as an internal centre of excellence, providing expert-level solutions, architecture, and hands-on development in AI, Cloud (AWS/GCP), DevOps, and high-performance computing.</p>
<p>As a forward deployed software engineer, you will be responsible for translating complex research requirements into robust, scalable, and secure technical architectures across on-prem, hybrid, and cloud environments. You will write high-quality, production-ready code across the full stack, including Python libraries, infrastructure-as-code (Terraform), CI/CD pipelines, automation scripts, and ML/AI proof-of-concepts.</p>
<p>You will also develop and maintain our suite of managed products, reusable patterns, and best practice guides to provide self-service options and accelerate onboarding for new and existing teams. Additionally, you will act as the primary technical point of contact for embedded engagements, owning projects from discovery and planning through to implementation, knowledge transfer, and support.</p>
<p>To succeed in this role, you will need a deep understanding of computer science principles, including data structures, algorithms, and system design, along with experience working with cloud providers such as AWS or GCP and familiarity with infrastructure-as-code concepts. Excellent verbal and written communication skills are also essential: you will build strong relationships with stakeholders and articulate complex ideas to diverse audiences.</p>
<p>Innovative thinking and a passion for AI/ML and its practical applications are highly desirable. Experience designing systems and architectures from ambiguous business needs, as well as experience with scheduling or asynchronous workflow frameworks/services, is also preferred.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, Cloud computing (AWS/GCP), DevOps, Infrastructure-as-code (Terraform), CI/CD pipelines, Automation scripts, ML/AI proof-of-concepts, Data structures, Algorithms, System design, Experience in the financial services or fintech space, Experience building applications on top of LLMs using frameworks like LangChain or LlamaIndex, Experience with MLOps tooling and concepts, Cloud certifications (AWS or GCP)</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>Equity IT</Employername>
      <Employerlogo>https://logos.yubhub.co/mlp.eightfold.ai.png</Employerlogo>
      <Employerdescription>Equity IT provides technology solutions to the financial services industry.</Employerdescription>
      <Employerwebsite>https://mlp.eightfold.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://mlp.eightfold.ai/careers/job/755953439247</Applyto>
      <Location>Miami, Florida, United States of America</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>07c95966-8e7</externalid>
      <Title>Backend Developer - Host Experience (all genders)</Title>
      <Description><![CDATA[<p>Join our Host Experience department as a Backend Developer and become part of the team that brings new vacation rental properties to life on Holidu.</p>
<p>You&#39;ll be working at the heart of our property acquisition engine, where we take hosts from their very first sign-up all the way to their first booking, making that journey as fast and seamless as possible.</p>
<p>This team sits at a uniquely strategic intersection of product and growth. You will build and optimize the systems that every new host flows through: from onboarding and listing creation, to property configuration, content quality, and referral programs.</p>
<p>The work demands reliability and attention to detail, because the time between a host signing up and welcoming their first guest, and how well their property performs from day one, are directly shaped by the quality of what you build.</p>
<p><strong>Our Tech Stack</strong></p>
<ul>
<li>Backend written in Kotlin and Java 21+ (with Spring Boot), built with Gradle.</li>
<li>Deployed as microservices on an AWS-hosted Kubernetes cluster (EKS).</li>
<li>Internal and external web applications written with ReactJS.</li>
<li>Event-driven communication between services through EventBridge with SQS / ActiveMQ.</li>
<li>Usage of a diverse set of technologies depending on the use case, such as PostgreSQL, S3, Valkey, ElasticSearch, GraphQL, and many more.</li>
<li>Monitoring with OpenTelemetry, Grafana, Prometheus, ELK, APM, and CloudWatch.</li>
</ul>
<p><strong>Your role in this journey</strong></p>
<ul>
<li>Design, build, evolve, and maintain our services, creating a great user experience for our hosts.</li>
<li>Build a strong understanding of the product, use it to drive initiatives end-to-end, and contribute to shaping the team&#39;s direction as you grow.</li>
<li>Work AI-first: use AI to accelerate not just coding, but data exploration, codebase understanding, technical design, and decision-making, and continuously sharpen how you use these tools.</li>
</ul>
<p><strong>Your backpack is filled with</strong></p>
<ul>
<li>A passion for great user experience and drive to deliver world-class products.</li>
<li>Early experience delivering product impact through engineering - you&#39;ve shipped things that real users depend on.</li>
<li>Experience with Java or Kotlin with Spring is a plus.</li>
<li>Experience with relational databases and deploying apps in cloud environments. NoSQL experience is a plus.</li>
<li>Familiarity with various API types and integration best practices.</li>
<li>Strong problem-solving skills and a team-oriented mindset.</li>
<li>Curiosity for the business side - you want to understand the “why” behind the features.</li>
<li>A love for coding and building high-quality products that make a difference.</li>
<li>High motivation to learn and experiment with new technologies.</li>
</ul>
<p><strong>Our adventure includes</strong></p>
<ul>
<li>Impact: Shape the future of travel with products used by millions of guests and thousands of hosts. At Holidu ideas become products, data drives decisions, and iteration fuels fast learning. Your work matters - and you’ll see the impact.</li>
<li>Learning: Grow professionally in a culture that thrives on curiosity and feedback. You’ll learn from outstanding colleagues, collaborate across disciplines, and benefit from mentorship, and personal learning budgets - with a strong focus on AI.</li>
<li>Great People: Join a team of smart, motivated and international colleagues who challenge and support each other. We celebrate wins and keep our culture fun, ambitious and human. Our customers are guests and hosts - people we can all relate to - making work meaningful and energizing.</li>
<li>Technology: Work in a modern tech environment. You’ll experience the pace of a scale-up combined with the stability of a proven business model, enabling you to build, test, and improve continuously.</li>
<li>Flexibility: Work a hybrid setup with 50% in-office time for collaboration, and spend up to 8 weeks a year from other inspiring locations. You’ll stay connected through regular events and meet-ups across our almost 30 offices.</li>
<li>Perks on Top: Of course, we also offer travel benefits, gym discounts, and other perks to keep you energized - but what truly sets us apart is the chance to grow in a dynamic industry, alongside amazing people, while having fun along the way.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Java, Kotlin, Spring Boot, Gradle, AWS, Kubernetes, ReactJS, EventBridge, SQS, ActiveMQ, PostgreSQL, S3, Valkey, ElasticSearch, GraphQL, OpenTelemetry, Grafana, Prometheus, ELK, APM, CloudWatch</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Holidu Hosts GmbH</Employername>
      <Employerlogo>https://logos.yubhub.co/holidu.jobs.personio.com.png</Employerlogo>
      <Employerdescription>Holidu is a leading online marketplace for vacation rentals, connecting hosts with millions of guests worldwide.</Employerdescription>
      <Employerwebsite>https://holidu.jobs.personio.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://holidu.jobs.personio.com/job/2589679</Applyto>
      <Location>Munich, Germany</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>5c70414d-4e6</externalid>
      <Title>Full-Stack Data Engineer</Title>
      <Description><![CDATA[<p>We are seeking a highly self-sufficient, motivated engineer with strong full-stack data engineering skills to join our team. This is a remote/offshore role that requires autonomy, excellent communication, and the ability to deliver high-quality work with limited supervision while collaborating with a predominantly US-based team.</p>
<p>You will build reliable, scalable data products and user experiences that power AI/ML modeling, agentic workflows, and reporting, working end-to-end from data ingestion and transformation through to UI. Our Python-based data platform is undergoing a major evolution toward a modern, cloud-native ELT architecture. We are standardizing on Snowflake as our central data platform and dbt as our core transformation framework, implementing scalable, maintainable ELT practices that simplify ingestion, modeling, and deployment.</p>
<p>This role will be pivotal in independently designing and building robust data pipelines and semantic layers that directly power our AI and machine learning initiatives, delivering clean, reliable, and well-modeled data assets to our data science team for feature engineering, model training, and production inference. You will collaborate closely (primarily via remote channels) with data scientists and ML engineers to ensure our data ecosystem is optimized for experimentation speed, model performance, and seamless integration into downstream products and services.</p>
<p>Key Responsibilities</p>
<ul>
<li>Remote collaboration &amp; communication: Operate effectively as an offshore member of a distributed team, proactively communicating status, risks, and blockers across time zones and coordinating overlap with US working hours as needed.</li>
<li>Full-stack data engineering: Build across the entire stack, including data ingestion/acquisition and transformation, APIs, front-end components, and automated test suites, delivering production-grade solutions with minimal hand-holding.</li>
<li>Autonomous delivery &amp; ownership: Take end-to-end ownership of features and projects, clarifying requirements, breaking work into milestones, estimating timelines, and delivering high-quality, well-documented solutions.</li>
<li>Specification and design: Translate short- and long-term business requirements, architectural considerations, and competing timelines into clear, actionable technical specifications and design documents.</li>
<li>Code quality: Write clean, maintainable, efficient code that adheres to evolving standards and quality processes, including unit tests and isolated integration tests in containerized environments.</li>
<li>Continuous improvement: Contribute to agile practices and provide input on technical strategy, architectural decisions, and process improvements, continuously suggesting better tools, patterns, and automation.</li>
</ul>
<p>Required Skills &amp; Experience</p>
<ul>
<li>Professional experience: 5+ years in software engineering, with a full-stack background building complex, scalable data-engineering pipelines using data warehouse technology, SQL with dbt, Python, AWS with Terraform, and modern UI technologies.</li>
<li>Modern data engineering: Strong experience with medallion data architecture patterns using data warehouse technologies (e.g., Snowflake), data transformation tooling (e.g., dbt), BI tooling, and NoSQL data marts (e.g., Elasticsearch/OpenSearch).</li>
<li>Testing and QA: Solid understanding of unit testing, CI/CD automation, and quality assurance processes for both data pipeline testing and operational data quality tests.</li>
<li>Remote work &amp; autonomy: Proven track record working in a remote or distributed environment, demonstrating self-motivation, reliable execution, and the ability to make sound technical decisions independently.</li>
<li>Agile methodology: Working knowledge of Agile development practices and workflows (e.g., sprint planning, stand-ups, retrospectives) in a distributed team setting.</li>
<li>Education: Bachelor’s or Master’s degree in Computer Science, Statistics, Informatics, Information Systems, or a related quantitative field.</li>
</ul>
<p>Preferred Skills &amp; Experience</p>
<ul>
<li>Machine learning and AI: Hands-on experience with large language models (LLMs) and agentic frameworks/workflows.</li>
<li>Search and analytics: Familiarity with the ELK stack (Elasticsearch, Logstash, Kibana) for search and analytics solutions.</li>
<li>Cloud expertise: Experience with AWS cloud services, familiarity with SageMaker, and CI/CD tooling such as GitHub Actions or Jenkins.</li>
<li>Front-end expertise: Experience building user interfaces with Angular or a modern UI stack.</li>
<li>Financial domain knowledge: Broad understanding of equities, fixed income, derivatives, futures, FX, and other financial instruments.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, Snowflake, dbt, AWS, Terraform, modern UI technologies, data warehouse technology, SQL, unit testing, CI/CD automation, quality assurance processes, machine learning, AI, large language models, agentic frameworks, ELK stack, search and analytics solutions, cloud expertise, AWS cloud services, SageMaker, CI/CD tooling, front-end expertise, Angular, financial domain knowledge</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>FIC &amp; Risk Technology</Employername>
      <Employerlogo>https://logos.yubhub.co/mlp.eightfold.ai.png</Employerlogo>
      <Employerdescription>FIC &amp; Risk Technology is a technology company that provides risk management solutions.</Employerdescription>
      <Employerwebsite>https://mlp.eightfold.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://mlp.eightfold.ai/careers/job/755955321460</Applyto>
      <Location>Bangalore, Karnataka, India</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>87749959-700</externalid>
      <Title>Intern Data Engineering (all genders)</Title>
      <Description><![CDATA[<p>Join our Data Engineering team inside the Business Intelligence department, where you&#39;ll work with experienced engineers to build the data foundation that powers Holidu&#39;s growth.</p>
<p>As an intern, you&#39;ll get hands-on experience with real problems and have the opportunity to make a meaningful impact. You&#39;ll work on building and supporting data pipelines, digging into data quality, getting hands-on with cloud infrastructure, and exploring AI-assisted development.</p>
<p>Our team uses a range of technologies, including Redshift, Athena, DuckDB, Terraform, Docker, Jenkins, ELK, Grafana, Looker, OpsGenie, Kafka, Airbyte, and Fivetran. You&#39;ll have the chance to learn from experienced engineers and contribute to the development of our data systems.</p>
<p>In this role, you&#39;ll be part of a team that genuinely loves what they do and is passionate about building a better data foundation for Holidu. You&#39;ll have the opportunity to take responsibility from day one and develop through regular feedback.</p>
<p>We offer a fair salary, the chance to make a difference for hundreds of thousands of monthly users, and the opportunity to grow and develop through regular feedback. You&#39;ll also have access to a range of benefits, including a hybrid work policy, the chance to work from other local offices, and a corporate subscription to Urban Sports Club or a premium gym membership at a discounted rate.</p>
]]></Description>
      <Jobtype>internship</Jobtype>
      <Experiencelevel>intern</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, SQL, Git, Airflow, dbt, Docker, Cloud platform (AWS, GCP, etc.), LLM tools, AI-assisted coding</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Holidu Hosts GmbH</Employername>
      <Employerlogo>https://logos.yubhub.co/holidu.jobs.personio.com.png</Employerlogo>
      <Employerdescription>Holidu is a technology company that provides search engines for holiday rentals.</Employerdescription>
      <Employerwebsite>https://holidu.jobs.personio.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://holidu.jobs.personio.com/job/2557398</Applyto>
      <Location>Munich, Germany</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>7c16f4e7-af6</externalid>
      <Title>AI Engineer</Title>
      <Description><![CDATA[<p>We are seeking an experienced AI Engineer to join our core AI engineering team. The successful candidate will be responsible for building and maintaining AI products that ingest unstructured contracts, extract key terms into structured data, and provide a front-end with monitoring and controls for day-to-day operations.</p>
<p>Responsibilities:</p>
<ul>
<li>Build the core application and workflow agents for Market Data Operations in Python; integrate with AWS and internal systems like the Market Data Warehouse.</li>
<li>Ingest and understand contracts at scale, using LLMs to extract costs, fee schedules, entitlements, renewal terms, and payment details.</li>
<li>Connect the dots between contracts, entitlements, invoices, and payments so Ops, Legal, and Finance can see a single &#39;source of truth&#39; and catch issues early.</li>
<li>Design and tune LLM workflows (prompt engineering, tool/MCP integration, structured outputs) for contract Q&amp;A, summarization, and exception flagging.</li>
<li>Own monitoring and controls for the AI system: logging, metrics, guardrails, and human-in-the-loop review to keep performance, reliability, and quality high.</li>
<li>Work directly with stakeholders (Market Data Ops, analysts, Legal, Finance/AP) to understand their workflows and quickly iterate on features that actually get used.</li>
</ul>
<p>Required Skills &amp; Experience:</p>
<ul>
<li>Bachelor&#39;s degree in Computer Science or a related field.</li>
<li>5+ years of professional experience with Python, including building production services (Django, Flask, or FastAPI).</li>
<li>Experience working with unstructured documents (contracts, PDFs, legal docs) and turning them into structured data.</li>
<li>Experience with prompt engineering and working with structured JSON outputs.</li>
<li>Comfort wiring models into real applications (tool/MCP-style integrations, APIs).</li>
<li>Experience using a cloud platform, ideally AWS.</li>
<li>Able to define and track quantitative metrics for AI features (accuracy, latency, cost, etc.).</li>
<li>Strong communication skills and comfortable working directly with non-technical users.</li>
<li>Enjoys a start-up-like environment inside a large firm: small team, high ownership, fast iteration.</li>
</ul>
<p>Nice to Have:</p>
<ul>
<li>Experience building AI solutions in financial services, especially around market data, vendor management, or legal/contract workflows.</li>
<li>Familiarity with entitlements/governance and large internal data platforms (e.g., a Market Data Warehouse).</li>
</ul>
<p>The estimated base salary range for this position is $175,000 to $250,000, which is specific to New York and may change in the future. Millennium pays a total compensation package which includes a base salary, discretionary performance bonus, and a comprehensive benefits package.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$175,000 to $250,000</Salaryrange>
      <Skills>Python, AWS, LLMs, Structured JSON outputs, Cloud platform, Quantitative metrics, Strong communication skills</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>IT - Artificial Intelligence</Employername>
      <Employerlogo>https://logos.yubhub.co/mlp.eightfold.ai.png</Employerlogo>
      <Employerdescription>IT - Artificial Intelligence is a technology company that specializes in artificial intelligence.</Employerdescription>
      <Employerwebsite>https://mlp.eightfold.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://mlp.eightfold.ai/careers/job/755955349680</Applyto>
      <Location>New York, New York, United States of America</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>c3b63dd5-0f6</externalid>
      <Title>Backend Developer</Title>
      <Description><![CDATA[<p>We are seeking an experienced backend developer to join our tech team. As a backend developer, you will be responsible for designing, developing, and maintaining the server-side of our applications and systems. You will work closely with our frontend developers, designers, and product owners to ensure a seamless integration between frontend and backend.</p>
<p>Responsibilities:</p>
<ul>
<li>Design and develop scalable and efficient backend solutions for our digital platforms.</li>
<li>Write clean, readable, and reusable code.</li>
<li>Perform unit testing and debugging to ensure high quality and reliability.</li>
<li>Participate in technical discussions and contribute ideas to improve the product&#39;s performance and functionality.</li>
<li>Collaborate with frontend developers and other team members to ensure a smooth user experience.</li>
</ul>
<p>Qualifications:</p>
<ul>
<li>Experience in backend development with a focus on web applications.</li>
<li>Good knowledge of programming languages such as Python, Java, or similar.</li>
<li>Experience working with frameworks such as Django, Flask, Spring, or similar.</li>
<li>Familiarity with database management systems such as MySQL, PostgreSQL, or similar.</li>
<li>Knowledge of API design and implementation.</li>
<li>Strong problem-solving skills and ability to work independently as well as in a team.</li>
</ul>
<p>Benefits:</p>
<ul>
<li>Attractive salary based on experience and competence.</li>
<li>Opportunity to work with exciting projects and the latest technology.</li>
<li>Flexible working hours and possibility of remote work.</li>
<li>Continuous professional development and opportunities for career growth.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>backend development, web applications, Python, Java, Django, Flask, Spring, MySQL, PostgreSQL, API design, problem-solving, cloud services, AWS, Google Cloud, Azure</Skills>
      <Category>Engineering</Category>
      <Industry>Transportation</Industry>
      <Employername>Scandinavian Airlines</Employername>
      <Employerlogo>https://logos.yubhub.co/scandinavianairlines.teamtailor.com.png</Employerlogo>
      <Employerdescription>Scandinavian Airlines is an airline company that operates flights across the world.</Employerdescription>
      <Employerwebsite>https://scandinavianairlines.teamtailor.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://scandinavianairlines.teamtailor.com/jobs/4882026-backend-utvecklare</Applyto>
      <Location></Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>ad717304-da7</externalid>
      <Title>Intern Data Analytics (all genders)</Title>
      <Description><![CDATA[<p>You will be part of the Business Intelligence department, which consists of the Data Science, Data Analytics, and Data Engineering teams.</p>
<p>This internship provides a great opportunity to gain hands-on experience in Data Analytics. You will work alongside a team of highly skilled and dedicated professionals who are committed to offering strong mentorship and guidance to help you start your career in the field of data.</p>
<p>Duration: 6 months. Location: Munich, 2-3 office days per week.</p>
<p><strong>Our Tech Stack</strong></p>
<ul>
<li>Database: AWS Stack (Redshift, Athena, Glue, S3).</li>
<li>Data Pipelines: Airflow, DBT.</li>
<li>Data Visualization: Looker.</li>
<li>Data Analytics: SQL, Python.</li>
<li>Collaboration: Git, Atlassian.</li>
</ul>
<p><strong>Your role in this journey</strong></p>
<p>As a Data Analytics Intern at Holidu, you’ll help our company make smarter, data-driven decisions, while being supported by a Senior Analyst.</p>
<p>This role goes beyond building dashboards. We want curious, proactive people who want to become data advisors - not only delivering reports, but understanding the business context, which questions they answer and why they matter.</p>
<ul>
<li>Collect, analyse, and interpret large datasets to help solve real business challenges.</li>
<li>Build dashboards and reports using tools like SQL, Python, and Looker.</li>
<li>Collaborate closely with teams such as Product, Marketing, or Finance to help them extract actionable insights from data.</li>
<li>Build and improve data pipelines using cutting-edge technologies.</li>
<li>We are an AI-first team. Rather than manually executing repetitive tasks, you will use AI to work smarter and automate workflows.</li>
<li>You’ll collaborate with our Data Scientists and get exposure to:
<ul>
<li>Data preparation and exploratory data analysis.</li>
<li>How ML models are built, evaluated, and deployed in real life.</li>
</ul>
</li>
</ul>
<p><strong>Your backpack is filled with</strong></p>
<ul>
<li>Currently enrolled in or recently completed a Bachelor’s or Master’s degree in a quantitative field (e.g., Business Analytics, Data Science, Economics, Statistics, Mathematics, Engineering or similar).</li>
<li>Understanding of SQL and Python, proficiency in Excel/Google Sheets, and a desire to learn visualization tools like Looker.</li>
<li>Knowledge of Machine Learning and Statistical models is a plus.</li>
<li>Strong analytical and problem-solving skills, and attention to detail.</li>
<li>Curiosity to learn and a passion for solving data problems.</li>
<li>Good communication and presentation skills.</li>
</ul>
<p><strong>Our adventure includes</strong></p>
<ul>
<li>Compensation: Get a fair salary.</li>
<li>Impact: Make a difference for hundreds of thousands of monthly users.</li>
<li>Growth: Take responsibility from day one and develop through regular feedback.</li>
<li>Community: Engage with international, diverse, yet like-minded colleagues through regular events and 2 office days per week with your team.</li>
<li>Flexibility: Benefit from our hybrid work policy and the chance to work from other local offices for up to 8 weeks a year.</li>
<li>Fitness: Get an Urban Sports Club corporate subscription or a premium gym membership at a discounted rate.</li>
</ul>
]]></Description>
      <Jobtype>internship</Jobtype>
      <Experiencelevel>intern</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>SQL, Python, Looker, Git, Atlassian, Airflow, DBT, AWS Stack, Redshift, Athena, Glue, S3</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Holidu Hosts GmbH</Employername>
      <Employerlogo>https://logos.yubhub.co/holidu.jobs.personio.com.png</Employerlogo>
      <Employerdescription>Holidu is a technology company that provides search and recommendation services for holiday rentals.</Employerdescription>
      <Employerwebsite>https://holidu.jobs.personio.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://holidu.jobs.personio.com/job/2556233</Applyto>
      <Location>Munich, Germany</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>cc9213ff-135</externalid>
      <Title>(Senior) Team Lead Marketing Analytics (all genders)</Title>
      <Description><![CDATA[<p>Within the Marketing Technology department, we are building a new Marketing Analytics team and are looking for a Team Lead to shape it from the ground up.</p>
<p>You&#39;ll work closely with a wide range of Marketing stakeholders, ensuring they have the data, tools, and insights they need to drive sustainable growth. Moreover, you will also collaborate with data scientists and data engineers within the department to build best-in-class analytical solutions.</p>
<p><strong>Our Tech Stack</strong></p>
<ul>
<li>Database: AWS Stack (Redshift, Athena, Glue, S3).</li>
<li>Data Pipelines: Airflow, DBT.</li>
<li>Data Visualization: Looker.</li>
<li>Data Analytics: SQL, Python.</li>
<li>Collaboration: Git, Jira, Confluence, Slack.</li>
</ul>
<p><strong>Your role in this journey</strong></p>
<ul>
<li>You&#39;ll be leading data analysts and collaborating cross-functionally with data engineers and data scientists - fostering collaboration, learning, and analytical excellence.</li>
<li>Engage with senior marketing leadership on strategic projects, providing insights that influence channel strategy and budget decisions, and ultimately our revenue growth.</li>
<li>Translate marketing logic for a diverse range of channels (e.g. Performance Marketing, SEO, CRM, affiliate) and use cases into analytical requirements, and communicate complex findings clearly to both technical and commercial teams.</li>
<li>Support and partner with Marketing Technology on tracking, event design, and data flows to ensure data quality and reliable reporting frameworks.</li>
<li>Not shying away from hands-on work as an individual contributor (50% to start) while leading the team, diving deep into the details when needed.</li>
<li>Shape the future of marketing analytics at Holidu by recruiting top talent, setting clear goals, and developing your team personally and professionally.</li>
</ul>
<p><strong>Your backpack is filled with</strong></p>
<ul>
<li>5+ years multi-channel marketing analytics experience in a B2B or B2C organisation where marketing is a core performance driver, with extensive hands-on expertise in at least one of the following: attribution, cost and revenue allocation, or bidding.</li>
<li>People management experience - this should not be your first leadership role.</li>
<li>A collaborative mindset with clear experience communicating with executive stakeholders and senior decision makers.</li>
<li>You are mission-driven, with a working backwards mentality (i.e. starting with customer needs) and clear experience managing and delivering complex projects with multiple stakeholders. Ability to translate business goals into analytical solutions and break down complex topics into actionable insights.</li>
<li>Excellent analytical and technical skills. Concretely: strong in SQL, Python (or similar), data visualisation skills as well as developing technical frameworks to serve a clear business need.</li>
<li>A strong personal or team focus on AI enablement: you actively use AI tools to enhance your coding, planning, and workflows, and can enable your team to do the same.</li>
</ul>
<p><strong>Our adventure includes</strong></p>
<ul>
<li>Impact: Shape the future of travel with products used by millions of guests and thousands of hosts. At Holidu ideas become products, data drives decisions, and iteration fuels fast learning. Your work matters - and you’ll see the impact.</li>
<li>Learning: Grow professionally in a culture that thrives on curiosity and feedback. You’ll learn from outstanding colleagues, collaborate across disciplines, and benefit from mentorship, and personal learning budgets - with a strong focus on AI.</li>
<li>Great People: Join a team of smart, motivated and international colleagues who challenge and support each other. We celebrate wins and keep our culture fun, ambitious and human. Our customers are guests and hosts - people we can all relate to - making work meaningful and energizing.</li>
<li>Technology: Work in a modern tech environment. You’ll experience the pace of a scale-up combined with the stability of a proven business model, enabling you to build, test, and improve continuously.</li>
<li>Flexibility: Work a hybrid setup with 50% in-office time for collaboration, and spend up to 8 weeks a year from other inspiring locations. You’ll stay connected through regular events and meet-ups across our almost 30 offices.</li>
<li>Perks on Top: Of course, we also offer travel benefits, gym discounts, and other perks to keep you energized - but what truly sets us apart is the chance to grow in a dynamic industry, alongside amazing people, while having fun along the way.</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>Full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>AWS Stack, Airflow, DBT, Looker, SQL, Python, Git, Jira, Confluence, Slack</Skills>
      <Category>Marketing</Category>
      <Industry>Technology</Industry>
      <Employername>Holidu Hosts GmbH</Employername>
      <Employerlogo>https://logos.yubhub.co/holidu.jobs.personio.com.png</Employerlogo>
      <Employerdescription>Holidu is a company that provides a search engine for vacation rentals.</Employerdescription>
      <Employerwebsite>https://holidu.jobs.personio.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://holidu.jobs.personio.com/job/2458940</Applyto>
      <Location>Munich, Germany</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>32932504-2b5</externalid>
      <Title>Systematic Production Support Engineer</Title>
      <Description><![CDATA[<p>We are looking for an experienced professional to help us scale our systematic operations and support engineering capabilities.</p>
<p>This role directly supports portfolio management teams across Millennium, with operational excellence at the core. Our efforts are focused on delivering the highest quality returns to our investors – providing a world-class and reliable trading and technology platform is essential to this mission.</p>
<p>This is a unique opportunity to drive significant value creation for one of the world&#39;s leading investment managers.</p>
<p>Principal Responsibilities:</p>
<ul>
<li>Build, develop and maintain a reliable, scalable, and integrated platform for trading strategy monitoring, reporting, and operations.</li>
<li>Work with portfolio managers and other internal customers to reduce operational risk through:
<ul>
<li>Implementation of monitoring, reporting, and trade workflow solutions.</li>
<li>Implementation of automated systems and processes focused on trading and operations.</li>
<li>Streamlining of development and deployment processes.</li>
<li>Implementation of MCP servers that assist the rest of the Support Engineering team and proactively monitor the production environment.</li>
</ul>
</li>
</ul>
<p>Technical Qualification:</p>
<ul>
<li>5+ years of development experience in Python.</li>
<li>Experience working in a Linux / Unix environment.</li>
<li>Experience working with PostgreSQL or other relational databases.</li>
<li>Ability to understand and discuss requirements from portfolio managers.</li>
</ul>
<p>Preferred Skills and Experience:</p>
<ul>
<li>Understanding of NLP, supervised/unsupervised learning, and Generative AI models.</li>
<li>Experience operating and monitoring low-latency trading environments.</li>
<li>Familiarity with quantitative finance and electronic trading concepts.</li>
<li>Familiarity with financial data.</li>
<li>Broad understanding of equities, futures, FX, or other financial instruments.</li>
<li>Experience designing and developing distributed systems with a focus on backend development in C/C++, Java, Scala, Go, or C#.</li>
<li>Experience with Apache / Confluent Kafka.</li>
<li>Experience automating SDLC pipelines (e.g., Jenkins, TeamCity, or AWS CodePipeline).</li>
<li>Experience with containerization and orchestration technologies.</li>
<li>Experience building and deploying systems that utilize services provided by AWS, GCP or Azure.</li>
<li>Contributions to open-source projects.</li>
</ul>
<p>The estimated base salary range for this position is $100,000 to $175,000, which is specific to New York and may change in the future. Millennium pays a total compensation package which includes a base salary, discretionary performance bonus, and a comprehensive benefits package. When finalizing an offer, we take into consideration an individual&#39;s experience level and the qualifications they bring to the role to formulate a competitive total compensation package.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$100,000 to $175,000</Salaryrange>
      <Skills>Python, Linux / Unix, PostgreSQL, NLP, supervised/unsupervised learning, Generative AI models, Apache / Confluent Kafka, C/C++, Java, Scala, Go, C#, containerization, orchestration technologies, AWS, GCP, Azure</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>Equity IT</Employername>
      <Employerlogo>https://logos.yubhub.co/mlp.eightfold.ai.png</Employerlogo>
      <Employerdescription>Equity IT provides investment management services to clients. It is a leading investment manager.</Employerdescription>
      <Employerwebsite>https://mlp.eightfold.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://mlp.eightfold.ai/careers/job/755954627501</Applyto>
      <Location>New York, New York, United States of America · Old Greenwich, Connecticut, United States of America</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>af78786b-a0a</externalid>
      <Title>Software Engineer - Compliance / Regulatory Reporting</Title>
      <Description><![CDATA[<p>The Compliance/Regulatory Reporting technology team at Millennium builds solutions to meet the firm&#39;s global regulatory and reporting obligations.</p>
<p>We use AI-assisted development tools (e.g., Claude Code), cloud-native/serverless architectures on AWS, and modern full-stack technologies (C#, Angular, SQL), with a strong focus on Domain-Driven Design (DDD) and automated testing.</p>
<p>The role is suited to engineers who have delivered real-time, mission-critical systems in high trade volume, distributed and fault-tolerant environments.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Design and build scalable, real-time Regulatory/Compliance applications using C#/.NET, Angular, and SQL, leveraging AI-assisted tools to accelerate development and improve quality.</li>
<li>Model business domains using DDD (bounded contexts, aggregates, entities, value objects, domain services, domain events) with a strong focus on business correctness and ubiquitous language.</li>
<li>Architect and implement cloud-native/serverless solutions on AWS, including:
<ul>
<li>Event-driven services using AWS Lambda and messaging/streaming (Kafka, SQS, SNS).</li>
<li>Containerized microservices using Docker and Kubernetes (e.g., Amazon EKS).</li>
</ul>
</li>
<li>Build and maintain Angular front-ends that integrate securely and efficiently with backend APIs and domain services.</li>
<li>Design and optimize relational data models and SQL queries (SQL Server, Snowflake) for high-volume, low-latency workloads.</li>
<li>Drive a test-first mindset with strong automated test coverage (unit, integration, contract, and end-to-end) for critical domain workflows and controls.</li>
<li>Collaborate with global business and Compliance stakeholders to understand requirements, shape domain models, and deliver auditable, production-ready solutions.</li>
</ul>
<p><strong>Requirements</strong></p>
<p><strong>Core Engineering &amp; Full-Stack Skills</strong></p>
<ul>
<li>Practical experience with AI-assisted tools (e.g., Claude Code, GitHub Copilot) for code generation/refactoring, test creation, debugging, and documentation.</li>
<li>Expert-level C#/.NET and strong object-oriented design skills.</li>
<li>Solid experience building Angular applications (components, state, routing, API integration).</li>
<li>Advanced SQL skills for schema design and complex queries (SQL Server, Snowflake).</li>
<li>Experience with high-throughput, concurrent/multithreaded systems.</li>
<li>Kafka or similar messaging experience, including using JSON and Avro for data contracts in streaming and messaging.</li>
<li>Strong understanding of unit testing, Dependency Injection, design patterns, concurrency, and SOLID principles.</li>
<li>Experience with Git and GitHub in a collaborative, code-review-driven workflow.</li>
</ul>
<p><strong>Soft Skills &amp; Domain Knowledge</strong></p>
<ul>
<li>Excellent analytical and problem-solving abilities.</li>
<li>Self-starter who thrives in a fast-paced, globally distributed environment.</li>
<li>Strong written and verbal communication skills with the ability to explain domain models, testing strategies, and architectural decisions to varied audiences.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>AI-assisted tools, C#/.NET, Angular, SQL, Domain-Driven Design, AWS, Kafka, Git, GitHub</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>FIC &amp; Risk Technology</Employername>
      <Employerlogo>https://logos.yubhub.co/mlp.eightfold.ai.png</Employerlogo>
      <Employerdescription>FIC &amp; Risk Technology builds solutions to meet global regulatory and reporting obligations.</Employerdescription>
      <Employerwebsite>https://mlp.eightfold.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://mlp.eightfold.ai/careers/job/755955321458</Applyto>
      <Location>Singapore, Singapore</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>64bb6566-575</externalid>
      <Title>Senior ‘Developer Infrastructure’ Engineer</Title>
      <Description><![CDATA[<p>The GALAXY Platform Execution &amp; Exchange Data (SPEED) Team is a core part of Millennium&#39;s technology organisation, powering the firm&#39;s lowest-latency solutions for systematic and high-frequency trading.</p>
<p>SPEED delivers the live trading and market-data platforms used by portfolio managers and risk systems, including Latency Critical Trading (LCT), DMA OMS (Client Direct), DMA market data feeds, packet capture (PCAPs), enterprise market data, and intraday data services across latency tiers from sub-100 nanoseconds to millisecond-sensitive workflows.</p>
<p>As a Senior Developer Infrastructure Engineer on SPEED, you will own and evolve the build and CI/CD infrastructure that underpins these mission-critical systems.</p>
<p>By designing scalable build pipelines, shared tooling, and reliable release workflows, you will directly enhance developer productivity and enable fast, safe iteration on some of the firm&#39;s most performance-sensitive code.</p>
<p>This role offers the opportunity to shape core engineering practices while contributing to platforms that are central to Millennium&#39;s trading edge.</p>
<p>Principal Responsibilities</p>
<ul>
<li>Design, build, and maintain a highly scalable, parallel, and cached build system for a large, performance-sensitive codebase.</li>
<li>Own and continually optimise CI/CD pipelines to minimise build/test times, reduce flakiness, and improve developer productivity.</li>
<li>Operate with an AI-first mindset across the SDLC, using automation by default to streamline build, test, and release workflows.</li>
<li>Integrate and operationalise AI tools (e.g., copilots, workflow automation, AI-driven analytics) to eliminate manual toil, accelerate development, and codify reusable AI-enabled patterns for the broader engineering organisation.</li>
<li>Design and operate containerised environments (e.g., Docker, Kubernetes) to maximise utilisation, reliability, and scalability across environments.</li>
<li>Implement and manage artifact storage, dependency management, and versioning strategies for large, distributed systems.</li>
<li>Develop and maintain shared libraries, CLIs, scripts, and internal platforms that reduce friction and enable self-service for engineers.</li>
<li>Build and enhance test suites and environment provisioning, leveraging AI and automation where appropriate for smarter checks, triage, and observability.</li>
<li>Monitor, instrument, and improve the reliability, observability, and performance of build and CI/CD systems using metrics, dashboards, and alerting.</li>
<li>Partner with trading and engineering teams to understand requirements, remove friction, and champion best practices for building, testing, and releasing software.</li>
</ul>
<p>Qualifications/Skills Required</p>
<ul>
<li>5+ years of software engineering or DevInfra/Platform/DevOps experience, with significant focus on build systems and CI/CD.</li>
<li>Strong programming skills in one or more languages (e.g., Python, Rust, Go, C++) for automation and tooling.</li>
<li>Hands-on experience with at least one modern build system (e.g., Bazel, Buck2).</li>
<li>Solid understanding of source control (Git), branching strategies, and release management.</li>
<li>Experience with monorepos is a plus.</li>
<li>Experience scaling build and test infrastructure for growing codebases and teams (parallelization, test sharding, remote execution, caching).</li>
<li>Experience designing or participating in processes, systems, or playbooks that leverage AI to streamline work rather than adding headcount.</li>
<li>Familiarity with containers and cloud infrastructure (Docker, Kubernetes, and major cloud providers such as AWS/GCP/Azure).</li>
<li>Strong communication and collaboration skills; comfortable partnering with multiple teams and driving cross-cutting initiatives.</li>
</ul>
<p>The estimated base salary range for this position is $175,000 to $250,000, which is specific to New York and may change in the future. Millennium pays a total compensation package which includes a base salary, discretionary performance bonus, and a comprehensive benefits package. When finalising an offer, we take into consideration an individual&#39;s experience level and the qualifications they bring to the role to formulate a competitive total compensation package.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$175,000 to $250,000</Salaryrange>
      <Skills>Python, Rust, Go, C++, Bazel, Buck2, Git, Kubernetes, Docker, AWS, GCP, Azure</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Unknown</Employername>
      <Employerlogo>https://logos.yubhub.co/mlp.eightfold.ai.png</Employerlogo>
      <Employerdescription>Millennium is a company that provides equities, quant strategies, and shared services technology.</Employerdescription>
      <Employerwebsite>https://mlp.eightfold.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://mlp.eightfold.ai/careers/job/755954695574</Applyto>
      <Location>New York, New York, United States of America</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>6690b2fa-cab</externalid>
      <Title>(Senior) Team Lead Data Analytics (all genders)</Title>
      <Description><![CDATA[<p>At Holidu, data isn&#39;t just a support function, it&#39;s how we make decisions. The Analytics team builds the products and foundations that keep the whole organisation sharp, from day-to-day operations to long-term strategy.</p>
<p>This role is on-site in Munich, with two office days per week.</p>
<p>As a Senior Team Lead Data Analytics, you will lead one of Holidu&#39;s core analytics teams, a function at the intersection of data, strategy, and real business impact. The team has four direct reports, and the role entails collaborating cross-functionally with data engineers and data scientists.</p>
<p>Engage with senior leadership on strategic projects, providing insights that influence product strategy, internal operations, and revenue growth.</p>
<p>You and your team will support a range of stakeholders across the company (e.g. Customer Support, Host Experience, Sales and Account Management).</p>
<p>As a member of the BI leadership team, you will help shape the department strategy and the future of AI-powered data products.</p>
<p>Understand problems and identify opportunities across a diverse range of stakeholder use cases, translating them into analytical requirements and communicating complex findings clearly to both technical and commercial audiences.</p>
<p>Lead from the front: this role carries meaningful individual contributor responsibility. You&#39;ll be expected to do real analytical work, diving deep into the data, building solutions, and setting the bar for quality in your team.</p>
<p>Shape the future of analytics at Holidu by recruiting top talent, setting clear goals, and developing your team personally and professionally.</p>
<p>The ideal candidate will have 5+ years of data analytics experience, people management experience, a collaborative mindset, a mission-driven mentality, excellent analytical and technical skills, and a genuine commitment to AI enablement.</p>
<p>Impact: Shape the future of travel with products used by millions of guests and thousands of hosts. At Holidu ideas become products, data drives decisions, and iteration fuels fast learning. Your work matters - and you’ll see the impact.</p>
<p>Learning: Grow professionally in a culture that thrives on curiosity and feedback. You’ll learn from outstanding colleagues, collaborate across disciplines, and benefit from mentorship, and personal learning budgets - with a strong focus on AI.</p>
<p>Great People: Join a team of smart, motivated and international colleagues who challenge and support each other. We celebrate wins and keep our culture fun, ambitious and human. Our customers are guests and hosts - people we can all relate to - making work meaningful and energizing.</p>
<p>Technology: Work in a modern tech environment. You’ll experience the pace of a scale-up combined with the stability of a proven business model, enabling you to build, test, and improve continuously.</p>
<p>Flexibility: Work a hybrid setup with 50% in-office time for collaboration, and spend up to 8 weeks a year from other inspiring locations. You’ll stay connected through regular events and meet-ups across our almost 30 offices.</p>
<p>Perks on Top: Of course, we also offer travel benefits, gym discounts, and other perks to keep you energized - but what truly sets us apart is the chance to grow in a dynamic industry, alongside amazing people, while having fun along the way.</p>
]]></Description>
      <Jobtype>Full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Database: AWS Stack (Redshift, Athena, Glue, S3), Data Pipelines: Airflow, dbt, Data Visualisation: Looker, Data Analytics: SQL, Python, Collaboration: Git, Jira, Confluence, Slack</Skills>
      <Category>Technology</Category>
      <Industry>Travel Technology</Industry>
      <Employername>Holidu Hosts GmbH</Employername>
      <Employerlogo>https://logos.yubhub.co/holidu.jobs.personio.com.png</Employerlogo>
      <Employerdescription>Holidu is a travel technology company that provides search engines for holiday rentals.</Employerdescription>
      <Employerwebsite>https://holidu.jobs.personio.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://holidu.jobs.personio.com/job/2598226</Applyto>
      <Location>Munich, Germany</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>9bb11411-3a5</externalid>
      <Title>Full Stack Developer – Reference Data</Title>
      <Description><![CDATA[<p>We are seeking a skilled Full Stack Developer to enhance our Enterprise Reference Data platform, the central source of financial data across the firm.</p>
<p>The successful candidate will play a key role in evolving our data platform, services, and tools to meet new customer requirements. The platform is built on a modern tech stack, including Java, Kafka, Angular, and AWS (EKS), offering scalability and streaming capabilities.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Develop and maintain full-stack solutions using Java (Spring Framework, GraphQL, REST API, Kafka) and Angular.</li>
<li>Ensure proper ingestion, curation, storage, and management of data to meet business needs.</li>
<li>Write and execute unit, performance, and integration tests.</li>
<li>Collaborate with cross-functional teams to solve complex data challenges.</li>
<li>Work closely with users to gather requirements and convert them into actionable plans.</li>
</ul>
<p>Qualifications:</p>
<ul>
<li>Minimum of 5-7 years of professional Java development experience, focusing on API- and Kafka-based architectures.</li>
<li>Minimum 4-5 years of strong Angular development skills with backend integration expertise.</li>
<li>Hands-on experience with automated testing (unit, performance, integration).</li>
<li>5+ years of database development experience (any RDBMS).</li>
<li>Analytical and problem-solving skills with the ability to work independently in a fast-paced environment.</li>
<li>Excellent communication skills to effectively collaborate with users and other teams across different regions.</li>
<li>Self-motivated and capable of working under pressure.</li>
<li>Experience working in Financial Services or a Front Office Environment is highly preferred.</li>
<li>Experience working in the Reference Data domain is a plus.</li>
<li>Familiarity with AI developer tools.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement></Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Java, Kafka, Angular, Spring Framework, GraphQL, Rest API, AWS (EKS), database development, automated testing, unit testing, performance testing, integration testing, AI developer tools, Financial Services, Front Office Environment, Reference Data domain</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>FIC &amp; Risk Technology</Employername>
      <Employerlogo>https://logos.yubhub.co/mlp.eightfold.ai.png</Employerlogo>
      <Employerdescription>FIC &amp; Risk Technology is a technology company that provides solutions for financial institutions.</Employerdescription>
      <Employerwebsite>https://mlp.eightfold.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://mlp.eightfold.ai/careers/job/755955407188</Applyto>
      <Location>Bangalore, Karnataka, India</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>c7e58f60-5fa</externalid>
      <Title>Software Engineer - Learning Engineering and Data (LEaD) Program</Title>
      <Description><![CDATA[<p>As a member of our Miami-based Learning Engineering and Data (LEaD) program, you will work alongside technology mentors and leaders to develop and maintain applications and tools spanning front-office, middle-office, and back-office functions in a dynamic and fast-paced environment.</p>
<p>Our technology teams are looking for Software Engineers with C++, Python, or Java to design, implement, and maintain systems supporting our technology business functions.</p>
<p>The candidate is expected to:</p>
<ul>
<li>Work closely with technology teams to develop requirements and specifications for varying projects</li>
<li>Take part in the development and enhancement of the backend distributed system</li>
<li>Apply AI/ML (deep learning, natural language processing, large language models) to practical and comprehensive technology solutions</li>
</ul>
<p>Qualifications/Skills Required:</p>
<ul>
<li>2-5 years of experience working with C++, Python, or Java</li>
<li>Experience with ML libraries, Pandas, NumPy, FastAPI (Python), Boost (C++), Spring Boot (Java)</li>
<li>Must be comfortable working in both Unix/Linux and Windows environments</li>
<li>Good understanding of various design patterns</li>
<li>Strong analytical and mathematical skills along with an interest/ability to quickly learn additional languages and quantitative concepts</li>
<li>Solid communication skills</li>
<li>Able to work collaboratively in a fast-paced environment with a passion for solving complex problems</li>
<li>Detail oriented, organized, demonstrating thoroughness and strong ownership of work</li>
</ul>
<p>Desirable Skills/Knowledge:</p>
<ul>
<li>Bachelor&#39;s or Master&#39;s degree in Computer Science, Applied Mathematics, Statistics, Data Science/ML/AI, or a related technical or engineering field</li>
<li>Demonstrable passion for developing LLM-powered products whether that is through commercial experience or open source/academic projects you have worked on in your own time</li>
<li>Hands-on experience building ML and data pipeline architectures</li>
<li>Understanding of distributed messaging systems</li>
<li>Experience with Docker/Kubernetes, microservices architecture in a cloud environment (AWS, GCP preferred)</li>
<li>Experience with relational and non-relational database platforms</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>C++, Python, Java, ML libraries, Pandas, NumPy, FastAPI, Boost, Spring Boot, Docker, Kubernetes, microservices, AWS, GCP, relational and non-relational databases</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>IT LEaD Program</Employername>
      <Employerlogo>https://logos.yubhub.co/mlp.eightfold.ai.png</Employerlogo>
      <Employerdescription>Millennium is a large global alternative investment manager.</Employerdescription>
      <Employerwebsite>https://mlp.eightfold.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://mlp.eightfold.ai/careers/job/755953879362</Applyto>
      <Location>Miami, Florida, United States of America</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>25fd58ed-3c0</externalid>
      <Title>(Senior) Data Scientist (all genders)</Title>
      <Description><![CDATA[<p>You will be part of the Business Intelligence department, which consists of the Data Science, Data Analytics, and Data Engineering teams.</p>
<p>As a Senior Data Scientist, you will work on various topics such as rankings, recommendations, user segmentation, user lifetime value, business forecasts, etc. You will have access to our huge dataset and work in collaboration with stakeholders from various departments.</p>
<p>Your objective is to build the best internal and external products for our customers. Holidu highly values a diverse and open environment with people from all over the world.</p>
<p>This role is based in Munich with a hybrid setup.</p>
<p><strong>Our Tech Stack</strong></p>
<ul>
<li>Flexible data science environment (Python, Sagemaker)</li>
<li>Database: AWS Stack (Redshift, Athena, Glue, S3).</li>
<li>Data Pipelines: Airflow, DBT.</li>
<li>Data Visualization: Looker.</li>
<li>Data Analytics: SQL, Python.</li>
<li>Collaboration: Git.</li>
</ul>
<p><strong>Your role in this journey</strong></p>
<p>You will play a pivotal role in the Business Intelligence team alongside data scientists, analysts, and engineers. Together, you will lead the development and enhancement of our company-wide machine learning strategy.</p>
<ul>
<li>Collaborate across various business departments to identify opportunities and solve critical business challenges using data science solutions.</li>
<li>Build and optimize predictive models such as booking cancellation forecasts, churn predictions, pricing optimization, revenue forecasting and marketing channel allocation.</li>
<li>Take models from conception to production, continuously monitor their performance, and iterate to enhance accuracy and efficiency.</li>
<li>Interface with diverse business stakeholders, ensuring alignment between data science initiatives and company goals.</li>
<li>Demonstrate leadership in data science projects, leveraging your expertise to drive measurable business impact.</li>
</ul>
<p><strong>Your backpack is filled with</strong></p>
<ul>
<li>3+ years of experience as a Data Scientist, with a proven track record of applying data science methodologies to solve complex business problems.</li>
<li>A degree in Machine Learning, Computer Science, Mathematics, Physics, or a related field.</li>
<li>Expertise in statistics, predictive analytics, machine learning techniques, and proficiency in tools like Python and SQL.</li>
<li>Experience with Airflow and dbt is a plus.</li>
<li>Strong understanding of business operations and experience collaborating with diverse stakeholders.</li>
<li>Enthusiasm for data science and a drive to deliver world-class products that make a difference.</li>
</ul>
<p><strong>Our adventure includes</strong></p>
<ul>
<li>Impact: Shape the future of travel with products used by millions of guests and thousands of hosts.</li>
<li>Learning: Grow professionally in a culture that thrives on curiosity and feedback.</li>
<li>Great People: Join a team of smart, motivated and international colleagues who challenge and support each other.</li>
<li>Technology: Work in a modern tech environment.</li>
<li>Flexibility: Work a hybrid setup with 50% in-office time for collaboration, and spend up to 8 weeks a year from other inspiring locations.</li>
<li>Perks on Top: Of course, we also offer travel benefits, gym discounts, and other perks to keep you energized.</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>Full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, SageMaker, AWS Stack, Airflow, dbt, Looker, SQL, Git</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Holidu Hosts GmbH</Employername>
      <Employerlogo>https://logos.yubhub.co/holidu.jobs.personio.com.png</Employerlogo>
      <Employerdescription>Holidu is a company that provides a search engine for holiday rentals.</Employerdescription>
      <Employerwebsite>https://holidu.jobs.personio.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://holidu.jobs.personio.com/job/2555141</Applyto>
      <Location>Munich, Germany</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>6964b8e4-caf</externalid>
      <Title>Cybersecurity Engineer</Title>
      <Description><![CDATA[<p>Job Title: Cybersecurity Engineer</p>
<p>Introduction to role</p>
<p>Cybersecurity sits at the heart of our IT strategy. As we move towards ambitious objectives, we are looking for individuals who focus on innovation to maintain a sustainable risk position against an evolving threat landscape, who recognise that adversaries may include organised crime syndicates or state-sponsored attackers, and who understand attackers&#39; motivations and ways of working.</p>
<p>In this role, you will operate within AstraZeneca&#39;s global cybersecurity organisation, collaborating with and influencing multiple functions across China, India, Mexico, Sweden, the US and the UK. Ready to help defend a global enterprise where technology directly supports life-changing medicines?</p>
<p>Accountabilities</p>
<p>In this role, you will engineer cybersecurity solutions across cloud, on-premises and third-party collaboration environments, with a predominant focus on cloud and data. You will collaborate with other teams to perform, assess and evolve IT processes that intersect our cybersecurity priorities, ensuring security is embedded into how work gets done. You will map governance and compliance frameworks and their controls to technical implementation, shifting hardening processes as far left as possible in the lifecycle. You will leverage deep understanding of threats, weaknesses and vulnerabilities around cloud and data to help other areas respond promptly and effectively to contain breaches or address areas of concern. You will also contribute to continuous improvement by analysing incidents, refining standards and influencing architectural decisions that balance risk, performance and usability.</p>
<p>How will you use your expertise to raise the bar?</p>
<p>Essential Skills/Experience</p>
<ul>
<li>Minimum 10 years of experience</li>
<li>Bachelor&#39;s Degree</li>
<li>Must have broad enterprise IT experience with significant cloud and data exposure.</li>
<li>Must have in-depth understanding of security and networking protocols, cryptography, and modern authentication and authorization protocols.</li>
<li>Must have experience designing, deploying, and operating secure networks, systems, application and security architectures at scale.</li>
<li>Must have experience configuring and managing cloud security services across AWS, Azure, and GCP at organisational scale.</li>
<li>Must have experience researching, designing, and implementing security policies, standards, and procedures, including those in cybersecurity frameworks such as MITRE ATT&amp;CK, NIST CSF, NIST SP 800-53, and NIST SP 800-61, as well as implementing cloud security reference architectures.</li>
<li>Should have experience working in a software development and systems administration organisation, implementing DevSecOps and process automation.</li>
<li>Should have the ability to conduct post-mortems on security incidents and use post-mortem data to drive uplift in policies, procedures, and standards.</li>
<li>Familiarity with CSPM, CNAPP, and Cloud EDR platforms</li>
<li>Expertise with Microsoft Defender, Sentinel and Splunk</li>
</ul>
<p>Desirable Skills/Experience</p>
<ul>
<li>Identify and articulate architectural trade-offs.</li>
<li>Embed process, governance and security into workflow and technology.</li>
<li>Design and implement software tools and services using modern programming languages.</li>
<li>Manage and lead projects delivering prioritised initiatives against challenging deadlines.</li>
<li>Exert positive influence in a matrixed organisation to drive technology evolution.</li>
<li>Drive efforts to achieve process and technology improvement at scale.</li>
</ul>
<p>The annual base pay for this position ranges from 136,044.00 to 204,066.00 USD (80% to 120%). Hourly and salaried non-exempt employees will also be paid overtime pay when working qualifying overtime hours. Base pay offered may vary depending on multiple individualised factors, including market location, job-related knowledge, skills, and experience. In addition, our positions offer a short-term incentive bonus opportunity; eligibility to participate in our equity-based long-term incentive programme (salaried roles), to receive a retirement contribution (hourly roles), and commission payment eligibility (sales roles).</p>
<p>Benefits offered include a qualified retirement programme [401(k) plan]; paid vacation and holidays; paid leaves; and health benefits including medical, prescription drug, dental, and vision coverage in accordance with the terms and conditions of the applicable plans. Additional details of participation in these benefit plans will be provided if an employee receives an offer of employment. If hired, the employee will be in an &#39;at-will position&#39; and the Company reserves the right to modify base pay (as well as any other discretionary payment or compensation programme) at any time, including for reasons related to individual performance, Company or individual department/team performance, and market factors.</p>
<p>When we put unexpected teams in the same room, we unleash bold thinking with the power to inspire life-changing medicines. In-person working gives us the platform we need to connect, work at pace and challenge perceptions. That&#39;s why we work, on average, a minimum of three days per week from the office. But that doesn&#39;t mean we&#39;re not flexible. We balance the expectation of being in the office while respecting individual flexibility. Join us in our unique and ambitious world.</p>
<p>AstraZeneca offers an environment where cybersecurity work has real-world impact on patients&#39; lives, not just systems and data. Here, technology experts collaborate with scientists and business teams to unlock the potential of data, analytics, AI and machine learning, constantly experimenting with new approaches while keeping critical platforms secure. There is strong investment in digital capabilities, room to explore modern tools through initiatives like hackathons, and a culture that values curiosity, coaching and continuous learning so that every day brings opportunities to grow skills and shape both personal development and the future of healthcare technology.</p>
<p>If this role matches your skills and ambitions, apply now and help protect the digital foundations that enable life-changing medicines!</p>
<p>Date Posted: 17-Apr-2026 · Closing Date: 03-May-2026</p>
<p>Our mission is to build an inclusive environment where equal employment opportunities are available to all applicants and employees. In furtherance of that mission, we welcome and consider applications from all qualified candidates, regardless of their protected characteristics. If you have a disability or special need that requires accommodation, please complete the corresponding section in the application form.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Enterprise IT, Cloud security, Data security, Networking protocols, Cryptography, Authentication and authorization protocols, Secure architecture at scale, AWS, Azure, GCP, Security policies and standards, MITRE ATT&amp;CK, NIST CSF, NIST SP 800-53, NIST SP 800-61, Cloud security reference architectures, DevSecOps, Process automation, Incident post-mortems, CSPM, CNAPP, Cloud EDR, Microsoft Defender, Sentinel, Splunk</Skills>
      <Category>Engineering</Category>
      <Industry>Healthcare</Industry>
      <Employername>AstraZeneca</Employername>
      <Employerlogo>https://logos.yubhub.co/astrazeneca.eightfold.ai.png</Employerlogo>
      <Employerdescription>AstraZeneca is a multinational pharmaceutical and biotechnology company that develops and commercializes prescription medicines and vaccines for diseases across various therapeutic areas.</Employerdescription>
      <Employerwebsite>https://astrazeneca.eightfold.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://astrazeneca.eightfold.ai/careers/job/563877689899183</Applyto>
      <Location>Gaithersburg, Maryland, United States of America</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>867e3558-9a7</externalid>
      <Title>Team Lead, Java Engineer - Equities Trading Technologies</Title>
      <Description><![CDATA[<p>We are seeking a Team Lead to maintain and enhance our mission-critical, multi-asset trading platform that is used firm-wide daily. This individual will own the existing Java Swing code base, while also playing a pivotal role in designing the next-generation HTML5 trading UI.</p>
<p>The ideal candidate should have a proven track record in developing and maintaining Java-based front-end applications in the finance sector. Exceptional team collaboration skills and the ability to work effectively with colleagues across global time zones are crucial.</p>
<p>Millennium strongly prioritizes our synergistic culture, which revolves around teamwork and low egos. You should possess the ability to work in a fast-paced environment both collaboratively and individually while managing multiple projects simultaneously.</p>
<p>The successful individual will have a strong sense of urgency, emotional intelligence, and prioritize a high-caliber end-user experience.</p>
<p>Qualifications:</p>
<ul>
<li>Bachelor’s degree in computer science or a comparable field</li>
<li>7+ years of professional experience with Core Java and Java Swing; experience with electronic trading systems and/or trader workstation environments strongly preferred.</li>
<li>5+ years of experience working with HTML, JavaScript, CSS, and jQuery</li>
<li>Deep understanding of multithreading and distributed systems within a high performance, latency-sensitive environment</li>
<li>Strong knowledge of unit testing frameworks and continuous test-driven development practices</li>
<li>Enterprise level experience with design patterns such as MVC, MVVM, MVP</li>
<li>Enterprise level experience with RESTful web services</li>
<li>Previous experience liaising with non-technology stakeholders, polished and proactive communication skills</li>
</ul>
<p>Beneficial/Ideal Technology Experience:</p>
<ul>
<li>EXT-JS, AngularJS, AJAX, JSON experience is very beneficial</li>
<li>Knowledge of equities, futures, options and other asset classes is preferred</li>
<li>Enterprise level experience with OMS architecture and design is preferred</li>
<li>Experience with messaging middleware, Solace preferred</li>
<li>Experience with relational and NoSQL databases. MongoDB preferred</li>
<li>Experience working with financial data, including reference data, market data, order/execution and positions data.</li>
<li>Experience working with Cloud: AWS (preferred), GCP or Azure</li>
</ul>
<p>Millennium pays a total compensation package which includes a base salary, discretionary performance bonus, and a comprehensive benefits package. The estimated base salary range for this position is $175,000 to $250,000, which is specific to New York and may change in the future.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$175,000 to $250,000</Salaryrange>
      <Skills>Core Java, Java Swing, HTML, JavaScript, CSS, jQuery, Multithreading, Distributed systems, Unit testing frameworks, Continuous test-driven development practices, MVC, MVVM, MVP, RESTful web services, EXT-JS, AngularJS, AJAX, JSON, Equities, Futures, Options, OMS architecture and design, Messaging middleware, Solace, Relational databases, NoSQL databases, MongoDB, Financial data, Cloud, AWS, GCP, Azure</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>Millennium</Employername>
      <Employerlogo>https://logos.yubhub.co/mlp.eightfold.ai.png</Employerlogo>
      <Employerdescription>Millennium is a global alternative investment management firm whose technology teams build mission-critical trading platforms.</Employerdescription>
      <Employerwebsite>https://mlp.eightfold.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://mlp.eightfold.ai/careers/job/755955412056</Applyto>
      <Location>Miami, Florida, United States of America · New York, New York, United States of America</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>7275ef33-009</externalid>
      <Title>Staff Data Engineer</Title>
      <Description><![CDATA[<p>At Bayer, we&#39;re seeking a Staff Data Engineer to join our team. In this role, you will design and lead the implementation of data flows that connect operational systems with analytics and business intelligence (BI) systems. You will recognize opportunities to reuse existing data flows, lead the build of data streaming systems, optimize code so that processes perform well, and lead work on database management.</p>
<p>Communicating Between Technical and Non-Technical Colleagues</p>
<p>As a Staff Data Engineer, you will communicate effectively with technical and non-technical stakeholders, support and host discussions within a multidisciplinary team, and be an advocate for the team externally.</p>
<p>Data Analysis and Synthesis</p>
<p>You will undertake data profiling and source system analysis, and present clear insights to colleagues to support the end use of the data.</p>
<p>Data Development Process</p>
<p>You will design, build and test data products that are complex or large scale, and build teams to deliver data integration services.</p>
<p>Data Innovation</p>
<p>You will understand the impact on the organization of emerging trends in data tools, analysis techniques and data usage.</p>
<p>Data Integration Design</p>
<p>You will select and implement the appropriate technologies to deliver resilient, scalable and future-proofed data solutions and integration pipelines.</p>
<p>Data Modeling</p>
<p>You will produce relevant data models across multiple subject areas, explain which models to use for which purpose, understand industry-recognised data modelling patterns and standards and when to apply them, and compare and align different data models.</p>
<p>Metadata Management</p>
<p>You will design appropriate metadata repositories and present changes to existing ones, understand a range of tools for storing and working with metadata, and provide oversight and advice to less experienced members of the team.</p>
<p>Problem Resolution</p>
<p>You will respond to problems in databases, data processes, data products and services as they occur; initiate actions, monitor services and identify trends to resolve problems; and determine the appropriate remedy, assisting with its implementation and with preventative measures.</p>
<p>Programming and Build</p>
<p>You will use agreed standards and tools to design, code, test, correct and document moderate-to-complex programs and scripts from agreed specifications and subsequent iterations, collaborating with others to review specifications where appropriate.</p>
<p>Technical Understanding</p>
<p>You will understand the core technical concepts related to the role, and apply them with guidance.</p>
<p>Testing</p>
<p>You will review requirements and specifications, define test conditions, identify issues and risks associated with the work, and analyse and report on test activities and results.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$114,400 to $171,600</Salaryrange>
      <Skills>Proficiency in programming language such as Python or Java, Experience with Big Data technologies such as Hadoop, Spark, and Kafka, Familiarity with ETL processes and tools, Knowledge of SQL and NoSQL databases, Strong understanding of relational databases, Experience with data warehousing solutions, Proficiency with cloud platforms, Expertise in data modeling and design, Experience in designing and building scalable data pipelines, Experience with RESTful APIs and data integration, Relevant certifications (e.g., GCP Certified, AWS Certified, Azure Certified), Bachelor&apos;s degree in Computer Science, Data Engineering, Information Technology, or a related field, Strong analytical and communication skills, Ability to work collaboratively in a team environment, High level of accuracy and attention to detail</Skills>
      <Category>Engineering</Category>
      <Industry>Healthcare</Industry>
      <Employername>Bayer</Employername>
      <Employerlogo>https://logos.yubhub.co/talent.bayer.com.png</Employerlogo>
      <Employerdescription>Bayer is a multinational pharmaceutical and life sciences company that develops and manufactures a wide range of healthcare products.</Employerdescription>
      <Employerwebsite>https://talent.bayer.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://talent.bayer.com/careers/job/562949976928777</Applyto>
      <Location></Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>8610ea3d-93b</externalid>
      <Title>Cloud Platform Engineer</Title>
      <Description><![CDATA[<p>The Business Development/Management Technology team at FIC &amp; Risk Technology is building and operating platforms that support recruiting, hiring, and onboarding of investment professionals. We are currently integrating multiple legacy and new systems into a unified, cloud-native platform to standardize processes, workflows, and data models across the organisation.</p>
<p>This integration will enable seamless collaboration between teams and provide reliable, scalable data for analytics and reporting. We are looking for a Cloud Platform Engineer to design, build, and operate our AWS-based infrastructure and data platforms, using modern DevOps practices, infrastructure as code, and secure, well-engineered services in Python and C#.</p>
<p>The successful candidate will collaborate with global technology and business teams to design cloud-native solutions that support business development and onboarding workflows. They will partner with global stakeholders to understand requirements and translate them into secure, scalable AWS architectures and platform capabilities.</p>
<p>Key responsibilities include leading the end-to-end delivery of cloud and platform features, including design, implementation (Python/C#), infrastructure as code, testing, and deployment using DevOps practices.</p>
<p>We are looking for a highly skilled engineer with:</p>
<ul>
<li>6+ years of experience in software or platform engineering, with significant time spent building and operating solutions in cloud environments (AWS preferred).</li>
<li>Strong hands-on programming experience in Python and C#, with a solid understanding of object-oriented design, design patterns, service-oriented/microservices architectures, concurrency, and SOLID principles.</li>
<li>Proven experience designing and operating AWS-based platforms (e.g., EC2, ECS/EKS, Lambda, S3, RDS, IAM) using infrastructure as code (Terraform, CloudFormation, or CDK).</li>
<li>Practical experience implementing DevOps practices and CI/CD pipelines (e.g., Jenkins, GitHub Actions, Azure DevOps), including automated testing, security scanning, and deployment.</li>
<li>Experience supporting data science and analytics platforms, including orchestration tools such as Airflow, distributed processing engines such as Spark, and cloud-native data pipelines.</li>
<li>A good understanding of SQL and core database concepts; familiarity with AWS analytics services (e.g., Glue, EMR, Redshift, Athena) is a plus.</li>
<li>Awareness of cloud security best practices, including IAM, network security, data encryption, and secure configuration management.</li>
<li>Strong problem-solving and analytical skills, with a demonstrated ability to take ownership, deliver in a fast-paced environment, and collaborate effectively with global teams.</li>
<li>Excellent communication skills, with the ability to work closely with both technical and non-technical stakeholders.</li>
<li>Working knowledge of networking across on-premises and cloud environments, including VPC design, subnets, routing, VPNs/Direct Connect, load balancing, DNS, and network security controls.</li>
</ul>
<p>Desirable:</p>
<ul>
<li>Experience estimating, monitoring, and optimizing AWS infrastructure costs, including use of tools such as AWS Cost Explorer, AWS Budgets, and cost-allocation tagging strategies.</li>
<li>Experience designing and operating workloads across multiple cloud environments and on-premises, using centralized policies, governance, and controls to support business-aligned teams.</li>
<li>Experience with additional big data tools or platforms (e.g., Kafka, Databricks, Snowflake, Flink).</li>
<li>Familiarity with Capital Markets concepts and operating models.</li>
</ul>
<p>The estimated base salary range for this position is $175,000 to $250,000, which is specific to New York and may change in the future.</p>
<p>Millennium pays a total compensation package which includes a base salary, discretionary performance bonus, and a comprehensive benefits package.</p>
<p>When finalising an offer, we take into consideration an individual&#39;s experience level and the qualifications they bring to the role to formulate a competitive total compensation package.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$175,000 to $250,000</Salaryrange>
      <Skills>AWS, Python, C#, DevOps, Infrastructure as Code, Cloud Security, SQL, Database Concepts, Networking, Airflow, Spark, Kafka, Databricks, Snowflake, Flink, Capital Markets</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>Millennium</Employername>
      <Employerlogo>https://logos.yubhub.co/mlp.eightfold.ai.png</Employerlogo>
      <Employerdescription>Millennium is a global alternative investment management firm; this role sits within its FIC &amp; Risk Technology division.</Employerdescription>
      <Employerwebsite>https://mlp.eightfold.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://mlp.eightfold.ai/careers/job/755955139979</Applyto>
      <Location>New York, New York, United States of America</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>1963e2d1-add</externalid>
      <Title>Cloud DevOps Engineer</Title>
      <Description><![CDATA[<p>We are seeking a skilled Cloud DevOps Engineer to join our Commodities Technology team. As a Cloud DevOps Engineer, you will work closely with quants, portfolio managers, risk managers, and other engineers to develop data-intensive and multi-asset analytics for our Commodities platform.</p>
<p>Responsibilities:</p>
<ul>
<li>Collaborate with cross-functional teams to gather requirements and user feedback</li>
<li>Design, build, and refactor robust software applications with clean and concise code following Agile and continuous delivery practices</li>
<li>Automate system maintenance tasks, end-of-day processing jobs, data integrity checks, and bulk data loads/extracts</li>
<li>Stay up-to-date with industry trends, new platforms, and tools, and develop a business case to adopt new technologies</li>
<li>Develop new tools and infrastructure using Python (Flask/FastAPI) or Java (Spring Boot) and a relational data backend (AWS – Aurora/Redshift/Athena/S3)</li>
<li>Support users and operational flows for quantitative risk, senior management, and portfolio management teams using the tools developed</li>
</ul>
<p>Qualifications:</p>
<ul>
<li>Advanced degree in computer science or any other scientific field</li>
<li>3+ years of experience in CI/CD tools like TeamCity, Jenkins, Octopus Deploy, and ArgoCD</li>
<li>AWS Cloud infrastructure design, implementation, and support</li>
<li>Experience with multiple AWS services</li>
<li>Infrastructure as Code deploying cloud infrastructure using Terraform or CloudFormation</li>
<li>Knowledge of Python (Flask/FastAPI/Django)</li>
<li>Demonstrated expertise in containerizing applications and orchestrating them within Kubernetes environments</li>
<li>Experience working on at least one monitoring/observability stack (Datadog, ELK, Splunk, Loki, Grafana)</li>
<li>Strong knowledge of Unix or Linux</li>
<li>Strong communication skills to collaborate with various stakeholders</li>
<li>Able to work independently in a fast-paced environment</li>
<li>Detail-oriented, organized, demonstrating thoroughness and strong ownership of work</li>
<li>Experience working in a production environment</li>
<li>Some experience with relational and non-relational databases</li>
</ul>
<p>Nice to have:</p>
<ul>
<li>Experience with a messaging middleware platform like Solace, Kafka, or RabbitMQ</li>
<li>Experience with Snowflake and distributed processing technologies (e.g., Hadoop, Flink, Spark)</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>CI/CD tools like TeamCity, Jenkins, Octopus Deploy, and ArgoCD, AWS Cloud infrastructure design, implementation, and support, Infrastructure as Code deploying cloud infrastructure using Terraform or CloudFormation, Python (Flask/FastAPI/Django), Containerization for applications and their subsequent orchestration within Kubernetes environments</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>Millennium</Employername>
      <Employerlogo>https://logos.yubhub.co/mlp.eightfold.ai.png</Employerlogo>
      <Employerdescription>Millennium is a global hedge fund with a strong commitment to leveraging innovations in technology and data science to solve complex problems for the business.</Employerdescription>
      <Employerwebsite>https://mlp.eightfold.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://mlp.eightfold.ai/careers/job/755955154859</Applyto>
      <Location>Miami, Florida, United States of America</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>e8aabc91-c80</externalid>
      <Title>Assistant Manager of Data Analytics</Title>
      <Description><![CDATA[<p>We are seeking an experienced professional to join our team in Shanghai. As Assistant Manager of Data Analytics, you will focus on using data and analytics to drive business activities and outcomes that improve or transform customer strategy, customer segmentation, predictive models, and marketing campaigns.</p>
<p>Principal Responsibilities: The role holder will conduct customer strategy analysis focusing on acquisition, activation, retention, conversion, and LTV, and deliver actionable insights. Build and maintain customer segmentation frameworks to support targeted and personalized marketing and operations. Leverage advanced data analytics tools and methodologies to develop, validate, and optimize predictive models, contributing to generate high-quality leads. Analyze customer journey, conversion funnels, and drop-off points to identify bottlenecks and recommend experience improvements. Evaluate the performance of marketing campaigns, membership programs, loyalty initiatives, and promotional strategies by measuring ROI, conversion rate, and engagement metrics. Partner with product, marketing, operations, and customer teams to translate data insights into executable strategies and drive business decisions. Support the business team&#39;s campaign needs, including RM lead generation and manual SMS outreach. Develop and maintain customer-focused dashboards, KPIs, and reporting systems.</p>
<p>To be successful in the role, you should meet the following requirements:</p>
<ul>
<li>Minimum of 5 years&#39; experience in one or multiple skills in data/business analytics in the financial or digital domains.</li>
<li>Demonstrated experience in processing and analyzing large amounts of data using one of Python, R, SQL, or SAS, in environments such as AWS, Google Cloud, or Hadoop.</li>
<li>Knowledge and experience in AI, big data, machine learning or predictive algorithms, statistical modeling, and data mining.</li>
<li>Excellent communication and teamwork skills, able to collaborate effectively with different departments and stakeholders.</li>
<li>Strong problem-solving skills and innovative thinking, able to translate complex business problems into data analytics solutions.</li>
<li>Proven experience in one or more of: customer segmentation, digital marketing, data science, portfolio analytics, use of open-source data in analyses.</li>
<li>Good English communication skills, able to collaborate effectively with domestic and international teams.</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, R, SQL, SAS, AWS, Google Cloud, Hadoop, AI, big data, machine learning, predictive algorithms, statistics modeling, data mining</Skills>
      <Category>Finance</Category>
      <Industry>Finance</Industry>
      <Employername>HSBC International Wealth and Premier Banking</Employername>
      <Employerlogo>https://logos.yubhub.co/portal.careers.hsbc.com.png</Employerlogo>
      <Employerdescription>HSBC serves 41 million customers globally, including 6.7 million international customers, offering retail banking and wealth management services.</Employerdescription>
      <Employerwebsite>https://portal.careers.hsbc.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://portal.careers.hsbc.com/careers/job/563774610677890</Applyto>
      <Location>Shanghai</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>6d7fadcc-6fa</externalid>
      <Title>Data Scientist Computer Vision</Title>
      <Description><![CDATA[<p>At Bayer, we&#39;re seeking a talented Data Scientist with deep learning and machine learning expertise focused on image-based data to help shape the future of agriculture. In this role, you&#39;ll join a dynamic team that supports the development of Bayer Crop Science&#39;s next-generation products by applying computer vision to automate critical processes across the Plant Biotechnology organisation.</p>
<p>The primary responsibilities of this role are to:</p>
<ul>
<li>Solve real agricultural problems using deep learning and AI across image and other data modalities, translating complex models into tangible business and scientific impact.</li>
<li>Design and implement end-to-end machine learning pipelines for computer vision use cases, including segmentation, classification, detection, and multi-task learning.</li>
<li>Prototype, evaluate, and iterate on cutting-edge architectures such as CNNs, Vision Transformers, and foundational and large-scale vision models, ensuring state-of-the-art performance.</li>
<li>Optimize models for accuracy, robustness, and inference efficiency, including experimentation with hyperparameters, compression, and deployment-oriented optimisations.</li>
<li>Independently build scalable data pipelines for training, validation, and evaluation, including data ingestion, augmentation strategies, and active learning loops.</li>
<li>Collaborate cross-functionally with product, data, and software engineering teams to integrate models into production systems and deliver reliable, maintainable solutions.</li>
<li>Contribute to MLOps practices, including model versioning, deployment, monitoring, and retraining workflows using modern tooling and cloud-based platforms.</li>
<li>Build strong cross-functional relationships and actively engage with the broader Data Science Community to share best practices, align on standards, and co-create innovative solutions.</li>
<li>Present clear, compelling, and validated stories about experiments, results, and recommendations to peers, senior management, and internal customers to drive strategic and operational decisions.</li>
</ul>
<p>We are looking for a candidate who possesses the following:</p>
<ul>
<li>M.S. with 2+ years of experience, or Ph.D., in Computer Science, Electrical Engineering, or a related field with a focus on machine learning or computer vision.</li>
<li>Proficiency in Python and experience with deep learning frameworks such as PyTorch or TensorFlow.</li>
<li>Hands-on experience with modern computer vision architectures, including models such as ResNet, UNet, DeepLab, YOLO, SegFormer, SAM, and Vision Transformers.</li>
<li>Strong background in handling large-scale datasets and creating custom datasets, for example using frameworks such as Hugging Face Datasets.</li>
<li>Solid understanding of core machine learning concepts, including loss functions, regularization, optimisation, and learning rate scheduling.</li>
<li>Experience developing and deploying models using cloud-based ML platforms such as AWS SageMaker.</li>
<li>Familiarity with Unix environments, including bash, file systems, and core utilities.</li>
<li>Strong engineering practices, including use of Git, Docker, CI/CD pipelines, modular codebase design, and unit testing.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$109,370.40 - $164,055.60</Salaryrange>
      <Skills>Python, PyTorch, TensorFlow, ResNet, UNet, DeepLab, YOLO, SegFormer, SAM, Vision Transformers, Hugging Face Datasets, AWS SageMaker, Git, Docker, CI/CD pipelines, modular codebase design, unit testing</Skills>
      <Category>Engineering</Category>
      <Industry>Manufacturing</Industry>
      <Employername>Bayer</Employername>
      <Employerlogo>https://logos.yubhub.co/talent.bayer.com.png</Employerlogo>
      <Employerdescription>Bayer is a multinational pharmaceutical and life sciences company with a presence in over 100 countries.</Employerdescription>
      <Employerwebsite>https://talent.bayer.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://talent.bayer.com/careers/job/562949976908666</Applyto>
      <Location></Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>b7f1d2fa-e89</externalid>
      <Title>Manager, Digital Trust &amp; Security (all genders)</Title>
      <Description><![CDATA[<p>About tonies:</p>
<p>Tonies is the world&#39;s leading interactive audio platform for children, with over 10 million Tonieboxes and 125 million Tonies sold globally. Our intuitive, screen-free system empowers children to learn and play independently in a safe and engaging way.</p>
<p>As Manager Digital Trust &amp; Security, you will lead the strategic expansion of our security service offerings. You will bridge the gap between technical architecture and business-centric consulting, ensuring our digital infrastructure remains resilient while fostering consumer trust across our global operations.</p>
<p>Responsibilities:</p>
<ul>
<li>Strategic Governance &amp; Architecture: Define the global security vision and design scalable architectures to mitigate complex cyber threats across our media and hardware footprints.</li>
<li>Trust Infrastructure Management: Architect and manage the security technology stack, ensuring our global IT assets and infrastructures remain resilient against emerging threats.</li>
<li>Proactive Risk &amp; Compliance: Lead comprehensive vulnerability assessments and recommend mitigation strategies that align with industry standards such as ISO 27001.</li>
<li>Operational Excellence: Oversee incident response lifecycles, from real-time resolution to deep-dive post-incident analysis and reporting.</li>
<li>Security Culture &amp; Training: Develop and execute global training programs to embed security awareness into the company DNA, ensuring compliance across all departments.</li>
<li>Cross-Functional Leadership: Partner with Enterprise Architecture and Application Management to integrate security-by-design into every product and internal service.</li>
</ul>
<p>What we are looking for:</p>
<ul>
<li>Expertise: Several years of leadership in security or service management within the technology or consumer electronics sector.</li>
<li>Technical Breadth: Deep understanding of security frameworks, cloud security (AWS/GCP), and modern monitoring platforms.</li>
<li>Strategic Mindset: Proven ability to translate complex security risks into actionable business insights for diverse stakeholders.</li>
<li>Lateral Leadership: A collaborative leader capable of managing cross-functional initiatives in a fast-paced, global environment.</li>
<li>Communication: Professional fluency in English and German is essential for our global coordination.</li>
<li>Mandatory: Demonstrable expertise in at least two security domains, backed by relevant professional certifications.</li>
<li>Preferred: Advanced credentials in Cloud Security or specialized standards like ISO 27001.</li>
</ul>
<p>Why tonies?</p>
<p>Our benefits vary by location. The following benefits apply in Germany:</p>
<ul>
<li>Global Teamwork: We collaborate across departmental and country borders on our vision to bring the Toniebox into every child&#39;s room in the world.</li>
<li>Come as you are: This applies not only to the dress code but also to everything else. Because only where you truly feel comfortable can you give your best.</li>
<li>Mobility: Choose the option that suits you best - a Deutschlandticket (public transport ticket) for unlimited mobility, a monthly contribution for an office parking space, a leasing bicycle, or a remote work subsidy.</li>
<li>Enhanced Security: Benefit from subsidies for company pension plans, occupational pension schemes, and occupational disability insurance.</li>
<li>Rest &amp; Time Off: Enjoy 30 days of paid annual leave as well as three additional days off such as Rosenmontag, Christmas Eve, and New Year&#39;s Eve. After one year of employment, you can also use up to 10 &#39;toniecation days&#39; (unpaid leave days).</li>
<li>Continuous Learning: Benefit from our internal and external training opportunities as well as an individual learning budget to continuously expand your knowledge.</li>
<li>Language Learning &amp; Relaxation: Improve your communication skills with the language learning app Babbel and find relaxation through our access to the meditation app Calm.</li>
<li>Discounts: Benefit from attractive discounts on our entire range of tonies products.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>security frameworks, cloud security (AWS/GCP), modern monitoring platforms, vulnerability assessments, ISO 27001</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>tonies GmbH</Employername>
      <Employerlogo>https://logos.yubhub.co/tonies.jobs.personio.com.png</Employerlogo>
      <Employerdescription>Tonies is a global company producing interactive audio platforms for children, with over 10 million Tonieboxes and 125 million Tonies sold worldwide.</Employerdescription>
      <Employerwebsite>https://tonies.jobs.personio.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://tonies.jobs.personio.com/job/2602344</Applyto>
      <Location>Düsseldorf · London · Paris · Berlin</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>79072a0c-85b</externalid>
      <Title>Behavioral Data Science Intern - Agentic AI &amp; People Analytics</Title>
      <Description><![CDATA[<p>Where do you want to go? What do you want to achieve? How would you like to get involved? At Bayer, we bring together multi-talents and specialists to feed the world, slow climate change, and create healthier, more sustainable lives for all.</p>
<p>This is the opportunity to start your career with a global leader committed to HealthForAll and HungerForNone. Bring your ideas, skills, and passion with you. Your career starts here.</p>
<p>Are you passionate about AI, data science, and behavioural insights? Join our Talent Impact team and apply your technical skills to projects that combine machine learning, generative AI, and behavioural science to improve how people work and develop. This internship offers hands-on experience in a supportive environment where you’ll learn, contribute, and make an impact.</p>
<p>Your tasks and educational objectives:</p>
<ul>
<li>Work with HR and behavioural data to create structured, analysis-ready datasets for people analytics.</li>
<li>Support the development and testing of agentic AI workflows (including LLM-based tools) that aid HR decision-making.</li>
<li>Help to build and evaluate machine learning models to explore workforce trends, learning behaviours, and engagement.</li>
<li>Together with team members, create dashboards and visualisations that turn complex data into actionable insights for HR and business partners.</li>
<li>Apply modern data workflows using Databricks, GitHub Spaces, and cloud platforms (Azure or AWS).</li>
<li>Collaborate with experienced mentors and participate in small experiments to measure impact and share findings.</li>
</ul>
<p>Who you are:</p>
<ul>
<li>Python programming skills for data processing, modelling, and AI workflows.</li>
<li>Hands-on experience with Generative AI (GenAI) or LLM-based systems (academic projects or internships count).</li>
<li>Familiarity with cloud platforms (Azure or AWS), with a focus on Databricks and GitHub Spaces for collaborative development.</li>
<li>Solid foundation in data science and machine learning.</li>
<li>Strong interest in behavioural science, people analytics, and HR.</li>
<li>Currently enrolled in a Master’s or advanced Bachelor’s program in data science, computer science, cognitive science, psychology, behavioural economics, neuroscience, or a related field.</li>
<li>Curiosity, willingness to learn, and ability to work on-site in Leverkusen.</li>
<li>Fluent English, written and spoken.</li>
</ul>
<p>What we offer:</p>
<p>Our benefits package is flexible, appreciative, and tailored to your lifestyle, because what matters to you, matters to us!</p>
<ul>
<li>For a full-time position, you can expect an attractive salary of € 2,214 gross per month.</li>
<li>Depending on the nature of your job, flexible work arrangements can be made in alignment with your manager.</li>
<li>We support your growth through access to professional development and learning opportunities, such as LinkedIn Learning and our language learning platform Education First.</li>
<li>As one of our perks, our Corporate Benefits program grants you access to sales discounts from more than 150 brands.</li>
<li>We embrace diversity by providing an inclusive work environment in which you are welcomed, supported, and encouraged to bring your whole self to work.</li>
</ul>
<p>Ever feel burnt out by bureaucracy? Us too. That’s why we’re changing the way we work, for higher productivity, faster innovation, and better results. We call it Dynamic Shared Ownership (DSO). Learn more about what DSO will mean for you in your new role here: https://www.bayer.com/en/strategy/strategy</p>
<p>Our Mission &amp; Strategy:</p>
<p>Through Dynamic Shared Ownership, we’re putting an end to the hierarchical model and putting more power in the hands of the innovators and creators at Bayer. Ready to join us? Apply now and start your 6-month learning journey in Leverkusen!</p>
]]></Description>
      <Jobtype>internship</Jobtype>
      <Experiencelevel>entry</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, Generative AI, LLM-based systems, Cloud platforms (Azure or AWS), Databricks, GitHub Spaces, Data science, Machine learning</Skills>
      <Category>Engineering</Category>
      <Industry>Manufacturing</Industry>
      <Employername>Bayer</Employername>
      <Employerlogo>https://logos.yubhub.co/talent.bayer.com.png</Employerlogo>
      <Employerdescription>Bayer is a multinational pharmaceutical and life sciences company based in Germany.</Employerdescription>
      <Employerwebsite>https://talent.bayer.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://talent.bayer.com/careers/job/562949975182354</Applyto>
      <Location>Leverkusen</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>a277a7cc-202</externalid>
      <Title>Staff Frontend Developer - Guest Experience (all genders)</Title>
      <Description><![CDATA[<p><strong>Our Current Itinerary</strong></p>
<p>Are you ready to shape the future of travel tech at scale? We are seeking an exceptional Staff Frontend Developer to drive technical excellence across our entire booking funnel.</p>
<p>We&#39;re among the leading travel tech companies worldwide, growing substantially and sustainably year after year, with a mission to make vacation home booking and hosting decisions stress-free and packed with joy.</p>
<p>Our vibrant team of over 600 talented individuals from 60+ countries shares a passion for cutting-edge technology, constant improvement, and creating exceptional experiences for our 50,000 hosts and 100 million website users each year.</p>
<p><strong>Your Future Team</strong></p>
<p>As a Staff Frontend Engineer, you&#39;ll be the technical authority across all teams in the booking funnel - from the Discovery team&#39;s list pages all the way through the checkout funnel to the Post Booking experience.</p>
<p>You&#39;ll design and implement overarching frontend architecture that scales to handle millions of users, while establishing best practices that elevate the entire engineering department.</p>
<p><strong>Our Tech Stack</strong></p>
<ul>
<li>Core Technologies: TypeScript, ReactJS, NodeJS, Zustand, TailwindCSS, Express, Vite, SSR.</li>
<li>Data Infrastructure: DynamoDB, Redis.</li>
<li>Cloud &amp; DevOps: AWS, Kubernetes, Docker, Jenkins, Git.</li>
<li>Monitoring &amp; Analytics: Sentry, ELK, Grafana, Looker, OpsGenie, and internally developed technologies.</li>
</ul>
<p><strong>Technical Leadership &amp; Strategy</strong></p>
<ul>
<li>Define the technical vision and strategy for the frontend engineers of the GX department, aligning with organizational goals and anticipating industry trends.</li>
<li>Architect scalable, high-availability frontend systems serving 1M+ daily users across the entire booking funnel.</li>
<li>Lead the design and implementation of department-wide technical initiatives that impact conversion rates, customer satisfaction, and technical excellence.</li>
</ul>
<p><strong>Cross-Team Collaboration &amp; Influence</strong></p>
<ul>
<li>Partner with Engineering Managers and Department Leaders to shape the technical roadmap.</li>
<li>Contribute to specifications for large-scale projects, organizing parallel workstreams that reassemble into cohesive launches.</li>
</ul>
<p><strong>Technical Excellence &amp; Innovation</strong></p>
<ul>
<li>Establish, iterate on, and enforce engineering best practices (testing, documentation, architecture) department-wide.</li>
<li>Review code and set quality standards that become the gold standard across teams.</li>
</ul>
<p><strong>Mentorship &amp; Knowledge Leadership</strong></p>
<ul>
<li>Mentor senior developers, helping them grow into technical leaders.</li>
<li>Lead department-wide knowledge sharing initiatives and technical workshops.</li>
</ul>
<p><strong>Your Backpack is Filled with</strong></p>
<ul>
<li>8+ years of frontend development experience with deep expertise in JavaScript (ES6+), TypeScript, and ReactJS.</li>
<li>Proven track record of architecting large-scale frontend applications handling millions of users.</li>
<li>Expert-level proficiency with state management, performance optimization, and modern build tools.</li>
</ul>
<p><strong>Leadership &amp; Strategic Thinking</strong></p>
<ul>
<li>Demonstrated ability to define and execute technical strategies at department or company level.</li>
<li>Experience leading cross-functional initiatives and influencing without direct authority.</li>
</ul>
<p><strong>Business &amp; Domain Knowledge</strong></p>
<ul>
<li>Ability to connect technical decisions to business KPIs and department goals.</li>
<li>Experience working closely with product and business stakeholders at all levels.</li>
</ul>
<p><strong>Our Adventure Includes</strong></p>
<ul>
<li>Strategic Impact: Shape the technical direction of a rapidly growing travel tech leader.</li>
<li>Technical Excellence: Work with cutting-edge technologies and influence architectural decisions.</li>
<li>Leadership Growth: Lead initiatives that impact millions of users and mentor the next generation of engineers.</li>
</ul>
<p><strong>Want to Travel with Us?</strong></p>
<p>Take a peek into our culture on Instagram @lifeatholidu and check out Tech at Holidu to meet the people behind the product.</p>
<p>Apply now and let’s make vacation dreams come true – at scale.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>95.000-125.000€ + VSOPs based on relevant experience and seniority</Salaryrange>
      <Skills>JavaScript, TypeScript, ReactJS, NodeJS, Zustand, TailwindCSS, Express, Vite, SSR, DynamoDB, Redis, AWS, Kubernetes, Docker, Jenkins, Git, Sentry, ELK, Grafana, Looker, OpsGenie</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Holidu Hosts GmbH</Employername>
      <Employerlogo>https://logos.yubhub.co/holidu.jobs.personio.com.png</Employerlogo>
      <Employerdescription>Holidu is a leading travel tech company that provides vacation home booking and hosting services. It has a team of over 600 individuals from 60+ countries.</Employerdescription>
      <Employerwebsite>https://holidu.jobs.personio.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://holidu.jobs.personio.com/job/2247550</Applyto>
      <Location>Munich, Germany</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>f6deb282-e3c</externalid>
      <Title>Senior Backend Developer (all genders)</Title>
      <Description><![CDATA[<p>Join our Host Experience department as a Senior Backend Developer and become part of the team that powers how our hosts&#39; vacation rentals reach the world.</p>
<p>You&#39;ll be working at the core of our distribution engine - where we take tens of thousands of homes and make them bookable on major travel platforms such as Holidu, Booking.com, Airbnb, VRBO, HomeToGo, and Check24.</p>
<p>This team operates in one of the most technically dynamic areas of our product. You will work with systems that synchronize large volumes of updates at high speed and maintain high availability, while integrating with a wide variety of partner APIs - each with its own structure and complexity.</p>
<p>It&#39;s work that demands precision, scalability, and smart engineering decisions, and it plays a crucial role in helping our hosts reach millions of guests worldwide.</p>
<p><strong>Our Tech Stack</strong></p>
<ul>
<li>Backend written in Kotlin and Java 21+ (with Spring Boot), with Gradle.</li>
<li>Deployed as microservices on AWS-hosted Kubernetes cluster (EKS).</li>
<li>Internal and external web applications written with ReactJS.</li>
<li>Event-driven communication between services through EventBridge with SQS / ActiveMQ.</li>
<li>Usage of a diverse set of technologies depending on the use case, such as PostgreSQL, S3, Valkey, ElasticSearch, GraphQL, and many more.</li>
<li>Monitoring with OpenTelemetry, Grafana, Prometheus, ELK, APM, and CloudWatch.</li>
</ul>
<p><strong>Your role in this journey</strong></p>
<ul>
<li>Design, build, evolve, and maintain our services, creating a great user experience for our hosts.</li>
<li>Build a strong understanding of the product, use it to drive initiatives end-to-end, and actively shape the team&#39;s direction - not just execute on it.</li>
<li>Work AI-first: use AI to accelerate not just coding, but data exploration, codebase understanding, technical design, and decision-making - and continuously sharpen how you use these tools.</li>
<li>Ensure our applications are highly scalable, capable of handling tens of thousands of properties and millions of bookings.</li>
<li>Work with data persistence - whether in PostgreSQL, Redis, S3, or new state-of-the-art technologies you help us evaluate.</li>
<li>Ship to production daily - deploying to our AWS Kubernetes cluster is part of the routine, not a special occasion.</li>
<li>Own the reliability of your services - set up monitoring, define SLOs, and drive incident resolution so your team can move fast with confidence.</li>
<li>Collaborate in a supportive, cross-functional team that values knowledge sharing and improving together.</li>
<li>Apply engineering best practices, and stay curious by experimenting with new technologies.</li>
</ul>
<p><strong>Your backpack is filled with</strong></p>
<ul>
<li>A passion for great user experience and drive to deliver world-class products.</li>
<li>Proven track record of delivering product impact through engineering - not just building services, but solving real problems for users.</li>
<li>Experience with Java or Kotlin with Spring is a plus.</li>
<li>Experience with relational databases and deploying apps in cloud environments. NoSQL experience is a plus.</li>
<li>Familiarity with various API types and integration best practices.</li>
<li>Strong problem-solving skills and a team-oriented mindset.</li>
<li>Curiosity for the business side - you want to understand the “why” behind the features.</li>
<li>A love for coding and building high-quality products that make a difference.</li>
<li>High motivation to learn and experiment with new technologies.</li>
</ul>
<p><strong>Our adventure includes</strong></p>
<ul>
<li>Impact: Shape the future of travel with products used by millions of guests and thousands of hosts. At Holidu ideas become products, data drives decisions, and iteration fuels fast learning. Your work matters - and you’ll see the impact.</li>
<li>Learning: Grow professionally in a culture that thrives on curiosity and feedback. You’ll learn from outstanding colleagues, collaborate across disciplines, and benefit from mentorship, and personal learning budgets - with a strong focus on AI.</li>
<li>Great People: Join a team of smart, motivated and international colleagues who challenge and support each other. We celebrate wins and keep our culture fun, ambitious and human. Our customers are guests and hosts - people we can all relate to - making work meaningful and energizing.</li>
<li>Technology: Work in a modern tech environment. You’ll experience the pace of a scale-up combined with the stability of a proven business model, enabling you to build, test, and improve continuously.</li>
<li>Flexibility: Work a hybrid setup with 50% in-office time for collaboration, and spend up to 8 weeks a year from other inspiring locations. You’ll stay connected through regular events and meet-ups across our almost 30 offices.</li>
<li>Perks on Top: Of course, we also offer travel benefits, gym discounts, and other perks to keep you energized - but what truly sets us apart is the chance to grow in a dynamic industry, alongside amazing people, while having fun along the way.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Java, Kotlin, Spring Boot, Gradle, AWS-hosted Kubernetes cluster, ReactJS, EventBridge, SQS, ActiveMQ, PostgreSQL, S3, Valkey, ElasticSearch, GraphQL, OpenTelemetry, Grafana, Prometheus, ELK, APM, CloudWatch</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Holidu Hosts GmbH</Employername>
      <Employerlogo>https://logos.yubhub.co/holidu.jobs.personio.com.png</Employerlogo>
      <Employerdescription>Holidu is a company that powers how vacation rentals reach the world, with tens of thousands of homes bookable on major travel platforms.</Employerdescription>
      <Employerwebsite>https://holidu.jobs.personio.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://holidu.jobs.personio.com/job/2573674</Applyto>
      <Location>Munich, Germany</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>c38c893d-e10</externalid>
      <Title>Werkstudent Environmental and Packaging Management (all genders)</Title>
      <Description><![CDATA[<p>We&#39;re looking for a talented Werkstudent to join our Environmental and Packaging Management team. As a key member of our team, you will play a crucial role in developing and implementing sustainable packaging solutions for our products.</p>
<p>Your main responsibilities will include:</p>
<ul>
<li>Managing packaging data, such as weights and dimensions, in our Product Lifecycle Management (PLM) system.</li>
<li>Reviewing and maintaining packaging lists, including structural and graphical components.</li>
<li>Ensuring the integration of production and logistics data by maintaining palletization and weight data in our internal systems.</li>
<li>Collaborating with our Sustainability team to input data on recycling rates and codes to track progress towards our sustainability goals.</li>
<li>Creating documentation summaries for technical packaging information to share with Procurement and suppliers.</li>
<li>Supporting the definition of processes for central data initiatives that impact the working methods of the packaging team.</li>
</ul>
<p>As a successful candidate, you will have a strong background in packaging technology, packaging development management, or a related field. You will be precise, meticulous, and enjoy organizing complex information. You will also be able to work independently, manage your time effectively, and adapt quickly to new requirements and regulations.</p>
<p>In addition to your technical skills, you will be fluent in German and English, both in writing and speaking. You will also have a strong interest in legal frameworks, such as sustainability, packaging laws, and the new EU Packaging Regulation (PPWR).</p>
<p>We offer a dynamic and supportive work environment, with opportunities for professional growth and development. We value flexibility and collaboration, and we encourage our employees to take ownership of their work and contribute to the success of the company.</p>
<p>If you&#39;re passionate about sustainability, packaging, and innovation, and you&#39;re looking for a challenging and rewarding role, we encourage you to apply.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>working_student</Jobtype>
      <Experiencelevel>entry</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>packaging technology, packaging development management, product lifecycle management, sustainability, recycling, packaging laws, EU Packaging Regulation (PPWR), German, English, complex information organization, time management, adaptability, legal frameworks</Skills>
      <Category>Operations</Category>
      <Industry>Manufacturing</Industry>
      <Employername>tonies GmbH</Employername>
      <Employerlogo>https://logos.yubhub.co/tonies.jobs.personio.com.png</Employerlogo>
      <Employerdescription>Tonies is a leading interactive audio platform for children, with over 10 million sold Tonieboxes and over 125 million sold Tonies worldwide.</Employerdescription>
      <Employerwebsite>https://tonies.jobs.personio.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://tonies.jobs.personio.com/job/2557639</Applyto>
      <Location>Schwäbisch Gmünd</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>8b447835-74a</externalid>
      <Title>Senior DataOps Engineer - Revenue Management (all genders)</Title>
      <Description><![CDATA[<p><strong>Your future team</strong></p>
<p>You&#39;ll be part of our new Dynamic Pricing &amp; Revenue Management team, working alongside a Data Scientist and a Data Analyst. Together, you will work towards one core goal: helping hosts improve occupancy and earnings through a smart, dynamic, and data-driven pricing strategy.</p>
<p><strong>Our Tech Stack</strong></p>
<ul>
<li>Data Storage &amp; Querying: S3, Redshift (with decentralized data sharing), Athena, and DuckDB.</li>
<li>ML &amp; Model Serving: MLflow, SageMaker, and deployment APIs for model lifecycle management.</li>
<li>Cloud &amp; DevOps: Terraform, Docker, Jenkins, and AWS EKS (Kubernetes) for scalable, resilient systems.</li>
<li>Monitoring: ELK, Grafana, Looker, OpsGenie, and in-house tools for full visibility.</li>
<li>Ingestion: Kafka-based event systems and tools like Airbyte and Fivetran for smooth third-party integrations.</li>
<li>Automation &amp; AI: Extensive use of AI tools like Claude, Copilot, and Codex.</li>
</ul>
<p><strong>Your role in this journey</strong></p>
<p>As a DataOps Engineer &ndash; Revenue Management, you&#39;ll be the engineering backbone that enables our Data Scientists to move from experimentation to production. You bridge the gap between data science models and reliable, scalable production systems.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Support model deployment and serving: help deploy pricing and demand models into production, building and maintaining APIs and serving infrastructure.</li>
<li>Build and operate production pipelines: ensure data flows reliably from source to model to output, with proper monitoring and alerting.</li>
<li>Collaborate cross-functionally: work closely with Data Scientists, Analysts, and Engineering teams to turn prototypes into production-ready solutions.</li>
<li>Own infrastructure and tooling: set up and maintain the environments, CI/CD pipelines, and infrastructure that the team depends on.</li>
<li>Ensure operational excellence by implementing monitoring, automated testing, and observability across the team&#39;s production systems.</li>
<li>Migrate and productionize POCs: turn experimental code into robust, maintainable Python applications.</li>
<li>Ensure data quality, consistency, and documentation across revenue management metrics and datasets.</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Impact: Shape the future of travel with products used by millions of guests and thousands of hosts.</li>
<li>Learning: Grow professionally in a culture that thrives on curiosity and feedback.</li>
<li>Great People: Join a team of smart, motivated, and international colleagues who challenge and support each other.</li>
<li>Technology: Work in a modern tech environment.</li>
<li>Flexibility: Work a hybrid setup with 50% in-office time for collaboration, and spend up to 8 weeks a year from other inspiring locations.</li>
<li>Perks on Top: Of course, we also offer travel benefits, gym discounts, and other perks to keep you energized.</li>
</ul>
<p><strong>Experience</strong></p>
<ul>
<li>4+ years of experience in Software Engineering, Data Engineering, DevOps, or MLOps.</li>
<li>Strong hands-on skills in Python; you write clean, production-quality code.</li>
<li>Experience with CI/CD, Docker, and infrastructure-as-code (e.g., Terraform).</li>
<li>Familiarity with cloud platforms (AWS preferred) and deploying services in production.</li>
<li>Exposure to or interest in ML model deployment (MLflow, SageMaker, or similar) is a strong plus.</li>
<li>Desire to learn and use cutting-edge LLM tools and agents to improve your and the entire team&#39;s productivity.</li>
<li>A proactive, hands-on mindset: you take ownership, spot problems, and drive solutions forward.</li>
</ul>
<p><strong>How to apply</strong></p>
<p>If you&#39;re excited about this opportunity, please submit your application on our careers page!</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, CI/CD, Docker, Terraform, Cloud platforms (AWS preferred), ML model deployment (MLflow, SageMaker, or similar), AI tools like Claude, Copilot, and Codex, Data Storage &amp; Querying (S3, Redshift, Athena, DuckDB), ML &amp; Model Serving (MLflow, SageMaker, deployment APIs), Cloud &amp; DevOps (Terraform, Docker, Jenkins, AWS EKS), Monitoring (ELK, Grafana, Looker, OpsGenie, in-house tools), Ingestion (Kafka-based event systems, Airbyte, Fivetran)</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Holidu Hosts GmbH</Employername>
      <Employerlogo>https://logos.yubhub.co/holidu.jobs.personio.com.png</Employerlogo>
      <Employerdescription>Holidu Hosts GmbH is a technology company that provides a platform for hosts to manage their properties and connect with guests.</Employerdescription>
      <Employerwebsite>https://holidu.jobs.personio.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://holidu.jobs.personio.com/job/2597559</Applyto>
      <Location>Munich, Germany</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>7bcb4d82-b90</externalid>
      <Title>Working Student Backend Engineering (all genders)</Title>
      <Description><![CDATA[<p>You will be working as a Working Student in the Account Compliance &amp; Experience (ACE) team, which is responsible for delivering secure and seamless flows for account lifecycle, relationship, and compliance to customers.</p>
<p>As a Working Student, you will contribute to the development of new backend features across the ACE domain, assist with operational tasks, get hands-on with modern AI-assisted development, and support ongoing tech refactoring efforts.</p>
<p>You will work directly alongside senior engineers, take part in real product development, and gradually build ownership over meaningful parts of our codebase.</p>
<p>The ACE team works within Holidu&#39;s broader backend ecosystem, using Java/Kotlin with Spring Boot, PostgreSQL, Redis, and other data stores, as well as AWS services and Jenkins for CI/CD.</p>
<p>You will have the opportunity to attend team planning sessions, architecture discussions, and retrospectives, giving you a real window into how a senior engineering team operates in a high-growth company.</p>
<p>We offer a fair salary, impact, growth, community, flexibility, and fitness opportunities.</p>
<p>You will be required to work ~20 hours per week, with 1-2 days per week in the office in Munich.</p>
<p>You should be currently enrolled in a degree in Computer Science, Software Engineering, or a related field, have a solid understanding of object-oriented programming and basic software design principles, and some hands-on experience with Java or Kotlin.</p>
<p>You should also have familiarity with RESTful APIs and relational databases (SQL), a genuine curiosity for backend systems, and a product-minded attitude.</p>
<p>Excellent communication skills in English are required, and German is a plus but not required.</p>
<p>Bonus points if you have exposure to Spring Boot, cloud platforms (AWS), or any experience with identity/access management concepts.</p>
]]></Description>
      <Jobtype>working_student</Jobtype>
      <Experiencelevel>entry</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Java, Kotlin, Spring Boot, PostgreSQL, Redis, AWS services, Jenkins, CI/CD, RESTful APIs, relational databases (SQL), cloud platforms (AWS), identity/access management concepts</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Holidu Hosts GmbH</Employername>
      <Employerlogo>https://logos.yubhub.co/holidu.jobs.personio.com.png</Employerlogo>
      <Employerdescription>Holidu is a technology company that provides a host platform for property owners and managers.</Employerdescription>
      <Employerwebsite>https://holidu.jobs.personio.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://holidu.jobs.personio.com/job/2605407</Applyto>
      <Location>Munich, Germany</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>80d15de9-aa7</externalid>
      <Title>Senior Data Scientist - Rankings &amp; Recommendations (all genders)</Title>
      <Description><![CDATA[<p>Join our Business Intelligence Department, a multidisciplinary group of Data Scientists, Analysts, and Data Engineers.</p>
<p>You will join a cross-functional Product team, Search Intelligence, which is responsible for optimizing ranking and recommendations for users visiting our website.</p>
<p>You&#39;ll be part of the broader Data Science team, which operates across cross-functional domain teams - giving you access to shared knowledge, best practices, and collaboration opportunities beyond your domain.</p>
<p>You’ll collaborate daily with Data Engineers, Analysts, Product Managers, and Back-end Engineers.</p>
<p>You’ll report to the Team Lead, Data Science.</p>
<p>Together, we turn data into actionable insights and innovative technology that powers how millions of guests find and book their perfect holiday home.</p>
<p><strong>Our Tech Stack</strong></p>
<ul>
<li>Python • Airflow • dbt • AWS (SageMaker, Redshift, Athena) • MLflow</li>
</ul>
<p><strong>The Ranking challenge at Holidu</strong></p>
<p>Holidu lists over 4 million vacation rental properties. Our ranking and personalization systems determine which of them our 70+ million annual users see, directly impacting search conversion and business results.</p>
<p>What&#39;s live today:</p>
<ul>
<li>Multi-stage ranking pipeline: Reinforcement-learning-based cold ranking, contextual re-ranking, and personalized recommendations.</li>
<li>Cold-start models for new properties with limited behavioral data.</li>
<li>Personalized recommendations based on user browsing patterns.</li>
</ul>
<p>Some of the hard problems we&#39;re solving:</p>
<ul>
<li>Multi-objective optimization: Balancing user relevance, conversion probability, and business value.</li>
<li>Personalization without history: Most users are anonymous or first-time visitors.</li>
<li>Cold-start: A significant share of our inventory is new each quarter. How do we surface quality properties before we have behavioral data?</li>
</ul>
<p><strong>Your role in this journey</strong></p>
<p>You&#39;ll shape the ranking and recommendation systems that millions of guests rely on to find their holiday home. With access to extensive datasets and modern ML infrastructure, you&#39;ll work end-to-end - from identifying opportunities and prototyping new approaches to shipping models to production and measuring their impact.</p>
<ul>
<li>Develop high-impact models and improvements for our ranking, recommendation, and personalization systems - with the freedom to explore new, creative approaches.</li>
<li>Take models from conception to production, continuously monitor their performance, and iterate to enhance accuracy and efficiency.</li>
<li>Design and run A/B tests as a core part of ranking development; success is measured by successful experiments per quarter and time-to-decision.</li>
<li>Collaborate closely with Product Managers and Software Engineers to identify, prioritize, and ship ranking improvements.</li>
<li>Ensure model reliability in production, measured by online/offline agreement, model and data drift KPIs, latency and uptime SLAs, and automated monitoring coverage.</li>
<li>Advance our MLOps practices with CI/CD pipelines, retraining workflows, lineage tracking, and documentation.</li>
<li>Demonstrate leadership in data science projects by driving technical direction, scoping initiatives, and guiding the team&#39;s prioritization and project execution.</li>
</ul>
<p><strong>Your backpack is filled with</strong></p>
<ul>
<li>5+ years of experience as a Data Scientist, with a proven track record of applying ML models to solve real business problems.</li>
<li>Experience working on ranking models or recommender systems is a strong advantage.</li>
<li>A degree in Machine Learning, Computer Science, Mathematics, Physics, or a related field.</li>
<li>Strong foundations in statistics, predictive modeling, and machine learning techniques, with hands-on experience using Python and SQL.</li>
<li>Experience with Airflow and dbt is a plus.</li>
<li>Solid understanding of business operations and the ability to translate data insights into clear, actionable outcomes.</li>
<li>A collaborative mindset and enthusiasm for using data to build world-class products that make a real impact.</li>
<li>AI Proficiency: You are comfortable using AI to enhance coding, planning, and monitoring. This includes successfully integrating AI tools (such as Claude Code, Codex, Copilot, etc.) into your workflow and teaching others to use them efficiently.</li>
</ul>
<p><strong>Our adventure includes</strong></p>
<ul>
<li>Impact: Shape the future of travel with products used by millions of guests and thousands of hosts. At Holidu ideas become products, data drives decisions, and iteration fuels fast learning. Your work matters - and you’ll see the impact.</li>
<li>Learning: Grow professionally in a culture that thrives on curiosity and feedback. You’ll learn from outstanding colleagues, collaborate across disciplines, and benefit from mentorship and personal learning budgets - with a strong focus on AI.</li>
<li>Great People: Join a team of smart, motivated and international colleagues who challenge and support each other. We celebrate wins and keep our culture fun, ambitious and human. Our customers are guests and hosts - people we can all relate to - making work meaningful and energizing.</li>
<li>Technology: Work in a modern tech environment. You’ll experience the pace of a scale-up combined with the stability of a proven business model, enabling you to build, test, and improve continuously.</li>
<li>Flexibility: Work a hybrid setup with 50% in-office time for collaboration, and spend up to 8 weeks a year from other inspiring locations. You’ll stay connected through regular events and meet-ups across our almost 30 offices.</li>
<li>Perks on Top: Of course, we also offer travel benefits, gym discounts, and other perks to keep you energized - but what truly sets us apart is the chance to grow in a dynamic industry, alongside amazing people, while having fun along the way.</li>
</ul>
<p>Need a sneak peek? Check out the adventure that awaits you on Instagram @lifeatholidu and dive straight into the world of Tech at Holidu for more insights!</p>
<p><strong>Want to travel with us?</strong></p>
<p>Apply online on our careers page! Your first travel contact will be Lucia from HR.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, Airflow, dbt, AWS, MLflow, Machine Learning, Statistics, Predictive Modeling, SQL, AI, Data Science, Ranking Models, Recommender Systems, Collaboration, Communication</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Holidu Hosts GmbH</Employername>
      <Employerlogo>https://logos.yubhub.co/holidu.jobs.personio.com.png</Employerlogo>
      <Employerdescription>Holidu is a leading online marketplace for vacation rentals, listing over 4 million properties and serving 70+ million annual users.</Employerdescription>
      <Employerwebsite>https://holidu.jobs.personio.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://holidu.jobs.personio.com/job/2413808</Applyto>
      <Location>Munich, Germany</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>a8d34aff-3e5</externalid>
      <Title>Applied AI Engineer, Global Public Sector</Title>
      <Description><![CDATA[<p>We&#39;re hiring Applied AI Engineers to build custom end-to-end AI applications for our public sector clients using the latest developments in the field of AI.</p>
<p>You will partner with public sector clients to deeply understand their challenges and define AI-driven solutions.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Building and deploying end-to-end AI applications into production leveraging latest developments from the biggest AI labs, and open source models</li>
<li>Collaborating with cross-functional teams, including data annotation specialists, to create high-quality training datasets</li>
<li>Designing and maintaining robust evaluation frameworks to ensure the reliability and effectiveness of AI models</li>
<li>Participating in customer engagements, including occasional travel (approximately two weeks per quarter)</li>
</ul>
<p>Ideally you&#39;d have:</p>
<ul>
<li>A strong engineering background, with a Bachelor’s degree in Computer Science, Mathematics, or a related quantitative field (or equivalent practical experience)</li>
<li>7+ years of post-graduation engineering experience, with demonstrated proficiency in languages such as Python, TypeScript/JavaScript, Java, or C++</li>
<li>2+ years of experience applying AI/ML in production environments, such as deploying deep learning solutions, building generative/agentic AI applications, or setting up evaluation pipelines</li>
<li>Familiarity with cloud-based machine learning tools and platforms (e.g. AWS, GCP, Azure)</li>
<li>Strong problem-solving skills, with a data-driven approach to iterating on machine learning models and datasets</li>
<li>Excellent written and verbal communication skills to collaborate effectively in a cross-functional environment</li>
</ul>
<p>Nice to haves:</p>
<ul>
<li>Experience working at a startup, particularly as a founding engineer</li>
<li>Experience building and deploying large-scale AI solutions</li>
<li>Strong written and verbal communication skills to operate in a cross-functional team environment</li>
<li>Proficiency in Arabic (if focused on language models)</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, TypeScript/JavaScript, Java, C++, cloud-based ML platforms (AWS, GCP, Azure), AI/ML in production, generative/agentic AI applications, evaluation pipelines, large-scale AI solutions, cross-functional communication, Arabic</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions.</Employerdescription>
      <Employerwebsite>https://scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4413992005</Applyto>
      <Location>Doha, Qatar; London, UK</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>1d67909d-97e</externalid>
      <Title>Senior Machine Learning Engineer - Model Evaluations, Public Sector</Title>
      <Description><![CDATA[<p>The Public Sector ML team at Scale deploys advanced AI systems, including LLMs, agentic models, and multimodal pipelines, into mission-critical government environments. We build evaluation frameworks that ensure these models operate reliably, safely, and effectively under real-world constraints.</p>
<p>As an ML Engineer, you will design, implement, and scale automated evaluation pipelines that help customers trust and operationalize advanced AI systems across defense, intelligence, and federal missions.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Developing and maintaining automated evaluation pipelines for ML models across functional, performance, robustness, and safety metrics, including LLM-judge–based evaluations.</li>
<li>Designing test datasets and benchmarks to measure generalization, bias, explainability, and failure modes.</li>
<li>Building evaluation frameworks for LLM agents, including infrastructure for scenario-based and environment-based testing.</li>
<li>Conducting comparative analyses of model architectures, training procedures, and evaluation outcomes.</li>
<li>Implementing tools for continuous monitoring, regression testing, and quality assurance for ML systems.</li>
<li>Designing and executing stress tests and red-teaming workflows to uncover vulnerabilities and edge cases.</li>
<li>Collaborating with operations teams and subject matter experts to produce high-quality evaluation datasets.</li>
</ul>
<p>This role requires an active security clearance or the ability to obtain a security clearance.</p>
<p>Ideal candidates will have experience in computer vision, deep learning, reinforcement learning, or NLP in production settings, strong programming skills in Python, and background in algorithms, data structures, and object-oriented programming.</p>
<p>Nice to have qualifications include graduate degree in CS, ML, or AI, cloud experience (AWS, GCP), and model deployment experience.</p>
<p>Compensation packages at Scale for eligible roles include base salary, equity, and benefits. The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position, determined by work location and additional factors, including job-related skills, experience, interview performance, and relevant education or training.</p>
<p>Scale employees in eligible roles are also granted equity-based compensation, subject to Board of Director approval. Your recruiter can share more about the specific salary range for your preferred location during the hiring process, and confirm whether the hired role will be eligible for equity grant.</p>
<p>You’ll also receive benefits including, but not limited to: Comprehensive health, dental and vision coverage, retirement benefits, a learning and development stipend, and generous PTO.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$240,450-$300,300 USD (San Francisco, New York, Seattle); $216,300-$269,850 USD (Washington DC, Texas, Colorado, Hawaii)</Salaryrange>
      <Skills>Python, TensorFlow, PyTorch, Computer Vision, Deep Learning, Reinforcement Learning, NLP, Algorithms, Data Structures, Object-Oriented Programming, Graduate Degree in CS, ML, or AI, Cloud Experience (AWS, GCP), Model Deployment Experience</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions, providing high-quality data and full-stack technologies to power leading models.</Employerdescription>
      <Employerwebsite>https://scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4631848005</Applyto>
      <Location>San Francisco, CA; St. Louis, MO; New York, NY; Washington, DC</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>0d93c05c-6c2</externalid>
      <Title>Trade Compliance Counsel</Title>
      <Description><![CDATA[<p>We are hiring a Trade Compliance Counsel to help build and mature our sanctions and export controls compliance programs at Anthropic. In this role, you will provide strategic legal advice on U.S. and international trade compliance matters, including economic sanctions administered by the Office of Foreign Assets Control (OFAC) and export controls administered by the Bureau of Industry and Security (BIS) and other regulatory bodies.</p>
<p>You will play a critical role in shaping Anthropic&#39;s approach to trade compliance as we scale, advising on the development of robust compliance programs that support our mission to develop AI systems that are safe, beneficial, and understandable.</p>
<p>Responsibilities:</p>
<ul>
<li>Provide legal counsel on U.S. and international sanctions and export control laws and regulations, including OFAC sanctions programs, the Export Administration Regulations (EAR), the International Traffic in Arms Regulations (ITAR), and other applicable trade control regimes</li>
<li>Advise on the application of sanctions and export controls to Anthropic&#39;s products, services, business operations, and commercial transactions</li>
<li>Support the development and maturation of Anthropic&#39;s sanctions and export controls compliance programs, including policies, procedures, and controls</li>
<li>Partner with Integrity, Compliance, Security, Finance, People, and Operations teams to implement scalable trade compliance solutions as the company grows</li>
<li>Advise on sanctions and export control considerations in customer and third-party transactions, including contract negotiations and due diligence</li>
<li>Support customer-facing teams in addressing trade compliance queries from commercial partners and enterprise customers</li>
<li>Monitor legal and regulatory developments in sanctions and export controls, and advise leadership on their implications for Anthropic&#39;s business</li>
<li>Manage engagement and communications with relevant government agencies, including BIS, OFAC, and other authorities as needed</li>
<li>Support internal investigations and voluntary disclosures relating to potential trade compliance matters</li>
<li>Develop and deliver training to employees on sanctions and export control requirements</li>
<li>Advise on the intersection of trade compliance and national security law in coordination with Anthropic’s national security legal counsel, including on matters where sanctions or export control considerations overlap with broader security concerns relating to frontier AI</li>
</ul>
<p>You may be a good fit if you have:</p>
<ul>
<li>A JD and active membership in at least one U.S. state bar</li>
<li>At least 10-12 years of experience advising on U.S. sanctions and export control laws, with a strong understanding of OFAC sanctions programs, the EAR, and related regulatory frameworks</li>
<li>Experience advising technology companies on trade compliance matters, particularly in areas such as cloud computing, AI, semiconductors, or other emerging technologies subject to U.S. export controls</li>
<li>Experience building or maturing trade compliance programs in a high-growth or scaling environment</li>
<li>Ability to provide practical, business-oriented legal advice in a fast-paced environment, including in areas without established precedent</li>
<li>Experience engaging with government authorities such as BIS, OFAC, and DDTC, including on licensing, interpretive guidance, or enforcement matters</li>
<li>Ability to excel at cross-functional collaboration and effectively communicate complex legal concepts to technical, compliance, and business teams</li>
<li>Comfort with ambiguity and sound judgment in novel or evolving regulatory areas</li>
<li>Passion for responsible AI development and Anthropic&#39;s mission</li>
</ul>
<p>Strong candidates may have:</p>
<ul>
<li>A mix of law firm and in-house experience</li>
<li>Experience at a hyper-scaling tech company or in a fast-paced environment</li>
<li>Familiarity with non-U.S. export control and sanctions regimes (e.g., EU, UK)</li>
<li>Experience at, or working closely with, OFAC, BIS, DDTC, or other government agencies responsible for administering trade controls</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$320,000-$385,000 USD</Salaryrange>
      <Skills>U.S. sanctions and export control laws, OFAC sanctions programs, Export Administration Regulations (EAR), International Traffic in Arms Regulations (ITAR), Trade compliance, National security law, Government regulations, Compliance programs, Risk management, Business operations, Commercial transactions, Contract negotiations, Due diligence, Customer-facing teams, Internal investigations, Voluntary disclosures, Employee training, Complex legal concepts, Cross-functional collaboration, Communication skills, Ambiguity tolerance, Sound judgment</Skills>
      <Category>Legal</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a company developing AI systems. It has a growing team of researchers, engineers, and policy experts.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5110558008</Applyto>
      <Location>San Francisco, CA | New York City, NY | Washington, DC</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>3cc878fa-5d1</externalid>
      <Title>Infrastructure Software Engineer, Enterprise GenAI</Title>
      <Description><![CDATA[<p>We are seeking a strong engineer to join our team and help us build and scale our core infrastructure in a fast-paced environment. The ideal candidate will have a strong understanding of software engineering principles and practices, as well as experience with large-scale distributed systems.</p>
<p>You will implement solutions across multiple cloud providers (GCP, Azure, AWS) for customers in diverse, highly-regulated industries like healthcare, telecom, finance, and retail.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Architecting multi-cloud systems and abstractions to allow the SGP platform to run on top of existing Cloud providers</li>
<li>Implementing custom integrations between Scale AI&#39;s platform and customer data environments (cloud platforms, data warehouses, internal APIs)</li>
<li>Collaborating with platform, product teams and our customers directly to develop and implement innovative infrastructure that scales to meet evolving needs</li>
<li>Delivering experiments at a high velocity and level of quality to engage our customers</li>
</ul>
<p>Requirements include:</p>
<ul>
<li>4+ years of full-time engineering experience, post-graduation</li>
<li>Experience scaling products at hyper growth startups</li>
<li>Experience tinkering with or productizing LLMs, vector databases, and other emerging AI technologies</li>
<li>Proficiency in Python or JavaScript/TypeScript, and SQL</li>
<li>Experience with Kubernetes</li>
<li>Experience with major cloud providers (AWS, Azure, GCP)</li>
<li>Excellent communication skills with the ability to explain technical concepts to both technical and non-technical audiences</li>
</ul>
<p>Compensation packages at Scale for eligible roles include base salary, equity, and benefits.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$179,400-$224,250 USD</Salaryrange>
      <Skills>Python, Javascript/Typescript, SQL, Kubernetes, GCP, Azure, AWS</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions.</Employerdescription>
      <Employerwebsite>https://www.scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4665557005</Applyto>
      <Location>San Francisco, CA; New York, NY</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>94999453-111</externalid>
      <Title>Senior Full-Stack Software Engineer, (Forward Deployed), GPS</Title>
      <Description><![CDATA[<p>Scale&#39;s rapidly growing Global Public Sector team is focused on using AI to address critical challenges facing the public sector around the world.</p>
<p>Our core work consists of creating custom AI applications that will impact millions of citizens, generating high-quality training data for custom LLMs, and upskilling and advisory services to spread the impact of AI.</p>
<p>As a Full Stack Software Engineer (Forward Deployed), you&#39;ll collaborate directly with public sector counterparts to quickly build full-stack AI applications that solve their most pressing challenges and achieve meaningful impact for citizens.</p>
<p>At Scale, we&#39;re not just building AI solutions; we&#39;re enabling the public sector to transform their operations and better serve citizens through cutting-edge technology.</p>
<p>If you&#39;re ready to shape the future of AI in the public sector and be a founding member of our team, we&#39;d love to hear from you.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Partner with public sector clients to scope, collect feedback and implement solutions for complex problems, including spending up to two weeks per month in client offices for feedback and delivery.</li>
<li>Architect production-grade applications that integrate AI models with full-stack frameworks, managing everything from interactive UIs to backend APIs and systems.</li>
<li>Deploy and manage infrastructure within cloud environments, ensuring the highest levels of system integrity, security, scalability, and long-term reliability.</li>
<li>Contribute to core platform features designed to be reused across diverse international client use cases.</li>
<li>Partner with design, product, and data teams to build robust applications aligned with the broader technical architecture.</li>
</ul>
<p><strong>Ideal Candidate</strong></p>
<ul>
<li>Bachelor&#39;s degree in Computer Science or a related quantitative field</li>
<li>5+ years of post-graduation, full-stack engineering experience with demonstrated proficiency in React (required), TypeScript, Next.js, Python, Node.js, and PostgreSQL or MongoDB, plus hands-on experience with Docker, Kubernetes, and Azure/AWS/GCP.</li>
<li>Proven ability to architect scalable, production-grade applications with a strong handle on cloud environments and infrastructure health.</li>
<li>Experience working directly within customer infrastructure to deploy, maintain, and troubleshoot complex, end-to-end solutions.</li>
<li>A self-starting approach with the technical maturity to navigate ambiguous requirements and deliver reliable software.</li>
<li>Disciplined async communication practices that reduce communication friction</li>
</ul>
<p><strong>Nice to Haves</strong></p>
<ul>
<li>Proficiency in Arabic</li>
<li>Past experience working in a forward deployed engineer / dedicated customer engineer role</li>
<li>Experience working cross-functionally with operations</li>
<li>Experience building solutions with LLMs and a deep understanding of the overall Gen AI landscape</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>React, TypeScript, Next.js, Python, Node.js, PostgreSQL, MongoDB, Docker, Kubernetes, Azure, AWS, GCP</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions.</Employerdescription>
      <Employerwebsite>https://scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4676608005</Applyto>
      <Location>Dubai, UAE</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>61e346b2-915</externalid>
      <Title>Sr. Software Engineer, Inference</Title>
      <Description><![CDATA[<p>Our Inference team is responsible for building and maintaining the critical systems that serve Claude to millions of users worldwide. We bring Claude to life by serving our models via the industry&#39;s largest compute-agnostic inference deployments. We are responsible for the entire stack from intelligent request routing to fleet-wide orchestration across diverse AI accelerators.</p>
<p>The team has a dual mandate: maximizing compute efficiency to serve our explosive customer growth, while enabling breakthrough research by giving our scientists the high-performance inference infrastructure they need to develop next-generation models. We tackle complex, distributed systems challenges across multiple accelerator families and emerging AI hardware running in multiple cloud platforms.</p>
<p>Strong candidates may also have experience with:</p>
<ul>
<li>High-performance, large-scale distributed systems</li>
<li>Implementing and deploying machine learning systems at scale</li>
<li>Load balancing, request routing, or traffic management systems</li>
<li>LLM inference optimization, batching, and caching strategies</li>
<li>Kubernetes and cloud infrastructure (AWS, GCP)</li>
<li>Python or Rust</li>
</ul>
<p>You may be a good fit if you:</p>
<ul>
<li>Have significant software engineering experience, particularly with distributed systems</li>
<li>Are results-oriented, with a bias towards flexibility and impact</li>
<li>Pick up slack, even if it goes outside your job description</li>
<li>Want to learn more about machine learning systems and infrastructure</li>
<li>Thrive in environments where technical excellence directly drives both business results and research breakthroughs</li>
<li>Care about the societal impacts of your work</li>
</ul>
<p>Representative projects across the org:</p>
<ul>
<li>Designing intelligent routing algorithms that optimize request distribution across thousands of accelerators</li>
<li>Autoscaling our compute fleet to dynamically match supply with demand across production, research, and experimental workloads</li>
<li>Building production-grade deployment pipelines for releasing new models to millions of users</li>
<li>Integrating new AI accelerator platforms to maintain our hardware-agnostic competitive advantage</li>
<li>Contributing to new inference features (e.g., structured sampling, prompt caching)</li>
<li>Supporting inference for new model architectures</li>
<li>Analyzing observability data to tune performance based on real-world production workloads</li>
<li>Managing multi-region deployments and geographic routing for global customers</li>
</ul>
<p>Deadline to apply: None. Applications will be reviewed on a rolling basis.</p>
<p>The annual compensation range for this role is £225,000-£325,000 GBP.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>£225,000-£325,000 GBP</Salaryrange>
<Skills>Large-scale distributed systems, Machine learning systems at scale, Load balancing, Request routing, Traffic management, LLM inference optimization, Batching and caching strategies, Kubernetes, AWS, GCP, Python, Rust</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation focused on creating reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5152348008</Applyto>
      <Location>London, UK</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>740da2af-174</externalid>
      <Title>Security Engineer, Detection &amp; Response</Title>
      <Description><![CDATA[<p>We are seeking a Senior Security Engineer with a specialty in Detection and Incident Response to join our Security Engineering team. This role sits at the intersection of security operations and software engineering, requiring you to investigate incidents and build the systems that detect, contain, and prevent them.</p>
<p>You will design and ship high-precision detections across cloud services and enterprise SaaS, develop automation that shortens response timelines, and mature the telemetry pipelines that make it all possible. Your ability to write production-quality code is just as important as your ability to triage an alert.</p>
<p>Responsibilities:</p>
<ul>
<li>Engineer, test, and deploy detection logic across cloud and enterprise environments, treating detections as software with version control, peer review, and measurable performance.</li>
<li>Build and maintain incident response automation, runbooks, and tooling that reduce containment timelines without sacrificing developer velocity.</li>
<li>Mature telemetry pipelines through improved schema design, normalization, enrichment, and quality checks that reduce false positives and increase signal fidelity.</li>
<li>Perform digital incident investigations to identify and contain potential security breaches.</li>
<li>Conduct digital forensics and malware analysis to understand attack vectors and adversary methodologies.</li>
<li>Integrate alerting with messaging and ticketing systems to enable fast, traceable response workflows.</li>
<li>Partner cross-functionally with IT, security, and engineering teams to harden identity and access patterns, close logging and forensics gaps, and implement maintainable guardrails that scale with the organisation.</li>
<li>Utilize threat intelligence platforms to improve hunting, detection, and response workflows.</li>
<li>Clearly explain the significance and impact of incidents, providing actionable recommendations to both technical and non-technical stakeholders.</li>
</ul>
<p>Ideal Candidate:</p>
<ul>
<li>5+ years of experience in Detection Engineering, Incident Response, or Security Operations, with a strong emphasis on building and shipping security tooling and automation.</li>
<li>Proficiency in at least one programming language (e.g., Python, Go) and comfort writing production-grade code, not just scripts.</li>
<li>Hands-on experience designing or improving detection pipelines, SIEM content, and alerting workflows in cloud-native environments.</li>
<li>Practical experience with SIEM, EDR, and SOAR tools, with a preference for candidates who have built integrations or extended these platforms programmatically.</li>
<li>Strong understanding of modern cyber threats, common attack techniques, and adversary TTPs.</li>
<li>Familiarity with digital forensics tools and malware analysis techniques.</li>
<li>Experience with cloud-native environments (e.g., AWS, GCP, Azure) and the security telemetry those environments generate.</li>
<li>Exposure to threat intelligence platforms and integrating intel into detection and investigation workflows.</li>
<li>Strong communication skills, with the ability to translate complex security findings into clear business impact.</li>
<li>Relevant security certifications (e.g., GCIH, GCFA, GCIA, CISSP, GDSA) are a plus.</li>
</ul>
<p>Compensation packages at Scale for eligible roles include base salary, equity, and benefits. The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position, determined by work location and additional factors, including job-related skills, experience, interview performance, and relevant education or training. Scale employees in eligible roles are also granted equity based compensation, subject to Board of Director approval. Your recruiter can share more about the specific salary range for your preferred location during the hiring process, and confirm whether the hired role will be eligible for equity grant. You’ll also receive benefits including, but not limited to: Comprehensive health, dental and vision coverage, retirement benefits, a learning and development stipend, and generous PTO. Additionally, this role may be eligible for additional benefits such as a commuter stipend.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$237,600-$297,000 USD</Salaryrange>
<Skills>Detection Engineering, Incident Response, Security Operations, Cloud Services, Enterprise SaaS, Automation, Telemetry Pipelines, Digital Forensics, Malware Analysis, Threat Intelligence, SIEM, EDR, SOAR, Cloud-Native Environments (AWS, GCP, Azure), Python, Go, Detection Pipelines, Alerting Workflows, Adversary TTPs, Communication Skills, Security Certifications (GCIH, GCFA, GCIA, CISSP, GDSA)</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions, providing high-quality data and full-stack technologies.</Employerdescription>
      <Employerwebsite>https://scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4684073005</Applyto>
      <Location>New York, NY; San Francisco, CA; Seattle, WA; Washington, DC</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>a400e696-2d2</externalid>
      <Title>Staff Software Engineer, Enterprise GenAI</Title>
      <Description><![CDATA[<p>We&#39;re seeking a strong engineer to join our team and help us build and scale our product in a fast-paced environment. As a Staff Software Engineer, you will own large new areas within our product, working across backend, frontend, and interacting with LLMs and ML models. You will solve hard engineering problems in scalability and reliability.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Delivering experiments at a high velocity and level of quality to engage our customers</li>
<li>Working across the entire product lifecycle from conceptualization through production</li>
<li>Being able and willing to multi-task and learn new technologies quickly</li>
</ul>
<p>Ideally, you&#39;d have:</p>
<ul>
<li>7+ years of full-time engineering experience, post-graduation</li>
<li>Experience scaling products at hyper growth startups</li>
<li>Experience tinkering with or productizing LLMs, vector databases, and other emerging AI technologies</li>
<li>Proficiency in Python or JavaScript/TypeScript, and SQL</li>
<li>Experience with Kubernetes</li>
<li>Experience with major cloud providers (AWS, Azure, GCP)</li>
</ul>
<p>Compensation packages at Scale for eligible roles include base salary, equity, and benefits. The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position, determined by work location and additional factors, including job-related skills, experience, interview performance, and relevant education or training.</p>
<p>You’ll also receive benefits including, but not limited to: Comprehensive health, dental and vision coverage, retirement benefits, a learning and development stipend, and generous PTO. Additionally, this role may be eligible for additional benefits such as a commuter stipend.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$248,400-$310,500 USD</Salaryrange>
      <Skills>Python, Javascript/Typescript, SQL, Kubernetes, AWS, Azure, GCP, LLMs, vector databases</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops and provides AI systems for critical decision-making. It offers products and technologies for building, deploying, and overseeing AI applications.</Employerdescription>
      <Employerwebsite>https://scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4569678005</Applyto>
      <Location>San Francisco, CA; New York, NY</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>bd7327f8-fcf</externalid>
      <Title>Staff Software Engineer, Full-Stack - Enterprise Gen AI</Title>
      <Description><![CDATA[<p>We&#39;re looking for a frontend-focused full-stack engineer to help build AI-powered applications that redefine enterprise workflows and push the boundaries of interactive AI. As a staff software engineer, you&#39;ll work on a mix of cutting-edge customer-facing AI applications and internal SaaS products. Our engineering team powers projects like TIME&#39;s Person of the Year AI experience, where our AI technology helped shape one of the most iconic features in media. You&#39;ll also contribute to Scale&#39;s GenAI Platform (SGP), a powerful system that enables businesses to build and deploy AI agents at scale.</p>
<p>Your responsibilities will include:</p>
<ul>
<li>Building and enhancing user-facing AI applications for major enterprise customers, including high-profile media and Fortune 500 companies</li>
<li>Developing and refining features for Scale&#39;s GenAI Platform, empowering businesses to build, deploy, and manage AI-driven agents</li>
<li>Designing, building, and optimizing polished, high-performance UIs using Next.js, React, TypeScript, and Tailwind</li>
<li>Working closely with product managers, designers, and AI/ML teams to create seamless, intuitive, and impactful user experiences</li>
<li>Integrating frontend applications with backend services, working with APIs, authentication systems, and cloud-based infrastructure</li>
</ul>
<p>In this role, you&#39;ll have the opportunity to shape the future of AI-powered user experiences, working on projects that impact millions of users while developing tools that empower businesses to deploy AI at scale.</p>
<p>The base salary range for this full-time position in our hub locations of San Francisco, New York, or Seattle is $248,400-$310,500 USD. Compensation packages at Scale include base salary, equity, and benefits. You&#39;ll also receive benefits including, but not limited to: Comprehensive health, dental and vision coverage, retirement benefits, a learning and development stipend, and generous PTO.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
<Salaryrange>$248,400-$310,500 USD</Salaryrange>
      <Skills>Next.js, React, TypeScript, Tailwind, AI/ML, APIs, Authentication systems, Cloud-based infrastructure, FastAPI, PostgreSQL, GraphQL, AWS, Azure, GCP, Data-rich web platforms, Interactive AI applications, Agent-based systems</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions, providing high-quality data and full-stack technologies.</Employerdescription>
      <Employerwebsite>https://scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4529529005</Applyto>
      <Location>New York, NY; San Francisco, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>44975b06-cb1</externalid>
      <Title>Senior Full-Stack Software Engineer, (Forward Deployed), GPS</Title>
      <Description><![CDATA[<p>We&#39;re seeking a Senior Full-Stack Software Engineer to join our Global Public Sector team. As a forward-deployed engineer, you&#39;ll collaborate directly with public sector counterparts to build full-stack, AI applications that solve critical challenges and achieve meaningful impact for citizens.</p>
<p>Our core work consists of creating custom AI applications, generating high-quality training data for custom LLMs, and upskilling and advisory services to spread the impact of AI.</p>
<p>You&#39;ll partner with public sector clients to scope, collect feedback, and implement solutions for complex problems. You&#39;ll also architect production-grade applications that integrate AI models with full-stack frameworks, manage infrastructure within cloud environments, and contribute to core platform features.</p>
<p>Ideally, you&#39;ll have a Bachelor&#39;s degree in Computer Science or a related quantitative field, 5+ years of full-stack engineering experience, proficiency in React, TypeScript, Next.js, Python, Node.js, and PostgreSQL or MongoDB, and hands-on experience with Docker, Kubernetes, and Azure/AWS/GCP.</p>
<p>We&#39;re looking for a self-starting approach with the technical maturity to navigate ambiguous requirements and deliver reliable software. You&#39;ll also need to drive async communication practices that reduce communication friction.</p>
<p>If you&#39;re ready to shape the future of AI in the public sector and be a founding member of our team, we&#39;d love to hear from you.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>React, TypeScript, Next.js, Python, Node.js, PostgreSQL, MongoDB, Docker, Kubernetes, Azure, AWS, GCP</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions.</Employerdescription>
      <Employerwebsite>https://scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4673310005</Applyto>
      <Location>London, UK</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>bd00b53a-6fa</externalid>
      <Title>Software Engineer, Enterprise AI</Title>
      <Description><![CDATA[<p>We are seeking a strong engineer to join our team and help us build and scale our product in a fast-paced environment. The ideal candidate will have a strong understanding of software engineering principles and practices, as well as experience with large-scale distributed systems.</p>
<p>You will be responsible for owning large new areas within our product, working across backend, frontend, and interacting with LLMs and ML models. You will solve hard engineering problems in scalability and reliability.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Owning large new areas within our product</li>
<li>Working across backend, frontend, and interacting with LLMs and ML models</li>
<li>Delivering experiments at a high velocity and level of quality to engage our customers</li>
<li>Working across the entire product lifecycle from conceptualization through production</li>
</ul>
<p>Ideally, you&#39;d have:</p>
<ul>
<li>4+ years of full-time engineering experience, post-graduation</li>
<li>Experience scaling products at hyper growth startups</li>
<li>Experience tinkering with or productizing LLMs, vector databases, and other emerging AI technologies</li>
<li>Proficiency in Python or JavaScript/TypeScript, and SQL</li>
<li>Experience with Kubernetes</li>
<li>Experience with major cloud providers (AWS, Azure, GCP)</li>
</ul>
<p>Compensation packages at Scale for eligible roles include base salary, equity, and benefits. The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position, determined by work location and additional factors, including job-related skills, experience, interview performance, and relevant education or training.</p>
<p>You’ll also receive benefits including, but not limited to: Comprehensive health, dental and vision coverage, retirement benefits, a learning and development stipend, and generous PTO. Additionally, this role may be eligible for additional benefits such as a commuter stipend.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$179,400-$224,250 USD</Salaryrange>
      <Skills>Python, Javascript/Typescript, SQL, Kubernetes, AWS, Azure, GCP, LLMs, vector databases, AI technologies</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions.</Employerdescription>
      <Employerwebsite>https://scale.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4513943005</Applyto>
      <Location>New York, NY; San Francisco, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>45fc6ed2-285</externalid>
      <Title>Senior Full-Stack Software Engineer, (Forward Deployed), GPS</Title>
      <Description><![CDATA[<p>We&#39;re seeking a Senior Full-Stack Software Engineer to join our Global Public Sector team. As a forward-deployed engineer, you&#39;ll collaborate directly with public sector counterparts to build full-stack AI applications that solve their most pressing challenges.</p>
<p>Our core work consists of creating custom AI applications, generating high-quality training data for custom LLMs, and providing upskilling and advisory services to spread the impact of AI.</p>
<p>You&#39;ll partner with public sector clients to scope, collect feedback, and implement solutions for complex problems. You&#39;ll also architect production-grade applications that integrate AI models with full-stack frameworks, manage infrastructure within cloud environments, and contribute to core platform features.</p>
<p>Ideally, you&#39;ll have a Bachelor&#39;s degree in Computer Science or a related quantitative field, 5+ years of full-stack engineering experience, and proficiency in React, TypeScript, Next.js, Python, Node.js, PostgreSQL or MongoDB, Docker, Kubernetes, and Azure/AWS/GCP.</p>
<p>You&#39;ll be a self-starting individual with the technical maturity to navigate ambiguous requirements and deliver reliable software. You&#39;ll also have experience working directly within customer infrastructure to deploy, maintain, and troubleshoot complex, end-to-end solutions.</p>
<p>Nice to have: proficiency in Arabic, past experience in a forward-deployed engineer or dedicated customer engineer role, experience working cross-functionally with operations, and experience building solutions with LLMs together with a deep understanding of the overall Gen AI landscape.</p>
<p>Please note that our policy requires a 90-day waiting period before reconsidering candidates for the same role.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>React, TypeScript, Next.js, Python, Node.js, PostgreSQL, MongoDB, Docker, Kubernetes, Azure, AWS, GCP</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions.</Employerdescription>
      <Employerwebsite>https://scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4676606005</Applyto>
      <Location>Doha, Qatar</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>14499a71-fa9</externalid>
      <Title>Software Engineer, Enterprise</Title>
      <Description><![CDATA[<p>At Scale AI, we&#39;re pioneering the next era of enterprise AI. As businesses race to harness the power of Generative AI, Scale is at the forefront, delivering cutting-edge solutions that transform workflows, automate complex processes, and drive unparalleled efficiency for the largest enterprises.</p>
<p>We&#39;re looking for a Backend Engineer to help bring large-scale GenAI systems to production. In this role, you&#39;ll build the core infrastructure that powers AI products for some of the world&#39;s largest enterprises, designing scalable APIs, distributed data systems, and robust deployment pipelines that enable production-grade reliability and performance.</p>
<p>This is a rare opportunity to be at the center of the GenAI revolution, solving hard backend and infrastructure challenges that make AI truly work at enterprise scale. If you&#39;re excited about shaping how AI systems are deployed and scaled in the real world, we want to hear from you.</p>
<p>At Scale, we don&#39;t just follow AI advancements, we lead them. Backed by deep expertise in data, infrastructure, and model deployment, we are uniquely positioned to solve the hardest problems in AI adoption. Join us in shaping the future of enterprise AI, where your work will directly impact how businesses operate, innovate, and grow in the age of GenAI.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Design, build, and scale backend systems that power enterprise GenAI products, focusing on reliability, performance, and deployment across both Scale&#39;s and customers&#39; infrastructure.</li>
<li>Develop core services and APIs that integrate AI models and enterprise data sources securely and efficiently, enabling production-scale AI adoption.</li>
<li>Architect scalable distributed systems for data processing, inference, and orchestration of large-scale GenAI workloads.</li>
<li>Optimize backend performance for latency, throughput, and cost, ensuring AI applications can operate at enterprise scale across hybrid and multi-cloud environments.</li>
<li>Manage and evolve cloud infrastructure (AWS, Azure, or GCP), driving automation, observability, and security for large-scale AI deployments.</li>
<li>Collaborate with ML and product teams to bring cutting-edge GenAI models into production through efficient APIs, model serving systems, and evaluation frameworks.</li>
<li>Continuously improve reliability and scalability, applying strong engineering practices to make AI systems robust, maintainable, and enterprise-ready.</li>
</ul>
<p><strong>Ideal Candidate</strong></p>
<ul>
<li>4+ years of experience developing large-scale backend or infrastructure systems, with a strong emphasis on distributed services, reliability, and scalability.</li>
<li>Proficiency in Python or TypeScript, with experience designing high-performance APIs and backend architectures using frameworks such as FastAPI, Flask, Express, or NestJS.</li>
<li>Deep familiarity with cloud infrastructure (AWS and Azure preferred), including container orchestration (Kubernetes, Docker) and Infrastructure-as-Code tools like Terraform.</li>
<li>Experience managing data systems such as relational and NoSQL databases (PostgreSQL, DynamoDB, etc.) and building pipelines for data-intensive applications.</li>
<li>Hands-on experience with GenAI applications, model integration, or AI agent systems, understanding how to deploy, evaluate, and scale AI workloads in production.</li>
<li>Strong understanding of observability, CI/CD, and security best practices for running services in enterprise or multi-tenant environments.</li>
<li>Ability to balance rapid iteration with production-grade quality, shipping reliable backend systems in fast-paced environments.</li>
<li>Collaborative mindset, working closely with ML, infra, and product teams to bring complex GenAI systems into production at enterprise scale.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, TypeScript, FastAPI, Flask, Express, NestJS, AWS, Azure, Kubernetes, Docker, Terraform, PostgreSQL, DynamoDB, GenAI, Model Integration, AI Agent Systems</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale AI</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale AI develops reliable AI systems for the world&apos;s most important decisions.</Employerdescription>
      <Employerwebsite>https://scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4536653005</Applyto>
      <Location>London, UK</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>6365e7d7-511</externalid>
      <Title>Senior Forward Deployed Data Scientist/Engineer</Title>
      <Description><![CDATA[<p>We&#39;re hiring a Senior Forward Deployed Data Scientist / Engineer to work directly with customers on ambiguous, high-impact problems at the intersection of data science, product development, and AI deployment.</p>
<p>This is not a traditional analytics role. On this team, data scientists do the core statistical and modeling work, but they also build real tools and products: evaluation explorers, operator workflows, decision-support systems, experimentation surfaces, and customer-specific AI/data applications that get used in production.</p>
<p>The right candidate is strong in first-principles problem solving, rigorous measurement, and technical execution. They know how to define metrics, design experiments, diagnose failures, and build systems that people actually use. They are also comfortable using modern AI-assisted development tools to prototype and iterate quickly without sacrificing reliability, observability, or judgment. Python and SQL matter in this role, but as tools: execution fluency in service of building better products and making better decisions.</p>
<p>Responsibilities:</p>
<ul>
<li>Partner directly with enterprise customers to understand workflows, operational pain points, constraints, and success criteria</li>
<li>Turn ambiguous business and product problems into measurable solutions with clear metrics, technical designs, and deployment plans</li>
<li>Design and build internal and customer-facing data products, including evaluation tools, workflow applications, decision-support systems, and thin product layers on top of data/ML systems</li>
<li>Build end-to-end solutions across data ingestion, transformation, experimentation, statistical modeling, deployment, monitoring, and iteration</li>
<li>Design evaluation frameworks, benchmarks, and feedback loops for ML/LLM systems, human-in-the-loop workflows, and model-assisted operations</li>
<li>Apply rigorous statistical thinking to experimentation, causal inference, metric design, forecasting, segmentation, diagnostics, and performance measurement</li>
<li>Use AI-assisted development workflows to accelerate prototyping and product iteration, while maintaining strong engineering discipline</li>
<li>Diagnose failure modes across data quality, model behavior, retrieval, workflow design, and user experience, and drive fixes into production</li>
<li>Act as the voice of the customer to Product, Engineering, and Data Science, using field learnings to shape roadmap and platform capabilities</li>
</ul>
<p>Requirements:</p>
<ul>
<li>5+ years of experience in data science, machine learning, quantitative engineering, or another highly analytical technical role</li>
<li>Proven track record of shipping data, ML, or AI systems that delivered measurable business or product impact</li>
<li>Exceptional ability to structure ambiguous problems, define the right success metrics, and translate them into executable technical plans</li>
<li>Strong foundation in statistics, experimentation, causal reasoning, and measurement</li>
<li>Experience building tools or products, not just analyses: for example, internal workflow tools, evaluation systems, operator-facing products, experimentation platforms, or customer-specific applications</li>
<li>Hands-on fluency in Python, SQL, and modern data/AI tooling; able to inspect data, prototype quickly, debug deeply, and productionize solutions that work</li>
<li>Comfort using AI-assisted coding and development workflows to move from idea to usable product quickly</li>
<li>Strong communication and stakeholder management skills; able to work effectively with customers, engineers, product teams, and executives</li>
<li>High ownership and bias toward shipping in fast-moving environments with incomplete information</li>
</ul>
<p>Preferred qualifications:</p>
<ul>
<li>Experience in a forward deployed, solutions, consulting, or other client-facing technical role</li>
<li>Experience designing evaluation frameworks for LLMs, retrieval systems, agentic workflows, or other AI-enabled products</li>
<li>Experience with large-scale data processing and distributed systems such as Spark, Ray, or Airflow</li>
<li>Experience with cloud infrastructure and modern data platforms such as AWS, GCP, Snowflake, or BigQuery</li>
<li>Experience building lightweight applications, APIs, internal tools, or workflow software on top of data/ML systems</li>
<li>Familiarity with marketplace experimentation, causal inference, forecasting, optimization, or advanced statistical modeling</li>
<li>Strong product instinct and the judgment to know when the right answer is a model, an experiment, a tool, or a workflow redesign</li>
</ul>
<p>What success looks like: Success in this role means taking a messy, high-stakes customer problem and turning it into a deployed system that is actually used. Sometimes that system is a model. Sometimes it is an evaluation framework. Sometimes it is an operator-facing tool or a lightweight data product that changes how decisions get made. In all cases, success is defined by measurable impact, rigorous evaluation, and reliable execution.</p>
<p>Compensation packages at Scale for eligible roles include base salary, equity, and benefits. The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position, determined by work location and additional factors, including job-related skills, experience, interview performance, and relevant education or training. Scale employees in eligible roles are also granted equity based compensation, subject to Board of Director approval. Your recruiter can share more about the specific salary range for your preferred location during the hiring process, and confirm whether the hired role will be eligible for equity grant. You’ll also receive benefits including, but not limited to: Comprehensive health, dental and vision coverage, retirement benefits, a learning and development stipend, and generous PTO. Additionally, this role may be eligible for additional benefits such as a commuter stipend.</p>
<p>Salary Range: $167,200-$209,000 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$167,200-$209,000 USD</Salaryrange>
      <Skills>Python, SQL, Statistics, Experimentation, Causal inference, Machine learning, Data science, LLM evaluation frameworks, Spark, Ray, Airflow, AWS, GCP, Snowflake, BigQuery, Forecasting, Optimization</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale AI</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale AI develops reliable AI systems for the world&apos;s most important decisions.</Employerdescription>
      <Employerwebsite>https://scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4636227005</Applyto>
      <Location>San Francisco, CA; New York, NY</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>cc75c6b0-4db</externalid>
      <Title>Machine Learning Fellow - Human Frontier Collective (Canada)</Title>
      <Description><![CDATA[<p>This is a fully remote, 1099 independent contractor opportunity with an estimated duration of six months and the potential for extension.</p>
<p>As an HFC Fellow, you&#39;ll apply your academic and professional expertise to help design, evaluate, and interpret advanced generative AI systems.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Engaging in high-impact projects with partnered AI labs and platforms</li>
<li>Designing, reviewing, and optimising PyTorch models</li>
<li>Evaluating complex ML code and AI-generated implementations for efficiency and correctness</li>
<li>Advising on GPU optimisation, scaling, and trade-offs</li>
</ul>
<p>You&#39;ll also become part of a supportive, interdisciplinary network of innovators and thought leaders committed to advancing frontier AI across domains.</p>
<p>Collaboration with Scale&#39;s research team to co-author technical reports and research papers is also expected.</p>
<p>To be eligible, candidates must be authorised to work in Canada and have a PhD or postdoctoral degree in Computer Science, Computer Engineering, or a related field.</p>
<p>A professional background as a Machine Learning Engineer or Data Scientist with 1-3+ years of experience is also required.</p>
<p>Strong proficiency in Python and modern ML frameworks (PyTorch, TensorFlow) is essential, along with experience with cloud infrastructure (AWS) and MLOps tools (Docker, Langchain).</p>
<p>A detail-oriented, innovative thinker with a passion for applied AI research and a commitment to collaboration is ideal.</p>
<p>A flexible schedule of 10–40 hours per week, fitting around your life and other commitments, is offered.</p>
<p>Project pay rates vary across platforms and depend on a number of factors, including but not limited to: project, scope, skillset, and location.</p>
]]></Description>
      <Jobtype>contract</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, PyTorch, TensorFlow, AWS, Docker, Langchain</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Human Frontier Collective</Employername>
      <Employerlogo>https://logos.yubhub.co/humanfrontiercollective.com.png</Employerlogo>
      <Employerdescription>The Human Frontier Collective is a programme that brings together top researchers and domain experts to collaborate on high-impact work in AI.</Employerdescription>
      <Employerwebsite>https://humanfrontiercollective.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4661650005</Applyto>
      <Location>Canada</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>6cae1ee9-b93</externalid>
      <Title>Senior Technical Solutions Engineer (Platform)</Title>
      <Description><![CDATA[<p>As a Senior Technical Solutions Engineer, you will provide technical support for Databricks Platform related issues and resolve any challenges involving the Databricks unified analytics platform.</p>
<p>You will assist customers on their Databricks journey, providing the guidance and knowledge they need to realise value and achieve their strategic goals using our products.</p>
<p>Customers will look to you for answers to everything from basic technical questions to complex architectural scenarios spanning the entire Big Data ecosystem.</p>
<p>Responsibilities:</p>
<ul>
<li>Troubleshoot and resolve complex customer issues related to Databricks platform</li>
<li>Provide best practices support for custom-built solutions developed by Databricks customers</li>
<li>Deliver suggestions for improving performance in customer-specific environments</li>
<li>Assist with issues around third-party integrations with Databricks environment</li>
<li>Coordinate with engineering and escalation teams to resolve customer issues and requests</li>
<li>Participate in the creation and maintenance of company documentation and knowledge articles</li>
<li>Be a true proponent of customer advocacy</li>
<li>Strengthen your AWS/Azure and Databricks platform expertise through learning and internal training programs</li>
<li>Participate in weekday and weekend on-call rotations</li>
</ul>
<p>Requirements:</p>
<ul>
<li>4+ years of experience designing, building, testing, and maintaining Python/Java/Scala based applications</li>
<li>Expert-level knowledge of Python is desired</li>
<li>Strong experience with SQL-based databases is required</li>
<li>Linux/Unix administration skills</li>
<li>Hands-on experience with AWS, Azure, or GCP</li>
<li>Experience with distributed big data computing environments</li>
<li>Technical degree or the equivalent experience</li>
<li>Written and spoken proficiency in both Japanese and English</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, Java, Scala, SQL, Linux/Unix, AWS, Azure, GCP, Distributed Big Data Computing</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified analytics platform for over 10,000 organisations worldwide.</Employerdescription>
      <Employerwebsite>https://databricks.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8488552002</Applyto>
      <Location>Tokyo, Japan</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>7d4c3fc5-2ed</externalid>
      <Title>Senior Software Engineer, Inference</Title>
      <Description><![CDATA[<p>About the role:</p>
<p>Our Inference team is responsible for building and maintaining the critical systems that serve Claude to millions of users worldwide. We bring Claude to life by serving our models via the industry&#39;s largest compute-agnostic inference deployments. We are responsible for the entire stack from intelligent request routing to fleet-wide orchestration across diverse AI accelerators.</p>
<p>The team has a dual mandate: maximizing compute efficiency to serve our explosive customer growth, while enabling breakthrough research by giving our scientists the high-performance inference infrastructure they need to develop next-generation models. We tackle complex, distributed systems challenges across multiple accelerator families and emerging AI hardware running in multiple cloud platforms.</p>
<p>Strong candidates may also have experience with:</p>
<ul>
<li>High-performance, large-scale distributed systems</li>
<li>Implementing and deploying machine learning systems at scale</li>
<li>Load balancing, request routing, or traffic management systems</li>
<li>LLM inference optimization, batching, and caching strategies</li>
<li>Kubernetes and cloud infrastructure (AWS, GCP)</li>
<li>Python or Rust</li>
</ul>
<p>You may be a good fit if you:</p>
<ul>
<li>Have significant software engineering experience, particularly with distributed systems</li>
<li>Are results-oriented, with a bias towards flexibility and impact</li>
<li>Pick up slack, even if it goes outside your job description</li>
<li>Want to learn more about machine learning systems and infrastructure</li>
<li>Thrive in environments where technical excellence directly drives both business results and research breakthroughs</li>
<li>Care about the societal impacts of your work</li>
</ul>
<p>Representative projects across the org:</p>
<ul>
<li>Designing intelligent routing algorithms that optimize request distribution across thousands of accelerators</li>
<li>Autoscaling our compute fleet to dynamically match supply with demand across production, research, and experimental workloads</li>
<li>Building production-grade deployment pipelines for releasing new models to millions of users</li>
<li>Integrating new AI accelerator platforms to maintain our hardware-agnostic competitive advantage</li>
<li>Contributing to new inference features (e.g., structured sampling, prompt caching)</li>
<li>Supporting inference for new model architectures</li>
<li>Analyzing observability data to tune performance based on real-world production workloads</li>
<li>Managing multi-region deployments and geographic routing for global customers</li>
</ul>
<p>Annual compensation range for this role is €235,000-€295,000 EUR.</p>
<p>Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience</p>
<p>Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience</p>
<p>Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position</p>
<p>Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</p>
<p>Visa sponsorship: We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</p>
<p>We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you&#39;re interested in this work.</p>
<p>Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you&#39;re ever unsure about a communication, don&#39;t click any links; visit anthropic.com/careers directly for confirmed position openings.</p>
<p>How we&#39;re different:</p>
<p>We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact, advancing our long-term goals of steerable, trustworthy AI, rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We&#39;re an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.</p>
<p>The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI &amp; Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.</p>
<p>Come work with us!</p>
<p>Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>€235,000-€295,000 EUR</Salaryrange>
      <Skills>Distributed systems, Machine learning systems at scale, Load balancing, Request routing, Traffic management, LLM inference optimization, Batching, Caching, Kubernetes, AWS, GCP, Python, Rust</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation that creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/4641822008</Applyto>
      <Location>Dublin, IE</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>fb1f459e-b3a</externalid>
      <Title>Machine Learning Research Scientist / Engineer, Reasoning</Title>
      <Description><![CDATA[<p>About Scale</p>
<p>At Scale, our mission is to accelerate the development of AI applications. We&#39;re looking for a Machine Learning Research Scientist/Engineer to join our team and help us shape the future of AI.</p>
<p>This role operates at the forefront of AI research and real-world implementation, with a strong focus on reasoning within large language models (LLMs). You will study the data types critical for advancing LLM-based agents, including browser and software engineering (SWE) agents. You will play a key role in shaping Scale&#39;s data strategy by identifying the most effective data sources and methodologies for improving LLM reasoning.</p>
<p>Success in this role requires a deep understanding of LLMs, planning algorithms, and novel approaches to agentic reasoning, as well as creativity in tackling challenges related to data generation, model interaction, and evaluation. You will contribute to impactful research on language model reasoning, collaborate with external researchers, and work closely with engineering teams to bring state-of-the-art advancements into scalable, real-world solutions.</p>
<p>Responsibilities</p>
<ul>
<li>Study the data types critical for advancing LLM-based agents, including browser and software engineering (SWE) agents</li>
<li>Shape Scale&#39;s data strategy by identifying the most effective data sources and methodologies for improving LLM reasoning</li>
<li>Contribute to impactful research on language model reasoning</li>
<li>Collaborate with external researchers</li>
<li>Work closely with engineering teams to bring state-of-the-art advancements into scalable, real-world solutions</li>
</ul>
<p>Requirements</p>
<ul>
<li>Practical experience working with LLMs, with proficiency in frameworks like PyTorch, JAX, or TensorFlow</li>
<li>A track record of published research in top ML and NLP venues (e.g., ACL, EMNLP, NAACL, NeurIPS, ICML, ICLR, COLM, etc.)</li>
<li>At least three years of experience solving complex ML challenges, either in a research setting or product development, particularly in areas related to LLM capabilities and reasoning</li>
<li>Strong written and verbal communication skills, along with the ability to work effectively across teams</li>
</ul>
<p>Nice to Have</p>
<ul>
<li>Hands-on experience fine-tuning open-source LLMs or leading bespoke LLM fine-tuning projects using PyTorch/JAX</li>
<li>Research and practical experience in building applications and evaluations related to LLM-based agents, including tool-use, text-to-SQL, browser agents, coding agents, and GUI agents</li>
<li>Experience with agent frameworks such as OpenHands, Swarm, LangGraph, or similar</li>
<li>Familiarity with advanced agentic reasoning techniques such as STaR and PLANSEARCH</li>
<li>Proficiency in cloud-based ML development, with experience in AWS or GCP environments</li>
</ul>
<p>Benefits</p>
<ul>
<li>Comprehensive health, dental and vision coverage</li>
<li>Retirement benefits</li>
<li>A learning and development stipend</li>
<li>Generous PTO</li>
<li>Commuter stipend</li>
</ul>
<p>Salary Range</p>
<p>$252,000-$315,000 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$252,000-$315,000 USD</Salaryrange>
      <Skills>PyTorch, JAX, TensorFlow, Large Language Models (LLMs), Planning Algorithms, Agentic Reasoning, Data Generation, Model Interaction, Evaluation, Agent Frameworks, Cloud-Based ML Development, AWS, GCP, STaR, PLANSEARCH</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale AI</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale AI is a leading AI data foundry that provides high-quality data to drive progress toward Artificial General Intelligence (AGI). Since its founding, it has become a major player in the AI industry.</Employerdescription>
      <Employerwebsite>https://scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4605596005</Applyto>
      <Location>San Francisco, CA; Seattle, WA; New York, NY</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>770c5fe8-cce</externalid>
      <Title>Staff Security Engineer, Vulnerability Management</Title>
      <Description><![CDATA[<p>We are seeking a Staff Security Engineer to lead the most complex technical work in CoreWeave&#39;s Vulnerability Management program.</p>
<p>As a Staff Security Engineer, you will design and implement scalable triage, prioritization, and remediation-tracking systems across application, infrastructure, and hardware domains. You will set technical standards, drive high-impact initiatives, and mentor engineers through technical leadership, while partnering with leadership on priorities and execution risks.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Lead high-complexity VM technical initiatives and deliver architecture decisions for assigned program areas</li>
<li>Design and build scalable triage automation, including integrations, decision logic, and production hardening</li>
<li>Implement end-to-end workflow components from assessment and detection to ticket routing and remediation tracking</li>
<li>Provide deep technical leadership on hardware-adjacent vulnerabilities (GPU firmware, DPU firmware/BlueField, and BMC surfaces)</li>
<li>Act as senior technical responder for embargoed disclosures and zero-day events, coordinating with owner teams that deploy fixes</li>
<li>Improve prioritization logic, severity models, and exception workflows through code, design reviews, and technical proposals</li>
<li>Produce actionable technical metrics and risk insights for leadership consumption</li>
<li>Lead root-cause analysis for high-impact vulnerability incidents and implement durable technical improvements</li>
<li>Mentor IC3/IC4/IC5 engineers through design guidance, code review, and incident coaching</li>
<li>Partner with security, engineering, and operational stakeholders to improve workflow reliability and accelerate remediation outcomes</li>
</ul>
<p>Requirements:</p>
<ul>
<li>9+ years of relevant experience with demonstrated strategic impact in vulnerability management, application security, platform security, or cloud security engineering</li>
<li>Proven track record building and scaling security automation (SOAR workflows, AI/ML systems, detection pipelines) in production environments</li>
<li>Deep subject matter expertise with vulnerability management best practices: CVSS, EPSS, CISA KEV, threat intelligence integration, and risk-based prioritization frameworks</li>
<li>Excellent development background with strong coding skills in Python, Go, or similar languages for building scalable, production-grade security systems</li>
<li>Significant experience with modern vulnerability management tooling (for example Wiz, Semgrep, Rapid7, Tenable, or equivalent)</li>
<li>Experience with specialized infrastructure: GPU/DPU environments, firmware security, hardware vulnerabilities, or high-performance computing</li>
<li>Demonstrated track record mentoring engineers across levels and driving cross-functional technical initiatives at organizational scale</li>
<li>Strong business acumen and understanding of how security decisions impact engineering velocity, customer trust, and business outcomes</li>
</ul>
<p>Preferred Qualifications:</p>
<ul>
<li>Practical experience building AI/ML-powered security systems (LLM integration, automated decision-making, human-in-the-loop validation) in production</li>
<li>Experience managing hardware vendor security partnerships (embargoed disclosures and pre-release collaboration)</li>
<li>Production experience with security automation platforms such as Tines and serverless frameworks (AWS Lambda, GCP Cloud Functions)</li>
<li>Strong DevOps, DevSecOps, or SRE background with deep experience in AWS/GCP/Azure cloud services and Infrastructure as Code (Terraform, CloudFormation)</li>
<li>Deep understanding of Kubernetes security (container scanning, admission controllers, supply chain security, runtime protection)</li>
<li>Experience leading security programs through rapid hypergrowth (10x+ infrastructure scaling) in startup or cloud-native environments</li>
<li>Practical experience managing vulnerabilities within a FedRAMP-certified environment or similar regulatory frameworks</li>
</ul>
<p>Salary and Benefits: The base salary range for this role is $188,000 to $275,000. The starting salary will be determined based on job-related knowledge, skills, experience, and market location. We strive for both market alignment and internal equity when determining compensation. In addition to base salary, our total rewards package includes a discretionary bonus, equity awards, and a comprehensive benefits program (all based on eligibility).</p>
<p>Work Environment:</p>
<p>While we prioritize a hybrid work environment, remote work may be considered for candidates located more than 30 miles from an office, based on role requirements for specialized skill sets. New hires will be invited to attend onboarding at one of our hubs within their first month. Teams also gather quarterly to support collaboration.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$188,000 to $275,000</Salaryrange>
      <Skills>vulnerability management, application security, platform security, cloud security engineering, security automation, AI/ML systems, detection pipelines, Python, Go, modern vulnerability management tooling, GPU/DPU environments, firmware security, hardware vulnerabilities, high-performance computing, AI/ML-powered security systems, LLM integration, automated decision-making, human-in-the-loop validation, security automation platforms, TINES, serverless frameworks, AWS Lambda, GCP Cloud Functions, DevOps, DevSecOps, SRE, Kubernetes security, container scanning, admission controllers, supply chain security, runtime protection</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave is a cloud computing company that provides a platform for building and scaling AI applications.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4653130006</Applyto>
      <Location>Livingston, NJ / New York, NY / Sunnyvale, CA / Bellevue, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>057b8651-835</externalid>
      <Title>AI Strategy Consultant, Frontier Tech</Title>
      <Description><![CDATA[<p>As a member of our Frontier Tech Consultant team, you will play a critical role in advancing cutting-edge AI innovations by conducting high-impact experiments and ensuring seamless execution at the highest quality standards.</p>
<p>Your work will directly contribute to Scale AI’s growth, shaping the future of artificial intelligence. In this role, you will be working on various types of projects, including but not limited to: research experiments, dataset generation, data quality improvements, and in-depth technical analysis.</p>
<p>You will tackle complex technical and operational challenges while collaborating closely with Scale’s ML research scientists and SPM team.</p>
<p>The ideal candidate is analytical, detail-oriented, and results-driven, with strong problem-solving abilities and excellent communication skills.</p>
<p>We are looking for someone who thrives in a fast-paced environment, is proactive in overcoming challenges, and is committed to delivering exceptional outcomes.</p>
<p>If you are eager to contribute to the forefront of AI innovation, we encourage you to apply.</p>
<p>Responsibilities:</p>
<ul>
<li>Design and execute research experiments</li>
<li>Build and evaluate frontier LLM datasets</li>
<li>Develop training and testing material for frontier pipelines</li>
<li>Improve quality of existing and new products</li>
</ul>
<p>Ideally you’d have:</p>
<ul>
<li>Strong machine learning knowledge, either as a final-year ML PhD candidate or as someone who has already graduated</li>
<li>Strong writing and verbal communication skills</li>
<li>An action-oriented mindset that balances creative problem solving with the scrappiness to ultimately deliver results</li>
<li>Analytical, planning, and process improvement capability</li>
<li>Experience working in a fast-paced, entrepreneurial environment</li>
<li>Technical skills including familiarity with Python, GPU, AWS, API, LLM, ML, and SQL</li>
</ul>
<p>Pay: $60-80/hr</p>
<p>Commitment: This is a fully remote, US-based, part-time (10-20 hours per week), ongoing contract position staffed via HireArt.</p>
<p>HireArt values diversity and is an Equal Opportunity Employer. We are interested in every qualified candidate who is eligible to work in the United States. Unfortunately, we are not able to sponsor visas (including CPT/OPT) or employ on a corp-to-corp basis.</p>
]]></Description>
      <Jobtype>contract</Jobtype>
      <Experiencelevel></Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$60-80/hr</Salaryrange>
      <Skills>Python, GPU, AWS, API, LLM, ML, SQL, Machine Learning, Data Analysis, Problem Solving</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions.</Employerdescription>
      <Employerwebsite>https://scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4472223005</Applyto>
      <Location>San Francisco, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>88ec8f26-4c9</externalid>
      <Title>Senior IT Systems Engineer</Title>
      <Description><![CDATA[<p>We&#39;re seeking a strategic thinker and proven problem-solver with deep expertise in modern IT ecosystems. As a Sr. IT Systems Engineer, you&#39;ll lead the design, implementation, administration, and optimization of core SaaS platforms, including Okta, Google Workspace, Slack, Atlassian, and other IT tools. You&#39;ll own end-to-end support, monitoring, troubleshooting, and performance tuning of applications, systems, and their complex interconnections, ensuring high availability, security, and a seamless user experience.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Designing and implementing SaaS platforms and IT tools</li>
<li>Providing technical guidance to support business expansion, system scalability, and infrastructure maturity</li>
<li>Identifying gaps, risks, and opportunities in the environment and leading initiatives to enhance security posture, operational efficiency, and resilience</li>
<li>Evaluating emerging technologies, IAM trends, and automation platforms and developing business cases and adoption recommendations</li>
<li>Mentoring junior engineers and collaborating with cross-functional teams to align IT capabilities with organizational goals</li>
</ul>
<p>Basic qualifications include 8+ years of hands-on experience administering and optimizing a broad portfolio of SaaS applications in a hybrid and high-growth environment, with advanced proficiency in our core stack: Okta (including Advanced Server Access &amp; Workflows), Google Workspace, Slack Enterprise, Atlassian, etc.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$184,000 - $276,000 USD</Salaryrange>
      <Skills>Okta, Google Workspace, Slack, Atlassian, IAM principles and protocols, APIs for custom integrations, Scripting and automation for monitoring, alerting, and operational efficiency, Azure, AWS, GCP cloud platforms, n8n, Okta Workflows, Workato, Zapier, BetterCloud, custom integrations</Skills>
      <Category>IT</Category>
      <Industry>Technology</Industry>
      <Employername>xAI</Employername>
      <Employerlogo>https://logos.yubhub.co/xai.com.png</Employerlogo>
      <Employerdescription>xAI creates AI systems to understand the universe and aid humanity in its pursuit of knowledge. The team is small and highly motivated.</Employerdescription>
      <Employerwebsite>https://www.xai.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/xai/jobs/5071895007</Applyto>
      <Location>Palo Alto, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>e6c2906a-625</externalid>
      <Title>Senior Software Engineer, Full-Stack – Scale GP</Title>
      <Description><![CDATA[<p>We are seeking a strong Senior Full-Stack Engineer to help us build, scale, and refine our rapidly growing Generative AI platform, Scale GP. As a senior engineer, you will work across the stack, from React/TypeScript frontends to Python-based backends, while integrating with LLMs and machine learning systems. You will solve complex challenges in scalability, reliability, and product experience while owning significant product areas in a fast-paced environment.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Own major full-stack product areas, driving features from design through production deployment.</li>
<li>Build modern frontend experiences using React and TypeScript, ensuring performance, usability, and responsiveness.</li>
<li>Develop reliable backend services in Python, working with distributed systems, data pipelines, and ML/LLM components.</li>
<li>Integrate with LLMs, vector databases, and AI infrastructure to power intelligent product experiences.</li>
<li>Deliver experiments and new features quickly, maintaining high quality and tight feedback loops with customers.</li>
<li>Collaborate across product, ML, and infrastructure teams to shape the direction of Scale GP.</li>
<li>Adapt quickly, learning new technologies, frameworks, and tools as needed across the stack.</li>
</ul>
<p><strong>Ideal Experience</strong></p>
<ul>
<li>5+ years of full-time engineering experience, post-graduation.</li>
<li>Strong experience developing full-stack applications using React, TypeScript, and Python.</li>
<li>Experience scaling or shipping products at high-growth startups.</li>
<li>Familiarity with LLMs, vector databases, embeddings, or other modern AI tooling (tinkering or production experience welcome).</li>
<li>Proficiency with SQL and modern API development.</li>
<li>Experience with Kubernetes, containerization, and microservice architectures.</li>
<li>Experience working with at least one major cloud provider (AWS, GCP, or Azure).</li>
</ul>
<p>Compensation packages at Scale for eligible roles include base salary, equity, and benefits. The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position, determined by work location and additional factors, including job-related skills, experience, interview performance, and relevant education or training. Scale employees in eligible roles are also granted equity based compensation, subject to Board of Director approval. Your recruiter can share more about the specific salary range for your preferred location during the hiring process, and confirm whether the hired role will be eligible for equity grant. You’ll also receive benefits including, but not limited to: Comprehensive health, dental and vision coverage, retirement benefits, a learning and development stipend, and generous PTO. Additionally, this role may be eligible for additional benefits such as a commuter stipend.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$216,000-$270,000 USD</Salaryrange>
      <Skills>React, TypeScript, Python, LLMs, vector databases, embeddings, SQL, API development, Kubernetes, containerization, microservice architectures, cloud providers (AWS, GCP, or Azure)</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions.</Employerdescription>
      <Employerwebsite>https://scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4637484005</Applyto>
      <Location>San Francisco, CA; New York, NY</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>f2bc1be2-478</externalid>
      <Title>Senior Technical Solutions Engineer, Platform</Title>
      <Description><![CDATA[<p>As a Senior Technical Solutions Engineer, you will provide technical support for Databricks Platform related issues and resolve any challenges involving the Databricks unified analytics platform.</p>
<p>You will assist customers on their Databricks journey, providing the guidance and knowledge they need to realize value and achieve their strategic goals using our products.</p>
<p>Customers will look to you for answers to everything from basic technical questions to complex architectural scenarios spanning the entire Big Data ecosystem.</p>
<p>You will report to the Senior Manager of Technical Solutions.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Troubleshooting and resolving complex customer issues related to the Databricks platform</li>
<li>Providing best-practices support for custom-built solutions developed by Databricks customers</li>
<li>Delivering suggestions for improving performance in customer-specific environments</li>
<li>Assisting with issues around third-party integrations with the Databricks environment</li>
<li>Coordinating with engineering and escalation teams to resolve customer issues and requests</li>
<li>Participating in the creation and maintenance of company documentation and knowledge articles</li>
<li>Being a true proponent of customer advocacy</li>
<li>Strengthening your AWS/Azure and Databricks platform expertise through learning and internal training programs</li>
<li>Participating in a weekend and weekday on-call rotation</li>
</ul>
<p>Requirements include:</p>
<ul>
<li>Minimum 4 years of experience designing, building, testing, and maintaining Python/Java/Scala based applications</li>
<li>Expert-level knowledge of Python is desired</li>
<li>Solid experience with SQL-based databases is required</li>
<li>Linux/Unix administration skills</li>
<li>Hands-on experience with AWS, Azure, or GCP</li>
<li>Excellent English written and oral communication skills</li>
<li>Experience with distributed Big Data computing environments</li>
<li>Technical degree or equivalent experience</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, Java, Scala, SQL, Linux/Unix administration, AWS, Azure, GCP, Distributed Big Data Computing</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified analytics platform for data-driven organisations.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/7902994002</Applyto>
      <Location>Costa Rica</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>04c1ff49-2d1</externalid>
      <Title>Data Platform Solutions Architect (Professional Services)</Title>
      <Description><![CDATA[<p>We&#39;re hiring for multiple roles within our Professional Services team. As a Data Platform Solutions Architect, you will work with clients on short- to medium-term customer engagements, tackling their big data challenges using the Databricks platform. You will deliver data engineering, data science, and cloud technology projects that require integrating with client systems, training, and other technical tasks to help customers get the most value out of their data.</p>
<p>The impact you will have:</p>
<ul>
<li>Work on a variety of impactful customer technical projects which may include designing and building reference architectures, creating how-to&#39;s and productionalizing customer use cases</li>
<li>Work with engagement managers to scope a variety of professional services work with input from the customer</li>
<li>Guide strategic customers as they implement transformational big data projects and third-party migrations, including end-to-end design, build, and deployment of industry-leading big data and AI applications</li>
<li>Consult on architecture and design; bootstrap or implement customer projects, leading to customers&#39; successful understanding, evaluation, and adoption of Databricks.</li>
<li>Provide an escalated level of support for customer operational issues.</li>
</ul>
<p>What we look for:</p>
<ul>
<li>Extensive experience in data engineering, data platforms &amp; analytics</li>
<li>Comfortable writing code in either Python or Scala</li>
<li>Working knowledge of two or more common Cloud ecosystems (AWS, Azure, GCP) with expertise in at least one</li>
<li>Deep experience with distributed computing with Apache Spark™ and knowledge of Spark runtime internals</li>
<li>Familiarity with CI/CD for production deployments</li>
<li>Working knowledge of MLOps</li>
<li>Design and deployment of performant end-to-end data architectures</li>
<li>Experience with technical project delivery - managing scope and timelines.</li>
<li>Documentation and white-boarding skills.</li>
<li>Experience working with clients and managing conflicts.</li>
<li>Ability to build skills in technical areas that support the deployment and integration of Databricks-based solutions to complete customer projects.</li>
<li>Willingness to travel to customers 10% of the time</li>
</ul>
<p>[Preferred] Databricks Certification</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>data engineering, data platforms &amp; analytics, Python, Scala, Cloud ecosystems (AWS, Azure, GCP), Apache Spark, CI/CD for production deployments, MLOps, technical project delivery, documentation and white-boarding skills, Databricks Certification</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified and democratized data, analytics, and AI platform.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8396801002</Applyto>
      <Location>London, United Kingdom</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>859cb1cf-b9c</externalid>
      <Title>Senior AI Infrastructure Engineer, Model Serving Platform</Title>
      <Description><![CDATA[<p>As a Senior AI Infrastructure Engineer on the Model Serving Platform team, you will design and build platforms for scalable, reliable, and efficient serving of Large Language Models (LLMs). Our platform powers cutting-edge research and production systems, supporting both internal and external use cases across various environments.</p>
<p>The ideal candidate combines strong ML fundamentals with deep expertise in backend system design. You’ll work in a highly collaborative environment, bridging research and engineering to deliver seamless experiences to our customers and accelerate innovation across the company.</p>
<p>Responsibilities:</p>
<ul>
<li>Build and maintain fault-tolerant, high-performance systems for serving LLM workloads at scale.</li>
<li>Build an internal platform to empower LLM capability discovery.</li>
<li>Collaborate with researchers and engineers to integrate and optimize models for production and research use cases.</li>
<li>Conduct architecture and design reviews to uphold best practices in system design and scalability.</li>
<li>Develop monitoring and observability solutions to ensure system health and performance.</li>
<li>Lead projects end-to-end, from requirements gathering to implementation, in a cross-functional environment.</li>
</ul>
<p>Ideally you’d have:</p>
<ul>
<li>5+ years of experience building large-scale, high-performance backend systems.</li>
<li>Strong programming skills in one or more languages (e.g., Python, Go, Rust, C++).</li>
<li>Experience with LLM serving and routing fundamentals (e.g. rate limiting, token streaming, load balancing, budgets, etc.).</li>
<li>Experience with LLM capabilities and concepts such as reasoning, tool calling, prompt templates, etc.</li>
<li>Experience with containers and orchestration tools (e.g., Docker, Kubernetes).</li>
<li>Familiarity with cloud infrastructure (AWS, GCP) and infrastructure as code (e.g., Terraform).</li>
<li>Proven ability to solve complex problems and work independently in fast-moving environments.</li>
</ul>
<p>Nice to haves:</p>
<ul>
<li>Experience with modern LLM serving frameworks such as vLLM, SGLang, TensorRT-LLM, or text-generation-inference.</li>
</ul>
<p>Compensation packages at Scale for eligible roles include base salary, equity, and benefits. The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position, determined by work location and additional factors, including job-related skills, experience, interview performance, and relevant education or training. Scale employees in eligible roles are also granted equity based compensation, subject to Board of Director approval. Your recruiter can share more about the specific salary range for your preferred location during the hiring process, and confirm whether the hired role will be eligible for equity grant. You’ll also receive benefits including, but not limited to: Comprehensive health, dental and vision coverage, retirement benefits, a learning and development stipend, and generous PTO. Additionally, this role may be eligible for additional benefits such as a commuter stipend.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$216,000-$270,000 USD</Salaryrange>
      <Skills>Python, Go, Rust, C++, Docker, Kubernetes, AWS, GCP, Terraform, vLLM, SGLang, TensorRT-LLM, text-generation-inference</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions, providing high-quality data and full-stack technologies to power leading models.</Employerdescription>
      <Employerwebsite>https://scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4520320005</Applyto>
      <Location>San Francisco, CA; New York, NY</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>f86a39bf-9a5</externalid>
      <Title>Solutions Architect - Digital Native Business, Strategic</Title>
      <Description><![CDATA[<p>As a Solutions Architect on the Digital Natives team, you will work with leading data engineering, data science, and ML teams to push the boundaries of what big data architectures are capable of.</p>
<p>Reporting to the Field Engineering Manager, you will collaborate with strategic customers, product teams, and the broader customer-facing team to develop architectures and solutions using our platform and APIs.</p>
<p>You will guide customers through the competitive landscape, best practices, and implementation; and develop technical champions along the way.</p>
<p>We are looking for high technical aptitude individuals with a deep sense of ownership and a desire to help customers ship solutions at production scale.</p>
<p>Ideal candidates are deeply curious, capable of operating with confidence in ambiguous situations, and are extremely adaptable.</p>
<p>The impact you will have:</p>
<ul>
<li>Partner with the sales team and provide technical leadership to help customers understand how Databricks can help solve their business problems.</li>
<li>Drive technical discovery and solution design, focusing on winning competitive deals and accelerating time-to-value in strategic accounts.</li>
<li>Continuously research &amp; learn new technologies and their implementations on Databricks.</li>
<li>Consult on Big Data architectures and implement proofs of concept for strategic projects spanning data engineering, data science, machine learning, and SQL analysis workflows.</li>
<li>Validate integrations with cloud services, home-grown tools, and other 3rd-party applications.</li>
<li>Collaborate with your fellow Solutions Architects, using your skills to support each other and our customers.</li>
<li>Become an expert in, promote, and recruit contributors for Databricks-inspired open-source projects (Spark, Delta Lake, and MLflow) across the developer community.</li>
<li>Work closely with account executives to create and execute account penetration strategies, focusing on winning technical decision-makers and building new customer champions.</li>
<li>Build trusted advisor relationships with senior and executive stakeholders by articulating the business value of Databricks in clear, outcomes-driven terms.</li>
</ul>
<p>What we look for:</p>
<ul>
<li>5+ years in a data engineering, data science, technical architecture, or similar pre-sales/consulting role.</li>
<li>Experience building distributed data systems.</li>
<li>Comfortable programming in, and debugging, Python and SQL.</li>
<li>Have built solutions with public cloud providers such as AWS, Azure, or GCP.</li>
<li>Expertise in one of the following:
<ul>
<li>Data Engineering technologies (e.g., Spark, Hadoop, Kafka)</li>
<li>Data Science and Machine Learning technologies (e.g., pandas, scikit-learn, PyTorch, TensorFlow)</li>
</ul>
</li>
<li>Strong executive presence with the ability to influence C/VP-level stakeholders and align technical solutions to strategic business priorities.</li>
<li>Available to travel to customers in your region.</li>
<li>Desired: a degree in a quantitative discipline (Computer Science, Applied Mathematics, Operations Research).</li>
<li>Nice to have: Databricks Certification.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$180,000-$247,500 USD</Salaryrange>
      <Skills>Data Engineering technologies, Data Science and Machine Learning technologies, Python, SQL, Cloud providers (AWS, Azure, GCP)</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified and democratized data, analytics, and AI platform.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8434467002</Applyto>
      <Location>Remote - California; Remote - Colorado; Remote - Oregon; Remote - Washington</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>477935cf-ac5</externalid>
      <Title>Senior Strategic Partner Manager, Solutions</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Senior Strategic Partner Manager, Solutions to join our team. As a key member of our Partner organization, you will play a critical role in building a global Solutions Partner Program that equips ZoomInfo&#39;s top implementation partners to deliver successful project outcomes and ensure ongoing customer adoption.</p>
<p>Your primary responsibilities will include:</p>
<ul>
<li>Designing and managing the global Solutions Partner Program and methodology.</li>
<li>Working with the partner, delivery, and account teams to ensure the proper program elements, resources, and processes are in place to support solutions partners&#39; success.</li>
<li>Providing full program strategy, project management, and timely updates on key solutions partner success initiatives.</li>
<li>Being the trusted advisor for ZoomInfo solutions partners.</li>
</ul>
<p>You will also:</p>
<ul>
<li>Work collaboratively with Partners and internal ZoomInfo delivery and technical experts to develop repeatable frameworks built from successful customer deployments.</li>
<li>Collect qualitative and quantitative data points to measure and report on individual partner performance against key metrics (KPIs), ensuring a high standard of implementation quality from our top partners.</li>
<li>Play an active role in contributing to the evolution of ZoomInfo’s overall partner program and strategy.</li>
</ul>
<p>Additionally, you will:</p>
<ul>
<li>Architect and manage partner business planning, QBRs, assessments, etc.</li>
<li>Own and manage partner interactions with the ZoomInfo team (Marketing, Product, Pre-Sales, Sales, Services, Enablement, Customer Success, Partner Operations, and Executive Leadership).</li>
<li>Handle administrative functions related to the Partner Account, ensuring internal tools are updated and sales hygiene is maintained.</li>
<li>Support Partners’ and internal stakeholders’ ad-hoc requests and jump in where needed.</li>
</ul>
<p>Core systems and tools you may be working with include Salesforce, Jira, Confluence, GSuite, Netsuite, Snowflake, GCP, AWS, plus multiple other peripheral software tools in these ecosystems.</p>
<p>Requirements include:</p>
<ul>
<li>5-8 years of experience working with and managing solutions partners.</li>
<li>A confirmed track record of sales over-performance.</li>
<li>Existing SI partner relationships and network.</li>
<li>Strategic business and marketing planning capabilities.</li>
<li>Excellent interpersonal skills and a confirmed capacity to build positive relationships and close business with partners.</li>
<li>Proven ability to work cross-functionally.</li>
<li>Self-motivation, strong self-management skills, and leadership qualities.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$105,350-$165,550 USD</Salaryrange>
      <Skills>Sales, Partner Management, Program Management, Project Management, Strategic Planning, Business Development, Marketing, Product, Pre-Sales, Services, Enablement, Customer Success, Partner Operations, Executive Leadership, Salesforce, Jira, Confluence, GSuite, Netsuite, Snowflake, GCP, AWS</Skills>
      <Category>Sales</Category>
      <Industry>Technology</Industry>
      <Employername>ZoomInfo</Employername>
      <Employerlogo>https://logos.yubhub.co/zoominfo.com.png</Employerlogo>
      <Employerdescription>ZoomInfo is a provider of go-to-market intelligence solutions, with a platform that offers best-in-class technology paired with unrivaled data coverage, accuracy, and depth of contacts, companies, and opportunities.</Employerdescription>
      <Employerwebsite>https://www.zoominfo.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/zoominfo/jobs/8441280002</Applyto>
      <Location>Waltham, Massachusetts, United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>5b244f27-9fd</externalid>
      <Title>Resident Solutions Architect - Communications, Media, Entertainment &amp; Games</Title>
<Description><![CDATA[<p>As a Resident Solutions Architect in our Professional Services team, you will work with clients on short- to medium-term engagements on their big data challenges using the Databricks platform. You will deliver data engineering, data science, and cloud technology projects that involve integrating with client systems, training, and other technical tasks that help customers get the most value out of their data.</p>
<p>You will work on a variety of impactful customer technical projects, which may include designing and building reference architectures, creating how-to guides, and productionizing customer use cases. You will work with engagement managers to scope a variety of professional services work with input from the customer.</p>
<p>You will guide strategic customers as they implement transformational big data projects and 3rd-party migrations, including the end-to-end design, build, and deployment of industry-leading big data and AI applications. You will consult on architecture and design, and bootstrap or implement customer projects, leading to customers&#39; successful understanding, evaluation, and adoption of Databricks.</p>
<p>You will provide an escalated level of support for customer operational issues, working with the Databricks technical team, Project Manager, Architect, and customer team to ensure the technical components of the engagement are delivered to meet the customer&#39;s needs.</p>
<p>Work with Engineering and Databricks Customer Support to provide product and implementation feedback and to guide rapid resolution for engagement specific product and support issues.</p>
<p>The ideal candidate will have:</p>
<ul>
<li>6+ years of experience in data engineering, data platforms &amp; analytics.</li>
<li>Comfort writing code in either Python or Scala.</li>
<li>Working knowledge of two or more common cloud ecosystems (AWS, Azure, GCP), with expertise in at least one.</li>
<li>Deep experience with distributed computing with Apache Spark™ and knowledge of Spark runtime internals.</li>
<li>Familiarity with CI/CD for production deployments.</li>
<li>Working knowledge of MLOps.</li>
<li>Experience designing and deploying performant end-to-end data architectures.</li>
<li>Experience with technical project delivery - managing scope and timelines.</li>
<li>Documentation and white-boarding skills.</li>
<li>Experience working with clients and managing conflicts.</li>
<li>The drive to build skills in technical areas which support the deployment and integration of Databricks-based solutions to complete customer projects.</li>
</ul>
<p>Travel to customers 20% of the time.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$180,656-$248,360 USD</Salaryrange>
      <Skills>data engineering, data platforms &amp; analytics, Python, Scala, Cloud ecosystems (AWS, Azure, GCP), Apache Spark, CI/CD for production deployments, MLOps, end-to-end data architectures, technical project delivery, documentation and white-boarding skills, client management</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data, analytics, and AI.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8461258002</Applyto>
      <Location>Raleigh, North Carolina</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>8a1df8fb-ff4</externalid>
      <Title>Principal Engineer, Fin AI Agent</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Principal Engineer to join our AI Group in Berlin. As a Principal Engineer, you will be responsible for leading the development of our Fin AI agent, which is the #1 AI agent for customer service. You will partner at the strategic pillar level, having broad context across work streams and using that to inform technical strategy and investment priorities.</p>
<p>Your responsibilities will include:</p>
<ul>
<li>Partnering at the strategic pillar level to inform technical strategy and investment priorities</li>
<li>Spinning up 0-to-1 work streams, bringing together engineers who&#39;ve never worked as a team, disambiguating the problem space, building momentum under aggressive timelines, setting high expectations, and driving execution</li>
<li>Executing on the most ambiguous, highest-stakes problems, writing code, shipping features, and being deep in the weeds</li>
<li>Leading experimental work at the AI frontier, running your own A/B tests, doing prompt engineering, building evals, and calibrating accuracy, cost, and latency for LLM-powered features</li>
<li>Shaping long-term technical strategy through execution, building and thinking about what needs to change about how we build products – data models, system design, the shift from GUI-first to agent-first interfaces</li>
<li>Working across the full stack in an AI-first development environment, pushing the boundaries of what&#39;s possible with AI-assisted development and helping shape how the entire engineering org works</li>
<li>Raising the bar for the people around you, giving direct, actionable feedback that changes outcomes</li>
</ul>
<p>We&#39;re looking for someone with:</p>
<ul>
<li>Engineering depth and product thinking, combining deep engineering ability with strong product and design instincts</li>
<li>Experience operating at real scale and having builder energy, with a bias toward building over discussing</li>
<li>AI fluency, actively experimenting with AI-assisted development and pushing the boundaries of what&#39;s possible</li>
<li>Technical depth with breadth, navigating complex multi-team systems with ease</li>
<li>Communication as a superpower, explaining to leadership why a technical investment matters, aligning multiple teams around a complex project, and walking an engineer through the gnarly implementation details</li>
<li>Extreme autonomy, partnering with the Engineering Director on where you think the pillar needs to go next</li>
<li>Critical thinking about the business, understanding what Intercom is optimizing for and translating that into technical decisions</li>
<li>10+ years of experience, with significant time as a technical leader driving complex projects across multiple teams and stakeholders</li>
<li>Stack agnostic, with experience working with Ruby on Rails, React, and AWS, and being fluent with AI-assisted development tools like Claude Code</li>
</ul>
<p>If you&#39;re looking for a challenging role that will push you to grow and develop as an engineer, and you&#39;re passionate about AI and customer service, we&#39;d love to hear from you!</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Ruby on Rails, React, AWS, AI-assisted development, Claude Code, LLM-powered features, A/B testing, Prompt engineering, Evals, Accuracy, Cost, Latency</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Intercom</Employername>
      <Employerlogo>https://logos.yubhub.co/intercom.com.png</Employerlogo>
      <Employerdescription>Intercom is a customer service company that provides AI-powered solutions for businesses. It was founded in 2011 and has a significant customer base.</Employerdescription>
      <Employerwebsite>https://www.intercom.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/intercom/jobs/7725837</Applyto>
      <Location>Berlin, Germany</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>4daeb1d2-f04</externalid>
      <Title>Senior Software Engineer - Fullstack</Title>
      <Description><![CDATA[<p>We are seeking a senior software engineer to join our team in Vancouver. As a fullstack software engineer, you will work with your team and product management to make insights from data simple. You&#39;ll set the foundation for how we build robust, scalable, and delightful products.</p>
<p>We have several open roles across the teams below:</p>
<ul>
<li>Log Analytics: Our customers increasingly use Databricks to analyze petabyte-scale logs in real time, which creates new challenges across the entire data processing pipeline, including ingestion, indexing, processing, and the user experience itself.</li>
<li>AI/BI: AI/BI is redefining Business Intelligence for the AI age.</li>
<li>Unity Catalog Business Semantics: Context is everything for AI. For enterprise data, that context needs to be governed and managed, which is what Unity Catalog Business Semantics offers.</li>
<li>Databricks Apps: Databricks Apps is one of the fastest growing products at Databricks, used by more than 2,500 customers who have created more than 20,000 apps.</li>
</ul>
<p>What we look for:</p>
<ul>
<li>5+ years of experience with HTML, CSS, and JavaScript.</li>
<li>Passion for user experience and design and a deep understanding of front-end architecture.</li>
<li>Comfortable working towards a multi-year vision with incremental deliverables.</li>
<li>Motivated by delivering customer value.</li>
<li>Experience with modern JavaScript frameworks (e.g., React, Angular, Vue.js, or Ember).</li>
<li>5+ years of experience with server-side web technologies (e.g., Node.js, Java, Python, Scala, C#, C++, Go).</li>
<li>Good knowledge of SQL.</li>
<li>Experience with cloud technologies, e.g. AWS, Azure, GCP, Docker, or Kubernetes.</li>
<li>Experience developing large-scale distributed systems.</li>
</ul>
<p>Pay Range Transparency: Databricks is committed to fair and equitable compensation practices. The pay range for this role is listed below and represents the expected salary range for non-commissionable roles or on-target earnings for commissionable roles. Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location. Based on the factors above, Databricks anticipates utilizing the full width of the range. The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above.</p>
<p>Canada Pay Range: $146,200-$201,100 CAD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$146,200-$201,100 CAD</Salaryrange>
      <Skills>HTML, CSS, JavaScript, Node.js, Java, Python, Scala, C#, C++, Go, SQL, AWS, Azure, GCP, Docker, Kubernetes</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data, analytics, and AI. It was founded by the original creators of Lakehouse, Apache Spark, Delta Lake, and MLflow.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8099342002</Applyto>
      <Location>Vancouver, Canada</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>374022f0-c2a</externalid>
      <Title>Senior Software Engineer, Infrastructure - Platform Compute</Title>
      <Description><![CDATA[<p>Ready to be pushed beyond what you think you’re capable of?</p>
<p>At Coinbase, our mission is to increase economic freedom in the world.</p>
<p>We&#39;re seeking a Senior Software Engineer, Infrastructure - Platform Compute to join our team.</p>
<p>As a member of our Platform Product Group, you will be responsible for building a trusted, scalable, and compliant platform to operate with speed, efficiency, and quality.</p>
<p>Our teams build and maintain the platforms critical to the existence of Coinbase.</p>
<p>The Compute team builds and operates the Kubernetes platform at Coinbase, which is the primary compute orchestration infrastructure for services at Coinbase.</p>
<p>You will work towards continuously improving the scalability, reliability, efficiency, and operational experience of using Kubernetes at Coinbase, working closely with the Routing, Security, Reliability, and Observability teams (among many others).</p>
<p>Responsibilities:</p>
<ul>
<li>Build tooling and automation to make management of our Kubernetes clusters easy and reliable.</li>
<li>Build tooling and automation to improve the developer and operational experience of working with Kubernetes for all users.</li>
<li>Operationalize our Kubernetes platform so that it continues to be automated and self-healing, preventing unnecessary on-call burden.</li>
<li>Develop net-new Kubernetes-related capabilities for service owners at Coinbase (e.g. one-off jobs, cron, different deployment strategies, support for EFS, automated right-sizing).</li>
<li>Support our customers as they operate critical services for Coinbase in Kubernetes.</li>
</ul>
<p>Requirements:</p>
<ul>
<li>5+ years of software engineering experience, including experience with Kubernetes or similar compute orchestration systems (e.g. Mesos, Nomad)</li>
<li>Strong AWS and/or GCP infrastructure knowledge</li>
<li>Ability to build backend services in addition to infrastructure</li>
<li>A high bar for quality, a self-starter attitude, and strong interpersonal skills</li>
<li>Strong problem-solving skills: able to identify problems, determine their root cause, and see them through to solution</li>
<li>Ability to balance business needs with technical solutions</li>
<li>Experience scaling backend infrastructure</li>
</ul>
<p>Job #: P74890</p>
<p>*Answers to crypto-related questions may be used to evaluate your on-chain experience.</p>
<p>Pay Transparency Notice: Depending on your work location, the target annual base salary for this position can range as detailed below.</p>
<p>Total compensation may also include equity and bonus eligibility and benefits (including medical, dental, vision and 401(k)).</p>
<p>Annual base salary range (excluding equity and bonus):</p>
<p>$186,065-$218,900 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$186,065-$218,900 USD</Salaryrange>
      <Skills>Kubernetes, AWS, GCP, Software engineering, Compute orchestration, Automation, Backend services, Infrastructure</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Coinbase</Employername>
      <Employerlogo>https://logos.yubhub.co/coinbase.com.png</Employerlogo>
      <Employerdescription>Coinbase is a cryptocurrency exchange and wallet platform.</Employerdescription>
      <Employerwebsite>https://www.coinbase.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coinbase/jobs/7576764</Applyto>
      <Location>Remote - USA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>2b13be8f-8b4</externalid>
      <Title>Product Engineer</Title>
      <Description><![CDATA[<p>At Intercom, you will be a product engineer - someone who solves real customer problems through a smart and efficient application of your technical knowledge and your tools. You’ll be part of one of our multidisciplinary product teams, where you will build both back-end and front-end systems, and work closely with designers, product managers, researchers, and data analysts.</p>
<p>We’re facing many exciting scaling challenges and we’re building a robust platform where your expertise can be applied to areas such as building a beautiful messenger composer, rule matching, deliverability, security, app availability and machine learning, to name a few.</p>
<p>As an experienced engineer you will:</p>
<ul>
<li>Develop technical plans and contribute to our technical architecture as we scale our products to serve tens of millions of people every day.</li>
<li>Write Ruby code, which knits together the AWS, infrastructure, platform, and SaaS technologies that form the core of Intercom’s backend infrastructure.</li>
<li>Ship a change to production on your first day and a feature in your first week. That “day one” change is automatically deployed to production along with 100 other deployments (on average) each weekday.</li>
<li>Build using the best tools in the industry. We invest heavily in AI-powered developer tools that remove friction and help you focus on solving meaningful problems.</li>
<li>Grow your team’s capacity by mentoring other engineers and interviewing candidates. This is a chance to be an integral part of building and growing a team.</li>
</ul>
<p>We are a well-treated bunch, with awesome benefits! If there’s something important to you that’s not on this list, talk to us!</p>
<ul>
<li>Competitive salary and equity in a fast-growing start-up.</li>
<li>Lunch served every weekday, plus a variety of snack foods and a fully stocked kitchen.</li>
<li>Regular compensation reviews - we reward great work!</li>
<li>Pension scheme &amp; match up to 4%.</li>
<li>Peace of mind with life assurance, as well as comprehensive health and dental insurance for you and your dependents.</li>
<li>Flexible paid time off policy.</li>
<li>Paid maternity leave, as well as 6 weeks paternity leave for fathers, to let you spend valuable time with your loved ones.</li>
<li>If you’re cycling, we’ve got you covered on the Cycle-to-Work Scheme, with secure bike storage too.</li>
<li>MacBooks are our standard, but we also offer Windows for certain roles when needed.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Ruby, AWS, infrastructure, platform, SaaS technologies, high-level programming language, Distributed systems</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Intercom</Employername>
      <Employerlogo>https://logos.yubhub.co/intercom.com.png</Employerlogo>
      <Employerdescription>Intercom is an AI Customer Service company that was founded in 2011 and is trusted by nearly 30,000 global businesses.</Employerdescription>
      <Employerwebsite>https://www.intercom.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/intercom/jobs/6810055</Applyto>
      <Location>Dublin, Ireland</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>262aa1cb-01c</externalid>
      <Title>Head of Corporate Engineering</Title>
      <Description><![CDATA[<p>As Head of Corporate Engineering, you will be responsible for Enterprise engineering and operations globally. You will be responsible for building and managing a highly technical enterprise engineering team, developing first principled-based strategies, and enabling strong enterprise security.</p>
<p>Key responsibilities include engineering, securing and optimizing cloud infrastructure, Identity and Access Management, Endpoints, Collaboration tools, and ensuring compliance with SOX, PCI DSS, and FedRAMP compliance. The Head of Corporate Engineering will work closely with R&amp;D on managing engineering tools like Jira, Confluence, and GitHub, driving efficient adoption and integration.</p>
<p>Strong technical and influencing leadership principles, coupled with the ability to manage a complex, scaling, and fast-moving enterprise environment, are essential. This role reports directly to the Vice President, Infrastructure and Operations.</p>
<p>Responsibilities:</p>
<p>In this influential role, you will be responsible for:</p>
<ul>
<li>Securing the Enterprise: Working closely with the Enterprise Security organization to harden and secure our cloud environments, secret management, collaboration tools, endpoints, SaaS environments, IAM tools, and more. Success is measured in continuous improvement of our enterprise security hardening standards.</li>
<li>Building and Scaling our Cloud Infrastructure: Your team will be responsible for establishing and implementing enterprise cloud infrastructure, including Infrastructure Provisioning, SRE services, 24/7 on-call support, Infra as Code, observability, and more. In addition, you will be responsible for managing cloud budgets, vendor management, and establishing cost optimization initiatives. Success is measured in increased developer velocity while securing &amp; scaling the cloud infrastructure.</li>
<li>Engineering Tooling: Partner closely with R&amp;D teams to establish policies, configurations, run-books, SLAs, hardening, scalability, and availability of engineering tools like GitHub, Jira, Atlassian, and more.</li>
<li>Endpoint Engineering: Enable extreme automation for endpoint management with zero-touch deployment, observability (synthetic and real-time), provisioning/de-provisioning, and establishing standards/SLAs. Enforce security policies, configure &amp; manage security settings, and ensure compliance across all endpoints and mobile devices. Success is measured in terms of end-user satisfaction and % of manual touch.</li>
<li>Collaboration Management: Ensure we provide world-class tools to our employees to be extremely productive and collaborative. This includes, but is not limited to, managing and scaling internal workplace products like Gmail, Slack, Atlassian, Moveworks, Glean, and more. Success is measured by user satisfaction.</li>
<li>Identity &amp; Access Management: Manage the IAM team across IAM implementation, access standards enforcement, SLA management, and compliance with standards like FedRAMP, IL5, PCI, and more. Both internal and external identity providers are in scope. Success is measured by compliance, identity governance, and availability.</li>
</ul>
<p>Desired Success Outcomes:</p>
<ul>
<li>A high-performing enterprise engineering team capable of handling complex technical projects with agility and high quality</li>
<li>A well-defined cloud strategy ensuring the stability, scalability, and security of cloud infrastructure; an overhaul of current processes and workflows to address inefficiencies and increase team velocity</li>
<li>Robust endpoint security, with implementation of comprehensive security measures for all endpoints, including Mac, Windows, and mobile devices</li>
<li>A high-quality employee experience with productivity tools (Gmail, Slack, Atlassian tools, Moveworks, GitHub) and a robust, forward-looking roadmap</li>
<li>Efficient operational support for Tier 3 IT services with minimized production incidents; implementation of robust incident and change management processes with mature operational practice</li>
<li>Efficient and mature processes for system integrations related to Mergers and Acquisitions (M&amp;As), ensuring timely, smooth transitions during M&amp;A integrations</li>
<li>Development and implementation of automation tools and frameworks; identification of automation opportunities to reduce manual toil and improve accuracy</li>
</ul>
<p>Qualifications:</p>
<ul>
<li>10 years of experience managing cloud infrastructure at large enterprises. Extensive experience managing public cloud implementations in AWS; experience with GCP and Azure is a plus</li>
<li>In-depth understanding of cloud-native technologies to lead and guide the team. Must have hands-on experience troubleshooting and debugging issues in production environments</li>
<li>Working experience managing DevOps/SRE practices: OKRs (Objectives and Key Results), Agile development, infra-as-code, SRE (Site Reliability Engineering), and DevOps measurement such as DORA KPIs</li>
<li>In-depth understanding of each collaboration tool&#39;s features, functionalities, and configurations (e.g., Gmail for email, Slack for messaging). Ability to identify, integrate, and optimize the use of various tools for seamless collaboration (e.g., connecting Jira with GitHub for dev metrics)</li>
<li>Experience leading a team of senior professionals working asynchronously in a remote, distributed team. Strong communication skills, both verbal and written</li>
<li>Collaborative style: partners well with cross-functional teams to solve hard problems and complete complex deliverables with quality and business outcomes</li>
<li>Provides mentorship and guidance to team members to keep their skills and knowledge up to date</li>
</ul>
<p>Pay Range Transparency Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role is listed below and represents the expected salary range for non-commissionable roles or on-target earnings for commissionable roles. Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location. Based on the factors above, Databricks anticipates utilizing the full width of the range. The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above. For more information regarding which range your location is in visit our page here.</p>
<p>Zone 1 Pay Range $265,000-$364,300 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>executive</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$265,000-$364,300 USD</Salaryrange>
      <Skills>Cloud infrastructure, Identity and Access Management, Endpoint security, Collaboration tools, DevOps, Site Reliability Engineering, Agile development, Infrastructure as Code, Observability, Automation, Scripting languages, Cloud native technologies, Public cloud implementations, AWS, GCP, Azure, Jira, Confluence, GitHub, Atlassian, Moveworks, Glean, Slack, Gmail, Microsoft Office</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data, analytics, and AI.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/7293607002</Applyto>
      <Location>San Francisco, California</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>a38ec886-62e</externalid>
      <Title>AI Engineer - FDE (Forward Deployed Engineer)</Title>
      <Description><![CDATA[<p>Mission</p>
<p>The AI Forward Deployed Engineering (AI FDE) team is a highly specialized customer-facing AI team at Databricks. We deliver professional services engagements to help our customers build and productionize first-of-its-kind AI applications.</p>
<p>We work cross-functionally to shape long-term strategic priorities and initiatives alongside engineering, product, and developer relations, as well as support internal subject matter expert (SME) teams. We view our team as an ensemble: we look for individuals with strong, unique specializations to improve the overall strength of the team.</p>
<p>This team is the right fit for you if you love working with customers, teammates, and fueling your curiosity for the latest trends in GenAI, LLMOps, and ML more broadly. This role can be remote.</p>
<p>The impact you will have:</p>
<ul>
<li>Develop cutting-edge GenAI solutions, incorporating the latest techniques from our Mosaic AI research to solve customer problems</li>
<li>Own production rollouts of consumer and internally facing GenAI applications</li>
<li>Serve as a trusted technical advisor to customers across a variety of domains</li>
<li>Present at conferences such as Data + AI Summit, recognized as a thought leader internally and externally</li>
<li>Collaborate cross-functionally with the product and engineering teams to influence priorities and shape the product roadmap</li>
</ul>
<p>What we look for:</p>
<ul>
<li>Experience building GenAI applications, including RAG, multi-agent systems, Text2SQL, fine-tuning, etc., with tools such as HuggingFace, LangChain, and DSPy</li>
<li>5+ years of relevant experience as a Data Scientist, preferably in a consulting role</li>
<li>Expertise in deploying production-grade GenAI applications, including evaluation and optimization</li>
<li>Extensive hands-on industry data science experience with common machine learning and data science tools, e.g. pandas, scikit-learn, PyTorch, etc.</li>
<li>Experience building production-grade machine learning deployments on AWS, Azure, or GCP</li>
<li>Graduate degree in a quantitative discipline (Computer Science, Engineering, Statistics, Operations Research, etc.) or equivalent practical experience</li>
<li>Experience communicating and/or teaching technical concepts to non-technical and technical audiences alike</li>
<li>Passion for collaboration, life-long learning, and driving business value through AI</li>
<li>Preferred: experience using the Databricks Intelligence Platform and Apache Spark to process large-scale distributed datasets</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>GenAI, HuggingFace, LangChain, DSPy, pandas, scikit-learn, PyTorch, AWS, Azure, GCP, Apache Spark, Databricks Intelligence Platform, Mosaic AI research</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data, analytics, and AI. It was founded by the original creators of Lakehouse, Apache Spark, Delta Lake, and MLflow.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8099751002</Applyto>
      <Location>Remote - India</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>ded9d7ff-8aa</externalid>
      <Title>Senior Engineering Manager, Data Streaming Services (Auth0)</Title>
<Description><![CDATA[<p>Secure Every Identity, from AI to Human</p>
<p>Identity is the key to unlocking the potential of AI. As a Senior Engineering Manager, Data Streaming Services at Auth0, you will lead the evolution of our streaming data backbone across a multi-cloud footprint. You will oversee multiple engineering teams dedicated to making data streaming seamless, reliable, and high-performance.</p>
<p>This is a &quot;manager of managers&quot; role requiring a blend of strategic foresight, execution rigor, and technical grit. You will set the vision for our streaming services, mentor high-performing teams, and take accountability for our service uptime guarantees.</p>
<p><strong>Key Responsibilities:</strong></p>
<ul>
<li>Lead a world-class team of teams. Oversee data streaming infrastructure and services that power our global platform across AWS and Azure.</li>
<li>Own roadmap and execution. Partner with product and stakeholder teams to define the team&#39;s strategy and prioritized roadmap.</li>
<li>Drive engineering excellence. Set high standards of quality, reliability, and operational robustness, championing best practices in software development, from code reviews to observability and incident management.</li>
<li>Lead an automation-first culture. Reduce operational friction and ensure infrastructure is self-healing and code-defined. Draw efficiency from AI-assisted development.</li>
<li>Act as a technical leader. Lead response on incidents for services under ownership and help teams navigate complex distributed systems failures.</li>
</ul>
<p><strong>Requirements:</strong></p>
<ul>
<li>Proven engineering leadership, building and leading teams of teams. Experience coaching Staff+ engineers and engineering managers.</li>
<li>Strong technical and architectural acumen. Background in building scalable, distributed systems. Comfortable participating in and guiding technical discussions.</li>
<li>Strong project management skills. Expertise in creating technical roadmaps, prioritizing effectively in an agile environment, and managing complex project dependencies.</li>
<li>Collaborative leadership style, adapted to remote ways of working. Excellent written and verbal communication skills to build strong relationships with stakeholders and inspire others.</li>
</ul>
<p><strong>Bonus Points:</strong></p>
<ul>
<li>Experience developing data-intensive applications in a modern programming language such as Go, Node.js, or Java.</li>
<li>Experience with databases such as PostgreSQL and MongoDB.</li>
<li>Experience with distributed streaming platforms like Kafka.</li>
<li>Familiarity with concepts in the IAM (Identity and Access Management) domain.</li>
<li>Experience with cloud providers (AWS, Azure), container technologies such as Kubernetes and Docker, and observability tools such as Datadog.</li>
<li>Experience building reliable, high-availability platforms for enterprise SaaS applications.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$207,000-$284,000 USD</Salaryrange>
      <Skills>engineering leadership, technical and architectural acumen, project management skills, collaborative leadership style, data-intensive applications, databases, distributed streaming platforms, IAM domain, cloud providers, container technologies, observability tools, go, node.js, Java, PostgreSQL, MongoDB, Kafka, AWS, Azure, Kubernetes, Docker, Datadog</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Auth0</Employername>
      <Employerlogo>https://logos.yubhub.co/auth0.com.png</Employerlogo>
      <Employerdescription>Auth0 provides identity and authentication services for thousands of customers and millions of users.</Employerdescription>
      <Employerwebsite>https://auth0.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/okta/jobs/7719329</Applyto>
      <Location>Chicago, Illinois; New York, New York; Washington, DC</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>eed925a1-b05</externalid>
      <Title>Sr. Staff/Staff Backline Technical Solutions Engineer</Title>
      <Description><![CDATA[<p>At Databricks, we enable data teams to solve the world&#39;s toughest problems by building and running the world&#39;s best data and AI infrastructure platform. As a Backline Technical Solutions Engineer, you will help our customers succeed with the Databricks platform by resolving complex technical customer escalations and working closely with the frontline support team.</p>
<p>Your responsibilities will include:</p>
<ul>
<li>Troubleshooting and resolving complex customer issues related to the Databricks Platform by analysing core component metrics and logs</li>
<li>Providing suggestions and best practice guidance for improving performance in customer-specific environments and providing product improvement feedback</li>
<li>Helping the support team with detailed troubleshooting guides and runbooks</li>
<li>Contributing to automation and tooling programs to make daily troubleshooting efficient</li>
<li>Partnering with the engineering team and spreading awareness of upcoming features and releases</li>
<li>Identifying and contributing supportability features back into the product</li>
<li>Demonstrating ownership and coordinating with engineering and escalation teams to achieve resolution of customer issues and requests</li>
<li>Participating in weekend and weekday on-call rotation</li>
</ul>
<p>We look for candidates with 12+ years of industry experience, expertise in scripting using Python or Shell, and comfort with black box troubleshooting. Experience with supporting Java, Scala or Python based applications, distributed big data computing environments, SQL-based database systems, Linux and network troubleshooting, and cloud services such as AWS, Azure or GCP is also required.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Java, Scala, Python, Shell, Distributed Big Data Computing, SQL-based Database Systems, Linux, Network Troubleshooting, AWS, Azure, GCP</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks builds and runs the world&apos;s best data and AI infrastructure platform, serving over 10,000 organisations worldwide.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8375176002</Applyto>
      <Location>Bengaluru, India</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>10290548-1ea</externalid>
      <Title>Solutions Architect - Public Sector (LEAPS)</Title>
      <Description><![CDATA[<p>As a Solutions Architect - Public Sector at Databricks, you will be part of the Field Engineering team responsible for leading the growth of the Databricks Unified Analytics Platform. The role involves working with customers, teammates, the product team, and post-sales teams to identify use cases for Databricks, develop architectures and solutions using our platform, and guide customers through implementation to accomplish value.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Partnering with the sales team to help customers understand how Databricks can help solve their business problems</li>
<li>Providing technical leadership for customers to evaluate and adopt Databricks</li>
<li>Consulting on big data architecture, implementing proof of concepts for strategic customer projects, data science and machine learning projects, and validating integrations with cloud services and other 3rd party applications</li>
<li>Building and presenting reference architectures, how-tos, and demo applications for customers</li>
<li>Becoming an expert in, and promoting, Databricks-inspired open-source projects (Spark, Delta Lake, MLflow, and Koalas) across developer communities through meetups, conferences, and webinars</li>
<li>Traveling to customers in your region</li>
</ul>
<p>We look for candidates with 5+ years of experience in a customer-facing pre-sales, technical architecture, or consulting role, with expertise in designing and architecting distributed data systems. Experience with public cloud providers such as AWS, Azure, or GCP, data engineering technologies (e.g., Spark, Hadoop, Kafka), and data warehousing (e.g., SQL, OLTP/OLAP/DSS) is also required.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$180,000-$247,500 USD</Salaryrange>
      <Skills>Apache Spark, MLflow, Delta Lake, Python, Scala, Java, SQL, R, AWS, Azure, GCP, Data Engineering, Data Warehousing</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified analytics platform for data engineering, data analytics, and data science and machine learning.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8320126002</Applyto>
      <Location>Maryland; Virginia; Washington, D.C.</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>70e2591f-d7d</externalid>
      <Title>Technical Program Manager, Infrastructure</Title>
      <Description><![CDATA[<p>As a Technical Program Manager for Infrastructure, you&#39;ll work across multiple infrastructure domains to coordinate complex programs that have broad organisational impact. You&#39;ll be solving novel scaling challenges at the frontier of what&#39;s possible, all while maintaining the security and reliability our mission demands.</p>
<p>Developer Productivity &amp; Tooling</p>
<ul>
<li>Drive cross-functional programs to improve developer environments, CI/CD infrastructure, and release processes that enable rapid innovation while maintaining high security standards</li>
<li>Coordinate large-scale migrations and platform modernization efforts across engineering teams</li>
<li>Partner with teams to measure and improve developer productivity metrics, identifying bottlenecks and driving systematic improvements</li>
<li>Lead initiatives to integrate AI tools into development workflows, helping Anthropic be at the forefront of AI-assisted research and engineering</li>
</ul>
<p>Infrastructure Reliability &amp; Operations</p>
<ul>
<li>Drive programs to establish and achieve reliability targets across training infrastructure and production services</li>
<li>Coordinate incident response improvements, post-mortem processes, and on-call rotations that help teams operate effectively</li>
<li>Establish metrics and dashboards to track infrastructure health, capacity utilisation, and operational excellence</li>
</ul>
<p>Cross-functional Coordination</p>
<ul>
<li>Serve as the critical bridge between infrastructure teams, research, and product, translating technical complexities into clear updates for a variety of audiences</li>
<li>Consult with stakeholders to deeply understand infrastructure, data, and compute needs, identifying solutions to support frontier research and product development</li>
<li>Drive alignment on priorities and timelines across teams with competing constraints</li>
</ul>
<p>You&#39;ll be a good fit if you have 5+ years of technical program management experience, with a track record of successfully delivering complex infrastructure programs in ML/AI systems or large-scale distributed systems. You&#39;ll also need a deep technical understanding of infrastructure systems, strong stakeholder management skills, and the ability to navigate competing priorities while making data-driven technical decisions.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$290,000-$365,000 USD</Salaryrange>
      <Skills>Kubernetes, Cloud platforms (AWS, GCP, Azure), ML infrastructure (GPU/TPU/Trainium clusters), Developer productivity initiatives, CI/CD systems, Infrastructure scaling, Observability tooling and practices, AI tools to improve engineering productivity, Research teams and translating their needs into concrete technical requirements</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a company that creates reliable, interpretable, and steerable AI systems. It has a team of researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5111783008</Applyto>
      <Location>San Francisco, CA | New York City, NY | Seattle, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>2aff6a46-3ea</externalid>
      <Title>Manufacturing Software Engineer, Intelligence Systems</Title>
<Description><![CDATA[<p>As a Software Engineer in the Manufacturing Test organization, you will join a software development team tasked with ensuring that we build quality products - on land, at sea, and in the air. You will develop test executive software that can systematically and thoroughly test our products, and create analytics to improve our development cycle. You will champion automation and work to reduce operator time and instruction complexity through parallel execution, data acquisition, and automated deployment tools. You will be presented with complex, multiplatform problems with heavy reliance on cloud data systems. In this role you’ll need to think creatively and continuously improve our methods of automation, throughput, user interfaces, and data analytics.</p>
<p>This role will be based temporarily at Santa Ana, CA for a 3-month training period before transitioning to Ashville, OH.</p>
<p>Responsibilities:</p>
<ul>
<li>Develop applications for Windows and Linux desktop environments</li>
<li>Integrate cloud data and deployment features while maintaining user authentication and security</li>
<li>Generate automation scripts (python) for debug and prototype development</li>
<li>Triage issues, root cause failures, and coordinate next-steps</li>
<li>Partner with end-users to turn needs into features while balancing user experience with engineering constraints</li>
</ul>
<p>Required Qualifications:</p>
<ul>
<li>Expertise in desktop application development with WPF and C#</li>
<li>Proficient in ASP.NET, RESTful services with C# in AWS/Azure infrastructure</li>
<li>Hands-on working knowledge of a major relational database (DB2, SQL Server, etc.) and/or NoSQL</li>
<li>Experience working in CI/CD and designing and delivering DevOps automation for app deployment and testing</li>
<li>Bachelor’s degree in Computer Science, Computer Engineering, or related field</li>
<li>Experience working on multi-disciplinary projects, working closely with Electrical / Mechanical / Manufacturing Engineers</li>
<li>Eligible to obtain and maintain an active U.S. Secret security clearance</li>
</ul>
<p>Preferred Qualifications:</p>
<ul>
<li>5+ years of relevant industry experience</li>
<li>Pursuing a Master&#39;s degree in Computer Science or a related field</li>
<li>Experience with test automation or cloud deployment tools</li>
<li>Currently possesses and is able to maintain an active U.S. Secret security clearance</li>
</ul>
<p>US Salary Range $129,000-$171,000 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$129,000-$171,000 USD</Salaryrange>
      <Skills>desktop application development, WPF, C#, ASP.NET, RESTful services, AWS/Azure infrastructure, relational database, NoSql, CI/CD, DevOps automation, test automation, cloud deployment tools</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anduril</Employername>
      <Employerlogo>https://logos.yubhub.co/anduril.com.png</Employerlogo>
      <Employerdescription>Anduril is a technology company that develops advanced sensors and software for various industries.</Employerdescription>
      <Employerwebsite>https://www.anduril.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/andurilindustries/jobs/5080387007</Applyto>
      <Location>Ashville, Ohio, United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>e22b8bd1-f7a</externalid>
      <Title>Staff Product Manager, Serverless Workspaces</Title>
<Description><![CDATA[<p>At Databricks, we are building the world&#39;s best data and AI infrastructure platform to enable data teams to solve the world&#39;s toughest problems. The Serverless Workspaces team is the engine behind Databricks&#39; shift from a &#39;configure-first&#39; to a &#39;use-now&#39; platform. We are redefining the customer onboarding experience by removing the heavy lifting of cloud infrastructure: no complicated networking, storage, or cluster configuration, just instant access to data and AI.</p>
<p>You will own the strategy for this next-generation platform layer, balancing the simplicity of a SaaS experience with the control enterprise customers demand. The impact you will have:</p>
<ul>
<li>Drive the transition to Serverless: Lead the strategy to unify the onboarding journey across serverless and classic workspaces and drive 10X serverless usage in the next year</li>
<li>Democratize Workspace Creation: Design and ship flows that allow users to spin up workspaces instantly with little friction while maintaining strict governance guardrails and company policies</li>
<li>Redefine the &#39;Getting Started&#39; experience: Lower the barrier to entry by removing the requirement for customers to manage detailed cloud infrastructure configurations before using Databricks, while allowing them to dial those in when they&#39;re ready</li>
<li>Solve &#39;Workspace Proliferation&#39;: Help define the tools and policies that allow admins to confidently govern a growing number of workspaces across the enterprise</li>
<li>Unify the Data Estate: Work closely with the Unity Catalog and Identity teams to ensure that these new serverless environments seamlessly integrate with a customer&#39;s existing data and security models</li>
</ul>
<p>What we look for:</p>
<ul>
<li>7+ years of experience as a Product Manager working on cloud infrastructure, developer platforms, or SaaS foundations</li>
<li>Technical depth in Cloud Infrastructure: Familiarity with AWS, Azure, or GCP resource management (e.g. networking, compute, identity) and how to abstract that complexity for end-users</li>
<li>Passion for simplification: A track record of taking complex technical workflows (like configuring a VPC or peering) and turning them into &#39;one-click&#39; consumer-grade experiences</li>
<li>Data-driven mindset: Comfortable defining and tracking KPIs, such as &#39;Time to First Workspace&#39; or &#39;Serverless Adoption Rate,&#39; to measure success</li>
</ul>
<p>Pay Range Transparency</p>
<p>Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role is listed below and represents the expected salary range for non-commissionable roles or on-target earnings for commissionable roles. Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location. Based on the factors above, Databricks anticipates utilizing the full width of the range. The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$181,700-$249,800 USD</Salaryrange>
      <Skills>Cloud Infrastructure, Developer Platforms, SaaS Foundations, AWS, Azure, GCP, Networking, Compute, Identity</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a data and AI infrastructure platform for customers to use deep data insights to improve their business.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8420607002</Applyto>
      <Location>San Francisco, California</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>e0058690-78c</externalid>
      <Title>Senior Software Engineer, GenAI Platform</Title>
      <Description><![CDATA[<p>As a Senior Software Engineer, you will lead the development of a large-scale GenAI Platform at Reddit.</p>
<p>The Machine Learning Platform team at Reddit is a high-impact team that owns the infrastructure powering recommendations, content discovery, and user and content quantification, directly impacting teams such as Growth, Ads, Feeds, and Core Machine Learning.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Contributing to the design, implementation, and maintenance of the LLM Gateway, focusing on features like unified API endpoints for internally and externally hosted LLMs, rate/token limit management, and intelligent failover mechanisms to boost uptime and reliability.</li>
<li>Designing and developing ML and Generative AI systems in cloud-based production environments at scale.</li>
<li>Building and managing enterprise-grade RAG applications using embeddings, vector search, and retrieval pipelines.</li>
<li>Implementing and operationalizing agentic AI workflows with tool use using frameworks such as LangChain and LangGraph.</li>
<li>Driving adoption of MLOps / LLMOps practices, including CI/CD automation, versioning, testing, and lifecycle management.</li>
<li>Establishing best practices for observability, monitoring, evaluation, and governance of GenAI pipelines in production.</li>
</ul>
<p>The ideal candidate will have:</p>
<ul>
<li>5+ years of experience in ML Engineering, AI Platform Engineering, or Cloud AI Deployment roles.</li>
<li>Experience operating orchestration systems such as Kubernetes at scale.</li>
<li>Deep experience with cloud-based technologies for supporting an ML platform, including tools like AWS, Google Cloud Storage, infrastructure-as-code (Terraform), and more.</li>
<li>Proficiency with the common programming languages and frameworks of ML, such as Go, Python, etc.</li>
<li>Excellent communication skills with the ability to articulate technical AI concepts to non-technical stakeholders.</li>
<li>Strong focus on scalability, reliability, performance, and ease of use.</li>
</ul>
<p>Benefits include comprehensive healthcare benefits, income replacement programs, 401k with employer match, global benefit programs, family planning support, gender-affirming care, mental health &amp; coaching benefits, flexible vacation &amp; paid volunteer time off, and generous paid parental leave.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$190,800-$267,100 USD</Salaryrange>
      <Skills>ML Engineering, AI Platform Engineering, Cloud AI Deployment, Kubernetes, AWS, Google Cloud Storage, Terraform, Go, Python, LangChain, LangGraph</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Reddit</Employername>
      <Employerlogo>https://logos.yubhub.co/redditinc.com.png</Employerlogo>
      <Employerdescription>Reddit is a community-driven platform with over 121 million daily active unique visitors and 100,000+ active communities.</Employerdescription>
      <Employerwebsite>https://www.redditinc.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/reddit/jobs/7753480</Applyto>
      <Location>Remote - United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>5196c4ac-d97</externalid>
      <Title>Senior Software Engineer - Infrastructure and Tools</Title>
      <Description><![CDATA[<p>We are seeking a Senior Software Engineer to join our Infrastructure teams. As a key member of our team, you will build scalable systems to power the Databricks platform, making it the de-facto platform for running Big Data and AI workloads.</p>
<p>Your responsibilities will include:</p>
<ul>
<li>Building and extending components of the core Databricks infrastructure.</li>
<li>Architecting multi-cloud systems and abstractions that allow the Databricks product to run on top of existing cloud providers.</li>
<li>Improving software development workflows for engineering and operational efficiency.</li>
<li>Using our own data and AI platform to analyze build and test logs and metrics to identify areas for improvement.</li>
<li>Developing automated build, test, and release infrastructure.</li>
<li>Setting and upholding the standard for engineering processes to support high-quality engineering.</li>
</ul>
<p>To succeed in this role, you will need:</p>
<ul>
<li>A BS (or higher) in Computer Science or a related field.</li>
<li>5+ years of experience writing production code in one of Java, Scala, Go, C++, or Python.</li>
<li>A passion for building highly scalable and reliable infrastructure.</li>
<li>Experience architecting, developing, and deploying large-scale distributed systems.</li>
<li>Experience with cloud APIs and cloud technologies such as AWS, Azure, GCP, Docker, Kubernetes, or Terraform.</li>
</ul>
<p>In addition to a competitive salary, we offer comprehensive health coverage, 401(k) plan, equity awards, flexible time off, paid parental leave, family planning, gym reimbursement, annual personal development fund, work headphones reimbursement, employee assistance program, and business travel accident insurance.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$166,000-$225,000 USD</Salaryrange>
      <Skills>Java, Scala, Go, C++, Python, Cloud APIs, Cloud technologies, AWS, Azure, GCP, Docker, Kubernetes, Terraform</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data, analytics, and AI. It was founded by the original creators of Lakehouse, Apache Spark, Delta Lake, and MLflow.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/6318503002</Applyto>
      <Location>San Francisco, California</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>acef3d4c-b32</externalid>
      <Title>Security Engineer, Product Security</Title>
      <Description><![CDATA[<p>We are seeking a highly technical Security Engineer to join our Product Security team. This role is integral to ensuring the security and integrity of our products and services.</p>
<p>You will conduct in-depth code reviews, implement security best practices, and influence the overall security strategy. Your expertise in TypeScript, Python, AWS, CI/CD, SAST, DAST, and Terraform orchestration will be crucial in identifying and mitigating potential security vulnerabilities.</p>
<p>You will:</p>
<ul>
<li>Leverage broad product security expertise to build and maintain software tooling that secures every layer of the modern AI/ML software ecosystem.</li>
<li>Conduct in-depth code reviews to identify and remediate security vulnerabilities.</li>
<li>Evaluate and enhance the security of our product offerings through RFC and service reviews.</li>
<li>Implement and maintain CI/CD pipelines with a strong focus on security.</li>
<li>Perform Static Application Security Testing (SAST) and Dynamic Application Security Testing (DAST) to identify vulnerabilities in production code.</li>
<li>Use Terraform orchestration to ensure secure and efficient infrastructure management.</li>
<li>Guide engineering teams to build robust long-term solutions that consider security and privacy.</li>
<li>Clearly explain the mechanics and significance of security vulnerabilities, including their exploitability and potential impact.</li>
<li>Influence the security strategy and direction of the team, advocating for best practices and continuous improvement.</li>
</ul>
<p>Ideally, you’d have:</p>
<ul>
<li>Demonstrated ability to drive multi-month security initiatives independently, from problem definition through execution, without requiring significant direction.</li>
<li>Proven experience as a Security Engineer with a focus on product security.</li>
<li>Proficiency in Node.js, TypeScript, Python, and/or Kubernetes.</li>
<li>Strong understanding of modern JavaScript application design.</li>
<li>Production experience operating and securing AWS infrastructure at scale.</li>
<li>Hands-on experience with SAST and DAST tools and methodologies.</li>
<li>Familiarity with Terraform orchestration for infrastructure management.</li>
<li>Ability to structure complex problems and diagnose root causes independently, providing actionable insights without requiring manager input.</li>
<li>Excellent communication skills, with the ability to clearly present technical concepts and their implications to both technical and non-technical stakeholders.</li>
<li>Demonstrated ability to influence security strategies and drive improvements within a team.</li>
<li>Relevant security certifications (e.g., CISSP, CEH, OSCP) are a plus.</li>
</ul>
<p>Compensation packages at Scale for eligible roles include base salary, equity, and benefits. The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position, determined by work location and additional factors, including job-related skills, experience, interview performance, and relevant education or training. Scale employees in eligible roles are also granted equity based compensation, subject to Board of Director approval. Your recruiter can share more about the specific salary range for your preferred location during the hiring process, and confirm whether the hired role will be eligible for equity grant. You’ll also receive benefits including, but not limited to: Comprehensive health, dental and vision coverage, retirement benefits, a learning and development stipend, and generous PTO. Additionally, this role may be eligible for additional benefits such as a commuter stipend.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$237,600-$297,000 USD</Salaryrange>
      <Skills>TypeScript, Python, AWS, CI/CD, SAST, DAST, Terraform, NodeJS, Kubernetes, Modern Javascript application design</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions.</Employerdescription>
      <Employerwebsite>https://www.scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4643029005</Applyto>
      <Location>New York, NY; San Francisco, CA; Seattle, WA; Washington, DC</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>25cacbc0-046</externalid>
      <Title>Senior Analyst, Legal Operations</Title>
      <Description><![CDATA[<p>We are seeking a skilled Legal Operations Senior Analyst to enhance xAI&#39;s systems and operations by providing deep expertise in assessing and handling legal requests from government entities all over the world.</p>
<p>In this role, you will process high-volume content-removal requests under local laws (hate speech, defamation, national security, data-privacy statutes) as well as productions of user information in response to legal process (subpoenas, court orders, warrants, MLAT requests, etc.).</p>
<p>You will leverage your expertise in legal operations, regulatory compliance, and content moderation to support both day-to-day execution and the optimization of AI-driven automation. You will collaborate with technical teams to design, train, and refine AI agents, curate high-quality training data from real cases, and build tools that scale operations while maintaining accuracy and speed.</p>
<p>Responsibilities:</p>
<ul>
<li>Join an on-call rotation, working closely with other members of Safety to provide timely responses to emergency requests and proactive referrals from all over the world.</li>
<li>Handle global legal information and content-removal requests, including document intake and processing.</li>
<li>Execute and quality-control complex content-removal and user-data-production cases across multiple jurisdictions while applying and interpreting platform policies shaped by evolving legal requirements.</li>
<li>Serve as the go-to escalation point for ambiguous or high-risk legal requests, exercising sound judgment and ensuring compliance.</li>
<li>Continuously improve AI agents that automate triage, initial decisioning, redaction, compliance checks, and response workflows.</li>
<li>Create and maintain high-quality training datasets, evaluation rubrics, and feedback loops using real Legal Operations cases to enhance AI performance.</li>
<li>Identify automation opportunities and collaborate with technical teams to build end-to-end workflows using automation tools.</li>
<li>Measure and report on automation coverage, accuracy, risk reduction, and efficiency gains while training and upskilling the broader Legal Operations team.</li>
<li>Analyze complex legal and compliance problems in partnership with legal stakeholders to ensure platform rules and regulatory requirements are followed.</li>
<li>Interpret, analyze, and execute tasks based on evolving instructions and regulatory changes, maintaining precision and adaptability in partnership with cross-functional stakeholders.</li>
<li>Represent X in witness testimony or other external engagements as needed.</li>
</ul>
<p>Basic Qualifications:</p>
<ul>
<li>5+ years of hands-on professional experience in legal operations, trust &amp; safety, content moderation, compliance, or e-discovery at a major technology or social media company.</li>
<li>Demonstrated expertise in global content-removal processes and/or user-data production in response to legal requests (subpoenas, MLATs, court orders, and local law enforcement demands).</li>
<li>Proficiency in reading and writing professional English with excellent communication, interpersonal, analytical, and organizational skills.</li>
<li>Strong technical aptitude, including experience with prompt engineering, AI workflows, or automation tools in a regulated environment.</li>
<li>Excellent reading comprehension and the ability to exercise autonomous judgment with limited or ambiguous data.</li>
<li>Passion for technological advancements and using AI to amplify human expertise in legal and compliance processes.</li>
</ul>
<p>Preferred Skills and Qualifications:</p>
<ul>
<li>Relevant certification, license, or advanced training in areas such as copyright, privacy laws, child safety, hate speech, incitement, harassment, or misinformation laws by region.</li>
<li>Comfort with recording audio or video sessions for data collection.</li>
<li>Familiarity with AI workflows in a technical setting.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>legal operations, regulatory compliance, content moderation, prompt engineering, AI workflows, automation tools, copyright, privacy laws, child safety, hate speech, incitement, harassment, misinformation laws</Skills>
      <Category>Legal</Category>
      <Industry>Technology</Industry>
      <Employername>xAI</Employername>
      <Employerlogo>https://logos.yubhub.co/xai.com.png</Employerlogo>
      <Employerdescription>xAI creates AI systems to understand the universe and aid humanity in its pursuit of knowledge.</Employerdescription>
      <Employerwebsite>https://www.xai.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/xai/jobs/5090690007</Applyto>
      <Location>Bastrop, TX</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>9af8d812-df8</externalid>
      <Title>AI Infrastructure Engineer</Title>
      <Description><![CDATA[<p>We&#39;re looking for Senior+ AI Infrastructure Engineers to build the systems that train and serve Intercom&#39;s next generation of AI products.</p>
<p>As a Senior AI Infrastructure Engineer focused on model training and inference, you will:</p>
<ul>
<li>Implement and scale training pipelines for large transformer and LLM models, from data ingestion and preprocessing through distributed training and evaluation.</li>
<li>Build and optimize inference services that deliver low-latency, high-reliability experiences for our customers, including autoscaling, routing, and fallbacks.</li>
<li>Work on GPU-level performance: tuning kernels, improving utilization, and identifying bottlenecks across our training and inference stack.</li>
<li>Collaborate closely with ML scientists to implement cutting-edge training and inference methods and bring them to production.</li>
<li>Play an active role in hiring, mentoring, and developing other engineers on the team.</li>
<li>Raise the bar for technical standards, reliability, and operational excellence across Intercom’s AI platform.</li>
</ul>
<p>We’re looking to hire Senior+ AI Infrastructure Engineers. You’re likely a great fit if:</p>
<ul>
<li>You have 5+ years of experience in software engineering, with a strong track record of shipping high-quality products or platforms.</li>
<li>You hold a degree in Computer Science, Computer Engineering, or a related field (or you have equivalent experience with very strong fundamentals).</li>
<li>You have hands-on experience with one or more of the following: model training (especially transformers and LLMs), model inference at scale, or low-level GPU work, such as writing CUDA or Triton kernels.</li>
<li>You are comfortable working in production environments at meaningful scale (traffic, data, or organizational).</li>
<li>You communicate clearly, can explain complex technical topics to different audiences, and enjoy close collaboration with both engineers and non-engineers.</li>
<li>You take pride in strong technical fundamentals, love learning, and are willing to invest in your own development.</li>
<li>You have deep knowledge of at least one programming language (for example Python, Ruby, Java, or Go). Specific language experience is less important than your ability to write clean, reliable code and learn new stacks quickly.</li>
</ul>
<p>We are a well-treated bunch, with awesome benefits! If there’s something important to you that’s not on this list, talk to us!</p>
<ul>
<li>Competitive salary, annual bonus, and equity</li>
<li>Regular compensation reviews - we reward great work!</li>
<li>Unlimited access to Claude Code and best-in-class AI tools; experimentation &amp; building is encouraged &amp; celebrated</li>
<li>Generous paid time off above statutory minimum</li>
<li>Hybrid working</li>
<li>MacBooks are our standard, but we also offer Windows for certain roles when needed</li>
<li>Fun events for employees, friends, and family!</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>model training, model inference, low-level GPU work, CUDA, Triton, Python, Ruby, Java, Go, experience at AI native companies, running training or inference workloads on Kubernetes, AWS, cloud providers, production experience with Python in ML or infrastructure contexts</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Intercom</Employername>
      <Employerlogo>https://logos.yubhub.co/intercom.com.png</Employerlogo>
      <Employerdescription>Intercom is an AI company that builds customer service solutions. It was founded in 2011 and serves nearly 30,000 global businesses.</Employerdescription>
      <Employerwebsite>https://www.intercom.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/intercom/jobs/7824142</Applyto>
      <Location>Berlin, Germany</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>ae6df2c2-eb1</externalid>
      <Title>DevOps Engineer, Infrastructure &amp; Security</Title>
      <Description><![CDATA[<p>As a DevOps Engineer, Infrastructure &amp; Security at Scale, you will play a crucial role in building out and enhancing our CI/CD pipelines. Our product portfolio and customer base are expanding, and we need skilled engineers to streamline our Software Development Life Cycle (SDLC) through collaborative efforts.</p>
<p>You will:</p>
<ul>
<li>Design, develop, and maintain robust CI/CD pipelines to automate the deployment of our lowside and highside products.</li>
<li>Collaborate closely with product and engineering teams to enhance existing application code for improved compatibility and streamlined integration within automated pipelines.</li>
<li>Contribute to the overall architecture and design of our deployment systems, bringing new ideas to life for increased efficiency and reliability.</li>
<li>Troubleshoot and resolve complex deployment issues, ensuring minimal disruption to development cycles.</li>
<li>Develop a deep understanding of our product and ML architectures to facilitate seamless integration and deployment.</li>
<li>Document pipeline processes and configurations to ensure maintainability and knowledge transfer.</li>
<li>Proactively incorporate security best practices into all stages of the CI/CD pipeline, building security into our development processes.</li>
<li>Drive standardization and foster collaboration across different product teams to achieve a unified and efficient SDLC.</li>
</ul>
<p>We are looking for experienced DevOps Engineers, DevSecOps Engineers, Software Engineers with a strong focus on CI/CD, or a similar role. You should have a proven track record of building or significantly enhancing CI/CD pipelines.</p>
<p>Experience configuring and adapting application code to integrate seamlessly with evolving CI/CD environments is a plus. Familiarity with standard containerization &amp; deployment technologies like Kubernetes, Terraform, Docker, etc. is also required.</p>
<p>We offer a competitive salary range of $245,600-$307,000 USD, comprehensive health, dental and vision coverage, retirement benefits, a learning and development stipend, and generous PTO. This role may be eligible for additional benefits such as a commuter stipend.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$245,600-$307,000 USD</Salaryrange>
      <Skills>CI/CD, Kubernetes, Terraform, Docker, Python, Bash, PowerShell, Jenkins, GitLab CI, GitHub Actions, Azure DevOps, AWS, Azure, GCP, Security best practices, Containerization technologies, Machine learning lifecycles, MLOps concepts, Prior experience in classified environments</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions.</Employerdescription>
      <Employerwebsite>https://www.scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4674863005</Applyto>
      <Location>Washington, DC</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>cb18189c-d78</externalid>
      <Title>Solutions Architect (Pre-sales) - Kansai Region</Title>
      <Description><![CDATA[<p>As a Pre-sales Solutions Architect (Analytics, AI, Big Data, Public Cloud) – Kansai Region, your mission will be to drive successful technical evaluations and solution designs for some of our focus customers in the Kansai region (Osaka/Kyoto) for Databricks Japan.</p>
<p>You are passionate about data and AI, love getting hands-on with technology, and enjoy communicating its value to both technical and non-technical stakeholders. Partnering closely with Account Executives, you will lead the technical discovery, architecture design, and proof-of-concept phases, and act as a trusted advisor to our customers on their data and AI strategy.</p>
<p>You will help customers realize tangible, data-driven outcomes on the Databricks Lakehouse Platform by guiding data and AI teams to design, build, and operationalize solutions within their enterprise ecosystem.</p>
<p>Responsibilities:</p>
<ul>
<li>Be a Big Data Analytics expert on aspects of architecture and design</li>
<li>Lead your prospects through evaluating and adopting Databricks</li>
<li>Support your customers by authoring reference architectures, how-tos, and demo applications</li>
<li>Integrate Databricks with 3rd-party applications to support customer architectures</li>
<li>Engage with the technical community by leading workshops, seminars, and meet-ups</li>
</ul>
<p>What we look for:</p>
<ul>
<li>Pre-sales or post-sales experience working with external clients across a variety of industry markets</li>
<li>Experience in a customer-facing pre-sales or consulting role, with a core strength in either Data Engineering or Data Science, is advantageous</li>
<li>Experience demonstrating technical concepts, including presenting and whiteboarding</li>
<li>Experience designing and implementing architectures within public clouds (AWS, Azure, or GCP)</li>
<li>Experience with Big Data technologies, including Apache Spark, AI, Data Science, Data Engineering, Hadoop, Cassandra, and others</li>
<li>Fluent coding experience in Python or Scala implementing Apache Spark; Java and R are also desirable</li>
<li>Experience working with Enterprise Accounts</li>
<li>Written and verbal fluency in Japanese</li>
</ul>
<p>Benefits:</p>
<p>At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of our employees. Specific benefits vary by region.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Big Data Analytics, Apache Spark, AI, Data Science, Data Engineering, Hadoop, Cassandra, Python, Scala, Java, R, Public Cloud, AWS, Azure, GCP</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a data intelligence platform to unify and democratize data, analytics, and AI. It was founded by the original creators of Lakehouse, Apache Spark, Delta Lake, and MLflow.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8437028002</Applyto>
      <Location>Japan</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>b6611499-8b7</externalid>
      <Title>AI Identity Architect</Title>
<Description><![CDATA[<p>Secure Every Identity, from AI to Human</p>
<p>Identity is the key to unlocking the potential of AI. Okta secures AI by building the trusted, neutral infrastructure that enables organisations to safely embrace this new era. This work requires a relentless drive to solve complex challenges with real-world stakes. We are looking for builders and owners who operate with speed and urgency and execute with excellence. This is an opportunity to do career-defining work. We&#39;re all in on this mission. If you are too, let&#39;s talk.</p>
<p>The Identity Team</p>
<p>The Identity team&#39;s mission is to strengthen Okta&#39;s position as the leading Identity-as-a-Service solution by identifying and resolving risks to the employees, product, and most importantly, our customers. With the ever-increasing pace of cloud application adoption, companies are struggling to find ways to accurately assess risk and act at the speed of their business.</p>
<p>The AI Identity Architect Opportunity</p>
<p>Reporting to the VP of Identity &amp; Access Management, this role will be an AI Identity Pioneer, not just an IAM expert. Your &quot;been there, done that&quot; experience in securing autonomous agents at scale is your superpower. You&#39;ve seen how traditional OAuth flows break under agentic pressure, you&#39;ve felt the pain of &quot;Secret Zero&quot; in a LangChain loop, and you know exactly where the industry&#39;s current tools fall short. At Okta, you won&#39;t just implement security; you will use your battle-tested experience to drive the product features needed to secure the next generation of identities.</p>
<p>The AI Identity Architect&#39;s mission is to own Okta&#39;s enterprise identity strategy for autonomous AI agents. As Customer Zero, you will implement Okta on Okta, validating identity patterns at production scale, feeding direct input into product roadmaps, and partnering with business units building internal agentic systems.</p>
<p>What you&#39;ll be doing</p>
<p>Product Vision &amp; Architecture (The &quot;Ratified R0&quot;)</p>
<ul>
<li>Drive the Roadmap: Act as a primary stakeholder for Okta&#39;s product teams. Translate your real-world experience securing agents into prioritized feature requests and product requirements.</li>
<li>Target State: Define a multi-year roadmap for Non-Human Identities (NHIs) and AI Agents aligned with Zero Trust (NIST 800-207) and Okta&#39;s Secure Identity Commitment.</li>
<li>Posture First: Use ISPM (Identity Security Posture Management) to discover unmanaged AI agents and eliminate &quot;Identity Debt&quot; across the enterprise.</li>
</ul>
<p>Cross-App Access &amp; Brokered Delegation</p>
<ul>
<li>Agent-to-App Connectivity: Architect secure Cross-App Access patterns where agents act as intermediaries between enterprise systems.</li>
<li>Delegated Authority: Refine how user identity is &quot;brokered&quot; to an agent (e.g. OAuth2 Token Exchange), ensuring the agent never has more power than the human user who triggered it.</li>
<li>Session Scoping: Implement context-bound, short-lived tokens to prevent lateral movement by a compromised agent.</li>
</ul>
<p>Okta Customer Zero: Validate and publish patterns using Okta primitives to secure the AI lifecycle for:</p>
<ul>
<li>Okta Identity Engine &amp; Auth0: Define how AI agents prove their identity within AuthN/AuthZ core concepts, implementing rigorous protocols for secure access delegation like OAuth2/OIDC, mTLS, and SPIFFE/SPIRE for workload attestation.</li>
<li>Okta Privileged Access: Implement JIT/JEA access and ephemeral, vaulted secrets for agent tool-use.</li>
<li>Okta Identity Governance &amp; Workflows: Automate the Joiner-Mover-Leaver (JML) lifecycle for agents, including automated certification and revocation.</li>
<li>Fine-Grained Authorization: Implement ReBAC for intent-bound decisions (e.g., &quot;Can this agent access the Finance API on behalf of the CFO?&quot;).</li>
</ul>
<p>Serve as &quot;Customer Zero&quot; by architecting and stress-testing internal AI security frameworks, translating real-world deployment lessons into a continuous stream of public-facing white papers, blogs, and technical guides to steer industry best practices.</p>
<p>AI Ecosystem &amp; Tech Stack Integration</p>
<p>Define how Okta identity is woven into modern AI orchestration layers:</p>
<ul>
<li>Orchestration: Secure identity patterns for frameworks such as LangChain, LangGraph, AutoGPT, CrewAI, LlamaIndex, and Semantic Kernel.</li>
<li>Model Providers: Architect secure connectivity to AI model providers such as Azure OpenAI, AWS Bedrock, Google Vertex AI, OpenAI API, and Anthropic.</li>
</ul>
<p>What you&#39;ll bring to the role</p>
<ul>
<li>The &quot;Been There&quot; Factor: Proven track record of securing AI agents and non-human identities in a production environment.</li>
<li>Experience: 7+ years in IAM/Security Architecture; proven strategy work across workforce, customer, and Non-Human Identities (NHIs).</li>
<li>Deep knowledge of the core protocols: OAuth2/OIDC (especially Token Exchange), SAML, mTLS, JWT, and Model Context Protocol (MCP).</li>
<li>Hands-on experience with the SPIFFE/SPIRE modern identity framework.</li>
<li>Ability to author Architecture Decision Records (ADRs) and influence at the VP/CTO level, while simultaneously acting as a peer to Product Management.</li>
</ul>
<p>Extra credit if you have experience in any of the following:</p>
<ul>
<li>Prior work shaping identity strategy for autonomous/agent systems, multi-agent delegation, or brokered access patterns.</li>
<li>Exposure to policy-as-code (OPA/Cedar) and service-mesh identity.</li>
<li>Certifications such as CISSP-ISSAP, CCSP, or TOGAF are welcome but not required or expected.</li>
</ul>
<p>#LI-SM1 #LI-Hybrid P21621_3398002</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$242,000-$332,000 USD</Salaryrange>
      <Skills>OAuth2/OIDC, SAML, mTLS, JWT, Model Context Protocol (MCP), SPIFFE/SPIRE, Architecture Decision Records (ADR), Policy-as-code (OPA/Cedar), Service-mesh identity, LangChain, LangGraph, AutoGPT, CrewAI, LlamaIndex, Semantic Kernel, Azure OpenAI, AWS Bedrock, Google Vertex AI, OpenAI API, Anthropic</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Okta</Employername>
      <Employerlogo>https://logos.yubhub.co/okta.com.png</Employerlogo>
      <Employerdescription>Okta provides identity management solutions for businesses.</Employerdescription>
      <Employerwebsite>https://www.okta.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/okta/jobs/7749222</Applyto>
      <Location>San Francisco, California</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>cd02d1a1-0e8</externalid>
      <Title>Communications Lead, Claude Code</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Communications Lead to own comms for Claude Code. You&#39;ll sit on the Product Communications team, working day-to-day with the Claude Code product team, developer relations, and marketing.</p>
<p>The media landscape for developer tools doesn&#39;t look like it did five years ago. We need someone who understands both traditional press and the channels where developers form opinions. You might have come up through an in-house comms team, or you might have run launches inside product marketing, handled press from a DevRel role, or found your way to this work from somewhere adjacent.</p>
<p>You should be a Claude Code user yourself and know the product well.</p>
<p>Responsibilities:</p>
<ul>
<li>Own communications for Claude Code, from the big launches to the steady rhythm of updates, community moments, and everything in between</li>
<li>Build and maintain strong relationships with journalists, newsletter writers, podcasters, and creators covering dev tools and the AI ecosystem</li>
<li>Lead cross-functional product launch communications, coordinating messaging across comms, marketing, developer relations, and product</li>
<li>Advise leadership and DevRel when things move fast or catch fire, whether it’s an incident or a community thread</li>
<li>Translate complex technical work into stories that land with developers and still make sense to broader audiences</li>
<li>Develop messaging frameworks and content strategies that work across technical and non-technical audiences</li>
<li>Prepare Claude Code engineers and product leads for external moments: podcasts, talks, press, etc.</li>
<li>Think across channels (press, social, community, owned) and know which lever to pull for each moment</li>
<li>Pay attention to what&#39;s actually working and build the program from there</li>
</ul>
<p>You may be a good fit if you:</p>
<ul>
<li>Have 8–12 years of experience in communications, PR, or developer marketing, with meaningful time focused on technical products or developer audiences</li>
<li>Use Claude Code heavily and can talk specifically about how you use it in your day-to-day</li>
<li>Are high-agency and low-ego, with a bias to action</li>
<li>Write clearly and concisely, whether it&#39;s a launch post or a cross-functional update; a lot of context moves through this role, and people need to be able to follow it</li>
<li>Have a deep understanding of both traditional media channels and the emerging platforms where technical communities engage</li>
<li>Are very online, follow the right people, know what&#39;s moving through Hacker News and developer social chatter, and catch things early</li>
<li>Have real fluency in developer culture and know how trust gets earned there</li>
</ul>
<p>Strong candidates may also:</p>
<ul>
<li>Have experience at developer tools companies, infrastructure products, or open source projects</li>
<li>Have an existing network in developer media, technical journalism, or the creator space</li>
<li>Have experience managing communications for AI or ML products</li>
</ul>
<p>The annual compensation range for this role is $185,000-$255,000 USD.</p>
<p>Logistics</p>
<ul>
<li>Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience</li>
<li>Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience</li>
<li>Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position</li>
<li>Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</li>
<li>Visa sponsorship: We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</li>
</ul>
<p>We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you&#39;re interested in this work.</p>
<p>Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you&#39;re ever unsure about a communication, don&#39;t click any links; visit anthropic.com/careers directly for confirmed position openings.</p>
<p>How we&#39;re different</p>
<p>We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact, advancing our long-term goals of steerable, trustworthy AI, rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We&#39;re an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.</p>
<p>The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI &amp; Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.</p>
<p>Come work with us!</p>
<p>Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$185,000-$255,000 USD</Salaryrange>
      <Skills>communications, PR, developer marketing, technical products, developer audiences, AI, ML, GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI &amp; Compute, Concrete Problems in AI Safety, Learning from Human Preferences</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation that creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5153586008</Applyto>
      <Location>San Francisco, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>65befd80-0e2</externalid>
      <Title>Staff Software Engineer</Title>
<Description><![CDATA[<p>We&#39;re seeking an experienced Staff-level backend software engineer to join our Live Pay team. You&#39;ll work cross-functionally with various teams and contribute to the design and development of key platform services. You must be strong in JVM languages and event-driven architecture on AWS.</p>
<p>The Canada base salary range for this full-time position is $252,000-$308,000, plus equity and benefits. Our salary ranges are determined by role, level, and location. This role will be hybrid from our Vancouver, CAN office, with 2 days a week in the office required.</p>
<p>Responsibilities:</p>
<ul>
<li>Drive the design and implementation of new features. Break down complex problems into their bare essentials, translate this complexity into elegant design, and create high-quality, clean code.</li>
<li>Make a meaningful impact on the lives of our community members.</li>
<li>Design, develop, and deliver large-scale systems.</li>
<li>Collaborate with and mentor other engineers while providing thoughtful guidance through code, design, and architecture reviews.</li>
<li>Contribute to defining technical direction, planning the roadmap, escalating issues, and synthesizing feedback to ensure team success.</li>
<li>Estimate and manage team project timelines and risks.</li>
<li>Care passionately about producing high-quality, efficient designs and code.</li>
<li>Constantly learn about new technologies and industry standards in software engineering.</li>
<li>Work cross-functionally with other teams, including analytics, design, product, marketing, and data science.</li>
</ul>
<p>Requirements:</p>
<ul>
<li>7+ years of experience in backend software development</li>
<li>Bachelor&#39;s, Master&#39;s, or PhD in computer science, computer engineering, or a related technical discipline, or equivalent industry experience</li>
<li>Proficiency in at least one modern programming language, such as Java, Kotlin, Scala, or C#, and experience with at least one major framework such as Spring, Spring Boot, or ASP.NET Core</li>
<li>Hands-on experience working in cloud environments: AWS, GCP, or Azure</li>
<li>Proficiency in event-driven systems such as Kafka, SQS, SNS, or Kinesis, and experience designing and operating scalable distributed systems</li>
<li>Knowledge of professional software engineering practices and best practices for the full software development life cycle, including coding standards, code reviews, source control management, build processes, testing, and operations</li>
<li>Hands-on experience working with various databases, such as DynamoDB, MySQL, and Elasticsearch</li>
<li>Experience using AI-assisted development tools (e.g., Copilot, Cursor, LLMs) to improve engineering productivity</li>
<li>Experience with continuous integration and delivery tools, and experience developing and executing functional and integration tests</li>
<li>Familiarity with a clean architecture approach and software craftsmanship</li>
<li>Experience with Kubernetes and microservice architecture is a strong plus</li>
<li>Excellent written and verbal communication skills</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$252,000-$308,000</Salaryrange>
      <Skills>Java, Kotlin, Scala, C#, Spring, Spring Boot, ASP.NET Core, AWS, GCP, Azure, Kafka, SQS, SNS, Kinesis, DynamoDB, MySQL, ElasticSearch, AI-assisted development tools, Continuous integration and delivery tools, Clean architecture approach, Software craftsmanship, Kubernetes, Microservice architecture</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>EarnIn</Employername>
      <Employerlogo>https://logos.yubhub.co/earnin.com.png</Employerlogo>
      <Employerdescription>EarnIn is a pioneer of earned wage access, delivering real-time financial flexibility for individuals living paycheck to paycheck. It has a healthy core business with a significant runway.</Employerdescription>
      <Employerwebsite>https://www.earnin.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/earnin/jobs/7680387</Applyto>
      <Location>Vancouver, Canada</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>a0373d52-7fe</externalid>
      <Title>Senior IAM Engineer</Title>
      <Description><![CDATA[<p>We are looking for a Senior IAM Engineer to join our team. As a Senior IAM Engineer, you will play a critical role in securing our systems and data. You will have the opportunity to work with cutting-edge IAM technologies, collaborate with cross-functional teams, and influence the development of our IAM strategy.</p>
<p>Your primary focus will be on designing and implementing identity lifecycle management, integration and orchestration, access governance, security and compliance, custom tooling, and data and AI infrastructure support. You will also be responsible for collaborating with cross-functional teams, improving provisioning and deprovisioning processes, integrating and managing IdPs within the IAM system, handling and streamlining access requests, developing and implementing IAM policies and procedures, and responding to ad-hoc requests.</p>
<p>To be successful in this role, you will need to have a strong understanding of identity lifecycle management, directory services, SSO, MFA, SCIM provisioning, and federation (SAML, OIDC, OAuth). You will also need to have experience partnering with HR, Finance, Compliance, and other cross-functional teams to design and implement IAM and enterprise solutions.</p>
<p>Additional skills and experience we&#39;d prioritize include experience with Workato or similar integration orchestrator tools, experience with Okta Workflows, certifications such as Workato or Okta Certified Professional/Administrator/Consultant, experience integrating IAM with HR systems, knowledge of compliance requirements related to IAM, and background in cloud platforms (AWS, GCP, Azure) and IAM integrations.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Scripting, Automation Mindset, APIs, Infrastructure as Code, Security Mindset, Identity and Access Management, Okta, Workday, Google Workspace, SCIM provisioning, Federation (SAML, OIDC, OAuth), Directory services, SSO, MFA, Workato, Okta Workflows, Certifications (Workato or Okta Certified Professional/Administrator/Consultant), Experience integrating IAM with HR systems, Knowledge of compliance requirements related to IAM, Background in cloud platforms (AWS, GCP, Azure) and IAM integrations</Skills>
      <Category>Engineering</Category>
      <Industry>Healthcare</Industry>
      <Employername>Komodo Health</Employername>
      <Employerlogo>https://logos.yubhub.co/komodohealth.com.png</Employerlogo>
      <Employerdescription>Komodo Health is a healthcare technology company that aims to reduce the global burden of disease by providing a comprehensive view of the US healthcare system.</Employerdescription>
      <Employerwebsite>https://www.komodohealth.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/komodohealth/jobs/8393728002</Applyto>
      <Location>India</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>af586166-0a0</externalid>
      <Title>Technical Solutions Specialist, Data Operations</Title>
      <Description><![CDATA[<p>In Data Operations on the Strategic Data Partnerships team at Anthropic, you will support a cross-functional team in implementing partnership strategies to improve Anthropic’s products. You’ll ensure data meets our standards and reaches the right teams, build systems to track compliance and data usage across the portfolio, and coordinate across Research, Product, Legal, and external partners to remove barriers and accelerate impact.</p>
<p>This role requires operational excellence combined with technical hands-on execution, and is a great fit for someone who wants to apply those skills in a high-impact, fast-growth context.</p>
<p>Responsibilities:</p>
<p>Data Opportunity Assessment and Processing</p>
<ul>
<li>Analyze and review incoming or prospective data to verify it is useful and strategic for Anthropic</li>
<li>Own and maintain Python-based ETL pipelines that process large partner datasets, applying filtering criteria and deduplicating against existing data</li>
<li>Write and optimize SQL queries against large relational databases to support filtering and analysis workflows</li>
<li>Refine processing logic as requirements evolve across new data types and formats</li>
</ul>
<p>Data Delivery Infrastructure, Tooling, and Support</p>
<ul>
<li>Own end-to-end data delivery workflows, ensuring data moves seamlessly from partners to internal teams to accelerate time-to-impact</li>
<li>Manage AWS and GCP resources for receiving and organizing partner data deliveries</li>
<li>Troubleshoot delivery issues, coordinate with partners on formatting and transfer protocols, and resolve technical escalations from partners and internal teams</li>
<li>Build and maintain internal systems, scripts, and automation that support the team’s workflows</li>
<li>Support occasional research evaluation tasks as needed</li>
</ul>
<p>Data Operations and Governance</p>
<ul>
<li>Develop and maintain Anthropic&#39;s preferred standards for receiving, consuming and cataloging data, ensuring alignment with Product and Engineering&#39;s evolving needs</li>
<li>Contribute to systems for monitoring data usage and compliance with partner agreements</li>
<li>Partner with teammates and cross-functional stakeholders to build out governance practices as the team scales</li>
</ul>
<p>You May Be a Good Fit If You Have</p>
<ul>
<li>Bachelor’s degree in Engineering, Computer Science, a related field, or equivalent practical experience</li>
<li>5-7+ years of experience with data pipelines or data engineering workflows</li>
<li>Background in solutions engineering, partner engineering or related role at a large tech company</li>
<li>5+ years of experience in technical troubleshooting or writing code in one or more programming languages</li>
<li>Proficiency in Python and SQL, including writing, debugging, and optimizing scripts and queries against large datasets</li>
<li>Hands-on experience with cloud infrastructure (AWS, GCP, or Azure), including managing storage, configuring access, and working from the CLI</li>
<li>Excellent problem-solving skills with a track record of debugging technical issues, whether at the code level or within a broader system</li>
<li>Some experience interacting with external third parties delivering data</li>
</ul>
<p>Strong Candidates Will Have</p>
<ul>
<li>Experience working alongside technical teams (research, engineering, or product) to solve ambiguous problems</li>
<li>Ability to translate technical concepts into clear, actionable guidance for non-technical stakeholders or external partners</li>
<li>Experience owning or maintaining a production service or system with uptime expectations</li>
<li>Familiarity with data governance, compliance, or rights management</li>
<li>Ability to manage multiple, time-sensitive projects simultaneously and the drive to take a project from an initial idea to full completion</li>
<li>Experience leveraging AI to automate workflows</li>
</ul>
<p>Candidates Need Not Have</p>
<ul>
<li>Deep expertise in AI or machine learning</li>
<li>A pure software engineering background</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$205,000-$240,000 USD</Salaryrange>
      <Skills>Python, SQL, Cloud infrastructure (AWS, GCP, or Azure), Data pipelines, Data engineering workflows, Solutions engineering, Partner engineering</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a technology company focused on creating reliable, interpretable, and steerable AI systems. It employs a team of researchers, engineers, policy experts, and business leaders.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5056499008</Applyto>
      <Location>San Francisco, CA | New York City, NY</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>bfddfcc3-e38</externalid>
      <Title>Senior Software Engineer, Public Sector</Title>
<Description><![CDATA[<p>As a Senior Software Engineer, you will lead the development of a vertical feature or a horizontal capability, from defining requirements with stakeholders through implementation, until it is accepted by those stakeholders.</p>
<p>You will:</p>
<ul>
<li>Lead the design and implementation of scalable backend systems and distributed architectures for Federal customers</li>
<li>Manage the full lifecycle of feature development, from requirement definition to deployment on classified networks</li>
<li>Direct the orchestration of asynchronous agent fleets to meet mission requirements</li>
<li>Lead customer engagements to translate mission needs into technical requirements</li>
<li>Own the communication with stakeholders to ensure implementation meets defined acceptance criteria</li>
<li>Conduct technical reviews and identify risks within machine learning infrastructure and model serving</li>
<li>Drive the platform roadmap by providing technical specifications for Federal product offerings</li>
</ul>
<p>Ideally you will have:</p>
<ul>
<li>Full Stack Development: Proficiency in front-end and back-end development and infrastructure, including experience with modern web development frameworks, programming languages, and databases</li>
<li>Cloud-Native Technologies: Familiarity with cloud platforms (e.g., AWS, Azure, GCP) and experience developing and deploying applications in a cloud-native environment. Understanding of containerization (e.g., Docker) and container orchestration (e.g., Kubernetes) is a plus</li>
<li>Data Engineering: Knowledge of ETL (Extract, Transform, Load) processes and experience building data pipelines to integrate and process diverse data sources. Understanding of data modeling, data warehousing, and data governance principles</li>
<li>AI Application Integration: Familiarity with integrating Large Language Models (LLMs) and building agentic workflows. Understanding of prompt engineering, retrieval-augmented generation (RAG), and agent orchestration is beneficial</li>
<li>Problem Solving: Strong analytical and problem-solving skills to understand complex challenges and devise effective solutions. Ability to think critically, identify root causes, and propose innovative approaches to overcome technical obstacles</li>
<li>Collaboration and Communication: Excellent interpersonal and communication skills to collaborate effectively with cross-functional teams, stakeholders, and customers. Ability to clearly articulate technical concepts to non-technical audiences and foster a collaborative work environment</li>
<li>Adaptability and Learning Agility: Willingness to embrace new technologies, learn new skills, and adapt to evolving project requirements. Ability to quickly grasp and apply new concepts and stay up to date with emerging trends in software engineering</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$216,000-$311,000 USD (San Francisco, New York, Seattle); $194,400-$279,000 USD (Hawaii, Washington DC, Texas, Colorado); $162,400-$233,000 USD (St. Louis)</Salaryrange>
      <Skills>Full Stack Development, Cloud-Native Technologies, Data Engineering, AI Application Integration, Problem Solving, Collaboration and Communication, Adaptability and Learning Agility, Docker, Kubernetes, AWS, Azure, GCP, ETL, data modeling, data warehousing, data governance, Large Language Models, prompt engineering, retrieval-augmented generation, agent orchestration</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions.</Employerdescription>
      <Employerwebsite>https://www.scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4674911005</Applyto>
      <Location>San Francisco, CA; St. Louis, MO; New York, NY; Washington, DC</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>c00acb7b-2dd</externalid>
      <Title>CX Tooling Specialist</Title>
      <Description><![CDATA[<p>We&#39;re seeking a CX Tooling Specialist to join our Site Operations team, ensuring seamless operational efficiency, reliability, and user lifecycle management across Customer Care systems and adjacent tooling.</p>
<p>As a CX Tooling Specialist, you&#39;ll sit at the intersection of CX Operations, IT, Security, and Engineering, administering key platforms, driving intake and incident workflows, and maintaining audit/compliance rigor that keeps our tools secure and our teams productive.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Intake &amp; triage: owning intake and triage for Site Ops requests via our Jira Customer Portal, including general requests, tooling access, and after-hours urgent support.</li>
<li>Agentic workflows: implementing end-to-end agentic workflows that autonomously intake, triage, and resolve routine Site Ops and CX issues.</li>
<li>Tooling administration: administering and supporting CX tooling and adjacent applications, such as Zendesk/SunCo, AWS Connect, Sprout Social, Assembled, and Google Analytics.</li>
<li>User lifecycle &amp; Okta hygiene: executing BPO and internal lifecycle operations, maintaining source-of-truth rosters and access records, and ensuring changes are auditable.</li>
<li>Audits &amp; remediation: running recurring user/access audits across Site Ops-administered tools, documenting findings, and closing the loop with stakeholders and Security.</li>
<li>Vendor escalations &amp; incident support: coordinating vendor escalations end-to-end, capturing context, performing initial troubleshooting, opening/tracking external tickets, and implementing fixes or workarounds with partners.</li>
<li>Cross-functional change coordination: partnering with Engineering, IT, and Security on configuration changes, release gating, and change validation.</li>
<li>Documentation &amp; enablement: creating and maintaining high-quality runbooks, SOPs, tooling pages, and training content for internal teams and BPO partners.</li>
<li>Operational health &amp; telemetry: monitoring operational health signals across owned tools, proactively addressing reliability, access, and performance issues affecting agents or customers, and escalating via defined incident paths.</li>
</ul>
<p>We&#39;re looking for someone with hands-on experience administering CX platforms, supporting production-grade support environments, and proficiency with operational intake and tracking in Jira. Experience with AWS Connect or similar cloud contact center platforms, building/implementing agentic workflows, and familiarity with learning and workforce platforms used in support orgs are preferred.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Zendesk/SunCo, AWS Connect, Sprout Social, Assembled, Google Analytics, Jira, Okta, identity/access automation, scripting/automation, agentic workflows, learning and workforce platforms, operational reporting</Skills>
      <Category>Operations</Category>
      <Industry>Technology</Industry>
      <Employername>EarnIn</Employername>
      <Employerlogo>https://logos.yubhub.co/earnin.com.png</Employerlogo>
      <Employerdescription>EarnIn is a pioneer of earned wage access, providing real-time financial flexibility for individuals living paycheck to paycheck.</Employerdescription>
      <Employerwebsite>https://www.earnin.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/earnin/jobs/7766545</Applyto>
      <Location>Mexico City, Mexico</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>35458586-a42</externalid>
      <Title>Enterprise Architect, Finance &amp; Legal Systems</Title>
      <Description><![CDATA[<p>We are seeking an experienced Enterprise Architect to join our Technology, Data and Intelligence team. As an Enterprise Architect, you will be responsible for defining and delivering the technology architecture strategy across Finance and Legal functions, enabling data-driven decision-making, automation, and operational excellence.</p>
<p>Key responsibilities will include:</p>
<ul>
<li>Defining the target-state architecture for Finance and Legal applications, ensuring alignment with enterprise strategy and growth objectives.</li>
<li>Leading the design and implementation of end-to-end architectural solutions for Finance and Legal systems, ensuring integration, scalability, and performance across the enterprise.</li>
<li>Developing and maintaining a multi-year roadmap for modernization across ERP, FP&amp;A, Legal, and Sales Compensation systems.</li>
<li>Ensuring systems are designed with identity-first security principles, integrating with Okta and other IAM solutions for authentication, authorization, and compliance.</li>
</ul>
<p>The ideal candidate will have:</p>
<ul>
<li>15+ years of software engineering experience, including significant time as an Architect or Principal in ERP systems (Oracle/NetSuite/SAP), FP&amp;A systems (Anaplan), and/or CLM systems (Apttus/Conga/Ironclad).</li>
<li>Excellent storytelling and communication skills, comfortable presenting to both technical and executive stakeholders.</li>
<li>Multiple full-cycle ERP (Oracle or NetSuite) implementation experience.</li>
<li>Deep understanding of the Finance business process areas – Order to Cash, Record to Report, Source to Pay, Plan to Report (FP&amp;A), Treasury, Credit and Collections, Revenue Recognition, and Subscription Billing, as well as Contract Lifecycle Management within Legal Ops.</li>
<li>Demonstrated hands-on experience architecting functional and technical solutions within major business applications, with specific expertise in NetSuite (or Oracle), Apttus/Conga (or Ironclad), Anaplan, Coupa, Scout, and tax engines such as Avalara, Vertex, or OneSource – including understanding their data models and APIs in the context of solution development and integrations.</li>
<li>Experience architecting and delivering AI agents using leading LLMs such as Gemini, OpenAI models, or Claude.</li>
<li>Experience managing software and/or vendor selection with the enterprise&#39;s end-state architecture in view.</li>
<li>Proficient understanding of middleware such as MuleSoft, Workato, Boomi, or Informatica for connecting Finance, Legal, CRM, and data platforms.</li>
<li>Familiarity with code, configuration, and system performance standards/reviews to ensure quality, scalability, and compliance with enterprise standards.</li>
<li>Proficiency with AWS, Azure, or GCP, with knowledge of data lakes/warehouses (Snowflake, Redshift, BigQuery) for SaaS revenue and compliance analytics.</li>
<li>Identity &amp; Security: knowledge of SSO, OAuth, SAML, SCIM, and Zero Trust principles, with hands-on integration experience in Okta or similar IAM platforms.</li>
</ul>
<p>In addition to the above skills and experience, the ideal candidate will be passionate about innovation, AI adoption, and continuous improvement aligned with Okta’s mission to build secure, intelligent, and connected business systems.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$150,000 - $250,000 per year</Salaryrange>
      <Skills>Enterprise Architecture, Cloud Computing, Identity and Access Management, Security, Data Analytics, Machine Learning, Artificial Intelligence, Software Development, DevOps, Agile Methodologies, AWS, Azure, GCP, Snowflake, Redshift, BigQuery, MuleSoft, Workato, Boomi, Informatica</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Okta</Employername>
      <Employerlogo>https://logos.yubhub.co/okta.com.png</Employerlogo>
      <Employerdescription>Okta is a cloud-based identity and access management company that provides secure authentication and authorisation services to organisations.</Employerdescription>
      <Employerwebsite>https://www.okta.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/okta/jobs/7442186</Applyto>
      <Location>San Francisco, California</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>8b106bca-f53</externalid>
      <Title>Senior Product Engineer</Title>
      <Description><![CDATA[<p>At Intercom, you will be a product engineer - someone who solves real customer problems through a smart and efficient application of your technical knowledge and your tools.</p>
<p>You’ll be part of one of our multidisciplinary product teams, where you will build both back-end and front-end systems, and work closely with designers, product managers, researchers, and data analysts.</p>
<p>We’re facing many exciting scaling challenges and we’re building a robust platform where your expertise can be applied to areas such as building a beautiful messenger composer, rule matching, deliverability, security, app availability and machine learning, to name a few.</p>
<p>As an experienced engineer you will:</p>
<ul>
<li>Develop technical plans and contribute to our technical architecture as we scale our products to serve tens of millions of people every day.</li>
<li>Write Ruby code, which knits together a lot of AWS, infrastructure, platform, and SaaS technologies that form the core of Intercom’s backend infrastructure.</li>
<li>Ship a change to production on your first day and a feature in your first week. That “day one” change is automatically deployed to production along with 100 other deployments (on average) each weekday.</li>
<li>Build using the best tools in the industry. We invest heavily in AI-powered developer tools that remove friction and help you focus on solving meaningful problems.</li>
<li>Grow your team’s capacity by mentoring other engineers and interviewing candidates.</li>
</ul>
<p>This is a chance to be an integral part of building and growing a team.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Ruby, AWS, infrastructure, platform, SaaS technologies, Distributed systems, AI-powered developer tools</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Intercom</Employername>
      <Employerlogo>https://logos.yubhub.co/intercom.com.png</Employerlogo>
      <Employerdescription>Intercom is the AI Customer Service company founded in 2011, trusted by nearly 30,000 global businesses.</Employerdescription>
      <Employerwebsite>https://www.intercom.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/intercom/jobs/7371932</Applyto>
      <Location>Berlin, Germany</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>cdf3abc5-af5</externalid>
      <Title>Employment &amp; Litigation Counsel</Title>
      <Description><![CDATA[<p>Squarespace is looking for an experienced and pragmatic Employment &amp; Litigation Counsel to join its Legal team. As a key part of the team, you will assist in supporting the legal needs of the People (HR) team and other internal clients across U.S. and international locations and departments on a wide range of legal, compliance and business issues.</p>
<p>The ideal candidate is a solutions-oriented, business-minded litigator who exercises strong legal judgment and takes a practical approach to advising on complex issues, balancing compliance and business objectives in a fast-paced environment. You&#39;ll work hybrid 2-3 days a week from the NYC headquarters and report to the Senior Counsel, Employment &amp; Litigation.</p>
<p><strong>Key Responsibilities</strong></p>
<ul>
<li>Serve as the primary legal advisor to the People team, providing practical advice and day-to-day support on multi-jurisdictional employment law issues across hiring, performance management, leaves of absence, accommodations, diversity &amp; inclusion, terminations, workplace concerns, internal investigations, restrictive covenants, wage and hour, and privacy issues.</li>
<li>Consult on, review, and draft employment-related agreements (e.g., offers, engagement letters, separations, consulting terms).</li>
<li>Partner with the Senior Counsel to support a wide range of litigation, pre-litigation and regulatory matters across multiple disciplines.</li>
<li>Assist in fact finding and analysis, discovery, legal research, revising pleadings and negotiations.</li>
<li>Provide risk assessments and develop litigation strategies aligned with business priorities.</li>
<li>Support the development of templates, corporate policies and procedures to promote compliance with applicable laws and regulations.</li>
<li>Analyze judicial and legislative developments and assist internal teams in implementing strategies, training and policies/procedures to mitigate legal risk.</li>
<li>Collaborate with other teams within Legal and across Squarespace to support the successful execution of global projects and initiatives, including by contributing to M&amp;A employment and benefits matters (diligence, offer documents, integration activities).</li>
<li>Identify potential areas of risk/opportunity, and collaborate with various People team functions and internal stakeholders to develop scalable, risk-based solutions that drive business outcomes.</li>
<li>Provide support on a variety of non-employment topics depending on your background and interest, for example, third-party litigation, regulatory matters and/or compliance.</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>J.D. degree (or foreign equivalent); admitted to and in good standing with at least one U.S. state bar (New York preferred).</li>
<li>5+ combined years of experience as a practicing attorney managing US employment law matters and/or litigation, with a corporate in-house legal department (high-growth company experience preferred) and/or top-tier law firm with experience providing guidance to public and private clients.</li>
<li>Strong knowledge of applicable US federal, state and local employment laws, with an interest in learning new areas of law and business.</li>
<li>Comfortable working in an international context, with experience supporting jurisdictions outside the US (Ireland, Portugal a plus).</li>
<li>A low-ego team player with a passion for technology and a commitment to innovation who thrives on collaboration and can work effectively cross-functionally.</li>
<li>Excellent interpersonal, written and oral communication skills, and sound and clear business judgment, including the ability to distill complex legal issues into practical guidance.</li>
<li>Demonstrated flexibility and creativity in solving problems and dealing with ambiguity in a fast-paced environment.</li>
<li>Proven ability to independently manage projects and multitask, adjusting readily to multiple demands, shifting priorities and rapid change with composure.</li>
<li>Organized, diligent and pragmatic, with high attention to detail and a commitment to operational execution and efficiency.</li>
<li>Strong familiarity with e-discovery strategy and management is a plus.</li>
<li>Experience with executive compensation matters and/or sales commissions arrangements is a plus.</li>
</ul>
<p><strong>Benefits &amp; Perks</strong></p>
<ul>
<li>A choice between medical plans with an option for 100% covered premiums</li>
<li>Fertility and adoption benefits</li>
<li>Access to supplemental insurance plans for additional coverage</li>
<li>Headspace mindfulness app subscription</li>
<li>Global Employee Assistance Program</li>
<li>Retirement benefits with employer match</li>
<li>Flexible paid time off</li>
<li>12 weeks paid parental leave and family care leave</li>
<li>Pretax commuter benefit</li>
<li>Education reimbursement</li>
<li>Employee donation match to community organizations</li>
<li>7 Global Employee Resource Groups (ERGs)</li>
<li>Dog-friendly workplace</li>
<li>Free lunch and snacks</li>
<li>Private rooftop</li>
<li>Hack week twice per year</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$160,000 - $225,000 USD</Salaryrange>
      <Skills>US federal, state and local employment laws, Litigation and dispute resolution, Employment law and compliance, Corporate governance and risk management, International employment law and global HR, E-discovery strategy and management, Executive compensation and sales commissions arrangements, Business development and strategic planning, Leadership and team management, Communication and interpersonal skills</Skills>
      <Category>Legal</Category>
      <Industry>Technology</Industry>
      <Employername>Squarespace</Employername>
      <Employerlogo>https://logos.yubhub.co/squarespace.com.png</Employerlogo>
      <Employerdescription>Squarespace is a design-driven platform helping entrepreneurs build brands and businesses online. It has a team of over 1,700 and is headquartered in New York City.</Employerdescription>
      <Employerwebsite>https://www.squarespace.com/about/careers</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/squarespace/jobs/7757539</Applyto>
      <Location>New York City</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>10836c16-e0c</externalid>
      <Title>Senior Staff Operations Engineer, AIOps</Title>
      <Description><![CDATA[
<p>Join the BizTech team at Airbnb and contribute to fostering culture and connection at the company by providing reliable corporate tools, innovative products, and technical support for all teams.</p>
<p>As a Senior Staff Engineer in Operations, you will lead and mentor a high-performing team to scale our AI-enabled operations model and deliver AIOps solutions that streamline operational workstreams and help BizTech teams focus on their core work with confidence.</p>
<p>Your scope includes leading projects across multiple products and platforms, delivering world-class outcomes that create customer and community value while balancing near- and long-term needs.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Lead technical strategy and discussions, partnering with Operations peers and cross-functional BizTech teams to build AIOps and automation solutions.</li>
<li>Stay on top of tasks, engagements, and team interactions; active collaboration is key to success.</li>
<li>Work in sprints, delivering project work across coding, testing, design, documentation, and operational readiness reviews.</li>
<li>Dedicate part of each day to core Operations work: triaging tickets, spotting patterns, and driving scalable fixes that improve efficiency.</li>
<li>Participate in an on-call rotation, leading high-severity incident response as both incident commander and operations engineer.</li>
</ul>
<p>Requirements:</p>
<ul>
<li>15+ years of experience across AIOps, data catalog architecture, product development, and/or Technical Operations infrastructure.</li>
<li>Strong SDLC experience, including infrastructure as code, configuration management, distributed version control, and CI/CD.</li>
<li>Deep expertise in complex enterprise infrastructure, especially cloud (AWS and/or Google), with a focus on AI/automation, data catalog architecture, workflows, and correlation.</li>
<li>Solid understanding of corporate infrastructure and applications to translate into AIOps requirements and integrations.</li>
<li>Proven ability to lead cross-team, cross-org delivery of large-scale, technically complex, ambiguous initiatives that anticipate business needs.</li>
<li>Proficient in Python or Go.</li>
<li>Experience building API integrations and event-driven architectures (e.g., AWS Lambda/SQS).</li>
</ul>
<p>Preferred Qualifications:</p>
<ul>
<li>Experience with cloud-based infrastructure and services.</li>
<li>Familiarity with containerization and orchestration tools (e.g., Docker, Kubernetes).</li>
<li>Knowledge of DevOps practices and tools (e.g., Jenkins, GitLab).</li>
<li>Experience with agile development methodologies and frameworks (e.g., Scrum, Kanban).</li>
<li>Strong communication and interpersonal skills.</li>
<li>Ability to work in a fast-paced environment and adapt to changing priorities.</li>
</ul>
<p>Salary: $212,000-$265,000 USD per year.</p>
<p>Benefits: Bonus, equity, benefits, and Employee Travel Credits.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$212,000-$265,000 USD per year</Salaryrange>
      <Skills>AIOps, data catalog architecture, product development, Technical Operations infrastructure, SDLC, infrastructure as code, configuration management, distributed version control, CI/CD, cloud (AWS and/or Google), AI/automation, workflows, correlation, cloud-based infrastructure and services, containerization and orchestration tools (e.g., Docker, Kubernetes), DevOps practices and tools (e.g., Jenkins, GitLab), agile development methodologies and frameworks (e.g., Scrum, Kanban), strong communication and interpersonal skills, ability to work in a fast-paced environment and adapt to changing priorities</Skills>
      <Category>engineering</Category>
      <Industry>technology</Industry>
      <Employername>Airbnb</Employername>
      <Employerlogo>https://logos.yubhub.co/airbnb.com.png</Employerlogo>
      <Employerdescription>Airbnb is a global online marketplace for short-term vacation rentals. It was founded in 2007 and has since grown to become one of the largest and most popular travel platforms in the world.</Employerdescription>
      <Employerwebsite>https://www.airbnb.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/airbnb/jobs/7644921</Applyto>
      <Location>Remote - United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>0ed46937-df6</externalid>
      <Title>Staff Developer Success Engineer - West</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Staff Developer Success Engineer to join our team. As a frontline technical expert for our developer community, you will help users deploy and scale Temporal in cloud-native environments. You will also troubleshoot complex infrastructure issues, optimize performance, and develop automation solutions.</p>
<p>At Temporal, you&#39;ll work with cloud-native, highly scalable infrastructure spanning AWS, GCP, Kubernetes, and microservices. You&#39;ll gain deep expertise in container orchestration, networking, and observability while learning from complex, real-world customer use cases.</p>
<p>As a Staff Developer Success Engineer, you&#39;ll work directly with developers to debug complex infrastructure issues, optimize cloud performance, and enhance reliability for Temporal users. You&#39;ll develop observability solutions (Grafana, Prometheus), improve networking (load balancing, DNS, ingress/egress), and automate infrastructure operations (Terraform, IaC) to help customers run Temporal efficiently at scale.</p>
<p>Once ramped up, we expect you to independently drive technical solutions, whether debugging complex production issues or designing infrastructure best practices. Don&#39;t worry, we have seasoned engineers and mentors to support you along the way!</p>
<p>As a Staff Developer Success Engineer you will engage directly with developers, engineering teams, and product teams to understand infrastructure challenges and provide solutions that enhance scalability, performance, and reliability.</p>
<p>Your insights will influence platform improvements, from enhancing observability tooling to developing self-service infrastructure solutions that simplify troubleshooting (e.g., building diagnostic tools similar to Twilio’s Network Test).</p>
<p>You’ll serve as a bridge between developers and infrastructure, ensuring that reliability, performance, and developer experience remain top priorities as Temporal scales.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$170,000 - $215,000</Salaryrange>
      <Skills>cloud-native infrastructure, container orchestration, networking, observability, infrastructure automation, Terraform, IaC, Kubernetes, AWS, GCP, Python, Java, Go, Grafana, Prometheus, security certificate management, security implementation, use case analysis, Temporal design decisions, architecture best practices, EKS, GKE, OpenTracing, Ansible, CDK</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Temporal</Employername>
      <Employerlogo>https://logos.yubhub.co/temporal.io.png</Employerlogo>
      <Employerdescription>Temporal is an open source programming model that simplifies code and helps developers focus on delivering features faster.</Employerdescription>
      <Employerwebsite>https://temporal.io/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/temporaltechnologies/jobs/5076742007</Applyto>
      <Location>United States - Remote Opportunity</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>ed4bd662-c67</externalid>
      <Title>Senior Solutions Architect, Commercial - San Francisco</Title>
      <Description><![CDATA[<p>We are looking for a Senior Solutions Architect to support our Commercial Sales team in a consumption-based business where customer success drives revenue growth. You&#39;ll work across the full sales cycle, from initial technical evaluations with new prospects through helping existing customers expand their use of Temporal in production.</p>
<p>The nature of our business means you&#39;ll spend significant time helping customers who&#39;ve already adopted Temporal unlock more value by expanding into additional use cases, teams, and workloads. This is a high-velocity, technically deep role.</p>
<p>You&#39;ll partner with developers, architects, and engineering leaders at fast-moving companies to help them understand how Temporal fits into their existing architecture and prove out value through hands-on technical work.</p>
<p>You&#39;ll be working in a consumption model where usage grows over time, which means building strong technical relationships and staying engaged with accounts as they scale.</p>
<p>As an early member of a growing team, you should be comfortable with ambiguity, frequent context switching, and creating leverage through reusable assets that help the broader team move faster.</p>
<p>Must reside in San Francisco, CA</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$200,000 - $250,000 OTE</Salaryrange>
      <Skills>Strong development background with hands-on coding experience in at least one modern language (Go, Java, TypeScript, or Python), Deep understanding of distributed systems (reliability, observability, and fault tolerance), Proven experience in a pre-sales, customer-facing engineering, or solutions architecture role working with technical buyers, Exceptional time management and prioritization skills with the ability to thrive in high-volume environments, Enthusiasm for AI/ML technologies and eagerness to learn about emerging use cases in agentic workflows and LLM orchestration, Experience with workflow engines, event-driven architectures, or orchestration technologies (Temporal, Cadence, or similar), Background articulating the value of commercial SaaS offerings that compete with open source alternatives (Redis, Kafka, Databricks, etc.), Contributions to developer tooling, open source projects, or technical content, Strong cross-functional collaboration skills with the ability to serve as a technical bridge between customers and internal teams, Certifications with any of the major cloud providers (AWS, GCP, or Azure) or foundational AI model providers (OpenAI, Anthropic, or Google)</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Temporal</Employername>
      <Employerlogo>https://logos.yubhub.co/temporal.io.png</Employerlogo>
      <Employerdescription>Temporal is an open source programming model that can simplify code, make applications more reliable, and help developers focus on the important things like delivering features faster. It is growing and building the team that will make that happen.</Employerdescription>
      <Employerwebsite>https://temporal.io/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/temporaltechnologies/jobs/5037692007</Applyto>
      <Location>United States - Remote Opportunity</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>20fef61c-c3c</externalid>
      <Title>Partner Solutions Engineer, UK&amp;I</Title>
      <Description><![CDATA[<p>About Us</p>
<p>At Cloudflare, we are on a mission to help build a better Internet. Today the company runs one of the world’s largest networks that powers millions of websites and other Internet properties for customers ranging from individual bloggers to SMBs to Fortune 500 companies.</p>
<p>We protect and accelerate any Internet application online without adding hardware, installing software, or changing a line of code. Internet properties powered by Cloudflare all have web traffic routed through its intelligent global network, which gets smarter with every request. As a result, they see significant improvement in performance and a decrease in spam and other attacks.</p>
<p>Cloudflare was named to Entrepreneur Magazine’s Top Company Cultures list and ranked among the World’s Most Innovative Companies by Fast Company.</p>
<p>Our culture is built on iteration, leveraging AI to ship faster today to make it better tomorrow, while ensuring that every improvement, no matter how small, is shared across the team to lift everyone up.</p>
<p>If you’re the type of person who values curiosity over bureaucracy, and believes that AI is a partner in solving tough problems to keep the Internet moving forward, you’ll fit right in.</p>
<p>Available Locations: London</p>
<p>About Solutions Engineering at Cloudflare</p>
<p>The Pre-Sales Solution Engineering organization owns the technical sale of the Cloudflare solution portfolio, ensuring maximal business value, fit-for-purpose solution design and adoption roadmap for our customers. Solutions Engineering is made up of individuals from a wide range of backgrounds - from Financial Consulting to Product Management, Customer Support to Software Engineering, and we are serious about building a diverse, experienced and curious team.</p>
<p>The Partner Solutions Engineer is an experienced PreSales role within the Solutions Engineering team. Partner Solutions Engineers work closely with our partners to educate, empower, and ensure their success delivering Cloudflare security, reliability and performance solutions.</p>
<p>What you&#39;ll do as a Partner Solutions Engineer</p>
<p>Your role will be to build passionate champions within the technology ranks at your Partner accounts, aid your Partner organizations to drive sales for identified opportunities, and collaborate with your technical champions to build revenue pipeline. As the technical partner advocate within Cloudflare, you will work closely with every team at Cloudflare, from Sales and Product, through to Engineering and Customer Support.</p>
<p>You have strong experience in large Pre-Sales partner and account management as well as excellent verbal and written communications skills in English, suited for both technical and executive-level engagement. You are comfortable speaking about the Cloudflare vision and mission with all technical and non-technical audiences. Ultimately, you are passionate about technology and have the ability to explain complex technical concepts in easy-to-understand terms.</p>
<p>You are naturally curious, and an avid builder who is not afraid to get your hands dirty. You appreciate the diversity of challenges in working with partners and customers, and look forward to helping them realize the full promise of Cloudflare.</p>
<p>On the Solutions Engineering team, you will find a collaborative environment where everyone brings different strengths and jumps in to help each other. Specifically, we are looking for you to:</p>
<ul>
<li>Build and maintain long term technical relationships with our EMEA partners to increase Cloudflare’s reputation and authority within the partner solution portfolio through demonstrating value, enablement, and uncovering new areas of potential revenue</li>
<li>Drive technical solution design conversations and guide partners in EMEA through use case qualification and collaborative technical wins through demonstrations and proofs-of-concepts</li>
<li>Evangelize and represent Cloudflare through technical thought leadership and expertise</li>
<li>Be the voice of the partner internally at Cloudflare, engaging with and influencing Cloudflare’s Product and Engineering teams to meet your partner and customer needs</li>
</ul>
<p>Travel up to 40% throughout the quarter to support partner engagements, attend conferences and industry events, and collaborate with your Cloudflare teammates.</p>
<p>Examples of desirable skills, knowledge and experience:</p>
<ul>
<li>Fluency in English (verbal and written)</li>
<li>Experience managing technical sales within large partners and accounts:
<ul>
<li>Developing champion-style relationships</li>
<li>Driving technical wins</li>
<li>Assisting with technical validation</li>
</ul>
</li>
<li>Experience and expertise in one or more of the core industry components of Cloudflare solutions:
<ul>
<li>SASE concepts and Zero Trust Networking architectures</li>
<li>Networking technologies including TCP, UDP, DNS, IPv4 + IPv6, BGP routing, GRE, SD-WAN, MPLS, Global Traffic Management</li>
<li>Internet security technologies including DDoS and DDoS mitigation, Firewalls, TLS, VPN, DLP</li>
<li>Detailed understanding of workflow from user to application including hybrid architectures with Azure, AWS, GCP</li>
<li>HTTP technologies including reverse proxy (e.g., WAF and CDN), forward proxy (secure web gateway), serverless application development</li>
</ul>
</li>
</ul>
<p>What Makes Cloudflare Special?</p>
<p>We’re not just a highly ambitious, large-scale technology company. We’re a highly ambitious, large-scale technology company with a soul. Fundamental to our mission to help build a better Internet is protecting the free and open Internet.</p>
<p>Project Galileo: Since 2014, we&#39;ve equipped more than 2,400 journalism and civil society organizations in 111 countries with powerful tools to defend themselves against attacks that would otherwise censor their work, technology already used by Cloudflare’s enterprise customers, at no cost.</p>
<p>Athenian Project: In 2017, we created the Athenian Project to ensure that state and local governments have the highest level of protection and reliability for free, so that their constituents have access to election information and voter registration. Since launching the project, we&#39;ve provided services to more than 425 local government election websites in 33 states.</p>
<p>1.1.1.1: We released 1.1.1.1 to help fix the foundation of the Internet by building a faster, more secure and privacy-centric public DNS resolver. This is available publicly for everyone to use - it is the first consumer-focused service Cloudflare has ever released.</p>
<p>Here’s the deal: we don’t store client IP addresses. Never, ever. We will continue to abide by our privacy commitment and ensure that no user data is sold to advertisers or used to target consumers.</p>
<p>Sound like something you’d like to be a part of? We’d love to hear from you!</p>
<p>This position may require access to information protected under U.S. export control laws, including the U.S. Export Administration Regulations. Please note that any offer of employment may be conditioned on your authorization to receive software or technology controlled under these U.S. export laws without sponsorship for an export license.</p>
<p>Cloudflare is proud to be an equal opportunity employer. We are committed to providing equal employment opportunity for all people and place great value in both diversity and inclusiveness. All qualified applicants will be considered for employment without regard to their, or any other person&#39;s, perceived or actual race, color, religion, sex, gender, gender identity, gender expression, sexual orientation, national origin, ancestry, citizenship, age, physical or mental disability, medical condition, family care status, or any other basis protected by law.</p>
<p>We are an AA/Veterans/Disabled employer.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Fluency in English (verbal and written), Experience managing technical sales within large partners and accounts, Developing champion-style relationships, Driving technical wins, Assisting with technical validation, SASE concepts and Zero Trust Networking architectures, Networking technologies including TCP, UDP, DNS, IPv4 + IPv6, BGP routing, GRE, SD-WAN, MPLS, Global Traffic Management, Internet security technologies including DDoS and DDoS mitigation, Firewalls, TLS, VPN, DLP, Detailed understanding of workflow from user to application including hybrid architectures with Azure, AWS, GCP, HTTP technologies including reverse proxy (e.g., WAF and CDN), forward proxy (secure web gateway), serverless application development</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Cloudflare</Employername>
      <Employerlogo>https://logos.yubhub.co/cloudflare.com.png</Employerlogo>
      <Employerdescription>Cloudflare is a technology company that helps build a better Internet by protecting and accelerating any Internet application online without adding hardware, installing software, or changing a line of code.</Employerdescription>
      <Employerwebsite>https://www.cloudflare.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/cloudflare/jobs/7210482</Applyto>
      <Location>Hybrid; In-Office</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>07626e74-020</externalid>
      <Title>Engineering Architect, Identity (Auth0)</Title>
      <Description><![CDATA[<p>Secure Every Identity, from AI to Human</p>
<p>Identity is the key to unlocking the potential of AI. Auth0 secures AI by building the trusted, neutral infrastructure that enables organisations to safely embrace this new era. This work requires a relentless drive to solve complex challenges with real-world stakes. We are looking for builders and owners who operate with speed and urgency and execute with excellence.</p>
<p>This is an opportunity to do career-defining work. We&#39;re all in on this mission. If you are too, let&#39;s talk.</p>
<p><strong>Software Architect, Identity</strong></p>
<p><strong>The Engineering Architect Team</strong></p>
<p>The Architecture team is a small group of very senior engineers reporting to our VP of Engineering Excellence, working broadly across the organisation in collaboration with Engineering, Product, and Security. We partner deeply with other Engineering teams for large projects, and provide direction and architectural guidance for smaller initiatives. We have a dual-pronged charter to “level up the tech stack and level up the people stack” via both technical contributions and partnerships/mentoring.</p>
<p>In this role, you will have the opportunity to significantly contribute to Auth0’s future technology direction. Through your experience, knowledge of industry trends, and technical abilities you will provide guidance, build proof of concepts, and deliver production software implementations that help Auth0 Engineering teams move faster by using and developing standard patterns and technologies. You will also help advance the engineering culture and help uplevel other engineers. Note that while this role involves a lot of guidance, documentation, and leadership, it also requires substantial hands-on coding and development of both applications and systems.</p>
<p><strong>What you’ll be doing</strong></p>
<ul>
<li>Collaborate with Product, Security, and Engineering teams to define and continually improve Auth0’s technology stack and architecture.</li>
<li>Foster and lead innovation in the IAM space, with a strong focus on Agentic Identity.</li>
<li>Lead initiatives to enhance, scale, and evolve Auth0’s product offerings.</li>
<li>Embed within Engineering teams across the organisation for large projects, while providing guidance and lighter-touch engagements for smaller initiatives.</li>
<li>Design, architect, and document large-scale distributed systems.</li>
<li>Lead the development of complex, broadly-scoped functionality in a very large and deep set of services and components.</li>
<li>Teach by doing: coding, optimising, and troubleshooting Node.js and Go applications in collaboration with feature development teams.</li>
<li>Implement features and create consistent foundations using technologies such as AWS, Azure, Node.js, Go, MongoDB, Redis, PostgreSQL, and Kubernetes.</li>
<li>Investigate, understand, and resolve bottlenecks in our ability to scale, use resources efficiently, and maintain a 99.99% uptime SLA.</li>
<li>Drive technical decision making while striving to find the right balance between factors such as simplicity, flexibility, reliability, cost, and performance.</li>
<li>Participate in “round table” discussions and mentor team members and engineers throughout the organisation to level up our people.</li>
<li>Participate in our Engineering Leadership Team with other architects, directors, and executives.</li>
<li>Join our Incident Commander on-call rotation after spending time getting acquainted with our applications, systems, and processes, and receiving training. Members of our team do periodic on-call rotations for high-severity incidents to help up-level our responses.</li>
</ul>
<p><strong>What you’ll bring to the role</strong></p>
<ul>
<li>10+ years of software development experience.</li>
<li>5+ years of experience working on cloud applications.</li>
<li>Experience with API-first applications using REST and/or gRPC.</li>
<li>Passion for, and a thorough understanding of, what it takes to build and operate secure, reliable systems at scale.</li>
<li>Knowledge of identity protocols such as OAuth, OIDC, and SAML.</li>
<li>Industry knowledge of the Authorization and Authentication spaces.</li>
<li>Experience building AI agents and/or MCP server applications.</li>
<li>Experience with security engineering and application security.</li>
<li>Very strong written and verbal communication skills, with a demonstrated ability to adjust your communication style to the intended audience, whether communicating with senior executives, customers, engineers, or product managers.</li>
<li>Mastery and deep understanding of hands-on software development building distributed systems.</li>
<li>Experience with multi-cloud environments and container deployments, particularly Kubernetes in AWS/Azure.</li>
<li>Prior experience with application performance management, tracing, and performance testing tools.</li>
<li>Excellence at creating clarity and alignment for technical initiatives.</li>
<li>Great ability to build trust through collaboration with multiple teams and get consensus on a vision.</li>
<li>Knowledge of application security and cloud security best practices.</li>
</ul>
<p>And extra credit if you have experience in any of the following!</p>
<ul>
<li>Deep experience in Node.js (JavaScript or TypeScript) or Go.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$274,000-$370,000 USD</Salaryrange>
      <Skills>API-first applications, REST, gRPC, OAuth, OIDC, SAML, Authorization, Authentication, AI Agents, MCP servers, Security engineering, Application security, Cloud security best practices, Node.js, Go, AWS, Azure, MongoDB, Redis, PostgreSQL, Kubernetes</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Auth0</Employername>
      <Employerlogo>https://logos.yubhub.co/auth0.com.png</Employerlogo>
      <Employerdescription>Auth0 is a company that provides identity and access management solutions. It has a global presence with over 20 offices worldwide.</Employerdescription>
      <Employerwebsite>https://auth0.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/okta/jobs/7128746</Applyto>
      <Location>San Francisco, California</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>d14ace5b-870</externalid>
      <Title>Legal Operations Analyst</Title>
      <Description><![CDATA[<p>Job Description:</p>
<p>We are seeking a skilled Legal Operations Analyst to enhance xAI&#39;s systems and operations by providing deep expertise in assessing and handling legal requests from government entities all over the world.</p>
<p>In this role, you will process high-volume content-removal requests under local laws (hate speech, defamation, national security, data-privacy statutes) as well as productions of user information in response to legal process (subpoenas, court orders, warrants, MLAT requests, etc.).</p>
<p>You will leverage your expertise in legal operations, regulatory compliance, and content moderation to support both day-to-day execution and the optimization of AI-driven automation.</p>
<p>Responsibilities:</p>
<ul>
<li>Join an on-call rotation, working closely with other members of Safety to provide timely responses to emergency requests and proactive referrals from all over the world.</li>
<li>Handle global legal information and content removal requests, including document intake and processing.</li>
<li>Execute and quality-control complex content-removal and user-data-production cases across multiple jurisdictions while applying and interpreting platform policies shaped by evolving legal requirements.</li>
<li>Serve as the go-to escalation point for ambiguous or high-risk legal requests, exercising sound judgment and ensuring compliance.</li>
<li>Continuously improve AI agents that automate triage, initial decisioning, redaction, compliance checks, and response workflows.</li>
<li>Create and maintain high-quality training datasets, evaluation rubrics, and feedback loops using real Legal Operations cases to enhance AI performance.</li>
<li>Identify automation opportunities and collaborate with technical teams to build end-to-end workflows using automation tools.</li>
<li>Measure and report on automation coverage, accuracy, risk reduction, and efficiency gains while training and upskilling the broader Legal Operations team.</li>
<li>Analyze complex legal and compliance problems in partnership with legal stakeholders to ensure platform rules and regulatory requirements are followed.</li>
<li>Interpret, analyze, and execute tasks based on evolving instructions and regulatory changes, maintaining precision and adaptability in partnership with cross-functional stakeholders.</li>
<li>Represent xAI in witness testimony or other external engagements.</li>
</ul>
</ul>
<p>Basic Qualifications:</p>
<ul>
<li>2+ years of hands-on professional experience in legal operations, trust &amp; safety, content moderation, compliance, or e-discovery at a major technology or social media company.</li>
<li>Demonstrated expertise in global content-removal processes and/or user-data production in response to legal requests (subpoenas, MLATs, court orders, and local law enforcement demands).</li>
<li>Proficiency in reading and writing professional English with excellent communication, interpersonal, analytical, and organizational skills.</li>
<li>Strong technical aptitude, including experience with prompt engineering, AI workflows, or automation tools in a regulated environment.</li>
<li>Excellent reading comprehension and the ability to exercise autonomous judgment with limited or ambiguous data.</li>
<li>Passion for technological advancements and using AI to amplify human expertise in legal and compliance processes.</li>
</ul>
<p>Preferred Skills and Qualifications:</p>
<ul>
<li>Relevant certification, license, or advanced training, specifically in areas such as copyright, privacy laws, child safety, hate speech, incitement, harassment, or misinformation laws by region.</li>
<li>Comfort with recording audio or video sessions for data collection.</li>
<li>Familiarity with AI workflows in a technical setting.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>legal operations, regulatory compliance, content moderation, prompt engineering, AI workflows, automation tools, copyright, privacy laws, child safety, hate speech, incitement, harassment, misinformation laws</Skills>
      <Category>Legal</Category>
      <Industry>Technology</Industry>
      <Employername>xAI</Employername>
      <Employerlogo>https://logos.yubhub.co/xai.com.png</Employerlogo>
      <Employerdescription>xAI creates AI systems to understand the universe and aid humanity in its pursuit of knowledge.</Employerdescription>
      <Employerwebsite>https://www.xai.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/xai/jobs/5101856007</Applyto>
      <Location>Singapore, SG</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>12eeb115-0aa</externalid>
      <Title>Staff+ Software Engineer, Systems</Title>
      <Description><![CDATA[<p><strong>About the Role</strong></p>
<p>Anthropic&#39;s Infrastructure organization is foundational to our mission of developing AI systems that are reliable, interpretable, and steerable. The systems we build determine how quickly we can train new models, how reliably we can run safety experiments, and how effectively we can scale Claude to millions of users, demonstrating that safe, reliable infrastructure and frontier capabilities can go hand in hand.</p>
<p>The Systems engineering team owns compute uptime and resilience at massive scale, building the clusters, automation, and observability that make frontier AI research possible and safely deployable to customers.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Own the technical strategy and roadmap for your area, translating team-level goals into concrete execution plans</li>
<li>Drive cross-team initiatives to build and scale AI clusters (thousands to hundreds of thousands of machines)</li>
<li>Define infrastructure architecture, ensuring the hardest problems get solved, whether by you directly or by working through others</li>
<li>Partner with cloud providers and internal stakeholders to shape long-term compute, data, and infrastructure strategy</li>
<li>Establish and evolve operational excellence practices (incident response, postmortem culture, on-call)</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>10+ years of software engineering experience</li>
<li>Led complex, multi-quarter technical initiatives that span multiple teams or systems</li>
<li>Can set technical direction for a team, not just execute within it</li>
<li>Deep expertise in distributed systems, reliability, and cloud platforms (Kubernetes, IaC, AWS/GCP)</li>
<li>Strong in at least one systems language (Python, Rust, Go, Java)</li>
<li>Naturally uplevel the engineers around you and can redirect efforts when things are heading off track</li>
<li>Build alignment across senior stakeholders and communicate effectively at all levels</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Annual Salary: $405,000-$485,000 USD</li>
<li>Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience</li>
<li>Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience</li>
<li>Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position</li>
<li>Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</li>
<li>Visa sponsorship: We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</li>
</ul>
<p><strong>How to Apply</strong></p>
<p>If you&#39;re interested in this role, please submit your application through our website. We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you&#39;re interested in this work.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$405,000-$485,000 USD</Salaryrange>
      <Skills>distributed systems, reliability, cloud platforms, Kubernetes, IaC, AWS/GCP, systems language, Python, Rust, Go, Java</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation that creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5108817008</Applyto>
      <Location>San Francisco, CA | New York City, NY | Seattle, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
  </jobs>
</source>