<?xml version="1.0" encoding="UTF-8"?>
<source>
  <jobs>
    <job>
      <externalid>422815ea-315</externalid>
      <Title>Global Study Associate</Title>
      <Description><![CDATA[<p>At AstraZeneca, we&#39;re seeking a Global Study Associate to join our clinical study team. In this role, you will support the delivery of clinical studies within BioPharmaceuticals Clinical Operations, Study Management, working closely with the Global Study Director, Global Study Associate Director, and/or Global Study Manager to coordinate activities and ensure quality and consistency. Your responsibilities will include initiating and leading the set-up of the electronic Trial Master File (eTMF), then maintaining and closing it to ensure compliance with the International Council for Harmonisation Good Clinical Practice guidelines (ICH GCP) and AstraZeneca Standard Operating Procedures (SOPs). You will also provide oversight for non-complex, non-critical-path vendors, ensuring compliance with study requirements and established processes.</p>
<p>You will interact and collaborate with internal staff and external stakeholders in the collection of regulatory and other essential documents. You will contribute to electronic applications/submissions in the regulatory information management system by creating and managing clinical regulatory documents according to the requested technical standards and supporting effective publishing and delivery to regulatory authorities. You will proactively plan and collate the administrative appendices for the CSR. You will initiate, maintain, and/or support the creation of study documents, ensuring template and version compliance per study-specific requirements.</p>
<p>You will set up, populate, and accurately maintain information in AstraZeneca tracking and communication tools and support team members in the usage of these tools. You will support the set-up, maintenance, and close-out of Clinical Trial Transparency (CTT) activity in PharmaCM, coordinating with relevant stakeholders to fulfill AstraZeneca compliance requirements and meet regulatory authority needs. You will support the Global Study Director with tracking, reconciliation, and follow-up of the study budget/payments in relevant systems, including creating and maintaining purchase orders and running invoice and payment reports.</p>
<p>You will contribute to application, coordination, supply, and tracking of study materials and equipment. You will contribute to the collection of study supplies, if required, at the study close-out. You will coordinate and provide oversight of administrative tasks and logistical support throughout the conduct of the study, audits, and regulatory inspections, according to company policies and SOPs.</p>
<p>You will lead the coordination of, and contribute to the preparation of, internal and external meetings, such as study team meetings, committee meetings, monitor meetings, Investigator meetings, and virtual meetings. You will liaise with internal and external participants and/or vendors, and prepare, contribute to, and distribute presentation material for meetings, newsletters, and websites.</p>
<p>You will take on non-drug project work, contributing to process improvements and/or leading improvement projects as discussed and agreed upon with your manager. You will perform other duties as assigned and within the scope of your role.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>entry</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Clinical study lifecycle, Electronic Trial Master File (eTMF), International Council for Harmonisation Good Clinical Practice guidelines (ICH GCP), AstraZeneca Standard Operating Procedures (SOPs), Regulatory information management system, Clinical regulatory documents, Study documents, AstraZeneca tracking and communication tools, Clinical Trial Transparency (CTT) activity, Study budget/payments, Purchase orders, Invoice and payment reports, Study materials and equipment, Study supplies, Administrative tasks and logistical support</Skills>
      <Category>Operations</Category>
      <Industry>Healthcare</Industry>
      <Employername>BioPharm Study Management Late</Employername>
      <Employerlogo>https://logos.yubhub.co/astrazeneca.eightfold.ai.png</Employerlogo>
      <Employerdescription>AstraZeneca is a pharmaceutical company developing medicines for various diseases.</Employerdescription>
      <Employerwebsite>https://astrazeneca.eightfold.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://astrazeneca.eightfold.ai/careers/job/563877689844312</Applyto>
      <Location>Durham, North Carolina, United States of America</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>b33cbd91-bc9</externalid>
      <Title>Systematic Production Support Engineer</Title>
      <Description><![CDATA[<p>We are seeking an experienced Systematic Production Support Engineer to help us scale our systematic operations and support engineering capabilities. This role directly supports portfolio management teams across Millennium, with operational excellence at the core. Our efforts are focused on delivering the highest quality returns to our investors – providing a world-class and reliable trading and technology platform is essential to this mission.</p>
<p>As a Systematic Production Support Engineer, you will be responsible for building, developing, and maintaining a reliable, scalable, and integrated platform for trading strategy monitoring, reporting, and operations. You will work closely with portfolio managers and other internal customers to reduce operational risk through the implementation of monitoring, reporting, and trade workflow solutions, as well as automated systems and processes focused on trading and operations.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Building, developing, and maintaining a reliable, scalable, and integrated platform for trading strategy monitoring, reporting, and operations</li>
<li>Working with portfolio managers and other internal customers to reduce operational risk through the implementation of monitoring, reporting, and trade workflow solutions</li>
<li>Implementing automated systems and processes focused on trading and operations</li>
<li>Streamlining development and deployment processes</li>
</ul>
<p>Technical qualifications include:</p>
<ul>
<li>5+ years of development experience in Python</li>
<li>Experience working in a Linux/Unix environment</li>
<li>Experience working with PostgreSQL or other relational databases</li>
</ul>
<p>Preferred skills and experience include:</p>
<ul>
<li>Understanding of NLP, supervised/unsupervised learning, and Generative AI models</li>
<li>Experience operating and monitoring low-latency trading environments</li>
<li>Familiarity with quantitative finance and electronic trading concepts</li>
<li>Familiarity with financial data</li>
<li>Broad understanding of equities, futures, FX, or other financial instruments</li>
<li>Experience designing and developing distributed systems with a focus on backend development in C/C++, Java, Scala, Go, or C#</li>
<li>Experience with Apache/Confluent Kafka</li>
<li>Experience automating SDLC pipelines (e.g., Jenkins, TeamCity, or AWS CodePipeline)</li>
<li>Experience with containerization and orchestration technologies</li>
<li>Experience building and deploying systems that utilize services provided by AWS, GCP, or Azure</li>
<li>Contributions to open-source projects</li>
</ul>
<p>This is a unique opportunity to drive significant value creation for one of the world&#39;s leading investment managers.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, Linux/Unix, PostgreSQL, NLP, supervised/unsupervised learning, Generative AI models, low-latency trading environments, quantitative finance, electronic trading concepts, financial data, equities, futures, FX, distributed systems, backend development, C/C++, Java, Scala, Go, C#, Apache/Confluent Kafka, SDLC pipelines (Jenkins, TeamCity, AWS CodePipeline), containerization, orchestration technologies, AWS, GCP, Azure, open-source contributions</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>Unknown</Employername>
      <Employerlogo>https://logos.yubhub.co/mlp.eightfold.ai.png</Employerlogo>
      <Employerdescription>The company is a leading investment manager with a focus on delivering high-quality returns to its investors.</Employerdescription>
      <Employerwebsite>https://mlp.eightfold.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://mlp.eightfold.ai/careers/job/755954716155</Applyto>
      <Location>Miami, Florida, United States of America</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>1bd2d1b2-84f</externalid>
      <Title>Senior Machine Learning Researcher</Title>
      <Description><![CDATA[<p>We are seeking a senior machine learning researcher to join our Core AI team.</p>
<p>As part of the team, you will help solve complex business problems by developing viable cutting-edge AI/ML solutions.</p>
<p>You will develop and implement creative solutions that fundamentally transform business processes, delivering breakthrough improvements rather than incremental changes.</p>
<p>You will work closely with other AI/ML researchers and engineers, SWEs, product owners/managers, and business stakeholders, and participate in the full lifecycle of solution development, including requirements gathering with business, experimentation and algorithmic exploration, development, and assistance with productization.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Work independently or as part of a team to help design and implement high-accuracy solutions with a delightful user experience, utilizing ML, NLP, GenAI, and agentic technologies.</li>
<li>Participate in all aspects of solution development, including ideation and requirements gathering with business stakeholders, experimentation and exploration to identify strong solution approaches, and solution development itself.</li>
<li>Prototype, test, and iterate on novel AI models and approaches to solve complex business challenges.</li>
<li>Collaborate with cross-functional teams to identify opportunities where AI can create significant business value, and transition solutions into production systems.</li>
<li>Research and stay up to date with the latest advancements in machine learning and AI technologies.</li>
<li>Participate in code reviews, technical discussions, and knowledge-sharing sessions.</li>
<li>Communicate technical concepts and transformative ideas effectively to both technical and non-technical stakeholders.</li>
</ul>
<p>Required Skills &amp; Qualifications:</p>
<ul>
<li>Bachelor&#39;s degree with 10+ years, Master&#39;s with 7+ years, or PhD with 5+ years of experience in Computer Science, Data Science, Machine Learning, or a related field.</li>
<li>Deep expertise and a proven record of developing high-accuracy, high-value solutions to business problems in the NLP, Generative AI, agentic AI, and/or ML space.</li>
<li>Hands-on experience with data processing, experimentation, and exploration.</li>
<li>Strong programming skills in Python.</li>
<li>Experience with cloud platforms (AWS, Azure, GCP) for deploying ML solutions.</li>
<li>Excellent problem-solving skills and attention to detail.</li>
<li>Strong communication skills to collaborate with technical and non-technical stakeholders.</li>
<li>Ability to work both independently and collaboratively.</li>
</ul>
<p>Additional Preferred Skills &amp; Qualifications:</p>
<ul>
<li>Understanding of the financial markets, including experience with financial datasets, is strongly preferred.</li>
<li>Experience with ML frameworks such as PyTorch or TensorFlow.</li>
<li>Familiarity with MLOps practices and tools such as SageMaker, MLflow, or Airflow.</li>
<li>Previous experience working in an Agile environment.</li>
</ul>
<p>Millennium pays a total compensation package which includes a base salary, discretionary performance bonus, and a comprehensive benefits package. The estimated base salary range for this position is $175,000 to $250,000, which is specific to New York and may change in the future.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$175,000 to $250,000</Salaryrange>
      <Skills>Python, Machine Learning, NLP, GenAI, Agentic technologies, Data processing, Experimentation, Exploration, Cloud platforms (AWS, Azure, GCP), Problem-solving skills, Communication skills, PyTorch, TensorFlow, MLOps practices and tools (SageMaker, MLflow, Airflow), Agile environment</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>IT - Artificial Intelligence</Employername>
      <Employerlogo>https://logos.yubhub.co/mlp.eightfold.ai.png</Employerlogo>
      <Employerdescription>The company focuses on artificial intelligence research and development.</Employerdescription>
      <Employerwebsite>https://mlp.eightfold.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://mlp.eightfold.ai/careers/job/755954012324</Applyto>
      <Location>New York, New York, United States of America</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>5510d7e7-fc5</externalid>
      <Title>Manager, Global Clinical Solutions</Title>
      <Description><![CDATA[<p>Job Title: Manager, Global Clinical Solutions</p>
<p>Introduction: Global Clinical Solutions (GCS) delivers services and technology that enable AstraZeneca&#39;s Clinical Development programs, partnering with internal teams and external stakeholders to drive operational excellence. The Manager, GCS is responsible for coordinating and leading the delivery of GCS services across projects and initiatives to meet time and quality targets.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Coordinates and delivers GCS services, including life-cycle management and business continuity for project services and technology.</li>
<li>Provides expert support to user communities, including conducting training on processes, systems, and tools, facilitating information exchange, establishing proven methods, and maintaining communication with relevant parties across GCS and AZ.</li>
<li>Conducts critical analyses of processes and tools to define business usage and identifies opportunities to improve the efficiency and effectiveness of systems, services, and processes whilst reducing business continuity risks.</li>
<li>Contributes to and/or develops business cases for continuous improvement projects.</li>
<li>Leads or manages business improvement projects according to lean principles, including planning, prioritizing, implementing and tracking delivery.</li>
<li>Serves as AZ co-Project Manager for eCOA (electronic Clinical Outcome Assessment) and DPS (Digital Patient Solutions) setup, maintenance, and closure, using project tracking tools to manage timelines, costs, risks, UAT, and stakeholder updates.</li>
<li>Coordinates input to eCOA/DPS user requirements (from Business Analyst and study stakeholders) based on the Clinical Study Protocol (CSP) and prior end-user experience; agrees system functionality with suppliers.</li>
<li>Leads operational maintenance to keep systems aligned to the latest CSP and in a validated state; authors and manages change requests with risk assessment and ensures issues are documented and addressed.</li>
<li>Establishes UAT approach for setup and maintenance; requests UAT resources; consults on test scripts, plans, reports, and change closure documentation.</li>
<li>Supports the implementation of changes to improve the way various functions and teams perform.</li>
<li>Evaluates and monitors the performance and efficiency of programs to ensure that program implementation is on target.</li>
<li>Trains colleagues to apply continuous improvement in new ways of working and to embed a culture of change.</li>
<li>Grows capabilities, applies new approaches to improve work, and has a positive impact on team performance, creating learning opportunities for others.</li>
<li>Manages knowledge from continuous improvement activities and ensures it is used in the selection and execution of future activities.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>project management, business analysis, process improvement, lean principles, continuous improvement, change management, knowledge management, training and development, communication, team leadership, Six Sigma, Quality Management, Clinical Development, ICH GCP guidelines, global organization, complex/geographical context</Skills>
      <Category>Operations</Category>
      <Industry>Healthcare</Industry>
      <Employername>GCS Services</Employername>
      <Employerlogo>https://logos.yubhub.co/astrazeneca.eightfold.ai.png</Employerlogo>
      <Employerdescription>GCS Services provides services and technology that enable AstraZeneca&apos;s Clinical Development programs.</Employerdescription>
      <Employerwebsite>https://astrazeneca.eightfold.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://astrazeneca.eightfold.ai/careers/job/563877689841153</Applyto>
      <Location>Durham, North Carolina, United States of America</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>78270c8d-016</externalid>
      <Title>Operations Data Governance &amp; Controls Specialist</Title>
      <Description><![CDATA[<p>As an Operations Control Specialist – Data Governance &amp; Controls, you will design, implement, and support technical data governance solutions with a focus on the firm&#39;s Trader Master and related reference data domains.</p>
<p>This role requires a strong technical background in Data Management, Data Architecture, Data Lineage, Data Quality, Master Data Management (MDM), and automation within Financial Services and/or Technology.</p>
<p>You will contribute to and help lead the technical design of data governance controls, data models, and integration patterns, partnering closely with Technology and Operations teams.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Build/enhance data governance frameworks, controls, standards, and workflows (policies, definitions, entitlements).</li>
<li>Create data quality rules and monitoring; automate exception detection, alerting, remediation, SLAs, and RCA.</li>
<li>Develop Python/SQL/ETL-ELT automation for checks, controls, and reporting; deliver Tableau/Power BI dashboards and KPIs.</li>
<li>Contribute to conceptual/logical/physical data modeling for Trader Master and core domains.</li>
<li>Support MDM capabilities: golden record, matching/merging, survivorship, stewardship workflows; help shape MDM strategy.</li>
<li>Implement access/entitlement governance (RBAC, row/column security) across DB/warehouse/BI with audit compliance.</li>
<li>Maintain catalog, glossary, lineage, schema history, impact analysis; manage structured change workflows.</li>
<li>Define integration patterns (batch/API/streaming) and build reconciliations/validations across systems.</li>
<li>Manage historical/temporal data (validation, backfills, remediation) supporting regulatory/reporting/analytics.</li>
<li>Produce technical documentation (designs, runbooks, data dictionaries), share knowledge, and mentor juniors.</li>
</ul>
<p>Qualifications:</p>
<ul>
<li>Bachelor’s degree in Computer Science, Engineering, Information Systems, Mathematics, Finance, or related field; advanced degree (MS, MBA, or equivalent) is a plus.</li>
<li>5–8 years of experience in financial services or fintech with hands-on work in data engineering, data management, or data architecture roles; exposure to trading strategies, fund structures, and financial products strongly preferred.</li>
</ul>
<p>Technical Expertise (Required):</p>
<ul>
<li>Strong Python and SQL; experience with data warehousing + ETL/ELT.</li>
<li>Familiarity with MDM/data governance tools (e.g., Collibra, Informatica, Alation) and Tableau/Power BI.</li>
<li>Proven ability to lead delivery, solve complex data issues, and communicate with technical/non-technical stakeholders.</li>
<li>Preferred certs: DAMA/CDMP, cloud (AWS/Azure/GCP), Scrum, BI/data engineering.</li>
</ul>
<p>Millennium pays a total compensation package which includes a base salary, discretionary performance bonus, and a comprehensive benefits package.</p>
<p>The estimated base salary range for this position is $70,000 to $160,000, which is specific to New York and may change in the future.</p>
<p>When finalizing an offer, we take into consideration an individual’s experience level and the qualifications they bring to the role to formulate a competitive total compensation package.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$70,000 to $160,000</Salaryrange>
      <Skills>Python, SQL, ETL/ELT, Data Warehousing, Tableau/Power BI, MDM/data governance tools, Collibra, Informatica, Alation, DAMA/CDMP, cloud (AWS/Azure/GCP), Scrum, BI/data engineering</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>Ops &amp; MO Control</Employername>
      <Employerlogo>https://logos.yubhub.co/mlp.eightfold.ai.png</Employerlogo>
      <Employerdescription>Ops &amp; MO Control provides data governance and control services.</Employerdescription>
      <Employerwebsite>https://mlp.eightfold.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://mlp.eightfold.ai/careers/job/755954926796</Applyto>
      <Location>New York, New York, United States of America</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>af8ed06d-a9a</externalid>
      <Title>Forward Deployed Software Engineer - Equities Technology</Title>
      <Description><![CDATA[<p>We are seeking a hands-on, business-facing engineer to join our team. In this role, you will partner directly with some of the most sophisticated quantitative researchers, developers, and portfolio managers in the industry.</p>
<p>Our team is a specialized group of engineers operating at the intersection of technology and quantitative finance. We function as an internal centre of excellence, providing expert-level solutions, architecture, and hands-on development in AI, Cloud (AWS/GCP), DevOps, and high-performance computing.</p>
<p>As a forward deployed software engineer, you will be responsible for translating complex research requirements into robust, scalable, and secure technical architectures across on-prem, hybrid, and cloud environments. You will write high-quality, production-ready code across the full stack, including Python libraries, infrastructure-as-code (Terraform), CI/CD pipelines, automation scripts, and ML/AI proof-of-concepts.</p>
<p>You will also develop and maintain our suite of managed products, reusable patterns, and best practice guides to provide self-service options and accelerate onboarding for new and existing teams. Additionally, you will act as the primary technical point of contact for embedded engagements, owning projects from discovery and planning through to implementation, knowledge transfer, and support.</p>
<p>To succeed in this role, you will need to have a deep understanding of computer science principles, including data structures, algorithms, and system design. You will also need to have experience working with cloud providers, such as AWS or GCP, and be familiar with infrastructure-as-code concepts. Excellent verbal and written communication skills are also essential, as you will need to build strong relationships with stakeholders and articulate complex ideas to diverse audiences.</p>
<p>Innovative thinking and a passion for AI/ML and its practical applications are highly desirable. Experience designing systems and architectures from ambiguous business needs, as well as experience with scheduling or asynchronous workflow frameworks/services, is also preferred.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, Cloud computing (AWS/GCP), DevOps, Infrastructure-as-code (Terraform), CI/CD pipelines, Automation scripts, ML/AI proof-of-concepts, Data structures, Algorithms, System design, Experience in the financial services or fintech space, Experience building applications on top of LLMs using frameworks like LangChain or LlamaIndex, Experience with MLOps tooling and concepts, Cloud certifications (AWS or GCP)</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>Equity IT</Employername>
      <Employerlogo>https://logos.yubhub.co/mlp.eightfold.ai.png</Employerlogo>
      <Employerdescription>Equity IT provides technology solutions to the financial services industry.</Employerdescription>
      <Employerwebsite>https://mlp.eightfold.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://mlp.eightfold.ai/careers/job/755953439247</Applyto>
      <Location>Miami, Florida, United States of America</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>87749959-700</externalid>
      <Title>Intern Data Engineering (all genders)</Title>
      <Description><![CDATA[<p>Join our Data Engineering team inside the Business Intelligence department, where you&#39;ll work with experienced engineers to build the data foundation that powers Holidu&#39;s growth.</p>
<p>As an intern, you&#39;ll get hands-on experience with real problems and have the opportunity to make a meaningful impact. You&#39;ll work on building and supporting data pipelines, digging into data quality, getting hands-on with cloud infrastructure, and exploring AI-assisted development.</p>
<p>Our team uses a range of technologies, including Redshift, Athena, DuckDB, Terraform, Docker, Jenkins, ELK, Grafana, Looker, OpsGenie, Kafka, Airbyte, and Fivetran. You&#39;ll have the chance to learn from experienced engineers and contribute to the development of our data systems.</p>
<p>In this role, you&#39;ll be part of a team that genuinely loves what they do and is passionate about building a better data foundation for Holidu. You&#39;ll have the opportunity to take responsibility from day one and develop through regular feedback.</p>
<p>We offer a fair salary, the chance to make a difference for hundreds of thousands of monthly users, and the opportunity to grow and develop through regular feedback. You&#39;ll also have access to a range of benefits, including a hybrid work policy, the chance to work from other local offices, and a corporate subscription to Urban Sports Club or a premium gym membership at a discounted rate.</p>
]]></Description>
      <Jobtype>internship</Jobtype>
      <Experiencelevel>intern</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, SQL, Git, Airflow, dbt, Docker, Cloud platform (AWS, GCP, etc.), LLM tools, AI-assisted coding</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Holidu Hosts GmbH</Employername>
      <Employerlogo>https://logos.yubhub.co/holidu.jobs.personio.com.png</Employerlogo>
      <Employerdescription>Holidu is a technology company that provides search engines for holiday rentals.</Employerdescription>
      <Employerwebsite>https://holidu.jobs.personio.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://holidu.jobs.personio.com/job/2557398</Applyto>
      <Location>Munich, Germany</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>32932504-2b5</externalid>
      <Title>Systematic Production Support Engineer</Title>
      <Description><![CDATA[<p>We are looking for an experienced professional to help us scale our systematic operations and support engineering capabilities.</p>
<p>This role directly supports portfolio management teams across Millennium, with operational excellence at the core. Our efforts are focused on delivering the highest quality returns to our investors – providing a world-class and reliable trading and technology platform is essential to this mission.</p>
<p>This is a unique opportunity to drive significant value creation for one of the world&#39;s leading investment managers.</p>
<p>Principal Responsibilities:</p>
<ul>
<li>Build, develop and maintain a reliable, scalable, and integrated platform for trading strategy monitoring, reporting, and operations.</li>
<li>Work with portfolio managers and other internal customers to reduce operational risk through:</li>
<li>Implementation of monitoring, reporting, and trade workflow solutions.</li>
<li>Implementation of automated systems and processes focused on trading and operations.</li>
<li>Streamlining development and deployment processes.</li>
<li>Implementation of MCP servers focused on assisting the rest of the Support Engineering team as well as proactively monitoring the production environment.</li>
</ul>
<p>Technical Qualification:</p>
<ul>
<li>5+ years of development experience in Python.</li>
<li>Experience working in a Linux / Unix environment.</li>
<li>Experience working with PostgreSQL or other relational databases.</li>
<li>Ability to understand and discuss requirements from portfolio managers.</li>
</ul>
<p>Preferred Skills and Experience:</p>
<ul>
<li>Understanding of NLP, supervised/unsupervised learning, and Generative AI models.</li>
<li>Experience operating and monitoring low-latency trading environments.</li>
<li>Familiarity with quantitative finance and electronic trading concepts.</li>
<li>Familiarity with financial data.</li>
<li>Broad understanding of equities, futures, FX, or other financial instruments.</li>
<li>Experience designing and developing distributed systems with a focus on backend development in C/C++, Java, Scala, Go, or C#.</li>
<li>Experience with Apache / Confluent Kafka.</li>
<li>Experience automating SDLC pipelines (e.g., Jenkins, TeamCity, or AWS CodePipeline).</li>
<li>Experience with containerization and orchestration technologies.</li>
<li>Experience building and deploying systems that utilize services provided by AWS, GCP or Azure.</li>
<li>Contributions to open-source projects.</li>
</ul>
<p>The estimated base salary range for this position is $100,000 to $175,000, which is specific to New York and may change in the future. Millennium pays a total compensation package which includes a base salary, discretionary performance bonus, and a comprehensive benefits package. When finalizing an offer, we take into consideration an individual&#39;s experience level and the qualifications they bring to the role to formulate a competitive total compensation package.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$100,000 to $175,000</Salaryrange>
      <Skills>Python, Linux / Unix, PostgreSQL, NLP, supervised/unsupervised learning, Generative AI models, Apache / Confluent Kafka, C/C++, Java, Scala, Go, C#, containerization, orchestration technologies, AWS, GCP, Azure</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>Equity IT</Employername>
      <Employerlogo>https://logos.yubhub.co/mlp.eightfold.ai.png</Employerlogo>
      <Employerdescription>Equity IT is a leading investment manager that provides investment management services to clients.</Employerdescription>
      <Employerwebsite>https://mlp.eightfold.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://mlp.eightfold.ai/careers/job/755954627501</Applyto>
      <Location>New York, New York, United States of America · Old Greenwich, Connecticut, United States of America</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>22b34fb3-996</externalid>
      <Title>Study Supply Manager</Title>
      <Description><![CDATA[<p>The Study Supply Manager will support Radioconjugates clinical studies, collaborating with Global Study Teams and external vendors to ensure clinical drug product needs are met across multiple programs.</p>
<p>This role builds strong relationships, acts as a key resource and escalation point, and contributes to the creation of strategy for clinical drug supply, translating strategic objectives into clear, actionable plans.</p>
<p>Working both within clinical drug supply and cross-functionally, the Study Supply Manager helps develop consistent practices, routinely identifies risks in the supply chain, recommends mitigation plans to management, and implements effective solutions.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Developing and maintaining strong, collaborative relationships with key stakeholders across Global Study Teams, Quality, Radiation Science and CMC, ensuring alignment on study needs and timelines.</li>
<li>Ensuring key project milestones are met by defining, negotiating and communicating clinical supply plan timelines to internal and external stakeholders and partners.</li>
<li>Serving as an escalation point to Clinical Logistics, Clinical Operations, CMC and/or CROs, resolving issues quickly and effectively.</li>
<li>Participating in functional initiatives, including process development and review, and system/process improvement to drive operational excellence.</li>
<li>Supporting the creation and maintenance of Radioconjugates pharmacy manuals to enable safe and compliant handling at clinical sites.</li>
<li>Overseeing ordering, tracking, delivery and receipt of clinical supplies and drug to clinical sites, ensuring availability for patients when needed.</li>
<li>Leading monthly network capacity planning so that supply plans are aligned with clinical programs, manufacturing schedules and quality capabilities.</li>
<li>Collaborating with Radio Pharm Manufacturing and Quality to manage any issues arising during shipping and/or receipt, communicating delays to the clinical team and, when applicable, managing deviations and corrective/preventive actions.</li>
<li>Continuously monitoring supply chain risks across multiple programs, proposing mitigation strategies and implementing agreed actions to protect study continuity.</li>
</ul>
<p>Essential skills and experience include:</p>
<ul>
<li>Bachelor’s degree with 5+ years of experience in clinical supply chain, Phase I-III.</li>
<li>Knowledge of ICH GCP Guidelines and local and international regulatory requirements.</li>
<li>Demand planning, forecasting and analytical skills.</li>
<li>Advanced problem-solving ability.</li>
<li>Flexibility in working hours to support global supply activities.</li>
<li>Excellent written, verbal and interpersonal communication skills and comfort working with multiple internal and external stakeholders.</li>
</ul>
<p>Desirable skills and experience include:</p>
<ul>
<li>Certification in Supply Chain and Operations Management (e.g., CSCP, CPIM).</li>
<li>Prior experience in radiopharmaceutical/pharmaceutical product distribution of Class 7 dangerous goods, including import/export.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$92,582-$138,874</Salaryrange>
      <Skills>clinical supply chain, ICH GCP Guidelines, demand planning, forecasting, analytical skills, problem-solving ability, communication skills, Supply Chain and Operations Management, radiopharmaceutical/pharmaceutical product distribution</Skills>
      <Category>Operations</Category>
      <Industry>Healthcare</Industry>
      <Employername>Clinical Supply Chain</Employername>
      <Employerlogo>https://logos.yubhub.co/astrazeneca.eightfold.ai.png</Employerlogo>
      <Employerdescription>AstraZeneca is a multinational pharmaceutical and biotechnology company that develops and commercialises prescription medicines and vaccines for diseases such as cancer, cardiovascular disease, diabetes, and respiratory disease.</Employerdescription>
      <Employerwebsite>https://astrazeneca.eightfold.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://astrazeneca.eightfold.ai/careers/job/563877689841091</Applyto>
      <Location>Boston, Massachusetts, United States of America</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>939bf63c-00b</externalid>
      <Title>Specialist, Global Clinical Solutions (IRT Lead)</Title>
      <Description><![CDATA[<p>Role Details</p>
<p>The Specialist, Global Clinical Solutions (IRT Lead) provides global centralized services across drug projects and related activities, helping clinical development programs meet time, cost, and quality expectations across all delivery models.</p>
<p>Key Responsibilities</p>
<ul>
<li>Deliver GCS services that support clinical development activities across multiple projects and stakeholders.</li>
<li>Set up and maintain systems, tools, and data associated with projects, services, and technology in partnership with Study Teams and external partners, ensuring appropriate standards, completeness, quality, and consistency.</li>
<li>Support life cycle management and business continuity for operational processes, procedures, systems, tools, standards, procedural documentation, and training materials.</li>
<li>Provide support to user communities by conducting relevant process, system, and tool trainings, facilitating knowledge sharing, establishing best practice, and maintaining effective communication with stakeholders across GCS and AstraZeneca.</li>
<li>Perform analyses of processes and tools to define business usage and identify opportunities to improve efficiency and effectiveness of systems, methods, and processes; support the development of User Requirement Specifications and User Acceptance Tests.</li>
<li>Contribute to business cases for continuous improvement projects that enhance how we deliver clinical studies.</li>
<li>Prioritize workload effectively to achieve personal and work unit targets in a dynamic environment.</li>
<li>Participate in change initiatives that evolve our ways of working and enable better outcomes for patients.</li>
</ul>
<p>Essential Skills and Experience</p>
<ul>
<li>Bachelor&#39;s degree with 0+ years of work experience in the pharmaceutical industry or in clinical study delivery/clinical development processes.</li>
<li>Proven organizational and analytical skills, as well as proven ability to multitask.</li>
<li>Strong time management skills and task-oriented performance.</li>
<li>Previous administrative training/experience.</li>
<li>Computer proficiency.</li>
<li>Excellent knowledge of spoken and written English.</li>
<li>Strong communication skills.</li>
</ul>
<p>Desirable Skills and Experience</p>
<ul>
<li>A good understanding of the clinical study process.</li>
<li>Programming experience or programming aptitude.</li>
<li>Knowledge of pharmaceutical drug development and clinical study processes and associated government regulations, ICH GCP.</li>
<li>Shown willingness and ability to train others on study support processes and procedures.</li>
<li>Demonstrate the ability to proactively identify risks and issues as well as possible solutions.</li>
<li>GxP trained.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>entry</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Bachelor&apos;s degree, Pharmaceutical industry experience, Clinical study delivery/clinical development processes experience, Organizational and analytical skills, Multitasking ability, Time management skills, Administrative training/experience, Computer proficiency, Excellent knowledge of spoken and written English, Strong communication skills, Good understanding of clinical study process, Programming experience or programming aptitude, Knowledge of pharmaceutical drug development and clinical study processes and associated government regulations, ICH GCP, Ability to train others on study support processes and procedures, Ability to proactively identify risks and issues as well as possible solutions, GxP trained</Skills>
      <Category>Engineering</Category>
      <Industry>Healthcare</Industry>
      <Employername>GCS Services</Employername>
      <Employerlogo>https://logos.yubhub.co/astrazeneca.eightfold.ai.png</Employerlogo>
      <Employerdescription>AstraZeneca&apos;s Global Clinical Solutions (GCS) delivers services and technology to support clinical development across drug projects.</Employerdescription>
      <Employerwebsite>https://astrazeneca.eightfold.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://astrazeneca.eightfold.ai/careers/job/563877689867686</Applyto>
      <Location>Durham, North Carolina, United States of America</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>64bb6566-575</externalid>
      <Title>Senior ‘Developer Infrastructure’ Engineer</Title>
      <Description><![CDATA[<p>The GALAXY Platform Execution &amp; Exchange Data (SPEED) Team is a core part of Millennium&#39;s technology organisation, powering the firm&#39;s lowest-latency solutions for systematic and high-frequency trading.</p>
<p>SPEED delivers the live trading and market-data platforms used by portfolio managers and risk systems, including Latency Critical Trading (LCT), DMA OMS (Client Direct), DMA market data feeds, packet capture (PCAPs), enterprise market data, and intraday data services across latency tiers from sub-100 nanoseconds to millisecond-sensitive workflows.</p>
<p>As a Senior Developer Infrastructure Engineer on SPEED, you will own and evolve the build and CI/CD infrastructure that underpins these mission-critical systems.</p>
<p>By designing scalable build pipelines, shared tooling, and reliable release workflows, you will directly enhance developer productivity and enable fast, safe iteration on some of the firm&#39;s most performance-sensitive code.</p>
<p>This role offers the opportunity to shape core engineering practices while contributing to platforms that are central to Millennium&#39;s trading edge.</p>
<p>Principal Responsibilities</p>
<ul>
<li>Design, build, and maintain a highly scalable, parallel, and cached build system for a large, performance-sensitive codebase.</li>
<li>Own and continually optimise CI/CD pipelines to minimise build/test times, reduce flakiness, and improve developer productivity.</li>
<li>Operate with an AI-first mindset across the SDLC, using automation by default to streamline build, test, and release workflows.</li>
<li>Integrate and operationalise AI tools (e.g., copilots, workflow automation, AI-driven analytics) to eliminate manual toil, accelerate development, and codify reusable AI-enabled patterns for the broader engineering organisation.</li>
<li>Design and operate containerised environments (e.g., Docker, Kubernetes) to maximise utilisation, reliability, and scalability across environments.</li>
<li>Implement and manage artifact storage, dependency management, and versioning strategies for large, distributed systems.</li>
<li>Develop and maintain shared libraries, CLIs, scripts, and internal platforms that reduce friction and enable self-service for engineers.</li>
<li>Build and enhance test suites and environment provisioning, leveraging AI and automation where appropriate for smarter checks, triage, and observability.</li>
<li>Monitor, instrument, and improve the reliability, observability, and performance of build and CI/CD systems using metrics, dashboards, and alerting.</li>
<li>Partner with trading and engineering teams to understand requirements, remove friction, and champion best practices for building, testing, and releasing software.</li>
</ul>
<p>Qualifications/Skills Required</p>
<ul>
<li>5+ years of software engineering or DevInfra/Platform/DevOps experience, with significant focus on build systems and CI/CD.</li>
<li>Strong programming skills in one or more languages (e.g., Python, Rust, Go, C++) for automation and tooling.</li>
<li>Hands-on experience with at least one modern build system (e.g., Bazel, Buck2).</li>
<li>Solid understanding of source control (Git), branching strategies, and release management.</li>
<li>Experience with monorepos is a plus.</li>
<li>Experience scaling build and test infrastructure for growing codebases and teams (parallelization, test sharding, remote execution, caching).</li>
<li>Experience designing or participating in processes, systems, or playbooks that leverage AI to streamline work rather than adding headcount.</li>
<li>Familiarity with containers and cloud infrastructure (Docker, Kubernetes, and major cloud providers such as AWS/GCP/Azure).</li>
<li>Strong communication and collaboration skills; comfortable partnering with multiple teams and driving cross-cutting initiatives.</li>
</ul>
<p>The estimated base salary range for this position is $175,000 to $250,000, which is specific to New York and may change in the future. Millennium pays a total compensation package which includes a base salary, discretionary performance bonus, and a comprehensive benefits package. When finalising an offer, we take into consideration an individual&#39;s experience level and the qualifications they bring to the role to formulate a competitive total compensation package.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$175,000 to $250,000</Salaryrange>
      <Skills>Python, Rust, Go, C++, Bazel, Buck2, Git, Kubernetes, Docker, AWS, GCP, Azure</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Unknown</Employername>
      <Employerlogo>https://logos.yubhub.co/mlp.eightfold.ai.png</Employerlogo>
      <Employerdescription>Millennium is a company that provides equities, quant strategies, and shared services technology.</Employerdescription>
      <Employerwebsite>https://mlp.eightfold.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://mlp.eightfold.ai/careers/job/755954695574</Applyto>
      <Location>New York, New York, United States of America</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>c7e58f60-5fa</externalid>
      <Title>Software Engineer - Learning Engineering and Data (LEaD) Program</Title>
      <Description><![CDATA[<p>As a member of our Miami-based Learning Engineering and Data (LEaD) program, you will work alongside technology mentors and leaders to develop and maintain applications and tools spanning front-office, middle-office, and back-office functions in a dynamic and fast-paced environment.</p>
<p>Our technology teams are looking for Software Engineers with C++, Python, or Java to design, implement, and maintain systems supporting our technology business functions.</p>
<p>The candidate is expected to:</p>
<ul>
<li>Work closely with technology teams to develop requirements and specifications for varying projects</li>
<li>Take part in the development and enhancement of the backend distributed system</li>
<li>Apply AI/ML (deep learning, natural language processing, large language models) to practical and comprehensive technology solutions</li>
</ul>
<p>Qualifications/Skills Required:</p>
<ul>
<li>2-5 years of experience working with C++, Python, or Java</li>
<li>Experience with ML libraries, Pandas, NumPy, FastAPI (Python), Boost (C++), Spring Boot (Java)</li>
<li>Must be comfortable working in both Unix/Linux and Windows environments</li>
<li>Good understanding of various design patterns</li>
<li>Strong analytical and mathematical skills along with an interest/ability to quickly learn additional languages and quantitative concepts</li>
<li>Solid communication skills</li>
<li>Able to work collaboratively in a fast-paced environment with a passion for solving complex problems</li>
<li>Detail oriented, organized, demonstrating thoroughness and strong ownership of work</li>
</ul>
<p>Desirable Skills/Knowledge:</p>
<ul>
<li>Bachelor&#39;s or Master&#39;s degree in Computer Science, Applied Mathematics, Statistics, Data Science/ML/AI, or a related technical or engineering field</li>
<li>Demonstrable passion for developing LLM-powered products whether that is through commercial experience or open source/academic projects you have worked on in your own time</li>
<li>Hands-on experience building ML and data pipeline architectures</li>
<li>Understanding of distributed messaging systems</li>
<li>Experience with Docker/Kubernetes, microservices architecture in a cloud environment (AWS, GCP preferred)</li>
<li>Experience with relational and non-relational database platforms</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>C++, Python, Java, ML libraries, Pandas, NumPy, FastAPI, Boost, Spring Boot, Bachelor&apos;s or Master&apos;s degree in Computer Science, Applied Mathematics, Statistics, Data Science/ML/AI, or a related technical or engineering field, Demonstrable passion for developing LLM-powered products, Hands-on experience building ML and data pipeline architectures, Understanding of distributed messaging systems, Experience with Docker/Kubernetes, microservices architecture in a cloud environment (AWS, GCP preferred), Experience with relational and non-relational database platforms</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>IT LEaD Program</Employername>
      <Employerlogo>https://logos.yubhub.co/mlp.eightfold.ai.png</Employerlogo>
      <Employerdescription>Millennium is a large global alternative investment manager.</Employerdescription>
      <Employerwebsite>https://mlp.eightfold.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://mlp.eightfold.ai/careers/job/755953879362</Applyto>
      <Location>Miami, Florida, United States of America</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>6964b8e4-caf</externalid>
      <Title>Cybersecurity Engineer</Title>
      <Description><![CDATA[<p>Job Title: Cybersecurity Engineer</p>
<p>Introduction to role</p>
<p>Cybersecurity sits at the heart of our IT strategy. As we move towards ambitious objectives, we are looking for individuals who focus on innovation to maintain a sustainable risk position against an evolving threat landscape, who recognise that adversaries may include organised crime syndicates or state-sponsored attackers, and who understand attackers&#39; motivations and ways of working.</p>
<p>In this role, you will operate within AstraZeneca&#39;s global cybersecurity organisation, collaborating with and influencing multiple functions across China, India, Mexico, Sweden, the US and the UK. Ready to help defend a global enterprise where technology directly supports life-changing medicines?</p>
<p>Accountabilities</p>
<p>In this role, you will engineer cybersecurity solutions across cloud, on-premises and third-party collaboration environments, with a predominant focus on cloud and data. You will collaborate with other teams to perform, assess and evolve IT processes that intersect our cybersecurity priorities, ensuring security is embedded into how work gets done. You will map governance and compliance frameworks and their controls to technical implementation, shifting hardening processes as far left as possible in the lifecycle. You will leverage deep understanding of threats, weaknesses and vulnerabilities around cloud and data to help other areas respond promptly and effectively to contain breaches or address areas of concern. You will also contribute to continuous improvement by analysing incidents, refining standards and influencing architectural decisions that balance risk, performance and usability.</p>
<p>How will you use your expertise to raise the bar?</p>
<p>Essential Skills/Experience</p>
<ul>
<li>Minimum 10 years of experience</li>
<li>Bachelor&#39;s Degree</li>
<li>Must have broad enterprise IT experience with significant cloud and data exposure.</li>
<li>Must have in-depth understanding of security and networking protocols, cryptography, and modern authentication and authorization protocols.</li>
<li>Must have experience designing, deploying, and operating secure networks, systems, application and security architectures at scale.</li>
<li>Must have experience configuring and managing cloud security services in AWS, Azure and GCP at organisational scale.</li>
<li>Must have experience researching, designing, and implementing security policies, standards, and procedures, including those in cybersecurity frameworks such as MITRE ATT&amp;CK, NIST CSF, NIST SP 800-53, and NIST SP 800-61, as well as implementing cloud security reference architectures.</li>
<li>Should have experience working in a software development and systems administration organisation, implementing DevSecOps and process automation.</li>
<li>Should have the ability to conduct post-mortems on security incidents and use post-mortem findings to drive uplift in policies, procedures, and standards.</li>
<li>Familiarity with CSPM, CNAPP, and Cloud EDR platforms</li>
<li>Expertise with Microsoft Defender, Sentinel and Splunk</li>
</ul>
<p>Desirable Skills/Experience</p>
<ul>
<li>Identify and articulate architectural trade-offs.</li>
<li>Embed process, governance and security into workflow and technology.</li>
<li>Design and implement software tools and services using modern programming languages.</li>
<li>Manage and lead projects delivering prioritised initiatives at challenging deadlines.</li>
<li>Exert positive influence in a matrixed organisation to drive technology evolution.</li>
<li>Drive efforts to achieve process and technology improvement at scale.</li>
</ul>
<p>The annual base pay for this position ranges from 136,044.00 - 204,066.00 USD Annual (80% - 120%). Hourly and salaried non-exempt employees will also be paid overtime pay when working qualifying overtime hours. Base pay offered may vary depending on multiple individualised factors, including market location, job-related knowledge, skills, and experience. In addition, our positions offer a short-term incentive bonus opportunity; eligibility to participate in our equity-based long-term incentive programme (salaried roles), to receive a retirement contribution (hourly roles), and commission payment eligibility (sales roles).</p>
<p>Benefits offered include a qualified retirement programme [401(k) plan]; paid vacation and holidays; paid leaves; and health benefits including medical, prescription drug, dental, and vision coverage in accordance with the terms and conditions of the applicable plans. Additional details of participation in these benefit plans will be provided if an employee receives an offer of employment. If hired, the employee will be in an &#39;at-will position&#39; and the Company reserves the right to modify base pay (as well as any other discretionary payment or compensation programme) at any time, including for reasons related to individual performance, Company or individual department/team performance, and market factors.</p>
<p>When we put unexpected teams in the same room, we unleash bold thinking with the power to inspire life-changing medicines. In-person working gives us the platform we need to connect, work at pace and challenge perceptions. That&#39;s why we work, on average, a minimum of three days per week from the office. But that doesn&#39;t mean we&#39;re not flexible. We balance the expectation of being in the office while respecting individual flexibility. Join us in our unique and ambitious world.</p>
<p>AstraZeneca offers an environment where cybersecurity work has real-world impact on patients&#39; lives, not just systems and data. Here, technology experts collaborate with scientists and business teams to unlock the potential of data, analytics, AI and machine learning, constantly experimenting with new approaches while keeping critical platforms secure. There is strong investment in digital capabilities, room to explore modern tools through initiatives like hackathons, and a culture that values curiosity, coaching and continuous learning so that every day brings opportunities to grow skills and shape both personal development and the future of healthcare technology.</p>
<p>If this role matches your skills and ambitions, apply now and help protect the digital foundations that enable life-changing medicines!</p>
<p>Date Posted 17-Apr-2026 Closing Date 03-May-2026</p>
<p>Our mission is to build an inclusive environment where equal employment opportunities are available to all applicants and employees. In furtherance of that mission, we welcome and consider applications from all qualified candidates, regardless of their protected characteristics. If you have a disability or special need that requires accommodation, please complete the corresponding section in the application form.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Minimum 10 years of experience, Bachelor&apos;s Degree, Broad enterprise IT experience with significant cloud and data exposure, In-depth understanding of security and networking protocols, cryptography, and modern authentication and authorization protocols, Experience designing, deploying, and operating secure networks, systems, application and security architectures at scale, Experience configuring and managing cloud security services in AWS, Azure and GCP at organisational scale, Experience researching, designing, and implementing security policies, standards, and procedures, including those in cybersecurity frameworks such as MITRE ATT&amp;CK, NIST CSF, NIST SP 800-53, and NIST SP 800-61, as well as implementing cloud security reference architectures, Experience working in a software development and systems administration organisation, implementing DevSecOps and process automation, Ability to conduct post-mortems on security incidents and use post-mortem findings to drive uplift in policies, procedures, and standards, Familiarity with CSPM, CNAPP, and Cloud EDR platforms, Expertise with Microsoft Defender, Sentinel and Splunk</Skills>
      <Category>Engineering</Category>
      <Industry>Healthcare</Industry>
      <Employername>AstraZeneca</Employername>
      <Employerlogo>https://logos.yubhub.co/astrazeneca.eightfold.ai.png</Employerlogo>
      <Employerdescription>AstraZeneca is a multinational pharmaceutical and biotechnology company that develops and commercializes prescription medicines and vaccines for diseases across various therapeutic areas.</Employerdescription>
      <Employerwebsite>https://astrazeneca.eightfold.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://astrazeneca.eightfold.ai/careers/job/563877689899183</Applyto>
      <Location>Gaithersburg, Maryland, United States of America</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>867e3558-9a7</externalid>
      <Title>Team Lead, Java Engineer - Equities Trading Technologies</Title>
      <Description><![CDATA[<p>We are seeking a Team Lead to maintain and enhance our mission-critical, multi-asset trading platform that is used firm-wide daily. This individual will own the existing Java Swing code base, while also playing a pivotal role in designing the next-generation HTML5 trading UI.</p>
<p>The ideal candidate should have a proven track record in developing and maintaining Java-based front-end applications in the finance sector. Exceptional team collaboration skills and the ability to work effectively with colleagues across global time zones are crucial.</p>
<p>Millennium strongly prioritizes a collaborative culture that revolves around teamwork and low egos. You should be able to work in a fast-paced environment, both collaboratively and individually, while managing multiple projects simultaneously.</p>
<p>The successful individual will have a strong sense of urgency, emotional intelligence, and prioritize a high-caliber end-user experience.</p>
<p>Qualifications:</p>
<ul>
<li>Bachelor’s degree in computer science or comparable</li>
<li>7+ years of professional experience with Core Java and Java Swing; experience in electronic trading systems and/or trader workstation environments strongly preferred.</li>
<li>5+ years of experience working with HTML, JavaScript, CSS, and JQuery</li>
<li>Deep understanding of multithreading and distributed systems within a high performance, latency-sensitive environment</li>
<li>Strong knowledge of unit testing frameworks and continuous test-driven development practices</li>
<li>Enterprise level experience with design patterns such as MVC, MV, MVP</li>
<li>Enterprise level experience with RESTful web services</li>
<li>Previous experience liaising with non-technology stakeholders, polished and proactive communication skills</li>
</ul>
<p>Beneficial/Ideal Technology Experience:</p>
<ul>
<li>EXT-JS, AngularJS, AJAX, JSON experience is very beneficial</li>
<li>Knowledge of equities, futures, options and other asset classes is preferred</li>
<li>Enterprise level experience with OMS architecture and design is preferred</li>
<li>Experience with messaging middleware, Solace preferred</li>
<li>Experience with relational and NoSQL databases. MongoDB preferred</li>
<li>Experience working with financial data, including reference data, market data, order/execution and positions data.</li>
<li>Experience working with Cloud: AWS (preferred), GCP or Azure</li>
</ul>
<p>Millennium offers a total compensation package that includes a base salary, discretionary performance bonus, and a comprehensive benefits package. The estimated base salary range for this position is $175,000 to $250,000, which is specific to New York and may change in the future.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$175,000 to $250,000</Salaryrange>
      <Skills>Core Java, Java Swing, HTML, JavaScript, CSS, JQuery, Multithreading, Distributed systems, Unit testing frameworks, Continuous test-driven development practices, MVC, MV, MVP, RESTful web services, EXT-JS, AngularJS, AJAX, JSON, Equities, Futures, Options, OMS architecture and design, Messaging middleware, Solace, Relational databases, NoSQL databases, MongoDB, Financial data, Cloud, AWS, GCP, Azure</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>Millennium</Employername>
      <Employerlogo>https://logos.yubhub.co/mlp.eightfold.ai.png</Employerlogo>
      <Employerdescription>Millennium is a global alternative investment management firm whose technology teams build and operate mission-critical trading platforms.</Employerdescription>
      <Employerwebsite>https://mlp.eightfold.ai</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>175000</Compensationmin>
      <Compensationmax>250000</Compensationmax>
      <Applyto>https://mlp.eightfold.ai/careers/job/755955412056</Applyto>
      <Location>Miami, Florida, United States of America · New York, New York, United States of America</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>7275ef33-009</externalid>
      <Title>Staff Data Engineer</Title>
      <Description><![CDATA[<p>At Bayer, we&#39;re seeking a Staff Data Engineer to join our team. As a Staff Data Engineer, you will design and lead the implementation of data flows that connect operational systems with analytics and business intelligence (BI) systems. You will recognize opportunities to reuse existing data flows, lead the build of data streaming systems, optimize code so that processes perform optimally, and lead work on database management.</p>
<p>Communicating Between Technical and Non-Technical Colleagues</p>
<p>As a Staff Data Engineer, you will communicate effectively with technical and non-technical stakeholders, support and host discussions within a multidisciplinary team, and be an advocate for the team externally.</p>
<p>Data Analysis and Synthesis</p>
<p>You will undertake data profiling and source system analysis, and present clear insights to colleagues to support the end use of the data.</p>
<p>Data Development Process</p>
<p>You will design, build and test data products that are complex or large scale, and build teams to complete data integration services.</p>
<p>Data Innovation</p>
<p>You will understand the impact on the organization of emerging trends in data tools, analysis techniques and data usage.</p>
<p>Data Integration Design</p>
<p>You will select and implement the appropriate technologies to deliver resilient, scalable and future-proofed data solutions and integration pipelines.</p>
<p>Data Modeling</p>
<p>You will produce relevant data models across multiple subject areas, explain which models to use for which purpose, understand industry-recognised data modelling patterns and standards, and when to apply them, compare and align different data models.</p>
<p>Metadata Management</p>
<p>You will design an appropriate metadata repository, present changes to existing metadata repositories, understand a range of tools for storing and working with metadata, and provide oversight and advice to less experienced members of the team.</p>
<p>Problem Resolution</p>
<p>You will respond to problems in databases, data processes, data products and services as they occur, initiate actions, monitor services and identify trends to resolve problems, determine the appropriate remedy and assist with its implementation, and with preventative measures.</p>
<p>Programming and Build</p>
<p>You will use agreed standards and tools to design, code, test, correct and document moderate-to-complex programs and scripts from agreed specifications and subsequent iterations, and collaborate with others to review specifications where appropriate.</p>
<p>Technical Understanding</p>
<p>You will understand the core technical concepts related to the role, and apply them with guidance.</p>
<p>Testing</p>
<p>You will review requirements and specifications and define test conditions, identify issues and risks associated with the work, and analyse and report on test activities and results.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$114,400 to $171,600</Salaryrange>
      <Skills>Proficiency in programming language such as Python or Java, Experience with Big Data technologies such as Hadoop, Spark, and Kafka, Familiarity with ETL processes and tools, Knowledge of SQL and NoSQL databases, Strong understanding of relational databases, Experience with data warehousing solutions, Proficiency with cloud platforms, Expertise in data modeling and design, Experience in designing and building scalable data pipelines, Experience with RESTful APIs and data integration, Relevant certifications (e.g., GCP Certified, AWS Certified, Azure Certified), Bachelor&apos;s degree in Computer Science, Data Engineering, Information Technology, or a related field, Strong analytical and communication skills, Ability to work collaboratively in a team environment, High level of accuracy and attention to detail</Skills>
      <Category>Engineering</Category>
      <Industry>Healthcare</Industry>
      <Employername>Bayer</Employername>
      <Employerlogo>https://logos.yubhub.co/talent.bayer.com.png</Employerlogo>
      <Employerdescription>Bayer is a multinational pharmaceutical and life sciences company that develops and manufactures a wide range of healthcare products.</Employerdescription>
      <Employerwebsite>https://talent.bayer.com</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>114400</Compensationmin>
      <Compensationmax>171600</Compensationmax>
      <Applyto>https://talent.bayer.com/careers/job/562949976928777</Applyto>
      <Location></Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>cf6d98aa-97b</externalid>
      <Title>Clinical Study Administrator - Contracts and Budgets</Title>
      <Description><![CDATA[<p><strong>Job Description</strong></p>
<p>We are seeking a Clinical Study Administrator - Contracts and Budgets to join our team at US SM&amp;M. As a Clinical Study Administrator, you will be responsible for the coordination and administration of clinical studies from start-up through execution and close-out. You will act as the main local administrative contact and work closely with CRAs and/or LSAD for the duration of assigned studies.</p>
<p><strong>Key Responsibilities</strong></p>
<ul>
<li>Coordinate administrative tasks during the study process, audits and regulatory inspections in line with company policies and SOPs.</li>
<li>Support the collection, preparation, review and tracking of documents required for the application process.</li>
<li>Support the Study Start-up team with timely submissions to Ethics Committees/Institutional Review Boards (EC/IRB) and, where applicable, Regulatory Authorities.</li>
<li>Take operational responsibility for correct set-up and ongoing maintenance of the local electronic Trial Master File (eTMF) and Investigator Study File (ISF), ensuring document tracking in accordance with ICH-GCP and local requirements to maintain inspection readiness.</li>
<li>Ensure all study documents are prepared for final archiving and support CRAs with ISF close-out activities.</li>
<li>Contribute to the production and maintenance of study documents, ensuring compliance with required templates and versions.</li>
<li>Manage clinical-regulatory documents in the Global Regulatory management system, as required.</li>
<li>Manage clinical-regulatory documents for electronic applications and submissions, complying with Submission Ready Standards (SRS) to support efficient publishing and delivery to regulatory authorities, where applicable.</li>
<li>Serve as primary point of contact for legal negotiations related to confidentiality agreements and amendments.</li>
<li>Process study-level and site-level amendments.</li>
<li>Prepare and/or support site-level contract preparation, except where a specific local role is assigned.</li>
<li>Prepare, support and perform payments to Health Care Organisations (HCO) and Health Care Professionals (HCP) in accordance with local regulations.</li>
<li>Set up, populate and accurately maintain information in AstraZeneca tracking and communication tools (e.g. Clinical Trial Management System [CTMS]) and support others in the use of these systems, except in countries with a designated system administrator.</li>
<li>Manage and contribute to the coordination and tracking of study materials and equipment.</li>
<li>Interface with Data Management Centres and/or Global Clinical Solution representatives to support delivery of study-related documents and materials.</li>
<li>Interface with investigators, external service providers and Clinical Research Associates (CRAs) to facilitate effective document collection and study delivery.</li>
<li>Lead practical arrangements for internal and external meetings (e.g. study team meetings, monitors&#39; meetings, investigators&#39; meetings), liaising with internal and external participants and vendors in line with applicable international and local codes.</li>
<li>Prepare, contribute to and distribute material for meetings, newsletters and web content, in alignment with LST and global stakeholders.</li>
<li>Perform document layout and language checks, as well as copying and distribution.</li>
<li>Provide support for local translation and English language checks, as required.</li>
<li>Handle printing and distribution of documents (e.g. letters, meeting minutes) and manage and archive study- and country-related emails.</li>
<li>Ensure compliance with AstraZeneca&#39;s Code of Ethics, policies and procedures, including those related to people, finance, technology, security and Safety, Health and Environment (SHE).</li>
<li>Adhere to all relevant local, national and regional legislation.</li>
<li>Carry out additional country-specific tasks in accordance with local organisational needs, when assigned.</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>Bachelor&#39;s degree aligned to the knowledge and skills required for the role.</li>
<li>0+ years of experience required.</li>
<li>Relevant knowledge of the drug development process, international guidelines (ICH-GCP) and applicable country regulations.</li>
<li>Personal effectiveness and strong self-accountability.</li>
<li>Learning agility.</li>
<li>Financial, technology and process competency.</li>
<li>Active listening and fluency in written and spoken business-level English.</li>
<li>High integrity and ethical standards.</li>
<li>Ability to work effectively as part of a team in both in-person and virtual settings; demonstrates cultural awareness.</li>
<li>Ability to identify and champion more efficient delivery of quality clinical trials with optimised cost and time.</li>
<li>Ability to travel nationally and internationally, as required.</li>
<li>Valid driving licence, if required by country of employment.</li>
<li>Strong communication and teamwork skills, including collaboration, business partnering and impactful site conversations.</li>
<li>Effective, risk-based thinking, including planning and alignment, problem solving, critical thinking and decision making.</li>
<li>Clinical study operations (GCP) and quality management, including Good Documentation Practice (GDP).</li>
</ul>
<p><strong>How to Apply</strong></p>
<p>If you are interested in this opportunity, please submit your application through our website.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel></Experiencelevel>
      <Workarrangement></Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Clinical study operations (GCP), Quality management, Good Documentation Practice (GDP), Personal effectiveness, Self-accountability, Learning agility, Financial competence, Technology competence, Process competence, Active listening, Business-level English, High integrity, Ethical standards, Cultural awareness, Risk-based thinking, Planning and alignment, Problem solving, Critical thinking, Decision making</Skills>
      <Category>Operations</Category>
      <Industry>Healthcare</Industry>
      <Employername>AstraZeneca</Employername>
      <Employerlogo>https://logos.yubhub.co/astrazeneca.eightfold.ai.png</Employerlogo>
      <Employerdescription>AstraZeneca is a multinational pharmaceutical and biotechnology company; its US SM&amp;M organisation provides clinical trial management services and operates globally with a large team of professionals.</Employerdescription>
      <Employerwebsite>https://astrazeneca.eightfold.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://astrazeneca.eightfold.ai/careers/job/563877689867786</Applyto>
      <Location>Wilmington, Delaware, United States of America</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>21f5f6c3-734</externalid>
      <Title>Data Engineer</Title>
      <Description><![CDATA[<p><strong>About the Role</strong></p>
<p>We are at a pivotal scaling point where our data ambitions have outpaced our current setup, and we need a Data Engineer to architect the professional-grade foundations of our platform.</p>
<p>This role exists to bridge the gap between &quot;getting data&quot; and &quot;engineering data,&quot; moving us from manual syncs to a fully automated ecosystem. By building custom pipelines and implementing a robust orchestration layer, you will directly enable our Operations teams and leadership to transition from basic reporting to sophisticated, AI-ready data products.</p>
<p>Your primary focus will be on Infrastructure-as-Code, orchestration, and building a resilient &quot;plumbing&quot; system that serves as the backbone for our entire Product and GTM strategy.</p>
<p><strong>Your 12-Month Journey</strong></p>
<p>During the first 3 months, you will learn about our existing stack (GCP, BigQuery, Airbyte, dbt) and understand the current pain points in our data flow. You will identify and execute &quot;low-hanging fruit&quot; improvements to our product usage analytics, providing immediate value to the Product and GTM teams. You’ll begin designing the blueprint for our custom data pipelines and the migration strategy for moving our infrastructure into Terraform.</p>
<p>Within 6 months, you will have deployed our new orchestration layer (e.g., Airflow or Dagster) and successfully transitioned our first set of custom pipelines to production. Collaborating with the Analytics Engineer, you will enable a unified view of our customer journey by successfully merging product usage data with CRM and billing data. At this point, a significant portion of our data infrastructure will be defined as code, reducing manual overhead and increasing deployment reliability.</p>
<p>After 1 year, you will take full strategic ownership of the data platform and its long-term architecture. You will act as the go-to technical expert for the leadership team, advising on the scalability of new data-driven features. You will lay the groundwork for AI and Machine Learning initiatives by ensuring our data warehouse has the right quality controls, governance, and low-latency access patterns in place.</p>
<p><strong>What You’ll Be Doing</strong></p>
<p>Architect Scalable Infrastructure-as-Code: Take our existing foundations to the next level by migrating all GCP and BigQuery resources into Terraform. You will establish automated CI/CD patterns to ensure our entire data environment is reproducible, version-controlled, and enterprise-ready.</p>
<p>Deploy State-of-the-Art Pipelines: Design, deploy, and operate high-quality production ELT pipelines. You will implement a modern orchestration layer (e.g., Airflow or Dagster) to build custom Python-based integrations while maintaining and optimizing our existing syncs.</p>
<p>Champion Data Quality &amp; Performance: Act as the guardian of our data platform. You will implement rigorous testing and monitoring protocols to ensure data is accurate and timely. You will proactively identify BigQuery bottlenecks, optimizing query performance and resource utilization.</p>
<p>Technical Roadmap &amp; Ownership: scope and architect end-to-end data flows from production source to warehouse. Manage your own technical backlog, prioritizing infrastructure stability over technical debt. You will ensure platform security and SOC2 compliance through PII masking, data contracts, and robust access controls.</p>
<p>Collaboration: You will work in a tight loop with the Analytics Engineer to turn raw data into actionable products. You will partner daily with DataOps and RevOps to understand business requirements, with occasional strategic syncs with DevOps and R&amp;D to align on production schema changes and global infrastructure standards.</p>
<p><strong>What You Bring</strong></p>
<ul>
<li>Solid experience in Data Engineering, with a track record of building and evolving data ingestion infrastructure in cloud environments</li>
<li>The Modern Data Stack: familiarity with dbt and Airbyte/Fivetran; you understand how these tools fit into a broader ecosystem</li>
<li>Expertise in BigQuery (partitioning, clustering, IAM) and the broader GCP ecosystem; Infrastructure-as-Code (Terraform)</li>
<li>Hands-on experience with Airflow, Dagster, or similar orchestration tools; you know how to design DAGs that are resilient and easy to debug</li>
<li>DevOps practices in the data context: familiarity with CI/CD best practices as they apply to data (data testing, automated deployments)</li>
<li>Programming: expert-level Python and advanced SQL; you are comfortable writing clean, testable, and modular code</li>
<li>Comfortable in a fast-paced environment</li>
<li>Project management skills: capable of managing stakeholders, explaining complicated technical trade-offs to non-technical users, and taking care of own project scoping and backlog management</li>
<li>Fluency in English, both written and spoken, at a minimum C1 level</li>
</ul>
<p><strong>What We Offer</strong></p>
<ul>
<li>Flexibility to work from home in the Netherlands and from our beautiful canal-side office in Amsterdam</li>
<li>A chance to be part of and shape one of the most ambitious scale-ups in Europe</li>
<li>Work in a diverse and multicultural team</li>
<li>€1,500 annual training budget plus internal training</li>
<li>Pension plan, travel reimbursement, and wellness perks</li>
<li>28 paid holiday days + 2 additional days to relax in 2026</li>
<li>Work from anywhere for 4 weeks/year</li>
<li>An inclusive and international work environment with a whole lot of fun thrown in!</li>
<li>Apple MacBook and tools</li>
<li>€200 Home Office budget</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>EUR 70000–90000 / year</Salaryrange>
      <Skills>Data Engineering, Cloud environments, dbt, Airbyte/Fivetran, BigQuery, GCP ecosystem, Infrastructure-as-Code, Terraform, Airflow, Dagster, Python, SQL, CI/CD best practices, DevOps practices</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Tellent</Employername>
      <Employerlogo>https://logos.yubhub.co/careers.tellent.com.png</Employerlogo>
      <Employerdescription>Tellent is a Talent Management Suite designed to empower HR &amp; People teams across the entire employee journey, with 250+ team members globally, 7,000+ customers in 100+ countries.</Employerdescription>
      <Employerwebsite>https://careers.tellent.com</Employerwebsite>
      <Compensationcurrency>EUR</Compensationcurrency>
      <Compensationmin>70000</Compensationmin>
      <Compensationmax>90000</Compensationmax>
      <Applyto>https://careers.tellent.com/o/data-engineer</Applyto>
      <Location>Amsterdam</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>b7f1d2fa-e89</externalid>
      <Title>Manager, Digital Trust &amp; Security (all genders)</Title>
      <Description><![CDATA[<p>About tonies:</p>
<p>Tonies is the world&#39;s leading interactive audio platform for children, with over 10 million Tonieboxes and 125 million Tonies sold globally. Our intuitive, screen-free system empowers children to learn and play independently in a safe and engaging way.</p>
<p>As Manager Digital Trust &amp; Security, you will lead the strategic expansion of our security service offerings. You will bridge the gap between technical architecture and business-centric consulting, ensuring our digital infrastructure remains resilient while fostering consumer trust across our global operations.</p>
<p>Responsibilities:</p>
<ul>
<li>Strategic Governance &amp; Architecture: Define the global security vision and design scalable architectures to mitigate complex cyber threats across our media and hardware footprints.</li>
<li>Trust Infrastructure Management: Architect and manage the security technology stack, ensuring our global IT assets and infrastructures remain resilient against emerging threats.</li>
<li>Proactive Risk &amp; Compliance: Lead comprehensive vulnerability assessments and recommend mitigation strategies that align with industry standards such as ISO 27001.</li>
<li>Operational Excellence: Oversee incident response lifecycles, from real-time resolution to deep-dive post-incident analysis and reporting.</li>
<li>Security Culture &amp; Training: Develop and execute global training programs to embed security awareness into the company DNA, ensuring compliance across all departments.</li>
<li>Cross-Functional Leadership: Partner with Enterprise Architecture and Application Management to integrate security-by-design into every product and internal service.</li>
</ul>
<p>What we are looking for:</p>
<ul>
<li>Expertise: Several years of leadership in security or service management within the technology or consumer electronics sector.</li>
<li>Technical Breadth: Deep understanding of security frameworks, cloud security (AWS/GCP), and modern monitoring platforms.</li>
<li>Strategic Mindset: Proven ability to translate complex security risks into actionable business insights for diverse stakeholders.</li>
<li>Lateral Leadership: A collaborative leader capable of managing cross-functional initiatives in a fast-paced, global environment.</li>
<li>Communication: Professional fluency in English and German is essential for our global coordination.</li>
<li>Mandatory: Demonstrable expertise in at least two security domains, backed by relevant professional certifications.</li>
<li>Preferred: Advanced credentials in Cloud Security or specialized standards like ISO 27001.</li>
</ul>
<p>Why tonies?</p>
<p>Our benefits vary by location. The following benefits apply in Germany:</p>
<ul>
<li>Global Teamwork: We collaborate across departmental and country borders on our vision to bring the Toniebox into every child&#39;s room in the world.</li>
<li>Come as you are: This applies not only to the dress code but also to everything else. Because only where you truly feel comfortable can you give your best.</li>
<li>Mobility: Choose the option that suits you best - a Deutschlandticket (public transport ticket) for unlimited mobility, a monthly contribution for an office parking space, a leasing bicycle, or a remote work subsidy.</li>
<li>Enhanced Security: Benefit from subsidies for company pension plans, occupational pension schemes, and occupational disability insurance.</li>
<li>Rest &amp; Time Off: Enjoy 30 days of paid annual leave as well as three additional days off such as Rosenmontag, Christmas Eve, and New Year&#39;s Eve. After one year of employment, you can also use up to 10 &#39;toniecation days&#39; (unpaid leave days).</li>
<li>Continuous Learning: Benefit from our internal and external training opportunities as well as an individual learning budget to continuously expand your knowledge.</li>
<li>Language Learning &amp; Relaxation: Improve your communication skills with the language learning app Babbel and find relaxation through our access to the meditation app Calm.</li>
<li>Discounts: Benefit from attractive discounts on our entire range of tonies products.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>security frameworks, cloud security (AWS/GCP), modern monitoring platforms, vulnerability assessments, ISO 27001</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>tonies GmbH</Employername>
      <Employerlogo>https://logos.yubhub.co/tonies.jobs.personio.com.png</Employerlogo>
      <Employerdescription>Tonies is a global company producing interactive audio platforms for children, with over 10 million Tonieboxes and 125 million Tonies sold worldwide.</Employerdescription>
      <Employerwebsite>https://tonies.jobs.personio.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://tonies.jobs.personio.com/job/2602344</Applyto>
      <Location>Düsseldorf · London · Paris · Berlin</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>a8d34aff-3e5</externalid>
      <Title>Applied AI Engineer, Global Public Sector</Title>
      <Description><![CDATA[<p>We&#39;re hiring Applied AI Engineers to build custom end-to-end AI applications for our public sector clients using the latest developments in the field of AI.</p>
<p>You will partner with public sector clients to deeply understand their challenges and define AI-driven solutions.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Building and deploying end-to-end AI applications into production, leveraging the latest developments from the biggest AI labs as well as open source models</li>
<li>Collaborating with cross-functional teams, including data annotation specialists, to create high-quality training datasets</li>
<li>Designing and maintaining robust evaluation frameworks to ensure the reliability and effectiveness of AI models</li>
<li>Participating in customer engagements, including occasional travel (approximately two weeks per quarter)</li>
</ul>
<p>Ideally you&#39;d have:</p>
<ul>
<li>A strong engineering background, with a Bachelor’s degree in Computer Science, Mathematics, or a related quantitative field (or equivalent practical experience)</li>
<li>7+ years of post-graduation engineering experience, with demonstrated proficiency in languages such as Python, TypeScript/JavaScript, Java, or C++</li>
<li>2+ years of experience applying AI/ML in production environments, such as deploying deep learning solutions, building generative/agentic AI applications, or setting up evaluation pipelines</li>
<li>Familiarity with cloud-based machine learning tools and platforms (e.g. AWS, GCP, Azure)</li>
<li>Strong problem-solving skills, with a data-driven approach to iterating on machine learning models and datasets</li>
<li>Excellent written and verbal communication skills to collaborate effectively in a cross-functional environment</li>
</ul>
<p>Nice to haves:</p>
<ul>
<li>Experience working at a startup, particularly as a founding engineer</li>
<li>Experience building and deploying large-scale AI solutions</li>
<li>Strong written and verbal communication skills to operate in a cross-functional team environment</li>
<li>Proficiency in Arabic (if focused on language models)</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, TypeScript/JavaScript, Java, C++, AWS, GCP, Azure, Arabic</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions.</Employerdescription>
      <Employerwebsite>https://scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4413992005</Applyto>
      <Location>Doha, Qatar; London, UK</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>1d67909d-97e</externalid>
      <Title>Senior Machine Learning Engineer - Model Evaluations, Public Sector</Title>
      <Description><![CDATA[<p>The Public Sector ML team at Scale deploys advanced AI systems, including LLMs, agentic models, and multimodal pipelines, into mission-critical government environments. We build evaluation frameworks that ensure these models operate reliably, safely, and effectively under real-world constraints.</p>
<p>As an ML Engineer, you will design, implement, and scale automated evaluation pipelines that help customers trust and operationalize advanced AI systems across defense, intelligence, and federal missions.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Developing and maintaining automated evaluation pipelines for ML models across functional, performance, robustness, and safety metrics, including LLM-judge–based evaluations.</li>
<li>Designing test datasets and benchmarks to measure generalization, bias, explainability, and failure modes.</li>
<li>Building evaluation frameworks for LLM agents, including infrastructure for scenario-based and environment-based testing.</li>
<li>Conducting comparative analyses of model architectures, training procedures, and evaluation outcomes.</li>
<li>Implementing tools for continuous monitoring, regression testing, and quality assurance for ML systems.</li>
<li>Designing and executing stress tests and red-teaming workflows to uncover vulnerabilities and edge cases.</li>
<li>Collaborating with operations teams and subject matter experts to produce high-quality evaluation datasets.</li>
</ul>
<p>This role requires an active security clearance or the ability to obtain a security clearance.</p>
<p>Ideal candidates will have experience with computer vision, deep learning, reinforcement learning, or NLP in production settings; strong programming skills in Python; and a background in algorithms, data structures, and object-oriented programming.</p>
<p>Nice-to-have qualifications include a graduate degree in CS, ML, or AI; cloud experience (AWS, GCP); and model deployment experience.</p>
<p>Compensation packages at Scale for eligible roles include base salary, equity, and benefits. The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position, determined by work location and additional factors, including job-related skills, experience, interview performance, and relevant education or training.</p>
<p>Scale employees in eligible roles are also granted equity-based compensation, subject to Board of Director approval. Your recruiter can share more about the specific salary range for your preferred location during the hiring process, and confirm whether the hired role will be eligible for an equity grant.</p>
<p>You’ll also receive benefits including, but not limited to: Comprehensive health, dental and vision coverage, retirement benefits, a learning and development stipend, and generous PTO.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$240,450-$300,300 USD (San Francisco, New York, Seattle); $216,300-$269,850 USD (Washington DC, Texas, Colorado, Hawaii)</Salaryrange>
      <Skills>Python, TensorFlow, PyTorch, Computer Vision, Deep Learning, Reinforcement Learning, NLP, Algorithms, Data Structures, Object-Oriented Programming, Graduate Degree in CS, ML, or AI, Cloud Experience (AWS, GCP), Model Deployment Experience</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions, providing high-quality data and full-stack technologies to power leading models.</Employerdescription>
      <Employerwebsite>https://scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4631848005</Applyto>
      <Location>San Francisco, CA; St. Louis, MO; New York, NY; Washington, DC</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>3cc878fa-5d1</externalid>
      <Title>Infrastructure Software Engineer, Enterprise GenAI</Title>
      <Description><![CDATA[<p>We are seeking a strong engineer to join our team and help us build and scale our core infrastructure in a fast-paced environment. The ideal candidate will have a strong understanding of software engineering principles and practices, as well as experience with large-scale distributed systems.</p>
<p>You will implement solutions across multiple cloud providers (GCP, Azure, AWS) for customers in diverse, highly-regulated industries like healthcare, telecom, finance, and retail.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Architecting multi-cloud systems and abstractions to allow the SGP platform to run on top of existing Cloud providers</li>
<li>Implementing custom integrations between Scale AI&#39;s platform and customer data environments (cloud platforms, data warehouses, internal APIs)</li>
<li>Collaborating with platform, product teams and our customers directly to develop and implement innovative infrastructure that scales to meet evolving needs</li>
<li>Delivering experiments at a high velocity and level of quality to engage our customers</li>
</ul>
<p>Requirements include:</p>
<ul>
<li>4+ years of full-time engineering experience, post-graduation</li>
<li>Experience scaling products at hyper-growth startups</li>
<li>Experience tinkering with or productizing LLMs, vector databases, and other cutting-edge AI technologies</li>
<li>Proficient in Python or Javascript/Typescript, and SQL</li>
<li>Experience with Kubernetes</li>
<li>Experience with major cloud providers (AWS, Azure, GCP)</li>
<li>Excellent communication skills with the ability to explain technical concepts to both technical and non-technical audiences</li>
</ul>
<p>Compensation packages at Scale for eligible roles include base salary, equity, and benefits.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$179,400-$224,250 USD</Salaryrange>
      <Skills>Python, Javascript/Typescript, SQL, Kubernetes, GCP, Azure, AWS</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions.</Employerdescription>
      <Employerwebsite>https://www.scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4665557005</Applyto>
      <Location>San Francisco, CA; New York, NY</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>94999453-111</externalid>
      <Title>Senior Full-Stack Software Engineer, (Forward Deployed), GPS</Title>
      <Description><![CDATA[<p>Scale&#39;s rapidly growing Global Public Sector team is focused on using AI to address critical challenges facing the public sector around the world.</p>
<p>Our core work consists of creating custom AI applications that will impact millions of citizens, generating high-quality training data for custom LLMs, and upskilling and advisory services to spread the impact of AI.</p>
<p>As a Full Stack Software Engineer (Forward Deployed), you&#39;ll collaborate directly with public sector counterparts to quickly build full-stack AI applications that solve their most pressing challenges and achieve meaningful impact for citizens.</p>
<p>At Scale, we&#39;re not just building AI solutions; we&#39;re enabling the public sector to transform their operations and better serve citizens through cutting-edge technology.</p>
<p>If you&#39;re ready to shape the future of AI in the public sector and be a founding member of our team, we&#39;d love to hear from you.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Partner with public sector clients to scope, collect feedback and implement solutions for complex problems, including spending up to two weeks per month in client offices for feedback and delivery.</li>
<li>Architect production-grade applications that integrate AI models with full-stack frameworks, managing everything from interactive UIs to backend APIs and systems.</li>
<li>Deploy and manage infrastructure within cloud environments, ensuring the highest levels of system integrity, security, scalability, and long-term reliability.</li>
<li>Contribute to core platform features designed to be reused across diverse international client use cases.</li>
<li>Partner with design, product, and data teams to build robust applications aligned with the broader technical architecture.</li>
</ul>
<p><strong>Ideal Candidate</strong></p>
<ul>
<li>Bachelor&#39;s degree in Computer Science or a related quantitative field</li>
<li>5+ years of post-graduation full-stack engineering experience with demonstrated proficiency in React (required), TypeScript, Next.js, Python, Node.js, and PostgreSQL or MongoDB, plus hands-on experience with Docker, Kubernetes, and Azure/AWS/GCP.</li>
<li>Proven ability to architect scalable, production-grade applications with a strong handle on cloud environments and infrastructure health.</li>
<li>Experience working directly within customer infrastructure to deploy, maintain, and troubleshoot complex, end-to-end solutions.</li>
<li>A self-starting approach with the technical maturity to navigate ambiguous requirements and deliver reliable software.</li>
<li>Experience driving async communication practices to reduce friction</li>
</ul>
<p><strong>Nice to Haves</strong></p>
<ul>
<li>Proficient in Arabic</li>
<li>Past experience in a forward-deployed engineer or dedicated customer engineer role</li>
<li>Experience working cross-functionally with operations</li>
<li>Experience building solutions with LLMs and a deep understanding of the overall Gen AI landscape</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>React, TypeScript, Next.js, Python, Node.js, PostgreSQL, MongoDB, Docker, Kubernetes, Azure, AWS, GCP</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions.</Employerdescription>
      <Employerwebsite>https://scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4676608005</Applyto>
      <Location>Dubai, UAE</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>61e346b2-915</externalid>
      <Title>Sr. Software Engineer, Inference</Title>
      <Description><![CDATA[<p>Our Inference team is responsible for building and maintaining the critical systems that serve Claude to millions of users worldwide. We bring Claude to life by serving our models via the industry&#39;s largest compute-agnostic inference deployments. We are responsible for the entire stack from intelligent request routing to fleet-wide orchestration across diverse AI accelerators.</p>
<p>The team has a dual mandate: maximizing compute efficiency to serve our explosive customer growth, while enabling breakthrough research by giving our scientists the high-performance inference infrastructure they need to develop next-generation models. We tackle complex, distributed systems challenges across multiple accelerator families and emerging AI hardware running in multiple cloud platforms.</p>
<p>Strong candidates may also have experience with:</p>
<ul>
<li>High-performance, large-scale distributed systems</li>
<li>Implementing and deploying machine learning systems at scale</li>
<li>Load balancing, request routing, or traffic management systems</li>
<li>LLM inference optimization, batching, and caching strategies</li>
<li>Kubernetes and cloud infrastructure (AWS, GCP)</li>
<li>Python or Rust</li>
</ul>
<p>You may be a good fit if you:</p>
<ul>
<li>Have significant software engineering experience, particularly with distributed systems</li>
<li>Are results-oriented, with a bias towards flexibility and impact</li>
<li>Pick up slack, even if it goes outside your job description</li>
<li>Want to learn more about machine learning systems and infrastructure</li>
<li>Thrive in environments where technical excellence directly drives both business results and research breakthroughs</li>
<li>Care about the societal impacts of your work</li>
</ul>
<p>Representative projects across the org:</p>
<ul>
<li>Designing intelligent routing algorithms that optimize request distribution across thousands of accelerators</li>
<li>Autoscaling our compute fleet to dynamically match supply with demand across production, research, and experimental workloads</li>
<li>Building production-grade deployment pipelines for releasing new models to millions of users</li>
<li>Integrating new AI accelerator platforms to maintain our hardware-agnostic competitive advantage</li>
<li>Contributing to new inference features (e.g., structured sampling, prompt caching)</li>
<li>Supporting inference for new model architectures</li>
<li>Analyzing observability data to tune performance based on real-world production workloads</li>
<li>Managing multi-region deployments and geographic routing for global customers</li>
</ul>
<p>Deadline to apply: None. Applications will be reviewed on a rolling basis.</p>
<p>The annual compensation range for this role is £225,000-£325,000 GBP.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>£225,000-£325,000 GBP</Salaryrange>
      <Skills>High-performance, large-scale distributed systems, Implementing and deploying machine learning systems at scale, Load balancing, request routing, or traffic management systems, LLM inference optimization, batching, and caching strategies, Kubernetes and cloud infrastructure (AWS, GCP), Python or Rust</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation focused on creating reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5152348008</Applyto>
      <Location>London, UK</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>740da2af-174</externalid>
      <Title>Security Engineer, Detection &amp; Response</Title>
      <Description><![CDATA[<p>We are seeking a Senior Security Engineer with a specialty in Detection and Incident Response to join our Security Engineering team. This role sits at the intersection of security operations and software engineering, requiring you to investigate incidents and build the systems that detect, contain, and prevent them.</p>
<p>You will design and ship high-precision detections across cloud services and enterprise SaaS, develop automation that shortens response timelines, and mature the telemetry pipelines that make it all possible. Your ability to write production-quality code is just as important as your ability to triage an alert.</p>
<p>Responsibilities:</p>
<ul>
<li>Engineer, test, and deploy detection logic across cloud and enterprise environments, treating detections as software with version control, peer review, and measurable performance.</li>
<li>Build and maintain incident response automation, runbooks, and tooling that reduce containment timelines without sacrificing developer velocity.</li>
<li>Mature telemetry pipelines through improved schema design, normalization, enrichment, and quality checks that reduce false positives and increase signal fidelity.</li>
<li>Perform digital incident investigations to identify and contain potential security breaches.</li>
<li>Conduct digital forensics and malware analysis to understand attack vectors and adversary methodologies.</li>
<li>Integrate alerting with messaging and ticketing systems to enable fast, traceable response workflows.</li>
<li>Partner cross-functionally with IT, security, and engineering teams to harden identity and access patterns, close logging and forensics gaps, and implement maintainable guardrails that scale with the organisation.</li>
<li>Utilize threat intelligence platforms to improve hunting, detection, and response workflows.</li>
<li>Clearly explain the significance and impact of incidents, providing actionable recommendations to both technical and non-technical stakeholders.</li>
</ul>
<p>Ideal Candidate:</p>
<ul>
<li>5+ years of experience in Detection Engineering, Incident Response, or Security Operations, with a strong emphasis on building and shipping security tooling and automation.</li>
<li>Proficiency in at least one programming language (e.g., Python, Go) and comfort writing production-grade code, not just scripts.</li>
<li>Hands-on experience designing or improving detection pipelines, SIEM content, and alerting workflows in cloud-native environments.</li>
<li>Practical experience with SIEM, EDR, and SOAR tools, with a preference for candidates who have built integrations or extended these platforms programmatically.</li>
<li>Strong understanding of modern cyber threats, common attack techniques, and adversary TTPs.</li>
<li>Familiarity with digital forensics tools and malware analysis techniques.</li>
<li>Experience with cloud-native environments (e.g., AWS, GCP, Azure) and the security telemetry those environments generate.</li>
<li>Exposure to threat intelligence platforms and integrating intel into detection and investigation workflows.</li>
<li>Strong communication skills, with the ability to translate complex security findings into clear business impact.</li>
<li>Relevant security certifications (e.g., GCIH, GCFA, GCIA, CISSP, GDSA) are a plus.</li>
</ul>
<p>Compensation packages at Scale for eligible roles include base salary, equity, and benefits. The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position, determined by work location and additional factors, including job-related skills, experience, interview performance, and relevant education or training.</p>
<p>Scale employees in eligible roles are also granted equity-based compensation, subject to Board of Director approval. Your recruiter can share more about the specific salary range for your preferred location during the hiring process, and confirm whether the hired role will be eligible for an equity grant.</p>
<p>You’ll also receive benefits including, but not limited to: comprehensive health, dental and vision coverage, retirement benefits, a learning and development stipend, and generous PTO. Additionally, this role may be eligible for additional benefits such as a commuter stipend.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$237,600-$297,000 USD</Salaryrange>
      <Skills>Detection Engineering, Incident Response, Security Operations, Cloud Security, Enterprise SaaS, Automation, Telemetry Pipelines, Digital Forensics, Malware Analysis, Threat Intelligence, SIEM, EDR, SOAR, Python, Go, AWS, GCP, Azure</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions, providing high-quality data and full-stack technologies.</Employerdescription>
      <Employerwebsite>https://scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4684073005</Applyto>
      <Location>New York, NY; San Francisco, CA; Seattle, WA; Washington, DC</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>a400e696-2d2</externalid>
      <Title>Staff Software Engineer, Enterprise GenAI</Title>
      <Description><![CDATA[<p>We&#39;re seeking a strong engineer to join our team and help us build and scale our product in a fast-paced environment. As a Staff Software Engineer, you will own large new areas within our product, working across backend, frontend, and interacting with LLMs and ML models. You will solve hard engineering problems in scalability and reliability.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Delivering experiments at a high velocity and level of quality to engage our customers</li>
<li>Working across the entire product lifecycle from conceptualization through production</li>
<li>Being able, and willing, to multi-task and learn new technologies quickly</li>
</ul>
<p>Ideally, you&#39;d have:</p>
<ul>
<li>7+ years of full-time engineering experience, post-graduation</li>
<li>Experience scaling products at hyper-growth startups</li>
<li>Experience tinkering with or productizing LLMs, vector databases, and other cutting-edge AI technologies</li>
<li>Proficient in Python or Javascript/Typescript, and SQL</li>
<li>Experience with Kubernetes</li>
<li>Experience with major cloud providers (AWS, Azure, GCP)</li>
</ul>
<p>Compensation packages at Scale for eligible roles include base salary, equity, and benefits. The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position, determined by work location and additional factors, including job-related skills, experience, interview performance, and relevant education or training.</p>
<p>You’ll also receive benefits including, but not limited to: Comprehensive health, dental and vision coverage, retirement benefits, a learning and development stipend, and generous PTO. Additionally, this role may be eligible for additional benefits such as a commuter stipend.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$248,400-$310,500 USD</Salaryrange>
      <Skills>Python, Javascript/Typescript, SQL, Kubernetes, AWS, Azure, GCP, LLMs, vector databases</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops and provides AI systems for critical decision-making. It offers products and technologies for building, deploying, and overseeing AI applications.</Employerdescription>
      <Employerwebsite>https://scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4569678005</Applyto>
      <Location>San Francisco, CA; New York, NY</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>bd7327f8-fcf</externalid>
      <Title>Staff Software Engineer, Full-Stack - Enterprise Gen AI</Title>
      <Description><![CDATA[<p>We&#39;re looking for a frontend-focused full-stack engineer to help build AI-powered applications that redefine enterprise workflows and push the boundaries of interactive AI. As a staff software engineer, you&#39;ll work on a mix of cutting-edge customer-facing AI applications and internal SaaS products. Our engineering team powers projects like TIME&#39;s Person of the Year AI experience, where our AI technology helped shape one of the most iconic features in media. You&#39;ll also contribute to Scale&#39;s GenAI Platform (SGP), a powerful system that enables businesses to build and deploy AI agents at scale.</p>
<p>Your responsibilities will include:</p>
<ul>
<li>Building and enhancing user-facing AI applications for major enterprise customers, including high-profile media and Fortune 500 companies</li>
<li>Developing and refining features for Scale&#39;s GenAI Platform, empowering businesses to build, deploy, and manage AI-driven agents</li>
<li>Designing, building, and optimizing polished, high-performance UIs using Next.js, React, TypeScript, and Tailwind</li>
<li>Working closely with product managers, designers, and AI/ML teams to create seamless, intuitive, and impactful user experiences</li>
<li>Integrating frontend applications with backend services, working with APIs, authentication systems, and cloud-based infrastructure</li>
</ul>
<p>In this role, you&#39;ll have the opportunity to shape the future of AI-powered user experiences, working on projects that impact millions of users while developing tools that empower businesses to deploy AI at scale.</p>
<p>The base salary range for this full-time position in our hub locations of San Francisco, New York, or Seattle is $248,400-$310,500 USD. Compensation packages at Scale include base salary, equity, and benefits. You&#39;ll also receive benefits including, but not limited to: comprehensive health, dental and vision coverage, retirement benefits, a learning and development stipend, and generous PTO.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$248,400-$310,500 USD</Salaryrange>
      <Skills>Next.js, React, TypeScript, Tailwind, AI/ML, APIs, Authentication systems, Cloud-based infrastructure, FastAPI, PostgreSQL, GraphQL, AWS, Azure, GCP, Data-rich web platforms, Interactive AI applications, Agent-based systems</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions, providing high-quality data and full-stack technologies.</Employerdescription>
      <Employerwebsite>https://scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4529529005</Applyto>
      <Location>New York, NY; San Francisco, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>44975b06-cb1</externalid>
      <Title>Senior Full-Stack Software Engineer, (Forward Deployed), GPS</Title>
      <Description><![CDATA[<p>We&#39;re seeking a Senior Full-Stack Software Engineer to join our Global Public Sector team. As a forward-deployed engineer, you&#39;ll collaborate directly with public sector counterparts to build full-stack, AI applications that solve critical challenges and achieve meaningful impact for citizens.</p>
<p>Our core work consists of creating custom AI applications, generating high-quality training data for custom LLMs, and upskilling and advisory services to spread the impact of AI.</p>
<p>You&#39;ll partner with public sector clients to scope, collect feedback, and implement solutions for complex problems. You&#39;ll also architect production-grade applications that integrate AI models with full-stack frameworks, manage infrastructure within cloud environments, and contribute to core platform features.</p>
<p>Ideally, you&#39;ll have a Bachelor&#39;s degree in Computer Science or a related quantitative field, 5+ years of full-stack engineering experience, and proficiency in React, TypeScript, Next.js, Python, Node.js, PostgreSQL or MongoDB, and hands-on experience with Docker, Kubernetes, and Azure/AWS/GCP.</p>
<p>We&#39;re looking for a self-starter with the technical maturity to navigate ambiguous requirements and deliver reliable software. You&#39;ll also need to drive asynchronous communication practices to reduce friction.</p>
<p>If you&#39;re ready to shape the future of AI in the public sector and be a founding member of our team, we&#39;d love to hear from you.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>React, TypeScript, Next.js, Python, Node.js, PostgreSQL, MongoDB, Docker, Kubernetes, Azure, AWS, GCP</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions.</Employerdescription>
      <Employerwebsite>https://scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4673310005</Applyto>
      <Location>London, UK</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>bd00b53a-6fa</externalid>
      <Title>Software Engineer, Enterprise AI</Title>
      <Description><![CDATA[<p>We are seeking a strong engineer to join our team and help us build and scale our product in a fast-paced environment. The ideal candidate will have a strong understanding of software engineering principles and practices, as well as experience with large-scale distributed systems.</p>
<p>You will be responsible for owning large new areas within our product, working across backend, frontend, and interacting with LLMs and ML models. You will solve hard engineering problems in scalability and reliability.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Owning large new areas within our product</li>
<li>Working across backend, frontend, and interacting with LLMs and ML models</li>
<li>Delivering experiments at a high velocity and level of quality to engage our customers</li>
<li>Working across the entire product lifecycle from conceptualization through production</li>
</ul>
<p>Ideally, you&#39;d have:</p>
<ul>
<li>4+ years of full-time engineering experience, post-graduation</li>
<li>Experience scaling products at hyper growth startups</li>
<li>Experience tinkering with or productizing LLMs, vector databases, and other cutting-edge AI technologies</li>
<li>Proficiency in Python or JavaScript/TypeScript, and SQL</li>
<li>Experience with Kubernetes</li>
<li>Experience with major cloud providers (AWS, Azure, GCP)</li>
</ul>
<p>Compensation packages at Scale for eligible roles include base salary, equity, and benefits. The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position, determined by work location and additional factors, including job-related skills, experience, interview performance, and relevant education or training.</p>
<p>You’ll also receive benefits including, but not limited to: Comprehensive health, dental and vision coverage, retirement benefits, a learning and development stipend, and generous PTO. Additionally, this role may be eligible for additional benefits such as a commuter stipend.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$179,400-$224,250 USD</Salaryrange>
      <Skills>Python, JavaScript/TypeScript, SQL, Kubernetes, AWS, Azure, GCP, LLMs, vector databases, AI technologies</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions.</Employerdescription>
      <Employerwebsite>https://scale.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4513943005</Applyto>
      <Location>New York, NY; San Francisco, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>45fc6ed2-285</externalid>
      <Title>Senior Full-Stack Software Engineer, (Forward Deployed), GPS</Title>
      <Description><![CDATA[<p>We&#39;re seeking a Senior Full-Stack Software Engineer to join our Global Public Sector team. As a forward-deployed engineer, you&#39;ll collaborate directly with public sector counterparts to build full-stack AI applications that solve their most pressing challenges.</p>
<p>Our core work consists of creating custom AI applications, generating high-quality training data for custom LLMs, and providing upskilling and advisory services to spread the impact of AI.</p>
<p>You&#39;ll partner with public sector clients to scope, collect feedback, and implement solutions for complex problems. You&#39;ll also architect production-grade applications that integrate AI models with full-stack frameworks, manage infrastructure within cloud environments, and contribute to core platform features.</p>
<p>Ideally, you&#39;ll have a Bachelor&#39;s degree in Computer Science or a related quantitative field, 5+ years of full-stack engineering experience, and proficiency in React, TypeScript, Next.js, Python, Node.js, PostgreSQL or MongoDB, Docker, Kubernetes, and Azure/AWS/GCP.</p>
<p>You&#39;ll be a self-starter with the technical maturity to navigate ambiguous requirements and deliver reliable software. You&#39;ll also have experience working directly within customer infrastructure to deploy, maintain, and troubleshoot complex, end-to-end solutions.</p>
<p>Nice to have: proficiency in Arabic, prior experience in a forward-deployed engineer or dedicated customer engineer role, experience working cross-functionally with operations, and experience building solutions with LLMs backed by a deep understanding of the overall Gen AI landscape.</p>
<p>Please note that our policy requires a 90-day waiting period before reconsidering candidates for the same role.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>React, TypeScript, Next.js, Python, Node.js, PostgreSQL, MongoDB, Docker, Kubernetes, Azure, AWS, GCP</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions.</Employerdescription>
      <Employerwebsite>https://scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4676606005</Applyto>
      <Location>Doha, Qatar</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>6365e7d7-511</externalid>
      <Title>Senior Forward Deployed Data Scientist/Engineer</Title>
      <Description><![CDATA[<p>We&#39;re hiring a Senior Forward Deployed Data Scientist / Engineer to work directly with customers on ambiguous, high-impact problems at the intersection of data science, product development, and AI deployment.</p>
<p>This is not a traditional analytics role. On this team, data scientists do the core statistical and modeling work, but they also build real tools and products: evaluation explorers, operator workflows, decision-support systems, experimentation surfaces, and customer-specific AI/data applications that get used in production.</p>
<p>The right candidate is strong in first-principles problem solving, rigorous measurement, and technical execution. They know how to define metrics, design experiments, diagnose failures, and build systems that people actually use. They are also comfortable using modern AI-assisted development tools to prototype and iterate quickly without sacrificing reliability, observability, or judgment. Python and SQL matter in this role, but primarily as execution fluency in service of building better products and making better decisions.</p>
<p>Responsibilities:</p>
<ul>
<li>Partner directly with enterprise customers to understand workflows, operational pain points, constraints, and success criteria</li>
<li>Turn ambiguous business and product problems into measurable solutions with clear metrics, technical designs, and deployment plans</li>
<li>Design and build internal and customer-facing data products, including evaluation tools, workflow applications, decision-support systems, and thin product layers on top of data/ML systems</li>
<li>Build end-to-end solutions across data ingestion, transformation, experimentation, statistical modeling, deployment, monitoring, and iteration</li>
<li>Design evaluation frameworks, benchmarks, and feedback loops for ML/LLM systems, human-in-the-loop workflows, and model-assisted operations</li>
<li>Apply rigorous statistical thinking to experimentation, causal inference, metric design, forecasting, segmentation, diagnostics, and performance measurement</li>
<li>Use AI-assisted development workflows to accelerate prototyping and product iteration, while maintaining strong engineering discipline</li>
<li>Diagnose failure modes across data quality, model behavior, retrieval, workflow design, and user experience, and drive fixes into production</li>
<li>Act as the voice of the customer to Product, Engineering, and Data Science, using field learnings to shape roadmap and platform capabilities</li>
</ul>
<p>Requirements:</p>
<ul>
<li>5+ years of experience in data science, machine learning, quantitative engineering, or another highly analytical technical role</li>
<li>Proven track record of shipping data, ML, or AI systems that delivered measurable business or product impact</li>
<li>Exceptional ability to structure ambiguous problems, define the right success metrics, and translate them into executable technical plans</li>
<li>Strong foundation in statistics, experimentation, causal reasoning, and measurement</li>
<li>Experience building tools or products, not just analyses: for example, internal workflow tools, evaluation systems, operator-facing products, experimentation platforms, or customer-specific applications</li>
<li>Hands-on fluency in Python, SQL, and modern data/AI tooling; able to inspect data, prototype quickly, debug deeply, and productionize solutions that work</li>
<li>Comfort using AI-assisted coding and development workflows to move from idea to usable product quickly</li>
<li>Strong communication and stakeholder management skills; able to work effectively with customers, engineers, product teams, and executives</li>
<li>High ownership and bias toward shipping in fast-moving environments with incomplete information</li>
</ul>
<p>Preferred qualifications:</p>
<ul>
<li>Experience in a forward deployed, solutions, consulting, or other client-facing technical role</li>
<li>Experience designing evaluation frameworks for LLMs, retrieval systems, agentic workflows, or other AI-enabled products</li>
<li>Experience with large-scale data processing and distributed systems such as Spark, Ray, or Airflow</li>
<li>Experience with cloud infrastructure and modern data platforms such as AWS, GCP, Snowflake, or BigQuery</li>
<li>Experience building lightweight applications, APIs, internal tools, or workflow software on top of data/ML systems</li>
<li>Familiarity with marketplace experimentation, causal inference, forecasting, optimization, or advanced statistical modeling</li>
<li>Strong product instinct and the judgment to know when the right answer is a model, an experiment, a tool, or a workflow redesign</li>
</ul>
<p>What success looks like: Success in this role means taking a messy, high-stakes customer problem and turning it into a deployed system that is actually used. Sometimes that system is a model. Sometimes it is an evaluation framework. Sometimes it is an operator-facing tool or a lightweight data product that changes how decisions get made. In all cases, success is defined by measurable impact, rigorous evaluation, and reliable execution.</p>
<p>Compensation packages at Scale for eligible roles include base salary, equity, and benefits. The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position, determined by work location and additional factors, including job-related skills, experience, interview performance, and relevant education or training. Scale employees in eligible roles are also granted equity based compensation, subject to Board of Director approval. Your recruiter can share more about the specific salary range for your preferred location during the hiring process, and confirm whether the hired role will be eligible for equity grant. You’ll also receive benefits including, but not limited to: Comprehensive health, dental and vision coverage, retirement benefits, a learning and development stipend, and generous PTO. Additionally, this role may be eligible for additional benefits such as a commuter stipend.</p>
<p>Salary Range: $167,200-$209,000 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$167,200-$209,000 USD</Salaryrange>
      <Skills>Python, SQL, Modern data/AI tooling, Statistics, Experimentation, Causal inference, Measurement, Data science, Machine learning, Quantitative engineering, LLM evaluation frameworks, Retrieval systems, Agentic workflows, Spark, Ray, Airflow, AWS, GCP, Snowflake, BigQuery, Forecasting, Optimization, Statistical modeling</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale AI</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale AI develops reliable AI systems for the world&apos;s most important decisions.</Employerdescription>
      <Employerwebsite>https://scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4636227005</Applyto>
      <Location>San Francisco, CA; New York, NY</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>6cae1ee9-b93</externalid>
      <Title>Senior Technical Solutions Engineer (Platform)</Title>
      <Description><![CDATA[<p>As a Senior Technical Solutions Engineer, you will provide technical support for Databricks Platform related issues and resolve any challenges involving the Databricks unified analytics platform.</p>
<p>You will assist customers in their Databricks journey, providing the guidance and knowledge they need to realize value and achieve their strategic goals using our products.</p>
<p>Customers will look to you for answers to everything from basic technical questions to complex architectural scenarios spanning the entire Big Data ecosystem.</p>
<p>Responsibilities:</p>
<ul>
<li>Troubleshoot and resolve complex customer issues related to Databricks platform</li>
<li>Provide best practices support for custom-built solutions developed by Databricks customers</li>
<li>Deliver suggestions for improving performance in customer-specific environments</li>
<li>Assist with issues around third-party integrations with Databricks environment</li>
<li>Demonstrate and coordinate with engineering and escalation teams to achieve resolution of customer issues and requests</li>
<li>Participate in the creation and maintenance of company documentation and knowledge articles</li>
<li>Be a true proponent of customer advocacy</li>
<li>Strengthen your AWS/Azure and Databricks platform expertise through learning and internal training programs</li>
<li>Participate in weekday and weekend on-call rotations</li>
</ul>
<p>Requirements:</p>
<ul>
<li>4+ years of experience designing, building, testing, and maintaining Python/Java/Scala based applications</li>
<li>Expert-level knowledge of Python is desired</li>
<li>Strong experience with SQL-based databases is required</li>
<li>Linux/Unix administration skills</li>
<li>Hands-on experience with AWS, Azure or GCP</li>
<li>Experience with distributed big data computing environments</li>
<li>Technical degree or the equivalent experience</li>
<li>Written and spoken proficiency in both Japanese and English</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, Java, Scala, SQL, Linux/Unix, AWS, Azure, GCP, Distributed Big Data Computing</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified analytics platform for over 10,000 organisations worldwide.</Employerdescription>
      <Employerwebsite>https://databricks.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8488552002</Applyto>
      <Location>Tokyo, Japan</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>7d4c3fc5-2ed</externalid>
      <Title>Senior Software Engineer, Inference</Title>
      <Description><![CDATA[<p>About the role:</p>
<p>Our Inference team is responsible for building and maintaining the critical systems that serve Claude to millions of users worldwide. We bring Claude to life by serving our models via the industry&#39;s largest compute-agnostic inference deployments. We are responsible for the entire stack from intelligent request routing to fleet-wide orchestration across diverse AI accelerators.</p>
<p>The team has a dual mandate: maximizing compute efficiency to serve our explosive customer growth, while enabling breakthrough research by giving our scientists the high-performance inference infrastructure they need to develop next-generation models. We tackle complex, distributed systems challenges across multiple accelerator families and emerging AI hardware running in multiple cloud platforms.</p>
<p>Strong candidates may also have experience with:</p>
<ul>
<li>High-performance, large-scale distributed systems</li>
<li>Implementing and deploying machine learning systems at scale</li>
<li>Load balancing, request routing, or traffic management systems</li>
<li>LLM inference optimization, batching, and caching strategies</li>
<li>Kubernetes and cloud infrastructure (AWS, GCP)</li>
<li>Python or Rust</li>
</ul>
<p>You may be a good fit if you:</p>
<ul>
<li>Have significant software engineering experience, particularly with distributed systems</li>
<li>Are results-oriented, with a bias towards flexibility and impact</li>
<li>Pick up slack, even if it goes outside your job description</li>
<li>Want to learn more about machine learning systems and infrastructure</li>
<li>Thrive in environments where technical excellence directly drives both business results and research breakthroughs</li>
<li>Care about the societal impacts of your work</li>
</ul>
<p>Representative projects across the org:</p>
<ul>
<li>Designing intelligent routing algorithms that optimize request distribution across thousands of accelerators</li>
<li>Autoscaling our compute fleet to dynamically match supply with demand across production, research, and experimental workloads</li>
<li>Building production-grade deployment pipelines for releasing new models to millions of users</li>
<li>Integrating new AI accelerator platforms to maintain our hardware-agnostic competitive advantage</li>
<li>Contributing to new inference features (e.g., structured sampling, prompt caching)</li>
<li>Supporting inference for new model architectures</li>
<li>Analyzing observability data to tune performance based on real-world production workloads</li>
<li>Managing multi-region deployments and geographic routing for global customers</li>
</ul>
<p>Annual compensation range for this role is €235,000-€295,000 EUR.</p>
<p>Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience</p>
<p>Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience</p>
<p>Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position</p>
<p>Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</p>
<p>Visa sponsorship: We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</p>
<p>We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you&#39;re interested in this work.</p>
<p>Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you&#39;re ever unsure about a communication, don&#39;t click any links; visit anthropic.com/careers directly for confirmed position openings.</p>
<p>How we&#39;re different:</p>
<p>We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact, advancing our long-term goals of steerable, trustworthy AI, rather than working on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We&#39;re an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.</p>
<p>The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI &amp; Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.</p>
<p>Come work with us!</p>
<p>Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>€235,000-€295,000 EUR</Salaryrange>
      <Skills>High-performance, large-scale distributed systems, Implementing and deploying machine learning systems at scale, Load balancing, request routing, or traffic management systems, LLM inference optimization, batching, and caching strategies, Kubernetes and cloud infrastructure (AWS, GCP), Python or Rust</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation that creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/4641822008</Applyto>
      <Location>Dublin, IE</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>fb1f459e-b3a</externalid>
      <Title>Machine Learning Research Scientist / Engineer, Reasoning</Title>
      <Description><![CDATA[<p>About Scale</p>
<p>At Scale, our mission is to accelerate the development of AI applications. We&#39;re looking for a Machine Learning Research Scientist/Engineer to join our team and help us shape the future of AI.</p>
<p>This role operates at the forefront of AI research and real-world implementation, with a strong focus on reasoning within large language models (LLMs). You will study the data types critical for advancing LLM-based agents, including browser and software engineering (SWE) agents. You will play a key role in shaping Scale&#39;s data strategy by identifying the most effective data sources and methodologies for improving LLM reasoning.</p>
<p>Success in this role requires a deep understanding of LLMs, planning algorithms, and novel approaches to agentic reasoning, as well as creativity in tackling challenges related to data generation, model interaction, and evaluation. You will contribute to impactful research on language model reasoning, collaborate with external researchers, and work closely with engineering teams to bring state-of-the-art advancements into scalable, real-world solutions.</p>
<p>Responsibilities</p>
<ul>
<li>Study the data types critical for advancing LLM-based agents, including browser and software engineering (SWE) agents</li>
<li>Shape Scale&#39;s data strategy by identifying the most effective data sources and methodologies for improving LLM reasoning</li>
<li>Contribute to impactful research on language model reasoning</li>
<li>Collaborate with external researchers</li>
<li>Work closely with engineering teams to bring state-of-the-art advancements into scalable, real-world solutions</li>
</ul>
<p>Requirements</p>
<ul>
<li>Practical experience working with LLMs, with proficiency in frameworks like PyTorch, JAX, or TensorFlow</li>
<li>A track record of published research in top ML and NLP venues (e.g., ACL, EMNLP, NAACL, NeurIPS, ICML, ICLR, CoLLM, etc.)</li>
<li>At least three years of experience solving complex ML challenges, either in a research setting or product development, particularly in areas related to LLM capabilities and reasoning</li>
<li>Strong written and verbal communication skills, along with the ability to work effectively across teams</li>
</ul>
<p>Nice to Have</p>
<ul>
<li>Hands-on experience fine-tuning open-source LLMs or leading bespoke LLM fine-tuning projects using PyTorch/JAX</li>
<li>Research and practical experience in building applications and evaluations related to LLM-based agents, including tool-use, text-to-SQL, browser agents, coding agents, and GUI agents</li>
<li>Experience with agent frameworks such as OpenHands, Swarm, LangGraph, or similar</li>
<li>Familiarity with advanced agentic reasoning techniques such as STaR and PLANSEARCH</li>
<li>Proficiency in cloud-based ML development, with experience in AWS or GCP environments</li>
</ul>
<p>Benefits</p>
<ul>
<li>Comprehensive health, dental and vision coverage</li>
<li>Retirement benefits</li>
<li>A learning and development stipend</li>
<li>Generous PTO</li>
<li>Commuter stipend</li>
</ul>
<p>Salary Range</p>
<p>$252,000-$315,000 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$252,000-$315,000 USD</Salaryrange>
      <Skills>PyTorch, JAX, TensorFlow, Large Language Models (LLMs), Planning Algorithms, Agentic Reasoning, Data Generation, Model Interaction, Evaluation, Agent Frameworks, Cloud-Based ML Development, AWS, GCP, STaR, PLANSEARCH</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale AI</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale AI is a leading AI data foundry that provides high-quality data to drive progress toward Artificial General Intelligence (AGI). Founded in 2016, it has become a major player in the AI industry.</Employerdescription>
      <Employerwebsite>https://scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4605596005</Applyto>
      <Location>San Francisco, CA; Seattle, WA; New York, NY</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>770c5fe8-cce</externalid>
      <Title>Staff Security Engineer, Vulnerability Management</Title>
      <Description><![CDATA[<p>We are seeking a Staff Security Engineer to lead the most complex technical work in CoreWeave&#39;s Vulnerability Management program.</p>
<p>As a Staff Security Engineer, you will design and implement scalable triage, prioritization, and remediation-tracking systems across application, infrastructure, and hardware domains. You will set technical standards, drive high-impact initiatives, and mentor engineers through technical leadership, while partnering with leadership on priorities and execution risks.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Lead high-complexity VM technical initiatives and deliver architecture decisions for assigned program areas</li>
<li>Design and build scalable triage automation, including integrations, decision logic, and production hardening</li>
<li>Implement end-to-end workflow components from assessment and detection to ticket routing and remediation tracking</li>
<li>Provide deep technical leadership on hardware-adjacent vulnerabilities (GPU firmware, DPU firmware/BlueField, and BMC surfaces)</li>
<li>Act as senior technical responder for embargoed disclosures and zero-day events, coordinating with owner teams that deploy fixes</li>
<li>Improve prioritization logic, severity models, and exception workflows through code, design reviews, and technical proposals</li>
<li>Produce actionable technical metrics and risk insights for leadership consumption</li>
<li>Lead root-cause analysis for high-impact vulnerability incidents and implement durable technical improvements</li>
<li>Mentor IC3/IC4/IC5 engineers through design guidance, code review, and incident coaching</li>
<li>Partner with security, engineering, and operational stakeholders to improve workflow reliability and accelerate remediation outcomes</li>
</ul>
<p>Requirements:</p>
<ul>
<li>9+ years of relevant experience with demonstrated strategic impact in vulnerability management, application security, platform security, or cloud security engineering</li>
<li>Proven track record building and scaling security automation (SOAR workflows, AI/ML systems, detection pipelines) in production environments</li>
<li>Deep subject matter expertise with vulnerability management best practices: CVSS, EPSS, CISA KEV, threat intelligence integration, and risk-based prioritization frameworks</li>
<li>Excellent development background with strong coding skills in Python, Go, or similar languages for building scalable, production-grade security systems</li>
<li>Significant experience with modern vulnerability management tooling (for example Wiz, Semgrep, Rapid7, Tenable, or equivalent)</li>
<li>Experience with specialized infrastructure: GPU/DPU environments, firmware security, hardware vulnerabilities, or high-performance computing</li>
<li>Demonstrated track record mentoring engineers across levels and driving cross-functional technical initiatives at organizational scale</li>
<li>Strong business acumen and understanding of how security decisions impact engineering velocity, customer trust, and business outcomes</li>
</ul>
<p>Preferred Qualifications:</p>
<ul>
<li>Practical experience building AI/ML-powered security systems (LLM integration, automated decision-making, human-in-the-loop validation) in production</li>
<li>Experience managing hardware vendor security partnerships (embargoed disclosures and pre-release collaboration)</li>
<li>Production experience with security automation platforms such as TINES and serverless frameworks (AWS Lambda, GCP Cloud Functions)</li>
<li>Strong DevOps, DevSecOps, or SRE background with deep experience in AWS/GCP/Azure cloud services and Infrastructure as Code (Terraform, CloudFormation)</li>
<li>Deep understanding of Kubernetes security (container scanning, admission controllers, supply chain security, runtime protection)</li>
<li>Experience leading security programs through rapid hypergrowth (10x+ infrastructure scaling) in startup or cloud-native environments</li>
<li>Practical experience managing vulnerabilities within a FedRAMP-certified environment or similar regulatory frameworks</li>
</ul>
<p>Salary and Benefits: The base salary range for this role is $188,000 to $275,000. The starting salary will be determined based on job-related knowledge, skills, experience, and market location. We strive for both market alignment and internal equity when determining compensation. In addition to base salary, our total rewards package includes a discretionary bonus, equity awards, and a comprehensive benefits program (all based on eligibility).</p>
<p>Work Environment:</p>
<p>While we prioritize a hybrid work environment, remote work may be considered for candidates located more than 30 miles from an office, based on role requirements for specialized skill sets. New hires will be invited to attend onboarding at one of our hubs within their first month. Teams also gather quarterly to support collaboration.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$188,000 to $275,000</Salaryrange>
      <Skills>vulnerability management, application security, platform security, cloud security engineering, security automation, AI/ML systems, detection pipelines, Python, Go, modern vulnerability management tooling, GPU/DPU environments, firmware security, hardware vulnerabilities, high-performance computing, AI/ML-powered security systems, LLM integration, automated decision-making, human-in-the-loop validation, security automation platforms, TINES, serverless frameworks, AWS Lambda, GCP Cloud Functions, DevOps, DevSecOps, SRE, Kubernetes security, container scanning, admission controllers, supply chain security, runtime protection</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave is a cloud computing company that provides a platform for building and scaling AI applications.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4653130006</Applyto>
      <Location>Livingston, NJ / New York, NY / Sunnyvale, CA / Bellevue, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>88ec8f26-4c9</externalid>
      <Title>Senior IT Systems Engineer</Title>
      <Description><![CDATA[<p>We&#39;re seeking a strategic thinker and proven problem-solver with deep expertise in modern IT ecosystems. As a Sr. IT Systems Engineer, you&#39;ll lead the design, implementation, administration, and optimization of core SaaS platforms, including Okta, Google Workspace, Slack, Atlassian, and other IT tools. You&#39;ll own end-to-end support, monitoring, troubleshooting, and performance tuning of applications, systems, and their complex interconnections, ensuring high availability, security, and a seamless user experience.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Designing and implementing SaaS platforms and IT tools</li>
<li>Providing technical guidance to support business expansion, system scalability, and infrastructure maturity</li>
<li>Identifying gaps, risks, and opportunities in the environment and leading initiatives to enhance security posture, operational efficiency, and resilience</li>
<li>Evaluating emerging technologies, IAM trends, and automation platforms and developing business cases and adoption recommendations</li>
<li>Mentoring junior engineers and collaborating with cross-functional teams to align IT capabilities with organizational goals</li>
</ul>
<p>Basic qualifications include 8+ years of hands-on experience administering and optimizing a broad portfolio of SaaS applications in a hybrid and high-growth environment, with advanced proficiency in our core stack: Okta (including Advanced Server Access &amp; Workflows), Google Workspace, Slack Enterprise, Atlassian, etc.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$184,000 - $276,000 USD</Salaryrange>
      <Skills>Okta, Google Workspace, Slack, Atlassian, IAM principles and protocols, APIs for custom integrations, Scripting and automation for monitoring, alerting, and operational efficiency, Azure, AWS, GCP cloud platforms, n8n, Okta Workflows, Workato, Zapier, BetterCloud, custom integrations</Skills>
      <Category>IT</Category>
      <Industry>Technology</Industry>
      <Employername>xAI</Employername>
      <Employerlogo>https://logos.yubhub.co/xai.com.png</Employerlogo>
      <Employerdescription>xAI creates AI systems to understand the universe and aid humanity in its pursuit of knowledge. The team is small and highly motivated.</Employerdescription>
      <Employerwebsite>https://www.xai.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/xai/jobs/5071895007</Applyto>
      <Location>Palo Alto, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>e6c2906a-625</externalid>
      <Title>Senior Software Engineer, Full-Stack – Scale GP</Title>
      <Description><![CDATA[<p>We are seeking a strong Senior Full-Stack Engineer to help us build, scale, and refine our rapidly growing Generative AI platform, Scale GP. As a senior engineer, you will work across the stack, from React/TypeScript frontends to Python-based backends, while integrating with LLMs and machine learning systems. You will solve complex challenges in scalability, reliability, and product experience while owning significant product areas in a fast-paced environment.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Own major full-stack product areas, driving features from design through production deployment.</li>
<li>Build modern frontend experiences using React and TypeScript, ensuring performance, usability, and responsiveness.</li>
<li>Develop reliable backend services in Python, working with distributed systems, data pipelines, and ML/LLM components.</li>
<li>Integrate with LLMs, vector databases, and AI infrastructure to power intelligent product experiences.</li>
<li>Deliver experiments and new features quickly, maintaining high quality and tight feedback loops with customers.</li>
<li>Collaborate across product, ML, and infrastructure teams to shape the direction of Scale GP.</li>
<li>Adapt quickly, learning new technologies, frameworks, and tools as needed across the stack.</li>
</ul>
<p><strong>Ideal Experience</strong></p>
<ul>
<li>5+ years of full-time engineering experience, post-graduation.</li>
<li>Strong experience developing full-stack applications using React, TypeScript, and Python.</li>
<li>Experience scaling or shipping products at high-growth startups.</li>
<li>Familiarity with LLMs, vector databases, embeddings, or other modern AI tooling (tinkering or production experience welcome).</li>
<li>Proficiency with SQL and modern API development.</li>
<li>Experience with Kubernetes, containerization, and microservice architectures.</li>
<li>Experience working with at least one major cloud provider (AWS, GCP, or Azure).</li>
</ul>
<p>Compensation packages at Scale for eligible roles include base salary, equity, and benefits. The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position, determined by work location and additional factors, including job-related skills, experience, interview performance, and relevant education or training. Scale employees in eligible roles are also granted equity based compensation, subject to Board of Director approval. Your recruiter can share more about the specific salary range for your preferred location during the hiring process, and confirm whether the hired role will be eligible for equity grant. You’ll also receive benefits including, but not limited to: Comprehensive health, dental and vision coverage, retirement benefits, a learning and development stipend, and generous PTO. Additionally, this role may be eligible for additional benefits such as a commuter stipend.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$216,000-$270,000 USD</Salaryrange>
      <Skills>React, TypeScript, Python, LLMs, vector databases, embeddings, SQL, API development, Kubernetes, containerization, microservice architectures, cloud providers (AWS, GCP, or Azure)</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions.</Employerdescription>
      <Employerwebsite>https://scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4637484005</Applyto>
      <Location>San Francisco, CA; New York, NY</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>f2bc1be2-478</externalid>
      <Title>Senior Technical Solutions Engineer, Platform</Title>
      <Description><![CDATA[<p>As a Senior Technical Solutions Engineer, you will provide technical support for Databricks Platform related issues and resolve any challenges involving the Databricks unified analytics platform.</p>
<p>You will assist customers in their Databricks journey and provide them with the guidance and knowledge they need to realize value and achieve their strategic goals using our products.</p>
<p>Customers will look to you for answers to everything from basic technical questions to complex architectural scenarios spanning the entire Big Data ecosystem.</p>
<p>You will report to the Senior Manager of Technical Solutions.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Troubleshooting and resolving complex customer issues related to the Databricks platform</li>
<li>Providing best-practices support for custom-built solutions developed by Databricks customers</li>
<li>Delivering suggestions for improving performance in customer-specific environments</li>
<li>Assisting with issues around third-party integrations with the Databricks environment</li>
<li>Coordinating with engineering and escalation teams to achieve resolution of customer issues and requests</li>
<li>Participating in the creation and maintenance of company documentation and knowledge articles</li>
<li>Being a true proponent of customer advocacy</li>
<li>Strengthening your AWS/Azure and Databricks platform expertise through learning and internal training programs</li>
<li>Participating in weekend and weekday on-call rotations</li>
</ul>
<p>Requirements include:</p>
<ul>
<li>Minimum 4 years of experience designing, building, testing, and maintaining Python/Java/Scala based applications</li>
<li>Expert-level knowledge of Python (desired)</li>
<li>Solid experience with SQL-based databases (required)</li>
<li>Linux/Unix administration skills</li>
<li>Hands-on experience with AWS, Azure, or GCP</li>
<li>Excellent English written and oral communication skills</li>
<li>Experience with distributed Big Data computing environments</li>
<li>Technical degree or equivalent experience</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, Java, Scala, SQL, Linux/Unix administration, AWS, Azure, GCP, Distributed Big Data Computing</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified analytics platform for data-driven organisations.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/7902994002</Applyto>
      <Location>Costa Rica</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>04c1ff49-2d1</externalid>
      <Title>Data Platform Solutions Architect (Professional Services)</Title>
      <Description><![CDATA[<p>We&#39;re hiring for multiple roles within our Professional Services team. As a Data Platform Solutions Architect, you will work with clients on short to medium-term engagements focused on their big data challenges using the Databricks platform. You will deliver data engineering, data science, and cloud technology projects that require integrating with client systems, training, and other technical tasks to help customers get the most value out of their data.</p>
<p>The impact you will have:</p>
<ul>
<li>Work on a variety of impactful customer technical projects which may include designing and building reference architectures, creating how-to&#39;s and productionalizing customer use cases</li>
<li>Work with engagement managers to scope a variety of professional services work with input from the customer</li>
<li>Guide strategic customers as they implement transformational big data projects and 3rd party migrations, including end-to-end design, build, and deployment of industry-leading big data and AI applications</li>
<li>Consult on architecture and design; bootstrap or implement customer projects, leading to customers&#39; successful understanding, evaluation, and adoption of Databricks.</li>
<li>Provide an escalated level of support for customer operational issues.</li>
</ul>
<p>What we look for:</p>
<ul>
<li>Extensive experience in data engineering, data platforms &amp; analytics</li>
<li>Comfortable writing code in either Python or Scala</li>
<li>Working knowledge of two or more common Cloud ecosystems (AWS, Azure, GCP) with expertise in at least one</li>
<li>Deep experience with distributed computing with Apache Spark™ and knowledge of Spark runtime internals</li>
<li>Familiarity with CI/CD for production deployments</li>
<li>Working knowledge of MLOps</li>
<li>Design and deployment of performant end-to-end data architectures</li>
<li>Experience with technical project delivery - managing scope and timelines.</li>
<li>Documentation and white-boarding skills.</li>
<li>Experience working with clients and managing conflicts.</li>
<li>Build skills in technical areas which support the deployment and integration of Databricks-based solutions to complete customer projects.</li>
<li>Travel to customers 10% of the time</li>
</ul>
<p>[Preferred] Databricks Certification but not essential</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>data engineering, data platforms &amp; analytics, Python, Scala, Cloud ecosystems (AWS, Azure, GCP), Apache Spark, CI/CD for production deployments, MLOps, technical project delivery, documentation and white-boarding skills, Databricks Certification</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified and democratized data, analytics, and AI platform.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8396801002</Applyto>
      <Location>London, United Kingdom</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>859cb1cf-b9c</externalid>
      <Title>Senior AI Infrastructure Engineer, Model Serving Platform</Title>
      <Description><![CDATA[<p>As a Senior AI Infrastructure Engineer on the Model Serving Platform team, you will design and build platforms for scalable, reliable, and efficient serving of Large Language Models (LLMs). Our platform powers cutting-edge research and production systems, supporting both internal and external use cases across various environments.</p>
<p>The ideal candidate combines strong ML fundamentals with deep expertise in backend system design. You’ll work in a highly collaborative environment, bridging research and engineering to deliver seamless experiences to our customers and accelerate innovation across the company.</p>
<p>Responsibilities:</p>
<ul>
<li>Build and maintain fault-tolerant, high-performance systems for serving LLM workloads at scale.</li>
<li>Build an internal platform to empower LLM capability discovery.</li>
<li>Collaborate with researchers and engineers to integrate and optimize models for production and research use cases.</li>
<li>Conduct architecture and design reviews to uphold best practices in system design and scalability.</li>
<li>Develop monitoring and observability solutions to ensure system health and performance.</li>
<li>Lead projects end-to-end, from requirements gathering to implementation, in a cross-functional environment.</li>
</ul>
<p>Ideally you’d have:</p>
<ul>
<li>5+ years of experience building large-scale, high-performance backend systems.</li>
<li>Strong programming skills in one or more languages (e.g., Python, Go, Rust, C++).</li>
<li>Experience with LLM serving and routing fundamentals (e.g. rate limiting, token streaming, load balancing, budgets, etc.).</li>
<li>Experience with LLM capabilities and concepts such as reasoning, tool calling, prompt templates, etc.</li>
<li>Experience with containers and orchestration tools (e.g., Docker, Kubernetes).</li>
<li>Familiarity with cloud infrastructure (AWS, GCP) and infrastructure as code (e.g., Terraform).</li>
<li>Proven ability to solve complex problems and work independently in fast-moving environments.</li>
</ul>
<p>Nice to haves:</p>
<ul>
<li>Experience with modern LLM serving frameworks such as vLLM, SGLang, TensorRT-LLM, or text-generation-inference.</li>
</ul>
<p>Compensation packages at Scale for eligible roles include base salary, equity, and benefits. The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position, determined by work location and additional factors, including job-related skills, experience, interview performance, and relevant education or training. Scale employees in eligible roles are also granted equity based compensation, subject to Board of Director approval. Your recruiter can share more about the specific salary range for your preferred location during the hiring process, and confirm whether the hired role will be eligible for equity grant. You’ll also receive benefits including, but not limited to: Comprehensive health, dental and vision coverage, retirement benefits, a learning and development stipend, and generous PTO. Additionally, this role may be eligible for additional benefits such as a commuter stipend.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$216,000-$270,000 USD</Salaryrange>
      <Skills>Python, Go, Rust, C++, Docker, Kubernetes, AWS, GCP, Terraform, vLLM, SGLang, TensorRT-LLM, text-generation-inference</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions, providing high-quality data and full-stack technologies to power leading models.</Employerdescription>
      <Employerwebsite>https://scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4520320005</Applyto>
      <Location>San Francisco, CA; New York, NY</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>f86a39bf-9a5</externalid>
      <Title>Solutions Architect - Digital Native Business, Strategic</Title>
      <Description><![CDATA[<p>As a Solutions Architect on the Digital Natives team, you will work with leading data engineering, data science, and ML teams to push the boundaries of what big data architectures are capable of.</p>
<p>Reporting to the Field Engineering Manager, you will collaborate with strategic customers, product teams, and the broader customer-facing team to develop architectures and solutions using our platform and APIs.</p>
<p>You will guide customers through the competitive landscape, best practices, and implementation; and develop technical champions along the way.</p>
<p>We are looking for high technical aptitude individuals with a deep sense of ownership and a desire to help customers ship solutions at production scale.</p>
<p>Ideal candidates are deeply curious, capable of operating with confidence in ambiguous situations, and are extremely adaptable.</p>
<p>The impact you will have:</p>
<ul>
<li>Partner with the sales team and provide technical leadership to help customers understand how Databricks can help solve their business problems.</li>
<li>Drive technical discovery and solution design, focusing on winning competitive deals and accelerating time-to-value in strategic accounts.</li>
<li>Continuously research &amp; learn new technologies and their implementations on Databricks.</li>
<li>Consult on Big Data architectures and implement proofs of concept for strategic projects spanning data engineering, data science, machine learning, and SQL analysis workflows, as well as validating integrations with cloud services, home-grown tools, and other 3rd party applications.</li>
<li>Collaborate with your fellow Solutions Architects, using your skills to support each other and our customers.</li>
<li>Become an expert in, promote, and recruit contributors for Databricks-inspired open-source projects (Spark, Delta Lake, and MLflow) across the developer community.</li>
<li>Work closely with account executives to create and execute account penetration strategies, focusing on winning technical decision-makers and building new customer champions.</li>
<li>Build trusted advisor relationships with senior and executive stakeholders by articulating the business value of Databricks in clear, outcomes-driven terms.</li>
</ul>
<p>What we look for:</p>
<ul>
<li>5+ years in a data engineering, data science, technical architecture, or similar pre-sales/consulting role.</li>
<li>Experience building distributed data systems.</li>
<li>Comfortable programming in, and debugging, Python and SQL.</li>
<li>Have built solutions with public cloud providers such as AWS, Azure, or GCP.</li>
<li>Expertise in one of the following:
<ul>
<li>Data Engineering technologies (e.g., Spark, Hadoop, Kafka)</li>
<li>Data Science and Machine Learning technologies (e.g., pandas, scikit-learn, PyTorch, TensorFlow)</li>
</ul>
</li>
<li>Strong executive presence with the ability to influence C/VP-level stakeholders and align technical solutions to strategic business priorities.</li>
<li>Available to travel to customers in your region.</li>
<li>[Desired] Degree in a quantitative discipline (Computer Science, Applied Mathematics, Operations Research).</li>
<li>Nice to have: Databricks Certification.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$180,000-$247,500 USD</Salaryrange>
      <Skills>Data Engineering technologies, Data Science and Machine Learning technologies, Python, SQL, Cloud providers (AWS, Azure, GCP)</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified and democratized data, analytics, and AI platform.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8434467002</Applyto>
      <Location>Remote - California; Remote - Colorado; Remote - Oregon; Remote - Washington</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>477935cf-ac5</externalid>
      <Title>Senior Strategic Partner Manager, Solutions</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Senior Strategic Partner Manager, Solutions to join our team. As a key member of our Partner organization, you will play a critical role in building a global Solutions Partner Program that equips ZoomInfo&#39;s top implementation partners to deliver successful project outcomes and ensure ongoing customer adoption.</p>
<p>Your primary responsibilities will include:</p>
<ul>
<li>Designing and managing the global Solutions Partner Program and methodology</li>
<li>Working with the partner, delivery, and account teams to ensure the proper program elements, resources, and processes are in place to support solutions partners&#39; success</li>
<li>Providing full program strategy, project management, and timely updates on key solutions partner success initiatives</li>
<li>Being the trusted advisor for ZoomInfo solutions partners</li>
</ul>
<p>You will also:</p>
<ul>
<li>Work collaboratively with Partners and internal ZoomInfo delivery and technical experts to develop repeatable frameworks built from successful customer deployments</li>
<li>Collect qualitative and quantitative data points to measure and report on individual partner performance against key metrics (KPIs), ensuring a high standard of implementation quality from our top partners</li>
<li>Play an active role in the evolution of ZoomInfo&#39;s overall partner program and strategy</li>
</ul>
<p>Additionally, you will:</p>
<ul>
<li>Architect and manage partner business planning, QBRs, assessments, and related activities</li>
<li>Own and manage partner interactions with the ZoomInfo team (Marketing, Product, Pre-Sales, Sales, Services, Enablement, Customer Success, Partner Operations, and Executive Leadership)</li>
<li>Handle administrative functions related to Partner Accounts, ensuring internal tools are updated and sales hygiene is maintained</li>
<li>Support Partners&#39; and internal stakeholders&#39; ad-hoc requests and jump in where needed</li>
</ul>
<p>Core systems and tools you may be working with include Salesforce, Jira, Confluence, GSuite, Netsuite, Snowflake, GCP, AWS, plus multiple other peripheral software tools in these ecosystems.</p>
<p>Requirements include:</p>
<ul>
<li>5-8 years of experience working with and managing solutions partners</li>
<li>A confirmed track record of sales over-performance</li>
<li>Existing SI partner relationships and network</li>
<li>Strategic business and marketing planning capabilities</li>
<li>Excellent interpersonal skills and a confirmed capacity to build positive relationships and close business with partners</li>
<li>Proven ability to work cross-functionally</li>
<li>Self-motivation, strong self-management skills, and leadership qualities</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$105,350-$165,550 USD</Salaryrange>
      <Skills>Sales, Partner Management, Program Management, Project Management, Strategic Planning, Business Development, Marketing, Product, Pre-Sales, Services, Enablement, Customer Success, Partner Operations, Executive Leadership, Salesforce, Jira, Confluence, GSuite, Netsuite, Snowflake, GCP, AWS</Skills>
      <Category>Sales</Category>
      <Industry>Technology</Industry>
      <Employername>ZoomInfo</Employername>
      <Employerlogo>https://logos.yubhub.co/zoominfo.com.png</Employerlogo>
      <Employerdescription>ZoomInfo is a provider of go-to-market intelligence solutions, with a platform that offers best-in-class technology paired with unrivaled data coverage, accuracy, and depth of contacts, companies, and opportunities.</Employerdescription>
      <Employerwebsite>https://www.zoominfo.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/zoominfo/jobs/8441280002</Applyto>
      <Location>Waltham, Massachusetts, United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>5b244f27-9fd</externalid>
      <Title>Resident Solutions Architect - Communications, Media, Entertainment &amp; Games</Title>
<Description><![CDATA[<p>As a Resident Solutions Architect in our Professional Services team, you will work with clients on short- to medium-term customer engagements on their big data challenges using the Databricks platform. You will deliver data engineering, data science, and cloud technology projects that require integrating with client systems, training, and other technical tasks to help customers get the most value out of their data.</p>
<p>You will work on a variety of impactful customer technical projects, which may include designing and building reference architectures, creating how-to&#39;s, and productionizing customer use cases. You will work with engagement managers to scope a variety of professional services work with input from the customer.</p>
<p>You will guide strategic customers as they implement transformational big data projects and third-party migrations, including end-to-end design, build, and deployment of industry-leading big data and AI applications. You will consult on architecture and design and bootstrap or implement customer projects, leading to customers&#39; successful understanding, evaluation, and adoption of Databricks.</p>
<p>You will provide an escalated level of support for customer operational issues, working with the Databricks technical team, Project Manager, Architect, and customer team to ensure the technical components of the engagement are delivered to meet the customer&#39;s needs.</p>
<p>You will work with Engineering and Databricks Customer Support to provide product and implementation feedback and to guide rapid resolution for engagement-specific product and support issues.</p>
<p>The ideal candidate will have:</p>
<ul>
<li>6+ years of experience in data engineering, data platforms &amp; analytics</li>
<li>Comfort writing code in either Python or Scala</li>
<li>Working knowledge of two or more common cloud ecosystems (AWS, Azure, GCP), with expertise in at least one</li>
<li>Deep experience with distributed computing with Apache Spark™ and knowledge of Spark runtime internals</li>
<li>Familiarity with CI/CD for production deployments</li>
<li>Working knowledge of MLOps</li>
<li>Experience designing and deploying performant end-to-end data architectures</li>
<li>Experience with technical project delivery, including managing scope and timelines</li>
<li>Documentation and white-boarding skills</li>
<li>Experience working with clients and managing conflicts</li>
<li>Readiness to build skills in technical areas that support the deployment and integration of Databricks-based solutions</li>
</ul>
<p>Travel to customers 20% of the time.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$180,656-$248,360 USD</Salaryrange>
      <Skills>data engineering, data platforms &amp; analytics, Python, Scala, Cloud ecosystems (AWS, Azure, GCP), Apache Spark, CI/CD for production deployments, MLOps, end-to-end data architectures, technical project delivery, documentation and white-boarding skills, client management</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data, analytics, and AI.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8461258002</Applyto>
      <Location>Raleigh, North Carolina</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>4daeb1d2-f04</externalid>
      <Title>Senior Software Engineer - Fullstack</Title>
      <Description><![CDATA[<p>We are seeking a senior software engineer to join our team in Vancouver. As a fullstack software engineer, you will work with your team and product management to make insights from data simple. You&#39;ll set the foundation for how we build robust, scalable, and delightful products.</p>
<p>Our customers increasingly use Databricks to analyze petabyte-scale logs in real time. This creates new challenges across the entire data processing pipeline, including ingestion, indexing, processing, and the user experience itself. Our customers are also using Databricks to launch AI/BI, which is redefining Business Intelligence for the AI age. We have several open roles across the teams below:</p>
<ul>
<li>Log Analytics: real-time analysis of petabyte-scale logs on Databricks.</li>
<li>AI/BI: redefining Business Intelligence for the AI age.</li>
<li>Unity Catalog Business Semantics: Context is everything for AI. For enterprise data, that context needs to be governed and managed, which is what Unity Catalog Business Semantics offers.</li>
<li>Databricks Apps: one of the fastest growing products at Databricks, used by more than 2,500 customers who have created more than 20,000 apps.</li>
</ul>
<p>What we look for:</p>
<ul>
<li>5+ years of experience with HTML, CSS, and JavaScript.</li>
<li>Passion for user experience and design and a deep understanding of front-end architecture.</li>
<li>Comfortable working towards a multi-year vision with incremental deliverables.</li>
<li>Motivated by delivering customer value.</li>
<li>Experience with modern JavaScript frameworks (e.g., React, Angular, Vue.js, or Ember).</li>
<li>5+ years of experience with server-side web technologies (e.g., Node.js, Java, Python, Scala, C#, C++, Go).</li>
<li>Good knowledge of SQL.</li>
<li>Experience with cloud technologies, e.g. AWS, Azure, GCP, Docker, or Kubernetes.</li>
<li>Experience developing large-scale distributed systems.</li>
</ul>
<p>Pay Range Transparency</p>
<p>Databricks is committed to fair and equitable compensation practices. The pay range for this role is listed below and represents the expected salary range for non-commissionable roles or on-target earnings for commissionable roles. Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location. Based on the factors above, Databricks anticipates utilizing the full width of the range. The total compensation package for this position may also include eligibility for an annual performance bonus, equity, and benefits.</p>
<p>Canada Pay Range: $146,200-$201,100 CAD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$146,200-$201,100 CAD</Salaryrange>
      <Skills>HTML, CSS, JavaScript, Node.js, Java, Python, Scala, C#, C++, Go, SQL, AWS, Azure, GCP, Docker, Kubernetes</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data, analytics, and AI. It was founded by the original creators of Lakehouse, Apache Spark, Delta Lake, and MLflow.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8099342002</Applyto>
      <Location>Vancouver, Canada</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>374022f0-c2a</externalid>
      <Title>Senior Software Engineer, Infrastructure - Platform Compute</Title>
      <Description><![CDATA[<p>Ready to be pushed beyond what you think you’re capable of?</p>
<p>At Coinbase, our mission is to increase economic freedom in the world.</p>
<p>We&#39;re seeking a Senior Software Engineer, Infrastructure - Platform Compute to join our team.</p>
<p>As a member of our Platform Product Group, you will be responsible for building a trusted, scalable, and compliant platform to operate with speed, efficiency, and quality.</p>
<p>Our teams build and maintain the platforms critical to the existence of Coinbase.</p>
<p>The Compute team builds and operates the Kubernetes platform at Coinbase, which is the primary compute orchestration infrastructure for services at Coinbase.</p>
<p>You will work towards continuously improving the scalability, reliability, efficiency, and operational experience of using Kubernetes at Coinbase, working closely with the Routing, Security, Reliability, and Observability teams (among many others).</p>
<p>Responsibilities:</p>
<ul>
<li>Build tooling and automation to make management of our Kubernetes clusters easy and reliable.</li>
<li>Build tooling and automation to improve the developer and operational experience of working with Kubernetes for all users.</li>
<li>Operationalize our Kubernetes platform so that it continues to be automated and self-healing to prevent unnecessary oncall burden.</li>
<li>Develop net-new Kubernetes-related capabilities for service owners at Coinbase (e.g. one off jobs, cron, different deployment strategies, support for EFS, automated right sizing).</li>
<li>Support our customers as they operate critical services for Coinbase in Kubernetes.</li>
</ul>
<p>Requirements:</p>
<ul>
<li>5+ years of software engineering experience and experience with Kubernetes or similar compute orchestration systems (e.g. Mesos, Nomad)</li>
<li>Strong AWS and/or GCP infrastructure knowledge</li>
<li>Ability to build backend services in addition to infrastructure</li>
<li>A high bar for quality, a self-starter mindset, and strong interpersonal skills</li>
<li>Strong problem-solving skills and the ability to identify problems, determine their root cause, and see them through to solution</li>
<li>Ability to balance business needs with technical solutions</li>
<li>Experience scaling backend infrastructure</li>
</ul>
<p>Job #: P74890</p>
<p>*Answers to crypto-related questions may be used to evaluate your on-chain experience.</p>
<p>Pay Transparency Notice: Depending on your work location, the target annual base salary for this position can range as detailed below.</p>
<p>Total compensation may also include equity and bonus eligibility and benefits (including medical, dental, vision and 401(k)).</p>
<p>Annual base salary range (excluding equity and bonus):</p>
<p>$186,065-$218,900 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$186,065-$218,900 USD</Salaryrange>
      <Skills>Kubernetes, AWS, GCP, Software engineering, Compute orchestration, Automation, Backend services, Infrastructure</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Coinbase</Employername>
      <Employerlogo>https://logos.yubhub.co/coinbase.com.png</Employerlogo>
      <Employerdescription>Coinbase is a cryptocurrency exchange and wallet platform.</Employerdescription>
      <Employerwebsite>https://www.coinbase.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coinbase/jobs/7576764</Applyto>
      <Location>Remote - USA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>262aa1cb-01c</externalid>
      <Title>Head of Corporate Engineering</Title>
<Description><![CDATA[<p>As Head of Corporate Engineering, you will be responsible for enterprise engineering and operations globally. You will be responsible for building and managing a highly technical enterprise engineering team, developing first-principles-based strategies, and enabling strong enterprise security.</p>
<p>Key responsibilities include engineering, securing, and optimizing cloud infrastructure, Identity and Access Management, endpoints, and collaboration tools, and ensuring SOX, PCI DSS, and FedRAMP compliance. The Head of Corporate Engineering will work closely with R&amp;D on managing engineering tools like Jira, Confluence, and GitHub, driving efficient adoption and integration.</p>
<p>Strong technical and influencing leadership principles, coupled with the ability to manage a complex, scaling, and fast-moving enterprise environment, are essential. This role reports directly to the Vice President, Infrastructure and Operations.</p>
<p>Responsibilities:</p>
<p>In this influential role, you will be responsible for:</p>
<p>Securing the Enterprise: Working closely with the Enterprise Security organization to harden and secure our cloud environments, secret management, collaboration tools, endpoints, SaaS environments, IAM tools, and more. Success is measured by continuous improvement of our enterprise security hardening standards.</p>
<p>Building and Scaling our Cloud Infrastructure: Your team will be responsible for establishing and implementing enterprise cloud infrastructure, including Infrastructure Provisioning, SRE services, 24/7 on-call support, Infra as Code, observability, and more. In addition, you will be responsible for managing cloud budgets, vendor management, and establishing cost optimization initiatives. Success is measured by increased developer velocity while securing &amp; scaling the cloud infrastructure.</p>
<p>Engineering Tooling: Partner closely with R&amp;D teams to establish policies, configurations, run-books, SLAs, hardening, scalability, and availability of engineering tools like GitHub, Jira, Atlassian, and more.</p>
<p>Endpoint Engineering: Enable extreme automation for endpoint management with zero-touch deployment, observability (synthetic and real-time), provisioning/de-provisioning, and established standards/SLAs. Enforce security policies, configure &amp; manage security settings, and ensure compliance across all endpoints and mobile devices. Success is measured in terms of end-user satisfaction and percentage of manual touch.</p>
<p>Collaboration Management: Ensure we provide world-class tools that let our employees be extremely productive and collaborative. This would include, but not be limited to, managing and scaling internal workplace products like Gmail, Slack, Atlassian, Moveworks, Glean, and more. Success is measured by user satisfaction.</p>
<p>Identity &amp; Access Management: Manage the IAM team across IAM implementation, access standards enforcement, SLA management, and compliance with standards like FedRAMP, IL5, PCI, and more. Both internal and external identity providers are to be managed. Success is measured by compliance, identity governance, and availability.</p>
<p>Desired Success Outcomes</p>
<p>A high-performing enterprise engineering team capable of handling complex technical projects with agility and high quality</p>
<p>A well-defined cloud strategy ensuring the stability, scalability, and security of cloud infrastructure, and an overhaul of current processes and workflows to address inefficiencies and increase team velocity</p>
<p>Robust endpoint security, with implementation of comprehensive security measures for all endpoints, including Mac, Windows, and mobile devices</p>
<p>A high-quality employee experience with productivity tools (Gmail, Slack, Atlassian tools, Moveworks, GitHub) and a robust, forward-looking roadmap</p>
<p>Efficient operational support for Tier 3 IT services with minimized production incidents, and implementation of robust incident and change management processes with mature operational practice</p>
<p>Efficient and mature processes for system integrations related to Mergers and Acquisitions (M&amp;As), ensuring timely, smooth transitions during M&amp;A integrations</p>
<p>Development and implementation of automation tools and frameworks, and identification of automation opportunities to reduce manual toil and improve accuracy</p>
<p>Qualifications:</p>
<p>10 years of experience managing Cloud infrastructure at large enterprises. Extensive experience managing public cloud implementations in AWS. Experience with GCP and Azure will be a plus</p>
<p>In-depth understanding of Cloud native technologies to lead and guide the team. Must have hands-on experience in troubleshooting and debugging issues in production environments</p>
<p>Working experience managing DevOps/SRE practices: OKRs (Objectives and Key Results), Agile development, Infra-as-Code, SRE (Site Reliability Engineering), and DevOps measurement such as DORA KPIs</p>
<p>In-depth understanding of each collaboration tool&#39;s features, functionalities, and configurations (e.g., Gmail for email, Slack for messaging). Ability to identify and integrate and optimize the use of various tools for seamless collaboration (e.g., connecting Jira with GitHub for Dev metrics)</p>
<p>Experience leading a team of senior professionals working asynchronously in a remote, distributed team. Strong verbal and written communication skills</p>
<p>Collaborative style: partners well with cross-functional teams to solve hard problems and to complete complex deliverables with quality and business outcomes</p>
<p>Provide mentorship and guidance to team members to ensure that their skills and knowledge are kept up-to-date</p>
<p>Pay Range Transparency</p>
<p>Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role are listed below and represent the expected salary range for non-commissionable roles or on-target earnings for commissionable roles. Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location. Based on the factors above, Databricks anticipates utilizing the full width of the range. The total compensation package for this position may also include eligibility for an annual performance bonus, equity, and benefits.</p>
<p>Zone 1 Pay Range $265,000-$364,300 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>executive</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$265,000-$364,300 USD</Salaryrange>
      <Skills>Cloud infrastructure, Identity and Access Management, Endpoint security, Collaboration tools, DevOps, Site Reliability Engineering, Agile development, Infrastructure as Code, Observability, Automation, Scripting languages, Cloud native technologies, Public cloud implementations, AWS, GCP, Azure, Jira, Confluence, GitHub, Atlassian, Moveworks, Glean, Slack, Gmail, Microsoft Office</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data, analytics, and AI.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/7293607002</Applyto>
      <Location>San Francisco, California</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>d9e05746-2aa</externalid>
      <Title>Associate Director, Bioanalytical Development</Title>
      <Description><![CDATA[<p><strong>Job Title</strong></p>
<p>Associate Director, Bioanalytical Development</p>
<p><strong>About Formation Bio</strong></p>
<p>Formation Bio is a tech and AI-driven pharma company that accelerates all aspects of drug development and clinical trials.</p>
<p><strong>About the Position</strong></p>
<p>The Associate Director, Bioanalytical will lead bioanalytical strategy and delivery for assigned programs, working across nonclinical and clinical teams to ensure assays are fit for purpose and support key development decisions.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Lead bioanalytical strategy and delivery for assigned programs across small molecules and biologics, aligned to clinical questions and decision points</li>
<li>Define assay plans and validation approaches (fit-for-purpose through full validation) for PK/TK, metabolites as applicable, biologics bioanalysis, and immunogenicity</li>
<li>Manage bioanalytical CROs and specialty labs for assigned workstreams: contribute to partner selection, define scopes, set timelines, and monitor performance, quality, and budget</li>
<li>Oversee end-to-end bioanalytical execution: protocols, validation reports, sample analysis, data review, and final reports to support program decisions and filings</li>
<li>Own day-to-day BA operating processes for assigned programs: assay lifecycle tracking, method transfer planning, issue triage, deviation management, and CAPA follow-up with vendors</li>
<li>Ensure data quality and audit readiness for assigned deliverables through strong documentation practices, traceability, and alignment to regulatory guidance and internal standards</li>
<li>Partner with Clinical Pharmacology and Clinical Development on study design inputs, sampling strategies, analyte panels, and interpretation of exposure and immunogenicity implications</li>
<li>Coordinate with Clinical Operations on sample logistics (kits, handling, stability, chain of custody, reconciliation) and resolution of operational issues impacting bioanalysis</li>
<li>Collaborate with Regulatory Affairs by drafting/reviewing submission components and providing technical input to support IND/CTA and NDA/BLA/MAA activities</li>
<li>Align with DMPK/Nonclinical on bioanalytical approaches across studies and support translational PK needs</li>
<li>Work with Biostatistics/Data Science to enable clean data transfers, consistent formats/metadata, and analysis-ready datasets</li>
<li>Support diligence and asset evaluation by reviewing bioanalytical packages, identifying gaps/risks, and proposing pragmatic remediation plans</li>
<li>Contribute to Trial Engine scaling by helping standardize templates, workflows, and reporting conventions that improve speed and consistency across programs</li>
</ul>
<p><strong>About You</strong></p>
<ul>
<li>PhD in a relevant field with 7+ years of bioanalytical experience in biopharma, including clinical-stage support, or a relevant undergraduate degree with 10+ years of experience</li>
<li>Experience leading bioanalytical strategy and execution for assigned preclinical and clinical programs across small molecules and biologics</li>
<li>Deep LC–MS/MS expertise for PK/TK, troubleshooting, and ISR</li>
<li>Familiar with biologics bioanalysis and immunogenicity (ADA/NAb) approaches and interpretation</li>
<li>Strong working knowledge of bioanalytical regulatory expectations and submission-quality documentation</li>
<li>Lead CRO/vendor execution for assigned workstreams; support partner selection and governance as needed</li>
<li>Comfortable operating in a fast-paced, cross-functional environment with multiple concurrent priorities</li>
<li>Clear communicator who can translate technical detail into program-relevant recommendations</li>
<li>Preferred: Experience functioning as part of a clinical study team to integrate tradeoffs between vendor selection, development speed, assay quality and performance, and applicability of method to deliver value to program overall</li>
<li>Preferred: Familiarity with GCP/GCLP expectations and audit/inspection readiness for clinical bioanalysis</li>
<li>Preferred: Experience shaping sample logistics (kits, stability, chain of custody) and data flow into filings</li>
</ul>
<p><strong>Total Compensation Range</strong></p>
<p>$177,500 - $232,000</p>
<p><strong>Compensation</strong></p>
<p>Individual compensation is determined by several factors, including role scope, geographic location, and skills &amp; experience. Your offer will reflect where you fall within the range based on these considerations. In addition to base salary, we offer equity, comprehensive benefits, and generous perks. If the posted range doesn&#39;t match your expectations, we still encourage you to apply!</p>
<p><strong>Where We Hire</strong></p>
<p>Formation Bio is prioritizing hiring in key hubs, primarily the New York City and Boston metro areas, with a hybrid model requiring 3 days per week in office. Applicants from the Research Triangle (NC) and San Francisco Bay Area may also be considered. Please apply only if you reside in these locations or are willing to relocate.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$177,500 - $232,000</Salaryrange>
      <Skills>PhD in a relevant field, Bioanalytical experience in biopharma, LC–MS/MS expertise, Biologics bioanalysis and immunogenicity, Bioanalytical regulatory expectations, Experience functioning as part of a clinical study team, Familiarity with GCP/GCLP expectations, Experience shaping sample logistics and data flow</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Formation Bio</Employername>
      <Employerlogo>https://logos.yubhub.co/formation.bio.png</Employerlogo>
      <Employerdescription>Formation Bio is a tech and AI-driven pharma company that accelerates all aspects of drug development and clinical trials.</Employerdescription>
      <Employerwebsite>https://www.formation.bio/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/formationbio/jobs/7530453</Applyto>
      <Location>New York, NY; Boston, MA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>a38ec886-62e</externalid>
      <Title>AI Engineer - FDE (Forward Deployed Engineer)</Title>
      <Description><![CDATA[<p>Mission</p>
<p>The AI Forward Deployed Engineering (AI FDE) team is a highly specialized customer-facing AI team at Databricks. We deliver professional services engagements to help our customers build and productionize first-of-its-kind AI applications.</p>
<p>We work cross-functionally to shape long-term strategic priorities and initiatives alongside engineering, product, and developer relations, as well as support internal subject matter expert (SME) teams. We view our team as an ensemble: we look for individuals with strong, unique specializations to improve the overall strength of the team.</p>
<p>This team is the right fit for you if you love working with customers and teammates and are fueled by curiosity about the latest trends in GenAI, LLMOps, and ML more broadly. This role can be remote.</p>
<p>The impact you will have:</p>
<ul>
<li>Develop cutting-edge GenAI solutions, incorporating the latest techniques from our Mosaic AI research to solve customer problems</li>
<li>Own production rollouts of consumer and internally facing GenAI applications</li>
<li>Serve as a trusted technical advisor to customers across a variety of domains</li>
<li>Present at conferences such as Data + AI Summit, recognized as a thought leader internally and externally</li>
<li>Collaborate cross-functionally with the product and engineering teams to influence priorities and shape the product roadmap</li>
</ul>
<p>What we look for:</p>
<ul>
<li>Experience building GenAI applications, including RAG, multi-agent systems, Text2SQL, fine-tuning, etc., with tools such as HuggingFace, LangChain, and DSPy</li>
<li>5+ years of relevant experience as a Data Scientist, preferably in a consulting role</li>
<li>Expertise in deploying production-grade GenAI applications, including evaluation and optimization</li>
<li>Extensive hands-on industry data science experience with common machine learning and data science tools, e.g. pandas, scikit-learn, PyTorch</li>
<li>Experience building production-grade machine learning deployments on AWS, Azure, or GCP</li>
<li>Graduate degree in a quantitative discipline (Computer Science, Engineering, Statistics, Operations Research, etc.) or equivalent practical experience</li>
<li>Experience communicating and/or teaching technical concepts to non-technical and technical audiences alike</li>
<li>Passion for collaboration, life-long learning, and driving business value through AI</li>
<li>Preferred: experience using the Databricks Intelligence Platform and Apache Spark to process large-scale distributed datasets</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>GenAI, HuggingFace, LangChain, DSPy, pandas, scikit-learn, PyTorch, AWS, Azure, GCP, Apache Spark, Databricks Intelligence Platform, Mosaic AI research</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data, analytics, and AI. It was founded by the original creators of Apache Spark, Delta Lake, and MLflow, and pioneered the lakehouse architecture.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8099751002</Applyto>
      <Location>Remote - India</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>eed925a1-b05</externalid>
      <Title>Sr. Staff / Staff Backline Technical Solutions Engineer</Title>
      <Description><![CDATA[<p>At Databricks, we enable data teams to solve the world&#39;s toughest problems by building and running the world&#39;s best data and AI infrastructure platform. As a Backline Technical Solutions Engineer, you will help our customers succeed with the Databricks platform by resolving complex technical customer escalations and working closely with the frontline support team.</p>
<p>Your responsibilities will include:</p>
<ul>
<li>Troubleshooting and resolving complex customer issues related to the Databricks Platform by analysing core component metrics and logs</li>
<li>Providing suggestions and best practice guidance for improving performance in customer-specific environments, and providing product improvement feedback</li>
<li>Helping the support team with detailed troubleshooting guides and runbooks</li>
<li>Contributing to automation and tooling programs to make daily troubleshooting efficient</li>
<li>Partnering with the engineering team and spreading awareness of upcoming features and releases</li>
<li>Identifying and contributing supportability features back into the product</li>
<li>Demonstrating ownership and coordinating with engineering and escalation teams to achieve resolution of customer issues and requests</li>
<li>Participating in weekend and weekday on-call rotations</li>
</ul>
<p>We look for candidates with 12+ years of industry experience, expertise in scripting using Python or Shell, and comfort with black box troubleshooting. Experience with supporting Java, Scala or Python based applications, distributed big data computing environments, SQL-based database systems, Linux and network troubleshooting, and cloud services such as AWS, Azure or GCP is also required.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Java, Scala, Python, Shell, Distributed Big Data Computing, SQL-based Database Systems, Linux, Network Troubleshooting, AWS, Azure, GCP</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks builds and runs the world&apos;s best data and AI infrastructure platform, serving over 10,000 organisations worldwide.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8375176002</Applyto>
      <Location>Bengaluru, India</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>d5f768d1-df6</externalid>
      <Title>Full-Stack Engineer, AI Data Platform</Title>
      <Description><![CDATA[<p>Shape the Future of AI</p>
<p>At Labelbox, we&#39;re building the critical infrastructure that powers breakthrough AI models at leading research labs and enterprises. Since 2018, we&#39;ve been pioneering data-centric approaches that are fundamental to AI development, and our work becomes even more essential as AI capabilities expand exponentially.</p>
<p>We&#39;re the only company offering three integrated solutions for frontier AI development:</p>
<ul>
<li>Enterprise Platform &amp; Tools: Advanced annotation tools, workflow automation, and quality control systems that enable teams to produce high-quality training data at scale</li>
<li>Frontier Data Labeling Service: Specialized data labeling through Alignerr, leveraging subject matter experts for next-generation AI models</li>
<li>Expert Marketplace: Connecting AI teams with highly skilled annotators and domain experts for flexible scaling</li>
</ul>
<p>Why Join Us</p>
<ul>
<li>High-Impact Environment: We operate like an early-stage startup, focusing on impact over process. You&#39;ll take on expanded responsibilities quickly, with career growth directly tied to your contributions.</li>
<li>Technical Excellence: Work at the cutting edge of AI development, collaborating with industry leaders and shaping the future of artificial intelligence.</li>
<li>Innovation at Speed: We celebrate those who take ownership, move fast, and deliver impact. Our environment rewards high agency and rapid execution.</li>
<li>Continuous Growth: Every role requires continuous learning and evolution. You&#39;ll be surrounded by curious minds solving complex problems at the frontier of AI.</li>
<li>Clear Ownership: You&#39;ll know exactly what you&#39;re responsible for and have the autonomy to execute. We empower people to drive results through clear ownership and metrics.</li>
</ul>
<p>Role Overview</p>
<p>We’re looking for a Full-Stack AI Engineer to join our team, where you’ll build the next generation of tools for developing, evaluating, and training state-of-the-art AI systems. You will own features end to end, from user-facing experiences and APIs to backend services, data models, and infrastructure.</p>
<p>You’ll be at the heart of our applied AI efforts, with a particular focus on human-in-the-loop systems used to generate high-quality training data for Large Language Models (LLMs) and AI agents. This includes building a platform that enables us and our customers to create and evaluate data, as well as systems that leverage LLMs to assist with reviewing, scoring, and improving human submissions.</p>
<p>Your Impact</p>
<ul>
<li>Own End-to-End Product Features</li>
</ul>
<p>Design, build, and ship complete workflows spanning frontend UI, APIs, backend services, databases, and production infrastructure.</p>
<ul>
<li>Enable Human-in-the-Loop AI Training</li>
</ul>
<p>Build systems that allow humans to efficiently create, review, and curate high-quality training and evaluation data used in AI model development.</p>
<ul>
<li>Support RLHF and Preference Data Workflows</li>
</ul>
<p>Design and implement tooling that supports RLHF-style pipelines, including task generation, human review, scoring, aggregation, and dataset versioning.</p>
<ul>
<li>Leverage LLMs in the Review Loop</li>
</ul>
<p>Build systems that use LLMs to assist human reviewers, such as automated checks, critiques, ranking suggestions, or quality signals, while maintaining human oversight.</p>
<ul>
<li>Advance AI Evaluation</li>
</ul>
<p>Design and implement evaluation frameworks and interactive tools for LLMs and AI agents across multiple data modalities (text, images, audio, video).</p>
<ul>
<li>Create Intuitive, Reviewer-Focused Interfaces</li>
</ul>
<p>Build thoughtful, efficient user interfaces (e.g., in React) optimized for high-throughput human review, quality control, and operational workflows.</p>
<ul>
<li>Architect Scalable Data &amp; Service Layers</li>
</ul>
<p>Design APIs, backend services, and data schemas that support large-scale data creation, review, and iteration with strong guarantees around correctness and traceability.</p>
<ul>
<li>Solve Ambiguous, Real-World Problems</li>
</ul>
<p>Translate loosely defined operational and research needs into practical, scalable, end-to-end systems.</p>
<ul>
<li>Ensure System Reliability</li>
</ul>
<p>Participate in on-call rotations to monitor, troubleshoot, and resolve issues across the full stack.</p>
<ul>
<li>Elevate the Team</li>
</ul>
<p>Improve engineering practices, development processes, and documentation. Share knowledge through technical writing and design discussions.</p>
<p>What You Bring</p>
<ul>
<li>Bachelor’s degree in Computer Science, Data Engineering, or a related field.</li>
<li>2+ years of experience in a software or machine learning engineering role.</li>
<li>A proactive, product-focused mindset and a high degree of ownership, with a passion for building solutions that empower users.</li>
<li>Experience using frontend frameworks like React/Redux and backend systems and technologies like Python, Java, GraphQL; familiarity with NodeJS and NestJS is a plus.</li>
<li>Knowledge of designing and managing scalable database systems, including relational databases (e.g., PostgreSQL, MySQL), NoSQL stores (e.g., MongoDB, Cassandra), and cloud-native solutions (e.g., Google Spanner, AWS DynamoDB).</li>
<li>Familiarity with cloud infrastructure like GCP (GCS, PubSub) and containerization (Kubernetes) is a plus.</li>
<li>Excellent communication and collaboration skills.</li>
<li>High proficiency in leveraging AI tools for daily development (e.g., Cursor, GitHub Copilot).</li>
<li>Comfort and enthusiasm for working in a fast-paced, agile environment where rapid problem-solving is key.</li>
</ul>
<p>Bonus Points</p>
<ul>
<li>Experience building tools for AI/ML applications, particularly for data annotation, monitoring, or agent evaluation.</li>
<li>Familiarity with data infrastructure components such as data pipelines, streaming systems, and storage architectures (e.g., Cloud Buckets, Key-Value Stores).</li>
<li>Previous experience with search engines (e.g., ElasticSearch).</li>
<li>Experience in optimizing databases for performance (e.g., schema design, indexing, query tuning) and integrating them with broader data workflows.</li>
</ul>
<p>Engineering at Labelbox</p>
<p>At Labelbox Engineering, we&#39;re building a comprehensive platform that powers the future of AI development. Our team combines deep technical expertise with a passion for innovation, working at the intersection of AI infrastructure, data systems, and user experience. We believe in pushing technical boundaries while maintaining high standards of code quality and system reliability. Our engineering culture emphasizes autonomous decision-making, rapid iteration, and collaborative problem-solving. We&#39;ve cultivated an environment where engineers can take ownership of significant challenges, experiment with cutting-edge technologies, and see their solutions directly impact how leading AI labs and enterprises build the next generation of AI systems.</p>
<p>Our Technology Stack</p>
<p>Our engineering team works with a modern tech stack designed for scalability, performance, and developer efficiency:</p>
<ul>
<li>Frontend: React.js with Redux, TypeScript</li>
<li>Backend: Node.js, TypeScript, Python, some Java &amp; Kotlin</li>
<li>APIs: GraphQL</li>
<li>Cloud &amp; Infrastructure: Google Cloud Platform (GCP), Kubernetes</li>
<li>Databases: MySQL, Spanner, PostgreSQL</li>
<li>Queueing / Streaming: Kafka, PubSub</li>
</ul>
<p>Labelbox strives to ensure pay parity across the organization and discuss compensation transparently. The expected annual base salary range for United States-based candidates is below. This range is not inclusive of any potential equity packages or additional benefits. Exact compensation varies based on a variety of factors, including skills and competencies, experience, and geographical location.</p>
<p>Annual base salary range $130,000-$200,000 USD</p>
<p>Life at Labelbox</p>
<ul>
<li>Location: Join our dedicated tech hubs in San Francisco or Wrocław, Poland</li>
</ul>
<ul>
<li>Work Style: Hybrid model with 2 days per week in office, combining collaboration and flexibility</li>
</ul>
<ul>
<li>Environment: Fast-paced and high-intensity, perfect for ambitious individuals who thrive on ownership and quick decision-making</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$130,000-$200,000 USD</Salaryrange>
      <Skills>React, Redux, Node.js, TypeScript, Python, Java, GraphQL, MySQL, PostgreSQL, Spanner, Kafka, PubSub, GCP, Kubernetes, Cloud computing, Containerization, Database management, Cloud infrastructure, API design, Backend services, Data models, Infrastructure, AI tools, Cursor, GitHub Copilot, Data annotation, Monitoring, Agent evaluation, Data infrastructure, Data pipelines, Streaming systems, Storage architectures, Search engines, ElasticSearch, Database optimization, Schema design, Indexing, Query tuning</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Labelbox</Employername>
      <Employerlogo>https://logos.yubhub.co/labelbox.com.png</Employerlogo>
      <Employerdescription>Labelbox is a company that provides data-centric approaches for AI development.</Employerdescription>
      <Employerwebsite>https://www.labelbox.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/labelbox/jobs/5019254007</Applyto>
      <Location>San Francisco Bay Area</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>10290548-1ea</externalid>
      <Title>Solutions Architect - Public Sector (LEAPS)</Title>
      <Description><![CDATA[<p>As a Solutions Architect - Public Sector at Databricks, you will be part of the Field Engineering team responsible for leading the growth of the Databricks Unified Analytics Platform. The role involves working with customers, teammates, the product team, and post-sales teams to identify use cases for Databricks, develop architectures and solutions using our platform, and guide customers through implementation to realize value.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Partnering with the sales team to help customers understand how Databricks can help solve their business problems</li>
<li>Providing technical leadership for customers to evaluate and adopt Databricks</li>
<li>Consulting on big data architecture, implementing proofs of concept for strategic customer projects and data science and machine learning projects, and validating integrations with cloud services and other 3rd party applications</li>
<li>Building and presenting reference architectures, how-tos, and demo applications for customers</li>
<li>Becoming an expert in, and promoting, Databricks-inspired open-source projects (Spark, Delta Lake, MLflow, and Koalas) across developer communities through meetups, conferences, and webinars</li>
<li>Traveling to customers in your region</li>
</ul>
<p>We look for candidates with 5+ years of experience in a customer-facing pre-sales, technical architecture, or consulting role, with expertise in designing and architecting distributed data systems. Experience with public cloud providers such as AWS, Azure, or GCP, data engineering technologies (e.g., Spark, Hadoop, Kafka), and data warehousing (e.g., SQL, OLTP/OLAP/DSS) is also required.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$180,000-$247,500 USD</Salaryrange>
      <Skills>Apache Spark, MLflow, Delta Lake, Python, Scala, Java, SQL, R, AWS, Azure, GCP, Data Engineering, Data Warehousing</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified analytics platform for data engineering, data analytics, and data science and machine learning.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8320126002</Applyto>
      <Location>Maryland; Virginia; Washington, D.C.</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>70e2591f-d7d</externalid>
      <Title>Technical Program Manager, Infrastructure</Title>
      <Description><![CDATA[<p>As a Technical Program Manager for Infrastructure, you&#39;ll work across multiple infrastructure domains to coordinate complex programs that have broad organisational impact. You&#39;ll be solving novel scaling challenges at the frontier of what&#39;s possible, all while maintaining the security and reliability our mission demands.</p>
<p>Developer Productivity &amp; Tooling</p>
<ul>
<li>Drive cross-functional programs to improve developer environments, CI/CD infrastructure, and release processes that enable rapid innovation while maintaining high security standards</li>
<li>Coordinate large-scale migrations and platform modernization efforts across engineering teams</li>
<li>Partner with teams to measure and improve developer productivity metrics, identifying bottlenecks and driving systematic improvements</li>
<li>Lead initiatives to integrate AI tools into development workflows, helping Anthropic be at the forefront of AI-assisted research and engineering</li>
</ul>
<p>Infrastructure Reliability &amp; Operations</p>
<ul>
<li>Drive programs to establish and achieve reliability targets across training infrastructure and production services</li>
<li>Coordinate incident response improvements, post-mortem processes, and on-call rotations that help teams operate effectively</li>
<li>Establish metrics and dashboards to track infrastructure health, capacity utilisation, and operational excellence</li>
</ul>
<p>Cross-functional Coordination</p>
<ul>
<li>Serve as the critical bridge between infrastructure teams, research, and product, translating technical complexities into clear updates for a variety of audiences</li>
<li>Consult with stakeholders to deeply understand infrastructure, data, and compute needs, identifying solutions to support frontier research and product development</li>
<li>Drive alignment on priorities and timelines across teams with competing constraints</li>
</ul>
<p>You&#39;ll be a good fit if you have 5+ years of technical program management experience, with a track record of successfully delivering complex infrastructure programs in ML/AI systems or large-scale distributed systems. You&#39;ll also need a deep technical understanding of infrastructure systems, strong stakeholder management skills, and the ability to navigate competing priorities while making data-driven technical decisions.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$290,000-$365,000 USD</Salaryrange>
      <Skills>Kubernetes, Cloud platforms (AWS, GCP, Azure), ML infrastructure (GPU/TPU/Trainium clusters), Developer productivity initiatives, CI/CD systems, Infrastructure scaling, Observability tooling and practices, AI tools to improve engineering productivity, Research teams and translating their needs into concrete technical requirements</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a company that creates reliable, interpretable, and steerable AI systems. It has a team of researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5111783008</Applyto>
      <Location>San Francisco, CA | New York City, NY | Seattle, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>e22b8bd1-f7a</externalid>
      <Title>Staff Product Manager, Serverless Workspaces</Title>
      <Description><![CDATA[<p>At Databricks, we are building the world&#39;s best data and AI infrastructure platform to enable data teams to solve the world&#39;s toughest problems. The Serverless Workspaces team is the engine behind Databricks&#39; shift from a &#39;configure-first&#39; to a &#39;use-now&#39; platform. We are redefining the customer onboarding experience by removing the heavy lifting of cloud infrastructure: no complicated networking, storage, or cluster configuration, just instant access to data and AI.</p>
<p>You will own the strategy for this next-generation platform layer, balancing the simplicity of a SaaS experience with the control enterprise customers demand. The impact you will have:</p>
<ul>
<li>Drive the transition to Serverless: Lead the strategy to unify the onboarding journey across serverless and classic workspaces, and drive 10X serverless usage in the next year</li>
<li>Democratize Workspace Creation: Design and ship flows that allow users to spin up workspaces instantly with little friction while maintaining strict governance guardrails and company policies</li>
<li>Redefine the &#39;Getting Started&#39; experience: Lower the barrier to entry by removing the requirement for customers to manage detailed cloud infrastructure configurations before using Databricks, while allowing them to dial those in when they&#39;re ready</li>
<li>Solve &#39;Workspace Proliferation&#39;: Help define the tools and policies that allow admins to confidently govern a growing number of workspaces across the enterprise</li>
<li>Unify the Data Estate: Work closely with the Unity Catalog and Identity teams to ensure that these new serverless environments seamlessly integrate with a customer&#39;s existing data and security models</li>
</ul>
<p>What we look for:</p>
<ul>
<li>7+ years of experience as a Product Manager working on cloud infrastructure, developer platforms, or SaaS foundations</li>
<li>Technical depth in Cloud Infrastructure: Familiarity with AWS, Azure, or GCP resource management (e.g. networking, compute, identity) and how to abstract that complexity for end-users</li>
<li>Passion for simplification: A track record of taking complex technical workflows (like configuring a VPC or peering) and turning them into &#39;one-click&#39; consumer-grade experiences</li>
<li>Data-driven mindset: Comfortable defining and tracking KPIs, such as &#39;Time to First Workspace&#39; or &#39;Serverless Adoption Rate,&#39; to measure success</li>
</ul>
<p>Pay Range Transparency Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role is listed below and represents the expected salary range for non-commissionable roles or on-target earnings for commissionable roles. Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location. Based on the factors above, Databricks anticipates utilizing the full width of the range. The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$181,700-$249,800 USD</Salaryrange>
      <Skills>Cloud Infrastructure, Developer Platforms, SaaS Foundations, AWS, Azure, GCP, Networking, Compute, Identity</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a data and AI infrastructure platform for customers to use deep data insights to improve their business.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8420607002</Applyto>
      <Location>San Francisco, California</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>5196c4ac-d97</externalid>
      <Title>Senior Software Engineer - Infrastructure and Tools</Title>
      <Description><![CDATA[<p>We are seeking a Senior Software Engineer to join our Infrastructure teams. As a key member of our team, you will build scalable systems to power the Databricks platform, making it the de-facto platform for running Big Data and AI workloads.</p>
<p>Your responsibilities will include:</p>
<ul>
<li>Building and extending components of the core Databricks infrastructure</li>
<li>Architecting multi-cloud systems and abstractions that allow the Databricks product to run on top of existing cloud providers</li>
<li>Improving software development workflows for engineering and operational efficiency</li>
<li>Using our own data and AI platform to analyze build and test logs and metrics to identify areas for improvement</li>
<li>Developing automated build, test, and release infrastructure</li>
<li>Setting and upholding the standard for engineering processes that support high-quality engineering</li>
</ul>
<p>To succeed in this role, you will need a BS (or higher) in Computer Science or a related field, and 5+ years of experience writing production code in one of Java, Scala, Go, C++, or Python. You should also have a passion for building highly scalable and reliable infrastructure, experience architecting, developing, and deploying large-scale distributed systems, and experience with cloud APIs and cloud technologies such as AWS, Azure, GCP, Docker, Kubernetes, or Terraform.</p>
<p>In addition to a competitive salary, we offer comprehensive health coverage, 401(k) plan, equity awards, flexible time off, paid parental leave, family planning, gym reimbursement, annual personal development fund, work headphones reimbursement, employee assistance program, and business travel accident insurance.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$166,000-$225,000 USD</Salaryrange>
      <Skills>Java, Scala, Go, C++, Python, Cloud APIs, Cloud technologies, AWS, Azure, GCP, Docker, Kubernetes, Terraform</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data, analytics, and AI. It was founded by the original creators of Apache Spark, Delta Lake, and MLflow, and pioneered the lakehouse architecture.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/6318503002</Applyto>
      <Location>San Francisco, California</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>ae6df2c2-eb1</externalid>
      <Title>DevOps Engineer, Infrastructure &amp; Security</Title>
      <Description><![CDATA[<p>As a DevOps Engineer, Infrastructure &amp; Security at Scale, you will play a crucial role in building out and enhancing our CI/CD pipelines. Our product portfolio and customer base are expanding, and we need skilled engineers to streamline our Software Development Life Cycle (SDLC) through collaborative efforts.</p>
<p>You will design, develop, and maintain robust CI/CD pipelines to automate the deployment of our low-side and high-side products. You will collaborate closely with product and engineering teams to enhance existing application code for improved compatibility and streamlined integration within automated pipelines.</p>
<p>Contribute to the overall architecture and design of our deployment systems, bringing new ideas to life for increased efficiency and reliability. Troubleshoot and resolve complex deployment issues, ensuring minimal disruption to development cycles.</p>
<p>Develop a deep understanding of our product and ML architectures to facilitate seamless integration and deployment. Document pipeline processes and configurations to ensure maintainability and knowledge transfer.</p>
<p>Proactively incorporate security best practices into all stages of the CI/CD pipeline, building security into our development processes. Drive standardization and foster collaboration across different product teams to achieve a unified and efficient SDLC.</p>
<p>We are looking for experienced DevOps Engineers, DevSecOps Engineers, Software Engineers with a strong focus on CI/CD, or a similar role. You should have a proven track record of building or significantly enhancing CI/CD pipelines.</p>
<p>Experience configuring and adapting application code to integrate seamlessly with evolving CI/CD environments is a plus. Familiarity with standard containerization &amp; deployment technologies like Kubernetes, Terraform, Docker, etc. is also required.</p>
<p>We offer a competitive salary range of $245,600-$307,000 USD, comprehensive health, dental and vision coverage, retirement benefits, a learning and development stipend, and generous PTO. This role may be eligible for additional benefits such as a commuter stipend.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$245,600-$307,000 USD</Salaryrange>
      <Skills>CI/CD, Kubernetes, Terraform, Docker, Python, Bash, PowerShell, Jenkins, GitLab CI, GitHub Actions, Azure DevOps, AWS, Azure, GCP, Security best practices, Containerization technologies, Machine learning lifecycles, MLOps concepts, Prior experience in classified environments</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions.</Employerdescription>
      <Employerwebsite>https://www.scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4674863005</Applyto>
      <Location>Washington, DC</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>cb18189c-d78</externalid>
      <Title>Solutions Architect (Pre-sales) - Kansai Region</Title>
      <Description><![CDATA[<p>As a Pre-sales Solutions Architect (Analytics, AI, Big Data, Public Cloud) – Kansai Region, your mission will be to drive successful technical evaluations and solution designs for some of our focus customers in the Kansai region (Osaka/Kyoto) for Databricks Japan.</p>
<p>You are passionate about data and AI, love getting hands-on with technology, and enjoy communicating its value to both technical and non-technical stakeholders. Partnering closely with Account Executives, you will lead the technical discovery, architecture design, and proof-of-concept phases, and act as a trusted advisor to our customers on their data and AI strategy.</p>
<p>You will help customers realize tangible, data-driven outcomes on the Databricks Lakehouse Platform by guiding data and AI teams to design, build, and operationalize solutions within their enterprise ecosystem.</p>
<p>Responsibilities:</p>
<ul>
<li>Be a Big Data Analytics expert on aspects of architecture and design</li>
<li>Lead your prospects through evaluating and adopting Databricks</li>
<li>Support your customers by authoring reference architectures, how-tos, and demo applications</li>
<li>Integrate Databricks with 3rd-party applications to support customer architectures</li>
<li>Engage with the technical community by leading workshops, seminars, and meet-ups</li>
</ul>
<p>What we look for:</p>
<ul>
<li>Pre-sales or post-sales experience working with external clients across a variety of industry markets</li>
<li>Experience in a customer-facing pre-sales or consulting role; a core strength in either Data Engineering or Data Science is advantageous</li>
<li>Experience demonstrating technical concepts, including presenting and whiteboarding</li>
<li>Experience designing and implementing architectures within public clouds (AWS, Azure, or GCP)</li>
<li>Experience with Big Data technologies, including Apache Spark, AI, Data Science, Data Engineering, Hadoop, Cassandra, and others</li>
<li>Fluent coding experience in Python or Scala implementing Apache Spark; Java and R are also desirable</li>
<li>Experience working with Enterprise Accounts</li>
<li>Written and verbal fluency in Japanese</li>
</ul>
<p>Benefits:</p>
<p>At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of our employees. For specific details, refer to the benefits information for your region.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Big Data Analytics, Apache Spark, AI, Data Science, Data Engineering, Hadoop, Cassandra, Python, Scala, Java, R, Public Cloud, AWS, Azure, GCP</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a data intelligence platform to unify and democratize data, analytics, and AI. It was founded by the original creators of the lakehouse architecture, Apache Spark, Delta Lake, and MLflow.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8437028002</Applyto>
      <Location>Japan</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>65befd80-0e2</externalid>
      <Title>Staff Software Engineer</Title>
      <Description><![CDATA[<p>We&#39;re seeking an experienced Staff-level backend software engineer to join our Live Pay team. You&#39;ll work cross-functionally with various teams and contribute to the design and development of key platform services. This person must be strong in JVM languages and event-driven architecture on AWS.</p>
<p>The Canada base salary range for this full-time position is $252,000-$308,000, plus equity and benefits. Our salary ranges are determined by role, level, and location. This role will be hybrid from our Vancouver, CAN office, with 2 days a week in the office required.</p>
<p>Responsibilities:</p>
<ul>
<li>Drive the design and implementation of new features. Break down complex problems into their bare essentials, translate this complexity into elegant design, and create high-quality, clean code.</li>
<li>Make a meaningful impact on the lives of our community members.</li>
<li>Design, develop, and deliver large-scale systems.</li>
<li>Collaborate with and mentor other engineers, providing thoughtful guidance through code, design, and architecture reviews.</li>
<li>Contribute to defining technical direction, planning the roadmap, escalating issues, and synthesizing feedback to ensure team success.</li>
<li>Estimate and manage team project timelines and risks.</li>
<li>Care passionately about producing high-quality, efficient designs and code.</li>
<li>Constantly learn about new technologies and industry standards in software engineering.</li>
<li>Work cross-functionally with other teams, including analytics, design, product, marketing, and data science.</li>
</ul>
<p>Requirements:</p>
<ul>
<li>7+ years of experience in backend software development</li>
<li>Bachelor&#39;s, Master&#39;s, or PhD in computer science, computer engineering, or a related technical discipline, or equivalent industry experience</li>
<li>Proficiency in at least one modern programming language, such as Java, Kotlin, Scala, or C#, and experience with at least one major framework such as Spring, Spring Boot, or ASP.NET Core</li>
<li>Hands-on experience working in cloud environments: AWS, GCP, or Azure</li>
<li>Proficiency in event-driven systems such as Kafka, SQS, SNS, or Kinesis, and experience designing and operating scalable distributed systems</li>
<li>Knowledge of professional software engineering practices and best practices for the full software development life cycle, including coding standards, code reviews, source control management, build processes, testing, and operations</li>
<li>Hands-on experience with various databases, such as DynamoDB, MySQL, and Elasticsearch</li>
<li>Experience using AI-assisted development tools (e.g., Copilot, Cursor, LLMs) to improve engineering productivity</li>
<li>Experience with continuous integration and delivery tools, and experience developing and executing functional and integration tests</li>
<li>Familiarity with a clean architecture approach and software craftsmanship</li>
<li>Experience with Kubernetes and microservice architecture is a strong plus</li>
<li>Excellent written and verbal communication skills</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$252,000-$308,000</Salaryrange>
      <Skills>Java, Kotlin, Scala, C#, Spring, Spring Boot, ASP.NET Core, AWS, GCP, Azure, Kafka, SQS, SNS, Kinesis, DynamoDB, MySQL, ElasticSearch, AI-assisted development tools, Continuous integration and delivery tools, Clean architecture approach, Software craftsmanship, Kubernetes, Microservice architecture</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>EarnIn</Employername>
      <Employerlogo>https://logos.yubhub.co/earnin.com.png</Employerlogo>
      <Employerdescription>EarnIn is a pioneer of earned wage access, delivering real-time financial flexibility for individuals living paycheck to paycheck. It has a healthy core business with a significant runway.</Employerdescription>
      <Employerwebsite>https://www.earnin.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/earnin/jobs/7680387</Applyto>
      <Location>Vancouver, Canada</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>a0373d52-7fe</externalid>
      <Title>Senior IAM Engineer</Title>
      <Description><![CDATA[<p>We are looking for a Senior IAM Engineer to join our team. As a Senior IAM Engineer, you will play a critical role in securing our systems and data. You will have the opportunity to work with cutting-edge IAM technologies, collaborate with cross-functional teams, and influence the development of our IAM strategy.</p>
<p>Your primary focus will be on designing and implementing identity lifecycle management, integration and orchestration, access governance, security and compliance, custom tooling, and data and AI infrastructure support. You will also be responsible for collaborating with cross-functional teams, improving provisioning and deprovisioning processes, integrating and managing IdPs within the IAM system, handling and streamlining access requests, developing and implementing IAM policies and procedures, and responding to ad-hoc requests.</p>
<p>To be successful in this role, you will need to have a strong understanding of identity lifecycle management, directory services, SSO, MFA, SCIM provisioning, and federation (SAML, OIDC, OAuth). You will also need to have experience partnering with HR, Finance, Compliance, and other cross-functional teams to design and implement IAM and enterprise solutions.</p>
<p>Additional skills and experience we&#39;d prioritize include experience with Workato or similar integration orchestrator tools, experience with Okta Workflows, certifications such as Workato or Okta Certified Professional/Administrator/Consultant, experience integrating IAM with HR systems, knowledge of compliance requirements related to IAM, and background in cloud platforms (AWS, GCP, Azure) and IAM integrations.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Scripting, Automation Mindset, APIs, Infrastructure as Code, Security Mindset, Identity and Access Management, Okta, Workday, Google Workspace, SCIM provisioning, Federation (SAML, OIDC, OAuth), Directory services, SSO, MFA, Workato, Okta Workflows, Certifications (Workato or Okta Certified Professional/Administrator/Consultant), Experience integrating IAM with HR systems, Knowledge of compliance requirements related to IAM, Background in cloud platforms (AWS, GCP, Azure) and IAM integrations</Skills>
      <Category>Engineering</Category>
      <Industry>Healthcare</Industry>
      <Employername>Komodo Health</Employername>
      <Employerlogo>https://logos.yubhub.co/komodohealth.com.png</Employerlogo>
      <Employerdescription>Komodo Health is a healthcare technology company that aims to reduce the global burden of disease by providing a comprehensive view of the US healthcare system.</Employerdescription>
      <Employerwebsite>https://www.komodohealth.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/komodohealth/jobs/8393728002</Applyto>
      <Location>India</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>af586166-0a0</externalid>
      <Title>Technical Solutions Specialist, Data Operations</Title>
      <Description><![CDATA[<p>In Data Operations on the Strategic Data Partnerships team at Anthropic, you will support a cross-functional team in implementing partnership strategies to improve Anthropic’s products. You’ll ensure data meets our standards and reaches the right teams, build systems to track compliance and data usage across the portfolio, and coordinate across Research, Product, Legal, and external partners to remove barriers and accelerate impact.</p>
<p>This role requires operational excellence combined with technical hands-on execution, and is a great fit for someone who wants to apply those skills in a high-impact, fast-growth context.</p>
<p>Responsibilities:</p>
<p>Data Opportunity Assessment and Processing</p>
<ul>
<li>Analyze and review incoming or prospective data to verify it is useful and strategic for Anthropic</li>
<li>Own and maintain Python-based ETL pipelines that process large partner datasets, applying filtering criteria and deduplicating against existing data</li>
<li>Write and optimize SQL queries against large relational databases to support filtering and analysis workflows</li>
<li>Refine processing logic as requirements evolve across new data types and formats</li>
</ul>
<p>Data Delivery Infrastructure, Tooling, and Support</p>
<ul>
<li>Own end-to-end data delivery workflows, ensuring data moves seamlessly from partners to internal teams to accelerate time-to-impact</li>
<li>Manage AWS and GCP resources for receiving and organizing partner data deliveries</li>
<li>Troubleshoot delivery issues and coordinate with partners on formatting and transfer protocols and resolve technical escalations from partners and internal teams</li>
<li>Build and maintain internal systems, scripts, and automation that support the team’s workflows</li>
<li>Support occasional research evaluation tasks as needed</li>
</ul>
<p>Data Operations and Governance</p>
<ul>
<li>Develop and maintain Anthropic&#39;s preferred standards for receiving, consuming and cataloging data, ensuring alignment with Product and Engineering&#39;s evolving needs</li>
<li>Contribute to systems for monitoring data usage and compliance with partner agreements</li>
<li>Partner with teammates and cross-functional stakeholders to build out governance practices as the team scales</li>
</ul>
<p>You May Be a Good Fit If You Have</p>
<ul>
<li>Bachelor’s degree in Engineering, Computer Science, a related field, or equivalent practical experience</li>
<li>5-7+ years of experience with data pipelines or data engineering workflows</li>
<li>Background in solutions engineering, partner engineering or related role at a large tech company</li>
<li>5+ years of experience in technical troubleshooting or writing code in one or more programming languages</li>
<li>Proficiency in Python and SQL, including writing, debugging, and optimizing scripts and queries against large datasets</li>
<li>Hands-on experience with cloud infrastructure (AWS, GCP, or Azure), including managing storage, configuring access, and working from the CLI</li>
<li>Excellent problem-solving skills with a track record of debugging technical issues, whether at the code level or within a broader system</li>
<li>Some experience interacting with external third parties delivering data</li>
</ul>
<p>Strong Candidates Will Have</p>
<ul>
<li>Experience working alongside technical teams (research, engineering, or product) to solve ambiguous problems</li>
<li>Ability to translate technical concepts into clear, actionable guidance for non-technical stakeholders or external partners</li>
<li>Experience owning or maintaining a production service or system with uptime expectations</li>
<li>Familiarity with data governance, compliance, or rights management</li>
<li>Ability to manage multiple, time-sensitive projects simultaneously and the drive to take a project from an initial idea to full completion</li>
<li>Experience leveraging AI to automate workflows</li>
</ul>
<p>Candidates Need Not Have</p>
<ul>
<li>Deep expertise in AI or machine learning</li>
<li>A pure software engineering background</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$205,000-$240,000 USD</Salaryrange>
      <Skills>Python, SQL, Cloud infrastructure (AWS, GCP, or Azure), Data pipelines, Data engineering workflows, Solutions engineering, Partner engineering</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a technology company focused on creating reliable, interpretable, and steerable AI systems. It employs a team of researchers, engineers, policy experts, and business leaders.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5056499008</Applyto>
      <Location>San Francisco, CA | New York City, NY</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>bfddfcc3-e38</externalid>
      <Title>Senior Software Engineer, Public Sector</Title>
      <Description><![CDATA[<p>As a Senior Software Engineer, you will lead the development of a vertical feature or a horizontal capability to include defining requirements with stakeholders and implementation until it is accepted by the stakeholders.</p>
<p>You will:</p>
<ul>
<li>Lead the design and implementation of scalable backend systems and distributed architectures for Federal customers</li>
<li>Manage the full lifecycle of feature development, from requirement definition to deployment on classified networks</li>
<li>Direct the orchestration of asynchronous agent fleets to meet mission requirements</li>
<li>Lead customer engagements to translate mission needs into technical requirements</li>
<li>Own the communication with stakeholders to ensure implementation meets defined acceptance criteria</li>
<li>Conduct technical reviews and identify risks within machine learning infrastructure and model serving</li>
<li>Drive the platform roadmap by providing technical specifications for Federal product offerings</li>
</ul>
<p>Ideally you will have:</p>
<ul>
<li>Full Stack Development: Proficiency in front-end and back-end development and infrastructure, including experience with modern web development frameworks, programming languages, and databases</li>
<li>Cloud-Native Technologies: Familiarity with cloud platforms (e.g., AWS, Azure, GCP) and experience developing and deploying applications in a cloud-native environment. Understanding of containerization (e.g., Docker) and container orchestration (e.g., Kubernetes) is a plus</li>
<li>Data Engineering: Knowledge of ETL (Extract, Transform, Load) processes and experience building data pipelines to integrate and process diverse data sources. Understanding of data modeling, data warehousing, and data governance principles</li>
<li>AI Application Integration: Familiarity with integrating Large Language Models (LLMs) and building agentic workflows. Understanding of prompt engineering, retrieval-augmented generation (RAG), and agent orchestration is beneficial</li>
<li>Problem Solving: Strong analytical and problem-solving skills to understand complex challenges and devise effective solutions. Ability to think critically, identify root causes, and propose innovative approaches to overcome technical obstacles</li>
<li>Collaboration and Communication: Excellent interpersonal and communication skills to collaborate effectively with cross-functional teams, stakeholders, and customers. Ability to clearly articulate technical concepts to non-technical audiences and foster a collaborative work environment</li>
<li>Adaptability and Learning Agility: Willingness to embrace new technologies, learn new skills, and adapt to evolving project requirements. Ability to quickly grasp and apply new concepts and stay up-to-date with emerging trends in software engineering</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$216,000-$311,000 USD (San Francisco, New York, Seattle) $194,400-$279,000 USD (Hawaii, Washington DC, Texas, Colorado) $162,400-$233,000 USD (St. Louis)</Salaryrange>
      <Skills>Full Stack Development, Cloud-Native Technologies, Data Engineering, AI Application Integration, Problem Solving, Collaboration and Communication, Adaptability and Learning Agility, Docker, Kubernetes, AWS, Azure, GCP, ETL, data modeling, data warehousing, data governance, Large Language Models, prompt engineering, retrieval-augmented generation, agent orchestration</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions.</Employerdescription>
      <Employerwebsite>https://www.scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4674911005</Applyto>
      <Location>San Francisco, CA; St. Louis, MO; New York, NY; Washington, DC</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>35458586-a42</externalid>
      <Title>Enterprise Architect, Finance &amp; Legal Systems</Title>
      <Description><![CDATA[<p>We are seeking an experienced Enterprise Architect to join our Technology, Data and Intelligence team. As an Enterprise Architect, you will be responsible for defining and delivering the technology architecture strategy across Finance and Legal functions, enabling data-driven decision-making, automation, and operational excellence.</p>
<p>Key responsibilities will include:</p>
<ul>
<li>Defining the target-state architecture for Finance and Legal applications, ensuring alignment with enterprise strategy and growth objectives.</li>
<li>Leading the design and implementation of end-to-end architectural solutions for Finance and Legal systems, ensuring integration, scalability, and performance across the enterprise.</li>
<li>Developing and maintaining a multi-year roadmap for modernization across ERP, FP&amp;A, Legal, and Sales Compensation systems.</li>
<li>Ensuring systems are designed with identity-first security principles, integrating with Okta and other IAM solutions for authentication, authorization, and compliance.</li>
</ul>
<p>The ideal candidate will have:</p>
<ul>
<li>15+ years of software engineering experience, including significant time as an Architect or Principal in ERP Systems (Oracle/Netsuite/SAP), FP&amp;A Systems (Anaplan) and/or CLM systems (Aptus/Conga/Ironclad).</li>
<li>Excellent storytelling and communication skills, comfortable presenting to both technical and executive stakeholders.</li>
<li>Multiple ERP (Oracle or Netsuite) full cycle implementation experience.</li>
<li>Deep understanding of the Finance business process areas – Order to Cash, Record to Report, Source to Pay, Plan to Report (FP&amp;A), Treasury, Credit Collection, Revenue Recognition, and Subscription Billing – as well as Contract Lifecycle Management within Legal Ops.</li>
<li>Demonstrated hands-on experience architecting functional and technical solutions within major business applications, with specific expertise in NetSuite (or Oracle), Aptus/Conga (or IronClad), Anaplan, Coupa, Scout, Tax engines such as Avalara, Vertex or OneSource – including understanding their data models and APIs in context of solution development and integrations.</li>
<li>Experience architecting and delivering AI agents using leading LLMs such as Gemini, OpenAI models, or Claude.</li>
<li>Experience managing software and/or vendor selection with the enterprise's end-state architecture in view.</li>
<li>Proficient understanding of middlewares such as MuleSoft, Workato, Boomi, or Informatica for connecting Finance, Legal, CRM, and data platforms.</li>
<li>Familiar with code, configuration, and system performance standards/reviews to ensure quality, scalability, and compliance with enterprise standards.</li>
<li>Proficiency with AWS, Azure, or GCP, with knowledge of data lakes/warehouses (Snowflake, Redshift, BigQuery) for SaaS revenue and compliance analytics.</li>
<li>Identity &amp; Security: knowledge of SSO, OAuth, SAML, SCIM, and Zero Trust principles, with hands-on integration experience in Okta or similar IAM platforms.</li>
</ul>
<p>In addition to the above skills and experience, the ideal candidate will be passionate about innovation, AI adoption, and continuous improvement aligned with Okta’s mission to build secure, intelligent, and connected business systems.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$150,000 - $250,000 per year</Salaryrange>
      <Skills>Enterprise Architecture, Cloud Computing, Identity and Access Management, Security, Data Analytics, Machine Learning, Artificial Intelligence, Software Development, DevOps, Agile Methodologies, AWS, Azure, GCP, Snowflake, Redshift, BigQuery, MuleSoft, Workato, Boomi, Informatica</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Okta</Employername>
      <Employerlogo>https://logos.yubhub.co/okta.com.png</Employerlogo>
      <Employerdescription>Okta is a cloud-based identity and access management company that provides secure authentication and authorisation services to organisations.</Employerdescription>
      <Employerwebsite>https://www.okta.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/okta/jobs/7442186</Applyto>
      <Location>San Francisco, California</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>0ed46937-df6</externalid>
      <Title>Staff Developer Success Engineer - West</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Staff Developer Success Engineer to join our team. As a frontline technical expert for our developer community, you will help users deploy and scale Temporal in cloud-native environments. You will also troubleshoot complex infrastructure issues, optimize performance, and develop automation solutions.</p>
<p>At Temporal, you&#39;ll work with cloud-native, highly scalable infrastructure spanning AWS, GCP, Kubernetes, and microservices. You&#39;ll gain deep expertise in container orchestration, networking, and observability while learning from complex, real-world customer use cases.</p>
<p>As a Staff Developer Success Engineer, you&#39;ll work directly with developers to debug complex infrastructure issues, optimize cloud performance, and enhance reliability for Temporal users. You&#39;ll develop observability solutions (Grafana, Prometheus), improve networking (load balancing, DNS, ingress/egress), and automate infrastructure operations (Terraform, IaC) to help customers run Temporal efficiently at scale.</p>
<p>Once ramped up, we expect you to independently drive technical solutions, whether debugging complex production issues or designing infrastructure best practices. Don&#39;t worry, we have seasoned engineers and mentors to support you along the way!</p>
<p>As a Staff Developer Success Engineer you will engage directly with developers, engineering teams, and product teams to understand infrastructure challenges and provide solutions that enhance scalability, performance, and reliability.</p>
<p>Your insights will influence platform improvements, from enhancing observability tooling to developing self-service infrastructure solutions that simplify troubleshooting (e.g., building diagnostic tools similar to Twilio’s Network Test).</p>
<p>You’ll serve as a bridge between developers and infrastructure, ensuring that reliability, performance, and developer experience remain top priorities as Temporal scales.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$170,000 - $215,000</Salaryrange>
      <Skills>cloud-native infrastructure, container orchestration, networking, observability, infrastructure automation, Terraform, IaC, Kubernetes, AWS, GCP, Python, Java, Go, Grafana, Prometheus, security certificate management, security implementation, use case analysis, Temporal design decisions, architecture best practices, EKS, GKE, OpenTracing, Ansible, CDK</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Temporal</Employername>
      <Employerlogo>https://logos.yubhub.co/temporal.io.png</Employerlogo>
      <Employerdescription>Temporal is an open source programming model that simplifies code and helps developers focus on delivering features faster.</Employerdescription>
      <Employerwebsite>https://temporal.io/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/temporaltechnologies/jobs/5076742007</Applyto>
      <Location>United States - Remote Opportunity</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>ed4bd662-c67</externalid>
      <Title>Senior Solutions Architect, Commercial - San Francisco</Title>
      <Description><![CDATA[<p>We are looking for a Senior Solutions Architect to support our Commercial Sales team in a consumption-based business where customer success drives revenue growth. You&#39;ll work across the full sales cycle, from initial technical evaluations with new prospects through helping existing customers expand their use of Temporal in production.</p>
<p>The nature of our business means you&#39;ll spend significant time helping customers who&#39;ve already adopted Temporal unlock more value by expanding into additional use cases, teams, and workloads. This is a high-velocity, technically deep role.</p>
<p>You&#39;ll partner with developers, architects, and engineering leaders at fast-moving companies to help them understand how Temporal fits into their existing architecture and prove out value through hands-on technical work.</p>
<p>You&#39;ll be working in a consumption model where usage grows over time, which means building strong technical relationships and staying engaged with accounts as they scale.</p>
<p>As an early member of a growing team, you should be comfortable with ambiguity, frequent context switching, and creating leverage through reusable assets that help the broader team move faster.</p>
<p>Must reside in San Francisco, CA</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$200,000 - $250,000 OTE</Salaryrange>
      <Skills>Strong development background with hands-on coding experience in at least one modern language (Go, Java, TypeScript, or Python), Deep understanding of distributed systems (reliability, observability, and fault tolerance), Proven experience in a pre-sales, customer-facing engineering, or solutions architecture role working with technical buyers, Exceptional time management and prioritization skills with the ability to thrive in high-volume environments, Enthusiasm for AI/ML technologies and eagerness to learn about emerging use cases in agentic workflows and LLM orchestration, Experience with workflow engines, event-driven architectures, or orchestration technologies (Temporal, Cadence, or similar), Background articulating the value of commercial SaaS offerings that compete with open source alternatives (Redis, Kafka, Databricks, etc.), Contributions to developer tooling, open source projects, or technical content, Strong cross-functional collaboration skills with the ability to serve as a technical bridge between customers and internal teams, Certifications with any of the major cloud providers (AWS, GCP, or Azure) or foundational AI model providers (OpenAI, Anthropic, or Google)</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Temporal</Employername>
      <Employerlogo>https://logos.yubhub.co/temporal.io.png</Employerlogo>
      <Employerdescription>Temporal is an open source programming model that can simplify code, make applications more reliable, and help developers focus on the important things like delivering features faster. It is growing and building the team that will make that happen.</Employerdescription>
      <Employerwebsite>https://temporal.io/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/temporaltechnologies/jobs/5037692007</Applyto>
      <Location>United States - Remote Opportunity</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>20fef61c-c3c</externalid>
      <Title>Partner Solutions Engineer, UK&amp;I</Title>
      <Description><![CDATA[<p>About Us</p>
<p>At Cloudflare, we are on a mission to help build a better Internet. Today the company runs one of the world’s largest networks that powers millions of websites and other Internet properties for customers ranging from individual bloggers to SMBs to Fortune 500 companies.</p>
<p>We protect and accelerate any Internet application online without adding hardware, installing software, or changing a line of code. Internet properties powered by Cloudflare all have web traffic routed through its intelligent global network, which gets smarter with every request. As a result, they see significant improvement in performance and a decrease in spam and other attacks.</p>
<p>Cloudflare was named to Entrepreneur Magazine’s Top Company Cultures list and ranked among the World’s Most Innovative Companies by Fast Company.</p>
<p>Our culture is built on iteration, leveraging AI to ship faster today to make it better tomorrow, while ensuring that every improvement, no matter how small, is shared across the team to lift everyone up.</p>
<p>If you’re the type of person who values curiosity over bureaucracy and sees AI as a partner in solving tough problems to keep the Internet moving forward, you’ll fit right in.</p>
<p>Available Locations: London</p>
<p>About Solutions Engineering at Cloudflare</p>
<p>The Pre-Sales Solution Engineering organization owns the technical sale of the Cloudflare solution portfolio, ensuring maximal business value, fit-for-purpose solution design and adoption roadmap for our customers. Solutions Engineering is made up of individuals from a wide range of backgrounds - from Financial Consulting to Product Management, Customer Support to Software Engineering, and we are serious about building a diverse, experienced and curious team.</p>
<p>The Partner Solutions Engineer is an experienced PreSales role within the Solutions Engineering team. Partner Solutions Engineers work closely with our partners to educate, empower, and ensure their success delivering Cloudflare security, reliability and performance solutions.</p>
<p>What you&#39;ll do as a Partner Solutions Engineer</p>
<p>Your role will be to build passionate champions within the technology ranks at your Partner accounts, aid your Partner organizations to drive sales for identified opportunities, and collaborate with your technical champions to build revenue pipeline. As the technical partner advocate within Cloudflare, you will work closely with every team at Cloudflare, from Sales and Product, through to Engineering and Customer Support.</p>
<p>You have strong experience in large Pre-Sales partner and account management as well as excellent verbal and written communications skills in English, suited for both technical and executive-level engagement. You are comfortable speaking about the Cloudflare vision and mission with all technical and non-technical audiences. Ultimately, you are passionate about technology and have the ability to explain complex technical concepts in easy-to-understand terms.</p>
<p>You are naturally curious, and an avid builder who is not afraid to get your hands dirty. You appreciate the diversity of challenges in working with partners and customers, and look forward to helping them realize the full promise of Cloudflare.</p>
<p>On the Solutions Engineering team, you will find a collaborative environment where everyone brings different strengths and jumps in to help each other. Specifically, we are looking for you to:</p>
<ul>
<li>Build and maintain long term technical relationships with our EMEA partners to increase Cloudflare’s reputation and authority within the partner solution portfolio through demonstrating value, enablement, and uncovering new areas of potential revenue</li>
<li>Drive technical solution design conversations and guide partners in EMEA through use case qualification and collaborative technical wins through demonstrations and proofs-of-concepts</li>
<li>Evangelize and represent Cloudflare through technical thought leadership and expertise</li>
<li>Be the voice of the partner internally at Cloudflare, engaging with and influencing Cloudflare’s Product and Engineering teams to meet your partner and customer needs</li>
</ul>
<p>Travel up to 40% throughout the quarter to support partner engagements, attend conferences and industry events, and to collaborate with your Cloudflare teammates</p>
<p>Examples of desirable skills, knowledge and experience:</p>
<ul>
<li>Fluency in English (verbal and written)</li>
<li>Experience managing technical sales within large partners and accounts:
<ul>
<li>Developing champion-style relationships</li>
<li>Driving technical wins</li>
<li>Assisting with technical validation</li>
</ul>
</li>
<li>Experience and expertise in one or more of the core industry components of Cloudflare solutions:
<ul>
<li>SASE concepts and Zero Trust Networking architectures</li>
<li>Networking technologies including TCP, UDP, DNS, IPv4 + IPv6, BGP routing, GRE, SD-WAN, MPLS, Global Traffic Management</li>
<li>Internet security technologies including DDoS and DDoS mitigation, Firewalls, TLS, VPN, DLP</li>
<li>Detailed understanding of workflow from user to application including hybrid architectures with Azure, AWS, GCP</li>
<li>HTTP technologies including reverse proxy (e.g., WAF and CDN), forward proxy (secure web gateway), serverless application development</li>
</ul>
</li>
</ul>
<p>What Makes Cloudflare Special?</p>
<p>We’re not just a highly ambitious, large-scale technology company. We’re a highly ambitious, large-scale technology company with a soul. Fundamental to our mission to help build a better Internet is protecting the free and open Internet.</p>
<p>Project Galileo: Since 2014, we&#39;ve equipped more than 2,400 journalism and civil society organizations in 111 countries, at no cost, with powerful tools to defend themselves against attacks that would otherwise censor their work (technology already used by Cloudflare’s enterprise customers).</p>
<p>Athenian Project: In 2017, we created the Athenian Project to ensure that state and local governments have the highest level of protection and reliability for free, so that their constituents have access to election information and voter registration. Since launching the project, we&#39;ve provided services to more than 425 local government election websites in 33 states.</p>
<p>1.1.1.1: We released 1.1.1.1 to help fix the foundation of the Internet by building a faster, more secure and privacy-centric public DNS resolver. This is available publicly for everyone to use - it is the first consumer-focused service Cloudflare has ever released.</p>
<p>Here’s the deal: we never, ever store client IP addresses. We will continue to abide by our privacy commitment and ensure that no user data is sold to advertisers or used to target consumers.</p>
<p>Sound like something you’d like to be a part of? We’d love to hear from you!</p>
<p>This position may require access to information protected under U.S. export control laws, including the U.S. Export Administration Regulations. Please note that any offer of employment may be conditioned on your authorization to receive software or technology controlled under these U.S. export laws without sponsorship for an export license.</p>
<p>Cloudflare is proud to be an equal opportunity employer. We are committed to providing equal employment opportunity for all people and place great value in both diversity and inclusiveness. All qualified applicants will be considered for employment without regard to their, or any other person&#39;s, perceived or actual race, color, religion, sex, gender, gender identity, gender expression, sexual orientation, national origin, ancestry, citizenship, age, physical or mental disability, medical condition, family care status, or any other basis protected by law.</p>
<p>We are an AA/Veterans</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Fluency in English (verbal and written), Experience managing technical sales within large partners and accounts, Developing champion-style relationships, Driving technical wins, Assisting with technical validation, SASE concepts and Zero Trust Networking architectures, Networking technologies including TCP, UDP, DNS, IPv4 + IPv6, BGP routing, GRE, SD-WAN, MPLS, Global Traffic Management, Internet security technologies including DDoS and DDoS mitigation, Firewalls, TLS, VPN, DLP, Detailed understanding of workflow from user to application including hybrid architectures with Azure, AWS, GCP, HTTP technologies including reverse proxy (e.g., WAF and CDN), forward proxy (secure web gateway), serverless application development</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Cloudflare</Employername>
      <Employerlogo>https://logos.yubhub.co/cloudflare.com.png</Employerlogo>
      <Employerdescription>Cloudflare is a technology company that helps build a better Internet by protecting and accelerating any Internet application online without adding hardware, installing software, or changing a line of code.</Employerdescription>
      <Employerwebsite>https://www.cloudflare.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/cloudflare/jobs/7210482</Applyto>
      <Location>Hybrid; In-Office</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>12eeb115-0aa</externalid>
      <Title>Staff+ Software Engineer, Systems</Title>
      <Description><![CDATA[<p><strong>About the Role</strong></p>
<p>Anthropic&#39;s Infrastructure organization is foundational to our mission of developing AI systems that are reliable, interpretable, and steerable. The systems we build determine how quickly we can train new models, how reliably we can run safety experiments, and how effectively we can scale Claude to millions of users, demonstrating that safe, reliable infrastructure and frontier capabilities can go hand in hand.</p>
<p>The Systems engineering team owns compute uptime and resilience at massive scale, building the clusters, automation, and observability that make frontier AI research possible and safely deployable to customers.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Own the technical strategy and roadmap for your area, translating team-level goals into concrete execution plans</li>
<li>Drive cross-team initiatives to build and scale AI clusters (thousands to hundreds of thousands of machines)</li>
<li>Define infrastructure architecture, ensuring the hardest problems get solved, whether by you directly or by working through others</li>
<li>Partner with cloud providers and internal stakeholders to shape long-term compute, data, and infrastructure strategy</li>
<li>Establish and evolve operational excellence practices (incident response, postmortem culture, on-call)</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>10+ years of software engineering experience</li>
<li>Led complex, multi-quarter technical initiatives that span multiple teams or systems</li>
<li>Can set technical direction for a team, not just execute within it</li>
<li>Deep expertise in distributed systems, reliability, and cloud platforms (Kubernetes, IaC, AWS/GCP)</li>
<li>Strong in at least one systems language (Python, Rust, Go, Java)</li>
<li>Naturally uplevel the engineers around you and can redirect efforts when things are heading off track</li>
<li>Build alignment across senior stakeholders and communicate effectively at all levels</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Annual Salary: $405,000-$485,000 USD</li>
<li>Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience</li>
<li>Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience</li>
<li>Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position</li>
<li>Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</li>
<li>Visa sponsorship: We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</li>
</ul>
<p><strong>How to Apply</strong></p>
<p>If you&#39;re interested in this role, please submit your application through our website. We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you&#39;re interested in this work.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$405,000-$485,000 USD</Salaryrange>
      <Skills>distributed systems, reliability, cloud platforms, Kubernetes, IaC, AWS/GCP, systems language, Python, Rust, Go, Java</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation that creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5108817008</Applyto>
      <Location>San Francisco, CA | New York City, NY | Seattle, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>90423d85-ea7</externalid>
      <Title>Senior Software Engineer - Fullstack</Title>
      <Description><![CDATA[<p>As a Full Stack software engineer, you will work with your team and product management to make insights from data simple. We are looking for engineers that are customer obsessed, who can take on the full scope of the product and user experience beyond the technical implementation. You&#39;ll set the foundation for how we build robust, scalable and delightful products.</p>
<p>Some example experiences you&#39;ll create for our customers, spanning the full project lifecycle from loading data and visualizing results to creating statistical models and deploying production artifacts, include:</p>
<ul>
<li>Simple workflows to create, configure, and manage large-scale compute clusters, networks and data sources.</li>
<li>Create, deploy, test, and upgrade complex data pipelines with powerful features to visualize data graphs.</li>
<li>Seamless onboarding and management for all members of an organisation to become data-driven.</li>
<li>Provide a great SQL-centric data exploration and dashboarding experience on Databricks.</li>
<li>An interactive environment for collaborative data projects at massive scale with an easy path to production.</li>
</ul>
<p>We are looking for engineers with 5+ years of experience with HTML, CSS, and JavaScript, passion for user experience and design, and a deep understanding of front-end architecture. You should be comfortable working towards a multi-year vision with incremental deliverables, motivated by delivering customer value, and experienced with modern JavaScript frameworks and server-side web technologies.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$166,000-$225,000 USD</Salaryrange>
      <Skills>HTML, CSS, JavaScript, SQL, Cloud technologies (AWS, Azure, GCP, Docker, or Kubernetes), Modern JavaScript frameworks (React, Angular, or VueJs/Ember), Server-side web technologies (Node.js, Java, Python, Scala, C#, C++, Go)</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks builds and runs the world&apos;s best Data Intelligence Platform, serving over 10,000 organisations worldwide.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/5445641002</Applyto>
      <Location>Mountain View, California; San Francisco, California</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>ad10ca72-6ab</externalid>
      <Title>Staff Security Engineer, IAM</Title>
      <Description><![CDATA[<p>Ready to be pushed beyond what you think you’re capable of?</p>
<p>At Coinbase, our mission is to increase economic freedom in the world.</p>
<p>We&#39;re seeking a Staff Security Engineer, IAM to lead the architectural vision and security engineering execution for Coinbase’s Identity and Access Management (IAM) and workforce security platforms.</p>
<p>As a senior technical leader within the IAM program, you will partner with Engineering, IT, Platform, and business teams to architect and deliver identity solutions that balance zero-trust security with workforce enablement, reduce insider risk, and satisfy global regulatory requirements.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Lead the architectural vision and security engineering execution for Coinbase’s IAM and workforce security platforms</li>
<li>Evaluate, design, and implement “build, buy, or hybrid” strategies for workforce Identity Governance and Administration (IGA)</li>
<li>Write high-quality code to build scalable automation, custom integrations, and self-service guardrails</li>
<li>Conduct comprehensive threat modeling and security architecture reviews for foundational identity systems and critical SaaS integrations</li>
<li>Partner with Engineering, IT, HR, AI/ML, and Product teams to align security initiatives with business goals</li>
</ul>
<p>Requirements:</p>
<ul>
<li>7+ years of proven experience in software engineering, security engineering, or systems architecture</li>
<li>Proficient in at least one programming language (e.g., Python, Go)</li>
<li>Demonstrated track record of successfully implementing complex hybrid IAM infrastructures</li>
<li>Deep operational and architectural understanding of Identity Governance and Administration (IGA) processes</li>
<li>Extensive expertise in modern identity protocols (SAML, OAuth2, OIDC, SCIM), cloud IAM (AWS and GCP), and dynamic access control frameworks (RBAC, ABAC, ReBAC)</li>
<li>Strong background in applied risk management, automated threat modeling, and zero-trust architecture principles</li>
</ul>
<p>Nice to Have:</p>
<ul>
<li>Experience operating in a hyper-growth tech, FinTech, or crypto environment</li>
<li>Experience governing non-FTE workforce populations at scale</li>
<li>Hands-on experience with Policy-as-Code paradigms and integrating machine learning to automate policy generation</li>
</ul>
<p>Pay Transparency Notice: The target annual base salary for this position can range from $218,025 to $256,500 USD.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$218,025-$256,500 USD</Salaryrange>
      <Skills>Identity and Access Management, Security Engineering, Software Engineering, Systems Architecture, Python, Go, SAML, OAuth2, OIDC, SCIM, AWS, GCP, RBAC, ABAC, ReBAC</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Coinbase</Employername>
      <Employerlogo>https://logos.yubhub.co/coinbase.com.png</Employerlogo>
      <Employerdescription>Coinbase is a cryptocurrency exchange and wallet service provider.</Employerdescription>
      <Employerwebsite>https://www.coinbase.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coinbase/jobs/7763274</Applyto>
      <Location>Remote - USA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>fe04c8cc-782</externalid>
      <Title>Forward Deployed Engineering Manager</Title>
      <Description><![CDATA[<p>Shape the Future of AI</p>
<p>At Labelbox, we&#39;re building the critical infrastructure that powers breakthrough AI models at leading research labs and enterprises. Since 2018, we&#39;ve been pioneering data-centric approaches that are fundamental to AI development, and our work becomes even more essential as AI capabilities expand exponentially.</p>
<p>We&#39;re the only company offering three integrated solutions for frontier AI development:</p>
<p>Enterprise Platform &amp; Tools: Advanced annotation tools, workflow automation, and quality control systems that enable teams to produce high-quality training data at scale</p>
<p>Frontier Data Labeling Service: Specialized data labeling through Alignerr, leveraging subject matter experts for next-generation AI models</p>
<p>Expert Marketplace: Connecting AI teams with highly skilled annotators and domain experts for flexible scaling</p>
<p>Why Join Us</p>
<p>High-Impact Environment: We operate like an early-stage startup, focusing on impact over process. You&#39;ll take on expanded responsibilities quickly, with career growth directly tied to your contributions.</p>
<p>Technical Excellence: Work at the cutting edge of AI development, collaborating with industry leaders and shaping the future of artificial intelligence.</p>
<p>Innovation at Speed: We celebrate those who take ownership, move fast, and deliver impact. Our environment rewards high agency and rapid execution.</p>
<p>Continuous Growth: Every role requires continuous learning and evolution. You&#39;ll be surrounded by curious minds solving complex problems at the frontier of AI.</p>
<p>Clear Ownership: You&#39;ll know exactly what you&#39;re responsible for and have the autonomy to execute. We empower people to drive results through clear ownership and metrics.</p>
<p>The role</p>
<p>We’re hiring a Forward Deployed Engineering Manager to lead the design, development, and delivery of reinforcement learning environments for agentic AI systems.</p>
<p>You’ll manage a team responsible for building sandboxed, reproducible environments (terminal-based workflows, browser automation, and computer-use simulations) that power both model training and human-in-the-loop evaluation. This is a hands-on leadership role where you’ll set technical direction, guide execution, and stay close to architecture and critical systems.</p>
<p>What You’ll Do</p>
<p>Lead, hire, and develop a high-performing team of Forward Deployed Engineers, setting a high bar for ownership, velocity, and technical quality</p>
<p>Own the RL environment roadmap, aligning team execution with customer needs and evolving model capabilities</p>
<p>Oversee development of sandboxed environments (terminal, browser, tool-augmented workspaces) that support deterministic execution and multi-step agent interaction</p>
<p>Ensure reliability, observability, and data integrity through strong instrumentation (logging, trajectory capture, state snapshotting)</p>
<p>Drive infrastructure excellence across containerization, sandboxing, CI/CD, automated testing, and monitoring</p>
<p>Partner cross-functionally with data operations, product, and leading AI labs to define task design, evaluation protocols, and environment requirements</p>
<p>Enable rapid prototyping and iteration, helping the team move from ambiguous requirements to production-ready systems quickly</p>
<p>Stay close to the technical details, reviewing architecture, unblocking complex issues, and guiding design decisions</p>
<p>What We’re Looking For</p>
<p>5+ years of software engineering experience (Python)</p>
<p>2+ years of experience managing or leading engineers in fast-paced environments</p>
<p>Strong experience with containerization and sandboxing (Docker, Firecracker, or similar)</p>
<p>Solid understanding of reinforcement learning fundamentals (MDPs, reward design, episode structure, observation/action spaces)</p>
<p>Background in infrastructure, developer tooling, or distributed systems</p>
<p>Strong debugging skills and systems thinking across layered, containerized environments</p>
<p>Ability to operate in ambiguity and translate loosely defined problems into clear execution plans</p>
<p>Excellent communication and stakeholder management skills</p>
<p>Preferred</p>
<p>Experience building or working with RL environments (Gym, PettingZoo) or agent benchmarks (SWE-bench, WebArena, OSWorld, TerminalBench)</p>
<p>Familiarity with cloud infrastructure (GCP or AWS)</p>
<p>Prior experience in AI/ML platforms, data companies, or research environments</p>
<p>Contributions to open-source projects in RL, agents, or developer tooling</p>
<p>Why This Role Matters</p>
<p>RL environment quality is a critical bottleneck in advancing agentic AI. Poorly designed or unreliable environments introduce noise into training loops and directly impact model performance.</p>
<p>In this role, you’ll lead the team building the environments that define how models learn, working across a range of cutting-edge projects with leading AI labs. Alignerr offers the speed and ownership of a startup with the scale and resources of Labelbox, giving you the opportunity to have outsized impact on the future of AI.</p>
<p>About Alignerr</p>
<p>Alignerr is Labelbox’s human data organization, powering next-generation AI through high-quality training data, reinforcement learning environments, and evaluation systems. We partner directly with leading AI labs to build the data and infrastructure that push model capabilities forward.</p>
<p>Life at Labelbox</p>
<p>Location: Join our dedicated tech hubs in San Francisco or Wrocław, Poland</p>
<p>Work Style: Hybrid model with 2 days per week in office, combining collaboration and flexibility</p>
<p>Environment: Fast-paced and high-intensity, perfect for ambitious individuals who thrive on ownership and quick decision-making</p>
<p>Growth: Career advancement opportunities directly tied to your impact</p>
<p>Vision: Be part of building the foundation for humanity&#39;s most transformative technology</p>
<p>Our Vision</p>
<p>We believe data will remain crucial in achieving artificial general intelligence. As AI models become more sophisticated, the need for high-quality, specialized training data will only grow. Join us in developing new products and services that enable the next generation of AI breakthroughs.</p>
<p>Labelbox is backed by leading investors including SoftBank, Andreessen Horowitz, B Capital, Gradient Ventures, Databricks Ventures, and Kleiner Perkins. Our customers include Fortune 500 enterprises and leading AI labs.</p>
<p>Any emails from Labelbox team members will originate from a @labelbox.com email address. If you encounter anything that raises suspicions during your interactions, we encourage you to exercise caution and suspend or discontinue communications.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$180,000-$220,000 USD</Salaryrange>
      <Skills>Software engineering experience (Python), Containerization and sandboxing (Docker, Firecracker, or similar), Reinforcement learning fundamentals (MDPs, reward design, episode structure, observation/action spaces), Infrastructure, developer tooling, or distributed systems, Debugging skills and systems thinking, Experience building or working with RL environments (Gym, PettingZoo) or agent benchmarks (SWE-bench, WebArena, OSWorld, TerminalBench), Familiarity with cloud infrastructure (GCP or AWS), Prior experience in AI/ML platforms, data companies, or research environments, Contributions to open-source projects in RL, agents, or developer tooling</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Labelbox</Employername>
      <Employerlogo>https://logos.yubhub.co/labelbox.com.png</Employerlogo>
      <Employerdescription>Labelbox is a data-centric AI development company that provides critical infrastructure for breakthrough AI models.</Employerdescription>
      <Employerwebsite>https://www.labelbox.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/labelbox/jobs/5101195007</Applyto>
      <Location>San Francisco Bay Area</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>5ceb4835-0f1</externalid>
      <Title>Manager, Professional Services</Title>
<Description><![CDATA[<p>As a Manager, Professional Services, you will work with clients on short- to medium-term customer engagements on their big data challenges using the Databricks platform. You will provide data engineering, data science, and cloud technology projects which require integrating with client systems, training, and other technical tasks to help customers get the most value out of their data.</p>
<p>The impact you will have:</p>
<ul>
<li>You will work on a variety of impactful customer technical big data projects which may include building reference architectures, how-to&#39;s, and production-grade MVPs.</li>
<li>Guide strategic customers as they implement transformational big data projects, 3rd party migrations, including end-to-end design, build, and deployment of industry-leading big data and AI applications.</li>
<li>Consult on architecture and design; bootstrap or implement strategic customer projects which lead to a customer&#39;s successful understanding, evaluation, and adoption of Databricks.</li>
<li>Work with Engineering and Databricks Customer Support to provide product and implementation feedback and to guide rapid resolution for engagement-specific product and support issues.</li>
</ul>
<p>What we look for:</p>
<ul>
<li>10+ years of experience with Big Data Technologies such as Apache Spark, Kafka, Cloud Native, and Data Lakes in a customer-facing post-sales, technical architecture, or consulting role.</li>
<li>4+ years of people management experience, managing a team of Data Engineers, Data Architects, etc.</li>
<li>6+ years of experience working on Big Data Architectures independently.</li>
<li>Experience working across Cloud Platforms (GCP/AWS/Azure).</li>
<li>Experience working on Databricks platform is a plus.</li>
<li>Documentation and white-boarding skills.</li>
<li>Ability to build skills in technical areas that support the deployment and integration of Databricks-based solutions to complete customer projects.</li>
<li>Willingness to travel for onsite customer engagements within India.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Apache Spark, Kafka, Cloud Native, Data Lakes, Big Data Technologies, Data Engineering, Data Science, Cloud Technology, People Management, Team Leadership, Databricks, GCP, AWS, Azure, Documentation, White-boarding</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified and democratized data, analytics, and AI platform to over 10,000 organizations worldwide.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8503068002</Applyto>
      <Location>Remote - India</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>e948a283-667</externalid>
      <Title>Staff Software Engineer, Platform Security</Title>
      <Description><![CDATA[<p>We are seeking a Staff Software Engineer to join our Platform Security Engineering team. As a key member of this team, you will be responsible for advancing our mission through security expertise, software development, and operational excellence.</p>
<p>In this technical leadership role, you will articulate and pursue the most leveraged opportunities to reduce security risk across Engineering, designing and building lovable &#39;paved paths&#39; for managing identities and access, shipping code, configuring cloud infrastructure, and operating services.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Developing and applying best-in-class secure baselines for cloud infrastructure</li>
<li>Securing first- and third-party software supply chains, from the dev environment through CI/CD and into production</li>
<li>Building and owning identity and access management (IAM) systems that are user-friendly and promote least privilege</li>
<li>Managing infrastructure vulnerabilities while supporting rapid growth for Engineering</li>
<li>Consulting on risk assessments, architectural designs, threat models, code reviews, and more, pragmatically balancing security with other business considerations</li>
</ul>
<p>Example projects include:</p>
<ul>
<li>Supporting IAM with scalable platform solutions</li>
<li>Building tooling to prevent and address vulnerabilities across our infrastructure</li>
<li>Integrating service-to-service authentication and authorization into Discord&#39;s internal developer platform</li>
</ul>
<p>What we look for in a candidate includes:</p>
<ul>
<li>5+ years of experience building and operating production systems or infrastructure</li>
<li>5+ years of experience writing software in a general-purpose programming language</li>
<li>4+ years of experience securing systems with millions of users</li>
<li>Experience mentoring junior ICs and leading technical projects involving multiple engineers and spanning multiple quarters</li>
<li>Experience designing and building software for customers (internal or external) beyond your immediate team</li>
<li>Experience securing cloud environments</li>
<li>Experience defining and orchestrating containers</li>
<li>Familiarity with build and CI/CD technologies</li>
<li>Understanding of modern authentication and authorization concepts</li>
</ul>
<p>Bonus points if you have experience developing and debugging distributed systems atop GCP and Cloudflare, leading complex migrations or risk management programs across an engineering organization, or managing and securing VMs or bare-metal hosts.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$248,000 to $279,000 + equity + benefits</Salaryrange>
      <Skills>cloud infrastructure, identity and access management, software development, operational excellence, security expertise, container orchestration, build and CI/CD technologies, modern authentication and authorization concepts, distributed systems, GCP and Cloudflare</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Discord</Employername>
      <Employerlogo>https://logos.yubhub.co/discord.com.png</Employerlogo>
      <Employerdescription>Discord is a communication platform used by over 200 million people every month for various purposes, including gaming.</Employerdescription>
      <Employerwebsite>https://discord.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/discord/jobs/8177912002</Applyto>
      <Location>San Francisco Bay Area or Remote</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>3f48b4f4-789</externalid>
      <Title>Manager, Detection &amp; Incident Response</Title>
      <Description><![CDATA[<p>We&#39;re seeking a skilled and detail-oriented technical leader to own the day-to-day operations of our Detection and Incident Response team. You&#39;ll be responsible for driving our SIEM and SOAR capabilities and incident response program, partnering with teams throughout Squarespace to improve how we spot and respond to threats.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Contributing to the definition, collection, and analysis of security KPIs and KRIs for the security organization.</li>
<li>Developing and implementing a comprehensive detection and response strategy and roadmap aligned with Squarespace&#39;s overall business objectives and risk appetite.</li>
<li>Overseeing the Security Operations Center (SOC) activities, including threat detection, monitoring, analysis, and proactive hunting.</li>
<li>Owning the health and effectiveness of the SIEM and SOAR platforms, ensuring high-quality data ingestion, alert tuning, and automated response logic.</li>
<li>Establishing and maintaining a robust incident response program, including defining incident playbooks, leading major incident investigations, and conducting post-incident reviews to drive continuous improvement.</li>
<li>Designing and leading regular tabletop exercises to test the organization&#39;s readiness for various incident scenarios.</li>
<li>Serving as the Incident Commander for major security events, coordinating with teams such as Legal, Communications, and HR to ensure clear internal communication and regulatory compliance.</li>
<li>Identifying, evaluating, and implementing new security technologies and tools to enhance detection, prevention, and response capabilities.</li>
<li>Driving continuous improvement of security operations processes through automation, tooling, and best practices.</li>
<li>Staying abreast of emerging security threats, vulnerabilities, and industry trends and proactively advising leadership on necessary adjustments to strengthen Squarespace&#39;s security posture.</li>
<li>Building, mentoring, and leading a high-performing team of security professionals, fostering a culture of continuous learning, collaboration, and accountability.</li>
<li>Acting as a key liaison and trusted advisor to internal stakeholders on security-related matters.</li>
<li>Managing relationships with external security vendors and partners, ensuring effective service delivery and technology adoption.</li>
</ul>
<p>Requirements include:</p>
<ul>
<li>A bachelor&#39;s degree in Computer Science, Information Security, or a related field (or equivalent experience).</li>
<li>7+ years of experience in cybersecurity, with at least 2 years in a leadership or team-lead role.</li>
<li>Deep expertise in Incident Response and Detection Engineering.</li>
<li>Strong knowledge of cloud security operations, specifically within AWS or GCP environments.</li>
<li>Hands-on experience managing and tuning SIEM and SOAR platforms.</li>
<li>Experience automating security workflows and incident response playbooks to reduce manual effort.</li>
<li>Familiarity with security frameworks such as MITRE ATT&amp;CK and NIST.</li>
<li>Excellent communication skills with the ability to lead technical teams during high-pressure incidents and explain complex threats to non-technical stakeholders.</li>
<li>Knowledge of software development, design, and technical operations.</li>
</ul>
<p>Benefits include:</p>
<ul>
<li>Health insurance with 100% covered premiums for you, your spouse or partner, and your dependent children.</li>
<li>Life and income protection.</li>
<li>Fertility and adoption benefits.</li>
<li>Headspace mindfulness app subscription.</li>
<li>Global Employee Assistance Program.</li>
<li>Pension benefits with employer match.</li>
<li>Flexible paid time off.</li>
<li>26 weeks paid maternity leave and 12 weeks paid paternity leave.</li>
<li>2 weeks paid family care leave.</li>
<li>Education reimbursement.</li>
<li>Employee donation match to community organizations.</li>
<li>7 Global Employee Resource Groups (ERGs).</li>
<li>Free lunch and snacks.</li>
<li>Close proximity to cultural landmarks such as Dublin Castle and St. Patrick&#39;s Cathedral.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>SIEM, SOAR, Incident Response, Cloud Security Operations, AWS, GCP, Security Frameworks, MITRE ATT&amp;CK, NIST, Software Development, Design, Technical Operations</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Squarespace</Employername>
      <Employerlogo>https://logos.yubhub.co/squarespace.com.png</Employerlogo>
      <Employerdescription>Squarespace is a design-driven platform helping entrepreneurs build brands and businesses online. It has a team of over 1,700 employees and is headquartered in New York City, with offices in Dublin, Ireland, and Aveiro, Portugal.</Employerdescription>
      <Employerwebsite>https://www.squarespace.com/about/careers</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/squarespace/jobs/7773251</Applyto>
      <Location>Dublin</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>0036f074-845</externalid>
      <Title>Resident Solutions Architect - Financial Services</Title>
<Description><![CDATA[<p>As a Senior Big Data Solutions Architect (Sr Resident Solutions Architect) in our Professional Services team, you will work with clients on short- to medium-term customer engagements on their big data challenges using the Databricks platform.</p>
<p>You will provide data engineering, data science, and cloud technology projects which require integrating with client systems, training, and other technical tasks to help customers get the most value out of their data.</p>
<p>RSAs are billable and deliver projects to specification with excellent customer service.</p>
<p>You will report to the regional Manager/Lead.</p>
<p>The impact you will have:</p>
<ul>
<li>Work on a variety of impactful customer technical projects which may include designing and building reference architectures, creating how-to&#39;s and productionalizing customer use cases</li>
<li>Work with engagement managers to scope a variety of professional services work with input from the customer</li>
<li>Guide strategic customers as they implement transformational big data projects, 3rd party migrations, including end-to-end design, build and deployment of industry-leading big data and AI applications</li>
<li>Consult on architecture and design; bootstrap hands-on projects which lead to a customer&#39;s successful understanding, evaluation, and adoption of Databricks.</li>
<li>Provide an escalated level of support for customer operational issues.</li>
<li>Work with the Databricks technical team, Project Manager, Architect and Customer team to ensure the technical components of the engagement are delivered to meet customer&#39;s needs.</li>
<li>Work with Engineering and Databricks Customer Support to provide product and implementation feedback and to guide rapid resolution for engagement specific product and support issues.</li>
</ul>
<p>What we look for:</p>
<ul>
<li>9+ years of experience in data engineering, data platforms &amp; analytics</li>
<li>Comfortable writing code in either Python or Scala</li>
<li>Working knowledge of two or more common Cloud ecosystems (AWS, Azure, GCP) with expertise in at least one</li>
<li>Deep experience with distributed computing with Apache Spark™ and knowledge of Apache Spark™ runtime internals</li>
<li>Familiarity with CI/CD for production deployments</li>
<li>Working knowledge of MLOps</li>
<li>Capable of design and deployment of highly performant end-to-end data architectures</li>
<li>Experience with technical project delivery - managing scope and timelines.</li>
<li>Documentation and white-boarding skills.</li>
<li>Experience working with clients and managing conflicts.</li>
<li>Experience in building scalable streaming and batch solutions using cloud-native components</li>
<li>Travel to customers up to 20% of the time</li>
</ul>
<p>Nice to have: Databricks Certification</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$180,656-$248,360 USD</Salaryrange>
      <Skills>data engineering, data platforms &amp; analytics, Python, Scala, Cloud ecosystems (AWS, Azure, GCP), Apache Spark, CI/CD for production deployments, MLOps, design and deployment of highly performant end-to-end data architectures, technical project delivery, documentation and white-boarding skills, client management</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data, analytics, and AI.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8456966002</Applyto>
      <Location>Boston, Massachusetts</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>8317ba42-502</externalid>
      <Title>Senior Technical Solutions Engineer (Platform)</Title>
<Description><![CDATA[<p>We are seeking a highly skilled Frontline Senior Technical Solutions Engineer with 7+ years of experience to join our Platform Support team.</p>
<p>This role is pivotal in delivering exceptional support for our Databricks Data Intelligence platform, addressing complex technical challenges, and ensuring the seamless operation of our data solutions.</p>
<p>As a frontline engineer, you will be the primary point of contact for critical issues, working closely with both internal teams and customers to resolve high-impact problems and drive platform improvements.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Frontline Support: Serve as the primary technical point of contact for escalated issues related to the Databricks Data Intelligence platform. Provide expert-level troubleshooting, diagnostics, and resolution for complex problems affecting system performance and reliability.</li>
<li>Customer Interaction: Engage with customers directly to understand their technical issues and requirements. Provide timely, clear, and actionable solutions to ensure high levels of customer satisfaction.</li>
<li>Incident Management: Lead the resolution of high-priority incidents, coordinating with various teams to address and mitigate issues swiftly. Conduct thorough root cause analyses and develop preventive measures to avoid recurrence.</li>
<li>Collaboration: Work closely with engineering, product management, and DevOps teams to share insights, identify recurring issues, and drive improvements to the Databricks Data Intelligence platform.</li>
<li>Documentation and Knowledge Sharing: Create and maintain detailed documentation on support procedures, known issues, and solutions. Contribute to internal knowledge bases and create training materials to assist other support engineers.</li>
<li>Performance Monitoring: Monitor and analyze platform performance metrics to identify potential issues before they impact customers. Implement optimizations and enhancements to improve platform stability and efficiency.</li>
<li>Platform Upgrades: Manage and oversee the deployment of Databricks Data Intelligence platform upgrades and patches, ensuring minimal disruption to services and maintaining system integrity.</li>
<li>Innovation and Improvement: Stay abreast of industry trends and advancements in Databricks technology. Propose and drive initiatives to enhance platform capabilities and support processes.</li>
<li>Customer Feedback: Collect and analyze customer feedback to drive continuous improvement in support processes and platform features.</li>
</ul>
<p>Qualifications:</p>
<ul>
<li>Experience: 7+ years of hands-on experience in a technical support or engineering role related to the Databricks Data Intelligence platform, cloud data platforms, or big data technologies.</li>
<li>Technical Skills: A deep understanding of Databricks architecture and Apache Spark, along with experience in cloud platforms like AWS, Azure, or GCP, is essential. Strong capabilities in designing and managing data pipelines and distributed computing workloads are required. Proficiency in Unix/Linux administration, familiarity with DevOps practices, and skills in log analysis and monitoring tools are also crucial for effective troubleshooting and system optimization.</li>
<li>Problem-Solving: Demonstrated ability to diagnose and resolve complex technical issues with a strong analytical and methodical approach.</li>
<li>Communication: Exceptional verbal and written communication skills, with the ability to effectively convey technical information to both technical and non-technical stakeholders.</li>
<li>Customer Focus: Proven experience in managing high-impact customer interactions and ensuring a positive customer experience.</li>
<li>Collaboration: Ability to work effectively in a team environment, collaborating with engineering, product, and customer-facing teams.</li>
<li>Education: Bachelor’s degree in Computer Science, Engineering, or a related field. Advanced degree or relevant certifications are highly desirable.</li>
</ul>
<p>Preferred Skills:</p>
<ul>
<li>Experience with additional big data tools and technologies such as Hadoop, Kafka, or NoSQL databases.</li>
<li>Familiarity with automation tools and CI/CD pipelines.</li>
<li>Understanding of data governance and compliance requirements.</li>
</ul>
<p>Why Join Us?</p>
<ul>
<li>Innovative Environment: Work with cutting-edge technology in a fast-paced, innovative company.</li>
<li>Career Growth: Opportunities for professional development and career advancement.</li>
<li>Team Culture: Collaborate with a talented and motivated team dedicated to excellence and continuous improvement.</li>
</ul>
<p>PLEASE NOTE: THE ROLE INVOLVES WORKING IN THE EMEA TIMEZONE</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Databricks architecture, Apache Spark, AWS, Azure, GCP, Unix/Linux administration, DevOps practices, log analysis and monitoring tools, Hadoop, Kafka, NoSQL databases, automation tools, CI/CD pipelines, data governance and compliance requirements</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data, analytics, and AI.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8041698002</Applyto>
      <Location>Bengaluru, India</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>0a7cad02-cd5</externalid>
      <Title>Resident Solutions Architect - Manufacturing</Title>
<Description><![CDATA[<p>As a Resident Solutions Architect (RSA) on our Professional Services team, you will work with customers on short- to medium-term customer engagements on their big data challenges using the Databricks platform.</p>
<p>You will provide data engineering, data science, and cloud technology projects which require integrating with client systems, training, and other technical tasks to help customers get the most value out of their data.</p>
<p>RSAs are billable and deliver projects to specification with excellent customer service.</p>
<p>The impact you will have:</p>
<ul>
<li>Handle a variety of impactful customer technical projects which may include designing and building reference architectures, creating how-to&#39;s and productionalizing customer use cases</li>
<li>Work with engagement managers to scope a variety of professional services work with input from the customer</li>
<li>Guide strategic customers as they implement transformational big data projects, 3rd party migrations, including end-to-end design, build and deployment of industry-leading big data and AI applications</li>
<li>Consult on architecture and design; bootstrap or implement customer projects which lead to a customer&#39;s successful understanding, evaluation and adoption of Databricks.</li>
<li>Provide an escalated level of support for customer operational issues</li>
<li>Collaborate with the Databricks Technical, Project Manager, Architect and Customer teams to ensure the technical components of the engagement are delivered to meet the customer&#39;s needs</li>
<li>Work with Engineering and Databricks Customer Support to provide product and implementation feedback and to guide rapid resolution for engagement-specific product and support issues</li>
</ul>
<p>What we look for:</p>
<ul>
<li>6+ years of experience in data engineering, data platforms &amp; analytics</li>
<li>Comfortable writing code in either Python or Scala</li>
<li>Working knowledge of two or more common Cloud ecosystems (AWS, Azure, GCP) with expertise in at least one</li>
<li>Deep experience with distributed computing with Apache Spark™ and knowledge of Spark runtime internals</li>
<li>Familiarity with CI/CD for production deployments</li>
<li>Working knowledge of MLOps</li>
<li>Design and deployment of performant end-to-end data architectures</li>
<li>Experience with technical project delivery - managing scope and timelines</li>
<li>Documentation and white-boarding skills</li>
<li>Experience working with clients and managing conflicts</li>
<li>Ability to build skills in technical areas that support the deployment and integration of Databricks-based solutions to complete customer projects</li>
<li>Ability to travel up to 30% when needed</li>
</ul>
<p>Pay Range Transparency</p>
<p>Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role is listed below and represents the expected base salary range for non-commissionable roles or on-target earnings for commissionable roles.</p>
<p>Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location.</p>
<p>Based on the factors above, Databricks anticipates utilizing the full width of the range.</p>
<p>The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above.</p>
<p>For more information regarding which range your location is in, visit our compensation page.</p>
<p>Zone 1 Pay Range $180,656-$248,360 USD</p>
<p>Zone 2 Pay Range $180,656-$248,360 USD</p>
<p>Zone 3 Pay Range $180,656-$248,360 USD</p>
<p>Zone 4 Pay Range $180,656-$248,360 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$180,656-$248,360 USD</Salaryrange>
      <Skills>data engineering, data science, cloud technology, Apache Spark, CI/CD, MLOps, distributed computing, Python, Scala, AWS, Azure, GCP</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data, analytics, and AI.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8494155002</Applyto>
      <Location>Philadelphia, Pennsylvania</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>5cad560f-dc3</externalid>
      <Title>Engineering Manager, Cloud Networking (Brazil)</Title>
      <Description><![CDATA[<p>You will join Airbnb, a mission-driven company dedicated to helping create a world where anyone can belong anywhere. As the first network engineering lead in Airbnb&#39;s Brazil office, you will be responsible for bootstrapping and growing the networking team in our new São Paulo office.</p>
<p>Your primary focus will be on delivering an Airbnb network platform that is flexible, efficient, always available, and scales with the needs of the business. You will work closely with peers across Cloud Infra, Security, Reliability, and many other partner teams across the company to achieve this goal.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Providing meaningful input to technical designs and direct hands-on contributions to projects in the cloud networking space</li>
<li>Growing, leading, and managing a small team of talented engineers</li>
<li>Supporting your team&#39;s professional growth and maintaining high performance through mentorship and coaching</li>
<li>Working with tech leads, peers, and partners to define and execute on a coherent vision and roadmap for Airbnb&#39;s cloud network infrastructure and related components</li>
<li>Working with open source communities (e.g. istio) to build the next generation service mesh for all Airbnb back-end services</li>
<li>Building cross-region gateways and load balancers for global Airbnb services</li>
<li>Working with external partners and internal engineering and security teams to deliver edge security systems that protect Airbnb services</li>
<li>Nurturing a culture of technical quality from design, through code review, to production</li>
<li>Building strong partnership and alignment with teams across engineering</li>
<li>Nurturing relationships with open source communities and external service partners</li>
</ul>
<p>As a successful candidate, you will have a strong background in engineering management, with 2+ years of experience and 8+ years of relevant software development experience in a fast-paced tech environment. You will also have experience with a public cloud provider (AWS, GCP, Azure) and their networking service offerings, as well as experience running large-scale networking systems and software (e.g. proxies, DNS, gateways).</p>
<p>Additionally, you will have excellent communication skills and the ability to work well with teams across the engineering organization (e.g. reliability, compute, security, etc.). You will also have strong problem-solving skills and experience leading teams on-call for production infrastructure.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Professional fluency in English, 2+ years of engineering management experience, 8+ years of relevant software development experience in a fast-paced tech environment, Experience with a public cloud provider (AWS, GCP, Azure) and their networking service offerings, Experience running large-scale networking systems and software (e.g. proxies, DNS, gateways), Experience with Istio service mesh, k8s and cloud native technologies, Excellent communication skills and the ability to work well with teams across the engineering organization, Strong problem-solving skills and experience leading teams on-call for production infrastructure, Experience with open source communities (e.g. istio), Experience building cross-region gateways and load balancers for global services, Experience working with external partners and internal engineering and security teams to deliver edge security systems</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Airbnb</Employername>
      <Employerlogo>https://logos.yubhub.co/airbnb.com.png</Employerlogo>
      <Employerdescription>Airbnb is a company that allows users to book unique stays and experiences in almost every country across the globe. It has grown to over 5 million hosts who have welcomed over 2 billion guest arrivals.</Employerdescription>
      <Employerwebsite>https://www.airbnb.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/airbnb/jobs/7381450</Applyto>
      <Location>Brazil</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>7772798b-532</externalid>
      <Title>Staff Software Engineer - Java( Backend Architect )</Title>
      <Description><![CDATA[<p>Secure Every Identity, from AI to Human. Identity is the key to unlocking the potential of AI. Okta secures AI by building the trusted, neutral infrastructure that enables organisations to safely embrace this new era.</p>
<p>We are looking for builders and owners who operate with speed and urgency and execute with excellence. This is an opportunity to do career-defining work. We&#39;re all in on this mission. If you are too, let&#39;s talk.</p>
<p>The Okta platform provides directory services, single sign-on, strong authentication, provisioning, workflow, and built-in reporting. It runs in the cloud on a secure, reliable, extensively audited platform and integrates deeply with on-premises applications, directories, and identity management systems.</p>
<p>We are looking for an experienced Staff Software Engineer to work on our Advanced Apps team, with a focus on enhancing and managing connectors to SaaS applications (e.g., Workday, Salesforce, GCP, AWS). You will work closely with the Lifecycle Management (LCM) team, which provides a platform for automating Joiner, Mover, Leaver processes. The connectors give customers the flexibility to import and provision identities and entitlements to their SaaS applications. In this role, you will design, build, and maintain our connectors to match each application&#39;s features and scale.</p>
<p>Job Duties and Responsibilities:</p>
<ul>
<li>Work with the senior engineering team on major development projects, from design through implementation</li>
<li>Interface with cross-functional teams (Architects, QA, Product, Technical Support, Documentation, and UX teams) to understand application specific protocols and build connectors</li>
<li>Analyze and refine requirements with Product Management</li>
<li>Prototype quickly to validate scale and performance</li>
<li>Design &amp; Implement features with functional and unit tests along with monitoring and alerts</li>
<li>Conduct code reviews, analysis and performance tuning</li>
<li>Work with QA team to outline and implement comprehensive test coverage for application specific features</li>
<li>Troubleshooting and support for customer issues and debugging from logs (Splunk, Syslogs, etc.)</li>
<li>Provide technical leadership and mentorship to more junior engineers</li>
</ul>
<p>Required knowledge, skills, and abilities:</p>
<ul>
<li>The ideal candidate is experienced in building software systems to manage and deploy reliable and performant infrastructure and product code at scale on cloud infrastructure</li>
<li>8+ years of software development in Java, preferably with significant experience in SCIM and Spring Boot</li>
<li>5+ years of development experience building services, internal tools and frameworks</li>
<li>2+ years experience automating and deploying large scale production services in AWS, GCP or similar.</li>
<li>Deep understanding of infrastructure level technologies: caching, stream processing, resilient architectures</li>
<li>Experience with RESTful and SOAP APIs</li>
<li>Ability to work effectively with distributed teams and people of various backgrounds</li>
<li>Ability to lead and mentor junior engineers</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Java, SCIM, Spring Boot, AWS, GCP, RESTful APIs, SOAP APIs, Caching, Stream processing, Resilient architectures</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Okta</Employername>
      <Employerlogo>https://logos.yubhub.co/okta.com.png</Employerlogo>
      <Employerdescription>Okta is the leading independent provider of enterprise identity. The company provides a platform for organisations to securely connect the right people to the right technologies at the right time.</Employerdescription>
      <Employerwebsite>https://www.okta.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/okta/jobs/6883425</Applyto>
      <Location>Bengaluru, India</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>a03720f6-bc3</externalid>
      <Title>Solutions Architect</Title>
      <Description><![CDATA[<p>As a Solutions Architect at Databricks, you will partner with our customers to design scalable data architectures using Databricks technology and services.</p>
<p>You have technical depth and business knowledge and can drive complex technology discussions which express the value of the Databricks platform throughout the sales lifecycle.</p>
<p>In partnership with our Account Executives, you will engage with our customers&#39; technical leads, including architects, engineers, and operations teams with the goal of establishing yourself as a trusted advisor to achieve tangible outcomes.</p>
<p>You will work with teams across Databricks and our executive leadership to represent your customer&#39;s needs and build valuable customer engagements and report to the Field Engineering Manager.</p>
<p>The impact you will have:</p>
<ul>
<li>Work with Sales and other essential partners to develop account strategies for your assigned accounts to grow their usage of the platform.</li>
<li>Establish the Databricks Lakehouse architecture as the standard data architecture for customers through excellent technical account planning.</li>
<li>Build and present reference architectures and demo applications for prospects to help them understand how Databricks can be used to achieve their goals to land new users and use cases.</li>
<li>Capture the technical win by consulting on big data architectures, data engineering pipelines, and data science/machine learning projects; prove out the Databricks technology for strategic customer projects; and validate integrations with cloud services and other 3rd party applications.</li>
<li>Become an expert in, and promote, Databricks-inspired open-source projects (Spark, Delta Lake, MLflow, and Koalas) across developer communities through meetups, conferences, and webinars.</li>
</ul>
<p>What we look for:</p>
<ul>
<li>5+ years in a customer-facing pre-sales, technical architecture, or consulting role with expertise in at least one of the following technologies:
<ul>
<li>Big data engineering (Ex: Spark, Hadoop, Kafka)</li>
<li>Data Warehousing &amp; ETL (Ex: SQL, OLTP/OLAP/DSS)</li>
<li>Data Science and Machine Learning (Ex: pandas, scikit-learn, HPO)</li>
<li>Data Applications (Ex: Logs Analysis, Threat Detection, Real-time Systems Monitoring, Risk Analysis and more)</li>
</ul>
</li>
<li>Experience translating a customer&#39;s business needs to technology solutions, including establishing buy-in with essential customer stakeholders at all levels of the business.</li>
<li>Experience designing, architecting, and presenting data systems for customers and managing the delivery of production solutions of those data architectures.</li>
<li>Fluent in SQL and database technology.</li>
<li>Debugging and development experience in at least one of the following languages: Python, Scala, Java, or R.</li>
<li>Desired: Built solutions with public cloud providers such as AWS, Azure, or GCP</li>
<li>Desired: Degree in a quantitative discipline (Computer Science, Applied Mathematics, Operations Research)</li>
<li>Travel to customers in your region up to 30% of the time.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$164,500-$224,000 CAD</Salaryrange>
      <Skills>Big data engineering, Data Warehousing &amp; ETL, Data Science and Machine Learning, Data Applications, SQL and database technology, Python, Scala, Java, or R, Built solutions with public cloud providers such as AWS, Azure, or GCP, Degree in a quantitative discipline (Computer Science, Applied Mathematics, Operations Research)</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data, analytics, and AI.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/5898477002</Applyto>
      <Location>Toronto, Canada</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>fc79e6e5-5c0</externalid>
      <Title>Resident Solutions Architect - Manufacturing</Title>
      <Description><![CDATA[<p>As a Resident Solutions Architect (RSA) on our Professional Services team, you will work with customers on short to medium term customer engagements on their big data challenges using the Databricks platform.</p>
<p>You will deliver data engineering, data science, and cloud technology projects which require integrating with client systems, training, and other technical tasks to help customers get the most value out of their data.</p>
<p>RSAs are billable and know how to complete projects according to specification with excellent customer service.</p>
<p>The impact you will have:</p>
<ul>
<li>Handle a variety of impactful customer technical projects, which may include designing and building reference architectures, creating how-to&#39;s, and productionalizing customer use cases</li>
<li>Work with engagement managers to scope a variety of professional services work with input from the customer</li>
<li>Guide strategic customers as they implement transformational big data projects and 3rd party migrations, including end-to-end design, build, and deployment of industry-leading big data and AI applications</li>
<li>Consult on architecture and design; bootstrap or implement customer projects, leading to customers&#39; successful understanding, evaluation, and adoption of Databricks</li>
<li>Provide an escalated level of support for customer operational issues</li>
<li>Collaborate with the Databricks Technical, Project Manager, Architect, and Customer teams to ensure the technical components of the engagement are delivered to meet the customer&#39;s needs</li>
<li>Work with Engineering and Databricks Customer Support to provide product and implementation feedback and to guide rapid resolution for engagement-specific product and support issues</li>
</ul>
<p>What we look for:</p>
<ul>
<li>6+ years experience in data engineering, data platforms &amp; analytics</li>
<li>Comfortable writing code in either Python or Scala</li>
<li>Working knowledge of two or more common Cloud ecosystems (AWS, Azure, GCP) with expertise in at least one</li>
<li>Deep experience with distributed computing with Apache Spark™ and knowledge of Spark runtime internals</li>
<li>Familiarity with CI/CD for production deployments</li>
<li>Working knowledge of MLOps</li>
<li>Design and deployment of performant end-to-end data architectures</li>
<li>Experience with technical project delivery - managing scope and timelines</li>
<li>Documentation and white-boarding skills</li>
<li>Experience working with clients and managing conflicts</li>
<li>Build skills in technical areas which support the deployment and integration of Databricks-based solutions to complete customer projects</li>
<li>Ability to travel up to 30% when needed</li>
</ul>
<p>Pay Range Transparency</p>
<p>Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role are listed below and represent the expected base salary range for non-commissionable roles or on-target earnings for commissionable roles. Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location. Based on the factors above, Databricks anticipates utilizing the full width of the range. The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above.</p>
<p>For more information regarding which range your location is in visit our page here.</p>
<p>Zone 1 Pay Range $180,656-$248,360 USD</p>
<p>Zone 2 Pay Range $180,656-$248,360 USD</p>
<p>Zone 3 Pay Range $180,656-$248,360 USD</p>
<p>Zone 4 Pay Range $180,656-$248,360 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$180,656-$248,360 USD</Salaryrange>
      <Skills>Python, Scala, Cloud ecosystems (AWS, Azure, GCP), Apache Spark, CI/CD for production deployments, MLOps, Data engineering, Data science, Cloud technology</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a data intelligence platform to unify and democratize data, analytics, and AI.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8494156002</Applyto>
      <Location>Seattle, Washington</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>2ace8872-f7e</externalid>
      <Title>Manager, Backline (Platform)</Title>
      <Description><![CDATA[<p>At Databricks, we are seeking a Manager, Backline (Platform) to join our team. As a critical bridge between Engineering and Frontline Support, the Backline Engineering Team handles complex technical issues and escalations across the Apache Spark ecosystem and the Databricks Platform stack. With a strong focus on customer success, we are committed to delivering exceptional customer satisfaction by providing deep technical expertise, proactive issue resolution, and continuous improvements to the platform.</p>
<p>The Manager, Backline (Platform) will be responsible for:</p>
<ul>
<li>Hiring and developing top talent to build an outstanding team</li>
<li>Mentoring engineers, providing clear feedback, and developing future leaders in the team</li>
<li>Establishing and maintaining high standards in troubleshooting, automation, and tooling to improve efficiency</li>
<li>Working closely with Engineering to enhance observability, debugging tools, and automation, reducing escalations</li>
<li>Collaborating with Frontline Support, Engineering, and Product teams to improve customer escalations and support processes</li>
<li>Defining a long-term roadmap for Backline, focusing on automation, tool development, bug fixing, and proactive issue resolution</li>
<li>Taking ownership of high-impact customer escalations by leading critical incident response during Databricks runtime outages and major incidents</li>
<li>Participating in weekday and weekend on-call rotations, ensuring fast and effective resolution of urgent issues</li>
</ul>
<p>We look for candidates with 10-12 years of industry experience, including 3+ years in a managerial role, and strong technical expertise in one of the following domains: Linux/OS and network troubleshooting; AWS, Azure, or GCP cloud and related services; SQL-based database systems; or Python- and/or Java-based applications.</p>
<p>If you are a motivated and experienced professional with a passion for delivering exceptional customer satisfaction, we encourage you to apply for this exciting opportunity.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Linux/OS and Network troubleshooting, AWS, Azure, or GCP Cloud and related services, SQL-based database systems, Python and/or Java-based applications, Troubleshooting, Automation, Tooling, Observability, Debugging, Collaboration, Leadership</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified and democratized data, analytics, and AI platform to over 10,000 organizations worldwide.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/7879639002</Applyto>
      <Location>Bengaluru, India</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>02ba8342-079</externalid>
      <Title>Specialist Solutions Architect - Data Warehousing (Healthcare &amp; Life Sciences)</Title>
      <Description><![CDATA[<p>As a Specialist Solutions Architect (SSA) - Data Warehousing, you will guide customers in their cloud data warehousing transformation with Databricks. You will be in a customer-facing role, working with and supporting Solution Architects, that requires hands-on production experience with large-scale data warehousing technologies and lakehouse architecture.</p>
<p>The SSA helps customers through evaluations and successful production planning for their business intelligence workloads while aligning their technical roadmap for the Databricks Data Intelligence Platform.</p>
<p>As a deep go-to-expert reporting to the Specialist Field Engineering Manager, you will continue to strengthen your technical skills through mentorship, learning, and internal training programs and establish yourself in the data warehousing specialty - including performance tuning, data modeling, winning evaluations, architecture design, and production migration planning.</p>
<p>The impact you will have:</p>
<ul>
<li>Provide technical leadership to guide strategic customers to successful cloud transformations on large-scale data warehousing workloads - ranging from evaluation to architecture design to production deployment</li>
<li>Prove the value of the Databricks Intelligence Platform for customer workloads by architecting production workloads, including end-to-end pipeline load performance testing and optimization</li>
<li>Become a technical expert in an area such as data warehousing evaluations or helping set up successful workload migrations</li>
<li>Assist Solution Architects with more advanced aspects of the technical sale including custom proof of concept content, estimating workload sizing and performance, and tuning workloads for production</li>
<li>Provide tutorials and training to improve community adoption (including hackathons and conference presentations)</li>
<li>Contribute to the Databricks Community</li>
</ul>
<p>What we look for:</p>
<ul>
<li>5+ years experience in a technical role with expertise in data warehousing - such as query tuning, performance tuning, troubleshooting, data governance, debugging MPP data warehouses or other big data solutions, or migrating workloads from EDW or other systems</li>
<li>Experience with design and implementation of data warehousing technologies including relational databases, SQL, data analytics, NoSQL, MPP, OLTP, and OLAP</li>
<li>Deep Specialty Expertise in at least one of the following areas:</li>
</ul>
<ul>
<li>Experience scaling large analytical data workloads in the cloud that are performant and cost-effective</li>
<li>Maintained, extended, or migrated a production data warehouse system to evolve with complex needs, including data modeling, data governance needs, and integration with business intelligence tools</li>
<li>Experience migrating on-premise EDW workloads to the public cloud</li>
</ul>
<ul>
<li>Bachelor&#39;s degree in Computer Science, Information Systems, Engineering, or equivalent experience through work experience</li>
<li>Production programming experience in SQL and Python, Scala, or Java</li>
<li>Experience with the AWS, Azure, or GCP clouds</li>
<li>2 years professional experience with data warehousing and big data technologies (Ex: SQL, Redshift, SAP, Synapse, EMR, OLAP &amp; OLTP workloads)</li>
<li>2 years customer-facing experience in a pre-sales or post-sales role</li>
<li>Can meet expectations for technical training and role-specific outcomes within 6 months of hire</li>
<li>Can travel up to 30% when needed</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$180,000-$247,500 USD</Salaryrange>
      <Skills>data warehousing, cloud data warehousing, Databricks, lakehouse architecture, SQL, Python, Scala, Java, AWS, Azure, GCP, data analytics, NoSQL, MPP, OLTP, OLAP</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data, analytics, and AI.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8337429002</Applyto>
      <Location>Northeast - United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>47272a53-cc9</externalid>
      <Title>Engineering Manager - Databricks SQL Control Plane</Title>
      <Description><![CDATA[<p>We are seeking an Engineering Manager to spearhead the development of a new service and architecture for the next generation of our product, Databricks SQL Control Plane. As an Engineering Manager, you will lead a team of talented software engineers to build the next-generation low-latency, multi-tenanted, cloud native data warehousing system.</p>
<p>Your responsibilities will include:</p>
<ul>
<li>Growing the team by hiring strong engineering talent</li>
<li>Leading and participating in technical, product, and design discussions relating to cloud native database systems</li>
<li>Managing and operating a highly available service in the cloud</li>
<li>Growing leaders on the team by providing coaching, mentorship, and growth opportunities</li>
<li>Playing a key role in defining the product and engineering roadmap for the team</li>
<li>Partnering with other engineering and product leaders on planning, prioritization, and staffing</li>
<li>Creating a culture of excellence on the team while leading with empathy</li>
</ul>
<p>Key requirements for this role include:</p>
<ul>
<li>5+ years experience working in database systems, data processing, or related domains</li>
<li>Experience building highly available cloud services on AWS, GCP, or Azure</li>
<li>Experience building, growing, and managing high-performance teams</li>
<li>Experience defining and meeting SLOs for highly available systems</li>
<li>Ability to attract and hire engineers who meet the Databricks hiring standards</li>
<li>Comfort working cross-functionally with product management and partners to build products that drive user growth</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$190,000-$261,250 USD</Salaryrange>
      <Skills>database systems, data processing, cloud native database systems, highly available cloud services, AWS, GCP, Azure, team management, leadership development, product roadmap development, cross-functional collaboration, Rust</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a data and AI infrastructure platform for customers to use deep data insights to improve their business.</Employerdescription>
      <Employerwebsite>https://databricks.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8472398002</Applyto>
      <Location>Mountain View, California</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>bc54ed6c-ca0</externalid>
      <Title>Full-Stack Engineer, Core Services (Senior Level)</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Full-Stack Engineer to join our Core Services team. As a senior-level engineer, you&#39;ll design, build, and optimise the core systems and management platforms that power the Instabase platform.</p>
<p>This is a high-impact role for a &#39;product-minded engineer&#39;. In our Core Services team, we treat our platform as a product. Because we operate with a lean team, you will have end-to-end ownership: from writing Product Requirement Documents (PRDs) to building the high-performance backend services and scalable infrastructure that support them.</p>
<p>Responsibilities:</p>
<ul>
<li>Full-Stack Development: You will function as a product-minded engineer for our internal platform. This involves architecting secure infrastructure (Kubernetes, Docker) and backend services (Go, Python, PostgreSQL), while also building the frontend interfaces (React, TypeScript) to support features.</li>
<li>Developer Experience: Create the internal platforms and dashboards that improve developer velocity, reliability, and observability across the entire organisation.</li>
<li>Technical Leadership: Act as a technical leader who mentors junior engineers, contributes to the entire infrastructure codebase, and identifies root causes for critical system issues.</li>
</ul>
<p>About you:</p>
<ul>
<li>Education: BS, MS, or PhD in Computer Science, or equivalent experience in a technical field such as Physics or Mathematics.</li>
<li>Experience: 5+ years of professional software development experience with a strong foundation in CS fundamentals.</li>
<li>Backend Expertise: Proficiency in Go and Python, with a deep understanding of building scalable backend services and APIs.</li>
<li>Frontend Expertise: Strong experience with React, TypeScript, and JavaScript for building complex, data-rich web applications.</li>
<li>Infrastructure &amp; Orchestration: Proficiency with Docker, Kubernetes, and cloud infrastructure (AWS, GCP, or Azure).</li>
<li>Product Thinking &amp; UI Design: You are comfortable functioning as your own PM and Designer and write technical specs (PRDs) to define how users interact with infrastructure.</li>
<li>Communication: Excellent communication skills to represent technical and product decisions to the wider engineering team.</li>
</ul>
<p>Good to have:</p>
<ul>
<li>Experience with React Native for mobile or cross-platform applications.</li>
<li>Prior experience in a startup environment where you handled multi-functional responsibilities (Dev, PM, and Design).</li>
</ul>
<p>Compensation: The base salary range for this role is $190,000 to $205,000 + bonus, equity and US benefits.</p>
<p>US Benefits:</p>
<ul>
<li>Flexible PTO: Because life is better when you actually live it!</li>
<li>Comprehensive Coverage: Top-notch medical, dental, and vision insurance.</li>
<li>401(k) with Matching: We’ve got your back for a secure future.</li>
<li>Parental Leave &amp; Fertility Benefits: Supporting you in growing your family, your way.</li>
<li>Therapy Sessions Covered: Mental health matters, 10 free sessions through Samata Health.</li>
<li>Wellness Stipend: For gym memberships, fitness tech, or whatever keeps you thriving.</li>
<li>Lunch on Us: Enjoy a lunch credit when you&#39;re in the office.</li>
</ul>
<p>#LI-Hybrid</p>
<p>Instabase is an Equal Opportunity Employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender perception or identity, national origin, age, marital status, protected veteran status, or disability status.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$190,000 to $205,000 + bonus, equity and US benefits</Salaryrange>
      <Skills>Go, Python, PostgreSQL, Kubernetes, Docker, React, TypeScript, JavaScript, Cloud infrastructure (AWS, GCP, or Azure)</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Instabase</Employername>
      <Employerlogo>https://logos.yubhub.co/instabase.com.png</Employerlogo>
      <Employerdescription>Instabase provides a platform for organisations to solve unstructured data problems using AI. It has customers representing large and complex organisations worldwide.</Employerdescription>
      <Employerwebsite>https://www.instabase.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/instabase/jobs/8186577002</Applyto>
      <Location>San Francisco, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>34a50895-413</externalid>
      <Title>Senior Technical Solutions Engineer - Platform (Greater China Region)</Title>
      <Description><![CDATA[<p>As a Senior Technical Solutions Engineer, you will provide technical support for Databricks Platform related issues and resolve any challenges involving the Databricks unified analytics platform.</p>
<p>You will assist customers in their Databricks journey and provide them with the guidance and knowledge that they need to accomplish value and achieve their strategic goals using our products.</p>
<p>Customers will look to you for answers to everything from basic technical questions to complex architectural scenarios spanning the entire Big Data ecosystem.</p>
<p>You will report to the Senior Manager of Technical Solutions.</p>
<p>Responsibilities:</p>
<ul>
<li>Troubleshoot and resolve complex customer issues related to Databricks platform</li>
<li>Provide best practices support for custom-built solutions developed by Databricks customers</li>
<li>Deliver suggestions for improving performance in customer-specific environments</li>
<li>Assist with issues around third-party integrations with Databricks environment</li>
<li>Coordinate with engineering and escalation teams to achieve resolution of customer issues and requests</li>
<li>Participate in the creation and maintenance of company documentation and knowledge articles</li>
<li>Be a true proponent of customer advocacy</li>
<li>Strengthen your AWS/Azure and Databricks platform expertise through learning and internal training programs</li>
<li>Participate in weekday and weekend on-call rotations</li>
</ul>
<p>Requirements:</p>
<ul>
<li>4+ years of experience designing, building, testing, and maintaining Python/Java/Scala-based applications</li>
<li>Expert-level knowledge of Python is desired</li>
<li>Strong experience with SQL-based databases is required</li>
<li>Linux/Unix administration skills</li>
<li>Hands-on experience with AWS, Azure, or GCP</li>
<li>Experience with distributed big data computing environments</li>
<li>A technical degree or equivalent experience</li>
<li>Proficiency in Mandarin and English is a must, as this role serves clients based in the Greater China Region and involves direct customer communication in Mandarin</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, Java, Scala, SQL, Linux/Unix administration, AWS, Azure, GCP, Distributed Big Data Computing</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified analytics platform for data and AI workloads.</Employerdescription>
      <Employerwebsite>https://databricks.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8407891002</Applyto>
      <Location>Singapore</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>d50772ab-afe</externalid>
      <Title>Staff / Senior Software Engineer, Cloud Inference</Title>
      <Description><![CDATA[<p>We are seeking a Staff / Senior Software Engineer to join our Cloud Inference team. The successful candidate will design and build infrastructure that serves Claude across multiple cloud service providers (CSPs), accounting for differences in compute hardware, networking, APIs, and operational models.</p>
<p>The ideal candidate will have significant software engineering experience, with a strong background in high-performance, large-scale distributed systems serving millions of users. They will also have experience building or operating services on at least one major cloud platform (AWS, GCP, or Azure), with exposure to Kubernetes, Infrastructure as Code or container orchestration.</p>
<p>Responsibilities:</p>
<ul>
<li>Design and build infrastructure that serves Claude across multiple CSPs, accounting for differences in compute hardware, networking, APIs, and operational models</li>
<li>Collaborate with CSP partner engineering teams to resolve operational issues, influence provider roadmaps, and stand up end-to-end serving on new cloud platforms</li>
<li>Design and evolve CI/CD automation systems, including validation and deployment pipelines, that reliably ship new model versions to millions of users across cloud platforms without regressions</li>
<li>Design interfaces and tooling abstractions across CSPs that enable cost-effective inference management, scale across providers, and reduce per-platform complexity</li>
<li>Contribute to capacity planning and autoscaling strategies that dynamically match supply with demand across CSP validation and production workloads</li>
<li>Optimise inference cost and performance across providers, designing workload placement and routing systems that direct requests to the most cost-effective accelerator and region</li>
<li>Contribute to inference features that must work consistently across all platforms</li>
<li>Analyse observability data across providers to identify performance bottlenecks, cost anomalies, and regressions, and drive remediation based on real-world production workloads</li>
</ul>
<p>Requirements:</p>
<ul>
<li>Significant software engineering experience, with a strong background in high-performance, large-scale distributed systems serving millions of users</li>
<li>Experience building or operating services on at least one major cloud platform (AWS, GCP, or Azure), with exposure to Kubernetes, Infrastructure as Code, or container orchestration</li>
<li>A strong interest in inference</li>
<li>An ability to thrive in cross-functional collaboration with both internal teams and external partners</li>
<li>A fast learner&#39;s mindset, quickly ramping up on new technologies, hardware platforms, and provider ecosystems</li>
<li>High autonomy and self-direction, taking ownership of problems end-to-end with a bias toward flexibility and high-impact work</li>
<li>A willingness to pick up slack, even when it goes outside your job description</li>
</ul>
<p>Preferred skills:</p>
<ul>
<li>Direct experience working with CSP partner teams to scale infrastructure or products across multiple platforms, navigating differences in networking, security, privacy, billing, and managed service offerings</li>
<li>A background in building platform-agnostic tooling or abstraction layers that work across cloud providers</li>
<li>Hands-on experience with capacity management, cost optimisation, or resource planning at scale across heterogeneous environments</li>
<li>Strong familiarity with LLM inference optimisation, batching, caching, and serving strategies</li>
<li>Experience with machine learning infrastructure, including GPUs, TPUs, Trainium, or other AI accelerators</li>
<li>Background designing and building CI/CD systems that automate deployment and validation across cloud environments</li>
<li>Solid understanding of multi-region deployments, geographic routing, and global traffic management</li>
<li>Proficiency in Python or Rust</li>
</ul>
<p>Salary Range: $300,000-$485,000 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$300,000-$485,000 USD</Salaryrange>
      <Skills>high-performance large-scale distributed systems, cloud computing (AWS, GCP, Azure), Kubernetes, Infrastructure as Code, container orchestration, inference, cross-functional collaboration, autonomy, platform-agnostic tooling, capacity management, cost optimisation, resource planning, LLM inference optimisation, machine learning infrastructure, CI/CD systems, multi-region deployments, geographic routing, global traffic management, Python, Rust</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic creates reliable, interpretable, and steerable AI systems. It is a quickly growing organisation with a team of researchers, engineers, policy experts, and business leaders.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5107466008</Applyto>
      <Location>San Francisco, CA | Seattle, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>31f63648-f2e</externalid>
      <Title>Software Engineer, Fullstack (Support Core)</Title>
      <Description><![CDATA[<p>We&#39;re seeking an experienced and ambitious Full Stack Software Engineer to join our team in Vancouver, Canada. As a key member of our Product Engineering department, you will take end-to-end ownership of significant features, driving architectural decisions, tackling complex technical hurdles, and ensuring high code quality. Your expertise will enrich the team, including through the active mentorship of junior engineers.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Design, develop, and deploy high-quality features across Dialpad&#39;s web and desktop-native applications.</li>
<li>Write clean, modular, and maintainable code using best practices along with unit &amp; integration tests.</li>
<li>Participate in code reviews to ensure code quality, maintainability, and scalability.</li>
<li>Ensure that features are shipped on time and with the highest quality.</li>
<li>Participate in a rotating team on-call schedule to quickly diagnose and resolve critical issues, ensuring a seamless customer experience.</li>
<li>Collaborate with cross-functional teams to build and use common components and practices across Dialpad products.</li>
<li>Mentor junior engineers and help them grow their skills and expertise.</li>
</ul>
<p>Requirements include:</p>
<ul>
<li>5+ years of professional experience in Full-Stack Software Engineering.</li>
<li>Strong experience with Python, APIs, Vue/React, HTML, CSS, JavaScript, TypeScript, GraphQL, and cloud infrastructure such as GCP.</li>
<li>Practical experience designing, deploying, and optimizing solutions leveraging serverless computing, microservices, and event-driven architectures.</li>
<li>Proficiency with both SQL and NoSQL databases.</li>
<li>Experience with building reusable and modular components for both frontend and backend.</li>
<li>Experience with mentoring junior engineers and helping them grow their skills.</li>
<li>Experience with RESTful APIs and GraphQL schemas.</li>
<li>Bachelor&#39;s degree in Computer Science or equivalent practical experience.</li>
<li>Proven ability to design, build, launch, and maintain at least two large-scale production systems.</li>
<li>Experience with Agile development methodologies.</li>
<li>Strong debugging and troubleshooting skills.</li>
<li>Strong communication and collaboration skills.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$147,000-$174,000 CAD</Salaryrange>
      <Skills>Python, APIs, Vue/React, HTML, CSS, JavaScript, TypeScript, GraphQL, GCP, serverless computing, microservices, event-driven architectures, SQL, NoSQL databases, RESTful APIs, GraphQL schemas</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Dialpad</Employername>
      <Employerlogo>https://logos.yubhub.co/dialpad.com.png</Employerlogo>
      <Employerdescription>Dialpad is the AI-native business communications platform, serving over 70,000 companies worldwide.</Employerdescription>
      <Employerwebsite>https://dialpad.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/dialpad/jobs/8407060002</Applyto>
      <Location>Vancouver, Canada</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>7a3f562b-768</externalid>
      <Title>Senior Staff Software Engineer, API</Title>
      <Description><![CDATA[<p>About Anthropic</p>
<p>Anthropic&#39;s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole.</p>
<p>About the role</p>
<p>Anthropic is seeking an exceptional Senior Staff Software Engineer to join the Claude Developer Platform team and serve as the senior-most individual contributor across API Engineering. Since launch, the Claude API has seen rapid growth and adoption by companies of all sizes to build AI applications with our industry-leading models. The API serves as the primary channel for safely and broadly distributing AI&#39;s benefits across all sectors of the economy.</p>
<p>This role sets the technical direction for the systems that make Claude accessible to developers, enterprises, and partners at scale. You will operate at the intersection of technical strategy and execution, partnering closely with Research, Inference, Platform, Infrastructure, and Safeguards to ensure the Claude API is reliable, capable, and positioned to grow with Anthropic&#39;s ambitions.</p>
<p>Responsibilities:</p>
<ul>
<li>Define and drive multi-year technical strategy for the Claude API, setting direction across API Core, Capabilities, Knowledge, Distributability, and Agents.</li>
<li>Identify and personally lead the highest-complexity, highest-impact engineering initiatives spanning multiple teams.</li>
<li>Serve as the primary technical decision-maker for major architectural decisions with org-wide scope.</li>
<li>Partner with Research to evaluate and integrate frontier capabilities; work with Inference and Platform for reliable delivery at scale; collaborate with Infrastructure and Safeguards for reliability, security, and responsible deployment.</li>
<li>Mentor and develop Staff-level engineers across the org.</li>
<li>Drive alignment across Product, GTM, Safety, and beyond while proactively identifying and addressing systemic technical risks.</li>
</ul>
<p>You may be a good fit if you:</p>
<ul>
<li>Have 12+ years of engineering experience with a clear track record operating at Staff or Senior Staff level.</li>
<li>Have demonstrably shaped technical strategy for large-scale API or distributed systems platforms.</li>
<li>Drive the highest-leverage technical outcomes without formal authority; you lead through influence, quality of thinking, and trust.</li>
<li>Have deep expertise in distributed systems and API architecture, and are effective writing design docs, making architectural calls, and coding in critical paths.</li>
<li>Are highly effective across org boundaries; you build trust with Research, Inference, Infrastructure, Safeguards, and business stakeholders alike.</li>
<li>Bring strong product instincts and a craftsperson&#39;s approach to API design; you communicate clearly with both technical and non-technical audiences.</li>
</ul>
<p>Technical Stack:</p>
<ul>
<li>Languages: Python, TypeScript</li>
<li>Frameworks: FastAPI, React</li>
<li>Infrastructure: GCP, Kubernetes, Cloud Run, AWS, Azure</li>
<li>Databases: PostgreSQL (AlloyDB), Vector Stores, Firestore</li>
<li>Tools: Feature Flagging, Prometheus, Grafana, Datadog</li>
</ul>
<p>Deadline to apply: None. Applications will be reviewed on a rolling basis.</p>
<p>Location Preference: Preference will be given to candidates based in New York or the San Francisco Bay Area, as these positions are part of an SF- or NY-based team.</p>
<p>The annual compensation range for this role is listed below. For sales roles, the range provided is the role’s On Target Earnings (&quot;OTE&quot;) range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.</p>
<p>Annual Salary: $405,000-$485,000 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$405,000-$485,000 USD</Salaryrange>
      <Skills>Python, TypeScript, FastAPI, React, GCP, Kubernetes, Cloud Run, AWS, Azure, PostgreSQL, Vector Stores, Firestore, Feature Flagging, Prometheus, Grafana, Datadog</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation that creates reliable, interpretable, and steerable AI systems. It is headquartered in San Francisco.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5134895008</Applyto>
      <Location>San Francisco, CA | New York City, NY</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>9d52173c-23a</externalid>
      <Title>Sr. Software Development Engineer in Test</Title>
      <Description><![CDATA[<p>About Dialpad</p>
<p>Dialpad is the AI-native business communications platform. We unify calling, messaging, meetings, and contact center on a single platform - powered by AI that understands every conversation in real time.</p>
<p>More than 70,000 companies around the globe, including WeWork, Asana, NASDAQ, AAA Insurance, COMPASS Realty, Uber, Randstad, and Tractor Supply, rely on Dialpad to build stronger customer connections using real-time, AI-driven insights.</p>
<p>We’re now leading the shift to Agentic AI: intelligent agents that don’t just analyse conversations but take action by automating workflows, resolving customer issues, and accelerating revenue in real time.</p>
<p>Our DAART initiative (Dialpad Agentic AI in Real Time) is redefining what a communications platform can do. Visit dialpad.com to learn more.</p>
<p>Being a Dialer</p>
<p>At Dialpad, AI isn’t just a feature; it’s how our teams do their best work every day. We put powerful AI tools in every employee’s hands so they can move faster, think bigger, and achieve more.</p>
<p>We believe every conversation matters. And we’ve built the platform that turns those conversations into insight and action, for our customers and ourselves.</p>
<p>We look for people who are intensely curious and hold themselves to a high bar. Our ambition is significant, and achieving it requires a team that operates at the highest level.</p>
<p>We seek individuals who embody our core traits: Scrappy, Curious, Optimistic, Persistent, and Empathetic.</p>
<p>The Role</p>
<p>As a Senior SDET, you are a strong software engineer with deep expertise in test automation and quality engineering. You design and own scalable test frameworks, drive quality strategy for your domain, and proactively identify risk in complex, cloud-native systems.</p>
<p>This position requires a hybrid work arrangement, with three days in the office. The role reports to the Manager of Quality Assurance, who is located in Bangalore.</p>
<p>This role is hands-on, technical, and impact-driven, with clear expectations around ownership, influence, and mentorship.</p>
<p>What You’ll Do</p>
<ul>
<li>Design, develop, and maintain scalable automated testing frameworks for APIs, microservices, and web integrations.</li>
<li>Perform deep-dive testing of the Connect platform, with a strong focus on asynchronous workflows, data consistency, resiliency, and latency.</li>
<li>Own and evolve quality gates within the CI/CD pipeline, ensuring fast, actionable feedback on every pull request.</li>
<li>Plan and execute functional, regression, and end-to-end test coverage across UI, API, and database layers.</li>
<li>Build internal tools and utilities to help reproduce, debug, and isolate complex production issues.</li>
<li>Set up, execute, and continuously improve automated test suites; derive meaningful quality KPIs and clearly communicate results.</li>
<li>Provide detailed failure analysis to enable rapid diagnosis and resolution of product or test defects.</li>
<li>Design and execute load, stress, and performance tests across services and critical user workflows.</li>
<li>Participate actively in architecture and design reviews, advocating for testability and appropriate test hooks.</li>
<li>Define and execute comprehensive test strategies aligned with product and platform goals.</li>
<li>Write clean, reliable, and maintainable code, solving complex problems with scalable solutions.</li>
<li>Mentor junior and mid-level engineers, while staying current on modern testing and software development best practices.</li>
</ul>
<p>What You’ll Bring</p>
<ul>
<li>6+ years of professional software development experience, with a strong emphasis on test automation for large-scale systems.</li>
<li>Strong coding skills in Python, Java, or JavaScript.</li>
<li>Proven experience designing test frameworks, test strategies, and reviewing system designs.</li>
<li>Solid understanding of testing methodologies: regression, integration, end-to-end, load, and performance testing.</li>
<li>Hands-on experience with API and integration testing; strong knowledge of RESTful services.</li>
<li>Experience working in cloud-native, distributed environments.</li>
<li>Experience building and maintaining CI/CD pipelines using Jenkins and GitHub.</li>
<li>Strong written and verbal communication skills; comfortable collaborating across teams and geographies.</li>
<li>Demonstrated ownership, proactive problem-solving, and ability to operate independently.</li>
</ul>
<p>Technologies Utilised</p>
<ul>
<li>GCP: App Engine (GAE), Kubernetes (GKE), Compute Engine (GCE)</li>
<li>Languages &amp; Tools: Python, Vue, Git, GitHub, Jira</li>
</ul>
<p>Why Join Dialpad</p>
<ul>
<li>Work at the center of the AI transformation in business communications</li>
<li>Build and ship agentic AI products that are redefining how companies operate</li>
<li>Join a team where AI amplifies every employee’s impact</li>
<li>Competitive salary, comprehensive benefits, and real opportunities for growth</li>
</ul>
<p>We believe in investing in our people. Dialpad offers competitive benefits and perks, cutting-edge AI tools, and a robust training program that help you reach your full potential.</p>
<p>We have designed our offices to be inclusive, offering a vibrant environment to cultivate collaboration and connection.</p>
<p>Our exceptional culture, repeatedly recognised as a Great Place to Work, ensures that every employee feels valued and empowered to contribute to our collective success.</p>
<p>Don’t meet every single requirement? If you’re excited about this role and possess the fundamental traits, drive, and strong ambition we seek, but your experience doesn’t meet every qualification, we encourage you to apply.</p>
<p>Dialpad is an equal-opportunity employer. We are dedicated to creating a community of inclusion and an environment free from discrimination or harassment.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, Java, JavaScript, Test automation, Quality engineering, Cloud-native systems, APIs, Microservices, Web integrations, CI/CD pipelines, Jenkins, GitHub, RESTful services, GCP, App Engine, Kubernetes, Compute Engine</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Dialpad</Employername>
      <Employerlogo>https://logos.yubhub.co/dialpad.com.png</Employerlogo>
      <Employerdescription>Dialpad is an AI-native business communications platform that unifies calling, messaging, meetings, and contact center on a single platform.</Employerdescription>
      <Employerwebsite>https://dialpad.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/dialpad/jobs/8407069002</Applyto>
      <Location>Bengaluru, India</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>1aad838f-387</externalid>
      <Title>Staff+ Software Engineer, Data Infrastructure</Title>
      <Description><![CDATA[<p>We&#39;re looking for infrastructure engineers who thrive working at the intersection of data systems, security, and scalability. You&#39;ll tackle diverse challenges ranging from building financial reporting pipelines to architecting access control systems to ensuring cloud storage reliability.</p>
<p>Within Data Infra, you may be matched to critical business areas including:</p>
<ul>
<li>Data Governance &amp; Access Control: Design and implement robust access control systems ensuring only authorized users can access sensitive data.</li>
<li>Financial Data Infrastructure: Build and maintain data pipelines and warehouses powering business-critical reporting.</li>
<li>Cloud Storage &amp; Reliability: Architect disaster recovery, backup, and replication systems for petabyte-scale data.</li>
<li>Data Platform &amp; Tooling: Scale data processing infrastructure using technologies like BigQuery, BigTable, Airflow, dbt, and Spark.</li>
</ul>
<p>You&#39;ll work directly with data scientists, analysts, and business stakeholders while diving deep into cloud infrastructure primitives.</p>
<p>To be successful in this role, you&#39;ll need:</p>
<ul>
<li>10+ years of experience in a Software Engineer role, building data infrastructure, storage systems, or related distributed systems.</li>
<li>3+ years of experience leading large scale, complex projects or teams as an engineer or tech lead.</li>
<li>Deep experience with at least one of:</li>
<li>Strong proficiency in programming languages like Python, Go, Java, or similar.</li>
<li>Experience with infrastructure-as-code (Terraform, Pulumi) and cloud platforms (GCP, AWS).</li>
<li>Can navigate complex technical tradeoffs between performance, cost, security, and maintainability.</li>
<li>Have excellent collaboration skills - you work well with both technical and non-technical stakeholders.</li>
</ul>
<p>Strong candidates may also have:</p>
<ul>
<li>Background in data warehousing, ETL/ELT pipelines, or analytics infrastructure.</li>
<li>Experience with Kubernetes, containerization, and cloud-native architectures.</li>
<li>Track record of improving data reliability, availability, or cost efficiency at scale.</li>
<li>Knowledge of column-oriented databases, OLAP systems, or big data processing frameworks.</li>
<li>Experience working in fintech, financial services, or highly regulated environments.</li>
<li>Security engineering background with focus on data protection and access controls.</li>
</ul>
<p>Technologies We Use:</p>
<ul>
<li>Data: BigQuery, BigTable, Airflow, Cloud Composer, dbt, Spark, Segment, Fivetran.</li>
<li>Storage: GCS, S3.</li>
<li>Infrastructure: Terraform, Kubernetes, GCP, AWS.</li>
<li>Languages: Python, Go, SQL.</li>
</ul>
<p>The annual compensation range for this role is $405,000-$485,000 USD.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$405,000-$485,000 USD</Salaryrange>
      <Skills>Python, Go, Java, Terraform, Pulumi, GCP, AWS, BigQuery, BigTable, Airflow, dbt, Spark, Segment, Fivetran, GCS, S3, Kubernetes, containerization, cloud-native architectures, data warehousing, ETL/ELT pipelines, analytics infrastructure, data reliability, availability, cost efficiency, column-oriented databases, OLAP systems, big data processing frameworks, fintech, financial services, highly regulated environments, security engineering, data protection, access controls</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation that creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5114768008</Applyto>
      <Location>San Francisco, CA | Seattle, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>61460f7d-087</externalid>
      <Title>Associate Solutions Engineer</Title>
      <Description><![CDATA[<p>About Us</p>
<p>At Cloudflare, we are on a mission to help build a better Internet. Today the company runs one of the world’s largest networks that powers millions of websites and other Internet properties for customers ranging from individual bloggers to SMBs to Fortune 500 companies.</p>
<p>The Cloudflare Associate Solution Engineering Program is a 12-month rotational experience designed to launch your career in pre-sales engineering. You&#39;ll combine technical depth, customer problem-solving, and business acumen to make Cloudflare&#39;s technology accessible and valuable for customers across Asia-Pacific.</p>
<p>Responsibilities</p>
<ul>
<li>Shadow customer calls and technical deep-dives with Enterprise and Strategic accounts</li>
<li>Build and deliver product demonstrations tailored to customer use cases (web security, performance, serverless computing)</li>
<li>Participate in workshops on Cloudflare technologies: Workers, Zero Trust, DNS, DDoS mitigation, WAF</li>
<li>Collaborate with Sales, Product, and Engineering teams to solve customer technical questions</li>
<li>Document customer requirements and translate them into solution architectures</li>
<li>Rotate between GCR, ANZ, and ASEAN customer teams every 4 months</li>
<li>Contribute to internal tooling, demo environments, or solution accelerators</li>
</ul>
<p>Requirements</p>
<ul>
<li>Have graduated within the past 2 years (or have equivalent demonstrated technical experience through boot camps, self-study, or professional work)</li>
<li>Can explain core networking concepts (e.g., how DNS resolution works, what happens when you visit a URL, difference between TCP/UDP)</li>
<li>Are available to start in July 2026 and commit to 12 months including regional rotations</li>
<li>Communicate fluently in English (written and verbal)</li>
<li>Can manage multiple concurrent projects with competing deadlines</li>
<li>Are authorized to work without sponsorship</li>
</ul>
<p>Nice to Have</p>
<ul>
<li>Internship or project experience in a customer-facing, consulting, or technical sales environment</li>
<li>Proficiency in Mandarin, Cantonese, or Bahasa Indonesia (for serving regional customers)</li>
<li>Scripting skills in Python, JavaScript, Bash, or similar</li>
<li>Hands-on experience with web technologies: HTML/CSS/JS, HTTP APIs, or cloud platforms (AWS/GCP/Azure)</li>
<li>Demonstrated ownership of technical projects (GitHub portfolio, conference talks, open-source contributions)</li>
</ul>
<p>Technologies you&#39;ll work with:</p>
<ul>
<li>Cloudflare&#39;s edge network</li>
<li>Workers (serverless)</li>
<li>Zero Trust security</li>
<li>DNS/CDN</li>
<li>DDoS mitigation</li>
<li>WAF</li>
<li>API Gateway</li>
<li>R2 storage</li>
<li>Stream</li>
<li>Images</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>entry</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Cloudflare&apos;s edge network, Workers (serverless), Zero Trust security, DNS/CDN, DDoS mitigation, WAF, API Gateway, R2 storage, Stream, Images, Python, JavaScript, Bash, HTML/CSS/JS, HTTP APIs, cloud platforms (AWS/GCP/Azure)</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Cloudflare</Employername>
      <Employerlogo>https://logos.yubhub.co/cloudflare.com.png</Employerlogo>
      <Employerdescription>Cloudflare is a technology company that helps build a better Internet by protecting and accelerating any Internet application online.</Employerdescription>
      <Employerwebsite>https://www.cloudflare.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/cloudflare/jobs/7817971</Applyto>
      <Location>Hybrid</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>5d1820c6-271</externalid>
      <Title>Senior Software Engineer - Data Platform</Title>
      <Description><![CDATA[<p>At Pave, we&#39;re building the industry&#39;s leading compensation platform. Our platform is perfecting the art and science of pay to give 8,500+ companies unparalleled confidence in every compensation decision.</p>
<p>Top tier companies like OpenAI, McDonald’s, Instacart, Atlassian, Synopsys, Stripe, Databricks, and Waymo use Pave, transforming every pay decision into a competitive advantage. $190+ billion in total compensation spend is managed in our workflows, and 70% of Forbes AI 50 use Pave to benchmark compensation.</p>
<p>The future of pay is real-time &amp; predictive, and we’re making it happen right now. We’ve raised $160M in funding from leading investors like Andreessen Horowitz, Index Ventures, Y Combinator, Bessemer Venture Partners, and Craft Ventures.</p>
<p><strong>The Data Platform Team @ Pave</strong></p>
<p>The Data Platform team owns the data infrastructure that makes Pave work, connecting Pave&#39;s platform to the HRIS, EMS, ATS, and partner systems that power our customers&#39; compensation workflows. Without reliable, high-quality integrations, none of Pave&#39;s products can deliver their full value. This is a high-leverage role at the core of Pave&#39;s platform strategy. This is also a rare opportunity to join as a founding engineer on the team - meaning you won&#39;t just be executing on a defined roadmap, but actively shaping how the platform evolves, what gets prioritized, and what engineering culture looks like from day one.</p>
<p>As a Senior Software Engineer on the Data Platform team, you will design and build the connectors, pipelines, and platform capabilities that enable Pave to serve enterprise customers at scale. You&#39;ll have significant influence over the architecture and reliability of a system that touches every Pave customer.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Design and build new HRIS, EMS, ATS, and partner integrations, with a focus on enterprise-grade systems like Workday, BambooHR, and ADP</li>
<li>Own integrations end-to-end, from connector design and data mapping through testing, rollout, and ongoing reliability</li>
<li>Build and improve observability tooling: monitoring, alerting, and debugging infrastructure that gives the team and customers confidence in pipeline health</li>
<li>Partner closely with product and go-to-market to prioritize connector coverage and unblock key deals</li>
<li>Identify and address systemic reliability and scalability issues in Pave’s data platform</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>5+ years of experience building and shipping production-grade software, with meaningful exposure to integrations, data pipelines, or API platforms</li>
<li>Strong understanding of integration patterns, API design, and the operational challenges of third-party data dependencies</li>
<li>Experience working on enterprise B2B products and an understanding of the reliability and data quality standards they require</li>
<li>Ability to lead technical projects with ambiguity; third-party APIs are rarely well-documented and requirements shift</li>
<li>Execution-oriented; eager to not only design solutions but dive into implementation and see them through to a reliable, monitored production state</li>
<li>Experience with our tech stack: TypeScript, Node.js, React, GCP</li>
</ul>
<p><strong>Compensation</strong></p>
<p>At Pave, we believe compensation should be as thoughtful as the people we hire. This role is structured as a B2B contract engagement, and your total rewards package includes meaningful equity, flexible PTO, and region-specific benefits designed around your life, not just your role. Your level and compensation are determined by your experience and how you show up throughout the interview process. We&#39;re always happy to walk you through how we think about leveling; just ask.</p>
<p><strong>Benefits</strong></p>
<p>At Pave, growth isn&#39;t a perk; it&#39;s the point. As you develop, your role expands, your responsibilities deepen, and your compensation reflects the impact you&#39;re making.</p>
<p>What we offer:</p>
<ul>
<li>Private Health Coverage: Comprehensive private healthcare to keep you and your wellbeing a priority, because great work starts with taking care of yourself.</li>
<li>Sports &amp; Wellness: A monthly wellness stipend to fuel your active lifestyle, whether that&#39;s a MultiSport membership or whatever keeps you moving.</li>
<li>Life &amp; Disability Protection: Life, AD&amp;D, and permanent disability coverage so you and your family are protected when it matters most.</li>
<li>Retirement Program: Monthly retirement contributions to help you build the kind of long-term financial security that lets you focus on the work in front of you.</li>
<li>Time That&#39;s Actually Yours: Flexible PTO to rest, recharge, and show up as your best self; no counting days, no guilt.</li>
<li>Work From Anywhere: The freedom to work from anywhere in the world for a full month each year, because life doesn&#39;t pause and neither should you.</li>
<li>Room to Keep Growing: A quarterly education stipend to invest in the skills and knowledge that matter most to you.</li>
<li>Equipment: Top-of-the-line Pave hardware waiting for you on day one; no setup anxiety, just hit the ground running.</li>
</ul>
<p><strong>Life @ Pave</strong></p>
<p>Founded in 2019 with a clear purpose and a team that has never wavered from it, Pave has grown into a global force in compensation management, giving thousands of companies the tools to take control, build confidence, and earn credibility in every pay decision they make. And we&#39;re just getting started. We are headquartered in San Francisco&#39;s Financial District, with regional hubs in New York City&#39;s Flatiron District, Salt Lake City, Kraków (Poland), and the United Kingdom. Wherever you&#39;re based, you&#39;ll find the same thing: people who genuinely care about the work, each other, and the customers that rely on Pave.</p>
<p>We run a hybrid culture that brings teams together in person on Monday, Tuesday, Thursday, and Friday. Every Friday, the whole company gathers for our Team Sync: breakfast, new hire welcomes, product updates, fireside chats, and yes, the occasional Kahoot. It&#39;s one of the things people notice when they join us: that we truly enjoy spending time together.</p>
<p>Our culture is shaped by five values we live every day:</p>
<ul>
<li>Be Intellectually Honest: Truth over comfort. We face reality clearly and speak directly, even when it&#39;s hard.</li>
<li>Play to Win: We&#39;re not here to participate. We&#39;re here to be the #1 compensation platform in the world, and we act like it.</li>
<li>Uphold the Pave Platinum Standard: We hold ourselves to the highest bar for our customers, our data, and each other.</li>
<li>One Team: We win and lose together. Titles don&#39;t drive decisions here; shared goals do.</li>
<li>Hug of Jawn: Hard to define, impossible to miss. Ask your recruiter.</li>
</ul>
<p>Our Vision: Unlock a labor market built on trust.</p>
<p>Our Mission: Build confidence in every compensation decision.</p>
<p>We build software that transforms how companies pay their people, and we believe the team behind that software deserves the same thoughtfulness. If you&#39;re ready to help shape the future of compensation alongside people who are smart, humble, and genuinely motivated by the problem we&#39;re solving, we&#39;d love to meet you.</p>
<p>Still deliberating? Just apply! We&#39;re always excited to meet people who are eager to contribute.</p>
]]></Description>
      <Jobtype>contract</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>TypeScript, Node.js, React, GCP, Integration patterns, API design, Third-party data dependencies</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Pave</Employername>
      <Employerlogo>https://logos.yubhub.co/pave.com.png</Employerlogo>
      <Employerdescription>Pave is a compensation platform that combines real-time compensation data with AI and machine learning to help companies make informed pay decisions.</Employerdescription>
      <Employerwebsite>https://pave.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/paveakatroveinformationtechnologies/jobs/4677919005</Applyto>
      <Location>Krakow, Poland</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>4fde2d89-11c</externalid>
      <Title>Research Engineer, Economic Research</Title>
      <Description><![CDATA[<p>As a Research Engineer on the Economic Research team, you will design, build, and maintain critical infrastructure that powers Anthropic&#39;s research on AI&#39;s economic impact. You will work with data systems from across Anthropic, including our research tools for privacy-preserving analysis.</p>
<p>The Economic Research team at Anthropic studies the economic implications of AI on individual, firm, and economy-wide outcomes. We build scalable systems to monitor AI usage patterns and directly measure the impact of AI adoption on real-world outcomes. We publish research and data that is clear-eyed about the economic effects of AI to help policymakers, businesses, and the public understand and navigate the transition to powerful AI.</p>
<p>In this role, you will work closely with teams across Anthropic, including Data Science and Analytics, Data Infrastructure, Societal Impacts, and Public Policy, to build scalable and robust data systems that support high-leverage, high-impact research. Strong candidates will have a track record of building data processing pipelines, architecting &amp; implementing high-quality internal infrastructure, working in a fast-paced startup environment, navigating ambiguity, and demonstrating an eagerness to develop their own research &amp; technical skills.</p>
<p>Responsibilities:</p>
<ul>
<li>Build and maintain data pipelines that process large-scale Claude usage logs into canonical, reusable datasets while maintaining user privacy.</li>
<li>Expand privacy-preserving tools to enable new analytic functionality to support research needs.</li>
<li>Design and implement novel data systems leveraging language models (e.g., CLIO) where traditional software engineering patterns don&#39;t yet exist.</li>
<li>Develop and maintain data pipelines that are interoperable across data sources (including ingesting external data) and are designed to support economic analysis.</li>
<li>Contribute to the strategic development of the economic research data foundations roadmap.</li>
<li>Ensure data reliability, integrity, and privacy compliance across all economic research data infrastructure.</li>
<li>Lead technical design discussions to ensure our infrastructure can support both current needs and future research directions.</li>
<li>Create documentation and best practices that enable self-serve data access for researchers while maintaining security and governance standards.</li>
<li>Partner closely with researchers, data scientists, policy experts, and other cross-functional partners to advance Anthropic&#39;s safety mission.</li>
</ul>
<p>You might be a good fit if you:</p>
<ul>
<li>Have experience working with Research Scientists and Economists on ambiguous AI and economic projects.</li>
<li>Have experience building and maintaining data infrastructure, large datasets, and internal tools in production environments.</li>
<li>Have experience with cloud infrastructure platforms such as AWS or GCP.</li>
<li>Take pride in writing clean, well-documented code in Python that others can build upon.</li>
<li>Are comfortable making technical decisions with incomplete information while maintaining high engineering standards.</li>
<li>Are comfortable getting up to speed quickly on unfamiliar codebases, and work well with engineers with different backgrounds across the organization.</li>
<li>Have a track record of using technical infrastructure to interface effectively with machine learning models.</li>
<li>Have experience deriving insights from imperfect data streams.</li>
<li>Have experience building systems and products on top of LLMs.</li>
<li>Have experience incubating and maturing tooling platforms used by a wide variety of stakeholders.</li>
<li>Have a passion for Anthropic&#39;s mission of building helpful, honest, and harmless AI and understanding its economic implications.</li>
<li>Have a &quot;full-stack mindset&quot;, not hesitating to do what it takes to solve a problem end-to-end, even if it requires going outside the original job description.</li>
<li>Have strong communication skills to collaborate effectively with economists, researchers, and cross-functional partners who may have varying levels of technical expertise.</li>
</ul>
<p>Strong candidates may also have:</p>
<ul>
<li>Background in econometrics, statistics, or quantitative social science research.</li>
<li>Experience building data infrastructure and data foundations for research.</li>
<li>Familiarity with large language models, AI systems, or ML research workflows.</li>
<li>Prior work on projects related to labor economics, technology adoption, or economic measurement.</li>
</ul>
<p>Some examples of our recent work:</p>
<ul>
<li>Anthropic Economic Index Report: Economic Primitives</li>
<li>Anthropic Economic Index Report: Uneven Geographic and Enterprise AI Adoption</li>
<li>Estimating AI productivity gains from Claude conversations</li>
<li>The Anthropic Economic Index</li>
</ul>
<p>Deadline to apply: None. Applications are reviewed on a rolling basis.</p>
<p>The annual compensation range for this role is listed below. For sales roles, the range provided is the role&#39;s On Target Earnings (&quot;OTE&quot;) range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.</p>
<p>Annual Salary: $300,000-$405,000 USD</p>
<p>Logistics:</p>
<ul>
<li>Minimum education: Bachelor&#39;s degree or an equivalent combination of education, training, and/or experience.</li>
<li>Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience.</li>
<li>Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position.</li>
<li>Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</li>
<li>Visa sponsorship: We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</li>
</ul>
<p>We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you&#39;re interested in this work. We think AI systems like the ones we&#39;re building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.</p>
<p>Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you&#39;re ever unsure about a communication, don&#39;t click any links; visit anthropic.com/careers directly for confirmed position openings.</p>
<p>How we&#39;re different: We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact, advancing our long-term goals of steerable, trustworthy AI, rather than work on small</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$300,000-$405,000 USD</Salaryrange>
      <Skills>Python, Cloud infrastructure platforms (AWS or GCP), Data infrastructure, Large datasets, Internal tools, Machine learning models, Econometrics, Statistics, Quantitative social science research, Large language models, AI systems, ML research workflows, Full-stack mindset, Strong communication skills, Ambiguity tolerance, Problem-solving skills, Collaboration skills</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a company that creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5071132008</Applyto>
      <Location>San Francisco, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>489d4d8c-49e</externalid>
      <Title>Solutions Architect, AI/Cloudflare Developer Platform</Title>
      <Description><![CDATA[<p>As a Solutions Architect, Cloudflare AI / Developer Platform and a member of the sales team, you will help customers understand the value proposition of the Cloudflare Developer Platform and demonstrate how to effectively build applications with our products.</p>
<p>Every day as a Solution Architect is different. You will utilize both technical and business skills to advise customers and sales teams, support strategic opportunities, architect innovative solutions, and develop proofs of concept / demonstrations.</p>
<p>Your technical knowledge of Cloudflare&#39;s products and system design will be vital to designing solutions that meet our customers&#39; needs and expectations. Serving as a trusted technical advisor, Solution Architects guide and enable clients, partners, and teams within Cloudflare on product capabilities, positioning and competitive intelligence.</p>
<p>You will form a tight feedback loop with product, product marketing, and technical pre-sales to refine and evolve our products.</p>
<p>The ideal candidate possesses a consultative mindset, demonstrable success working with customers, and deep, practical knowledge of modern web technologies, cloud architecture, and experience building on a distributed serverless platform.</p>
<p>No matter your background, you have natural curiosity and desire to solve problems, achieve goals, and design the most elegant and efficient solutions to address client needs.</p>
<p>A successful Solution Architect at Cloudflare is able to act as a trusted advisor for our customers, while balancing the technical and business needs of the role – actively building and regularly presenting technical solutions to varied audiences.</p>
<p>Responsibilities:</p>
<ul>
<li>Partner with the sales organization to drive revenue, new customers and pipeline of AI and Developer Platform solutions.</li>
<li>Lead technical discovery with customers and jointly architect best practice solutions to meet customer needs.</li>
<li>Collaborate with cross-functional teams including product management, sales, and marketing to drive developer platform revenue and customer adoption.</li>
<li>Present to strategic customers as an expert of our Developer Platform solutions.</li>
<li>Align Director and C-Level perceived business and technical value with Cloudflare developer solutions.</li>
<li>Provide succinct feedback to cross-functional teams to deliver relevant Developer content, use cases, customer stories, and data driven value propositions.</li>
</ul>
<p>Skill Requirements:</p>
<ul>
<li>5+ years of experience selling or supporting technical sales in the cloud computing industry.</li>
<li>Deep technical expertise across cloud infrastructure and AI/ML; you have built production systems that combine both as a solutions engineer, entrepreneur, or solution architect.</li>
<li>In-depth knowledge of at least one major public cloud provider (e.g., AWS, GCP, Azure).</li>
<li>Practical knowledge and experience designing systems. You have built and deployed a production web application either professionally or as a hobbyist and are able to clearly articulate the design and explain the considerations / trade-offs.</li>
<li>Software development experience delivering full-stack applications, preferably using modern JavaScript frameworks, a variety of databases, and Serverless tooling.</li>
<li>Strong understanding of developer workflows (branching, versioning, CI/CD practices, system integrations).</li>
</ul>
<ul>
<li>Knowledge of key market players/competitors in the cloud computing, AI and storage spaces.</li>
</ul>
<p>Other desirable skills include:</p>
<ul>
<li>You’ve built something on Cloudflare Workers.</li>
<li>AWS Solutions Architect or GCP Cloud Architect certifications.</li>
<li>Experience providing structured customer feedback to influence product direction.</li>
<li>Staying up to date with industry trends and advancements in cloud computing to inform product strategy and roadmap.</li>
</ul>
<p>Compensation:</p>
<p>This role is eligible to earn incentive compensation under Cloudflare’s Sales Compensation Plan. The estimated annual salary range includes the on-target incentive compensation that may be attained in this role under the Sales Compensation Plan.</p>
<p>For Bay Area based hires: Estimated annual salary of $212,000.00 - $292,000.00</p>
<p>Equity:</p>
<p>This role is eligible to participate in Cloudflare’s equity plan.</p>
<p>Benefits:</p>
<p>Cloudflare offers a complete package of benefits and programs to support you and your family. Our benefits programs can help you pay health care expenses, support caregiving, build capital for the future and make life a little easier and fun!</p>
<p>The below is a description of our benefits for employees in the United States, and benefits may vary for employees based outside the U.S.</p>
<p>Health &amp; Welfare Benefits:</p>
<ul>
<li>Medical/Rx Insurance</li>
<li>Dental Insurance</li>
<li>Vision Insurance</li>
<li>Flexible Spending Accounts</li>
<li>Commuter Spending Accounts</li>
<li>Fertility &amp; Family Forming Benefits</li>
<li>On-demand mental health support and Employee Assistance Program</li>
<li>Global Travel Medical Insurance</li>
</ul>
<p>Financial Benefits:</p>
<ul>
<li>Short and Long Term Disability Insurance</li>
<li>Life &amp; Accident Insurance</li>
<li>401(k) Retirement Savings Plan</li>
<li>Employee Stock Participation Plan</li>
</ul>
<p>Time Off:</p>
<ul>
<li>Flexible paid time off covering vacation and sick leave</li>
<li>Leave programs, including parental, pregnancy health, medical, and bereavement leave</li>
</ul>
<p>What Makes Cloudflare Special?</p>
<p>We’re not just a highly ambitious, large-scale technology company. We’re a highly ambitious, large-scale technology company with a soul. Fundamental to our mission to help build a better Internet is protecting the free and open Internet.</p>
<p>Project Galileo:</p>
<p>Since 2014, we&#39;ve equipped more than 2,400 journalism and civil society organizations in 111 countries with powerful tools to defend themselves against attacks that would otherwise censor their work, technology already used by Cloudflare’s enterprise customers, at no cost.</p>
<p>Athenian Project:</p>
<p>In 2017, we created the Athenian Project to ensure that state and local governments have the highest level of protection and reliability for free, so that their constituents have access to election information and voter registration.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>cloud infrastructure, AI/ML, public cloud provider, system design, modern web technologies, cloud architecture, distributed serverless platform, developer workflows, developer content, customer stories, data driven value propositions, Cloudflare Workers, AWS Solutions Architect, GCP Cloud Architect, structured customer feedback, industry trends, advancements in cloud computing</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Cloudflare</Employername>
      <Employerlogo>https://logos.yubhub.co/cloudflare.com.png</Employerlogo>
      <Employerdescription>Cloudflare is a technology company that helps build a better Internet by protecting and accelerating any Internet application online without adding hardware, installing software, or changing a line of code.</Employerdescription>
      <Employerwebsite>https://www.cloudflare.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/cloudflare/jobs/7505582</Applyto>
      <Location>Hybrid</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>22375926-26e</externalid>
      <Title>Senior IT Systems Engineer</Title>
      <Description><![CDATA[<p>We&#39;re seeking a strategic thinker and proven problem-solver with deep expertise in modern IT ecosystems. As a Sr. IT Systems Engineer, you&#39;ll drive automation, mature enterprise workforce identity and access management (IAM), and architect scalable, secure SaaS integrations.</p>
<p><strong>Responsibilities:</strong></p>
<ul>
<li>Lead the design, implementation, administration, and optimization of core SaaS platforms including Okta, Google Workspace, Slack, Atlassian, and other IT tools.</li>
<li>Own end-to-end support, monitoring, troubleshooting, and performance tuning of applications, systems, and their complex interconnections, ensuring high availability, security, and a seamless user experience.</li>
<li>Help architect and advance our workforce Identity and Access Management program, including configuration of Single Sign-On (SSO), lifecycle management, provisioning/deprovisioning, access governance, and policy enforcement.</li>
<li>Serve as the subject matter expert (SME) providing strategic technical guidance to support business expansion, system scalability, and infrastructure maturity.</li>
<li>Drive cross-functional knowledge sharing by authoring, maintaining, and evolving comprehensive IT documentation, runbooks, and architecture diagrams.</li>
<li>Proactively identify gaps, risks, and opportunities in the environment; lead initiatives to enhance security posture, operational efficiency, and resilience, prioritizing automation of manual and repetitive processes.</li>
<li>Evaluate emerging technologies, IAM trends, and automation platforms; develop business cases and lead proof-of-concepts or adoption recommendations.</li>
<li>Mentor junior engineers and collaborate with cross-functional teams to align IT capabilities with organisational goals.</li>
</ul>
<p><strong>Basic Qualifications:</strong></p>
<ul>
<li>8+ years of hands-on experience administering and optimising a broad portfolio of SaaS applications in a hybrid, high-growth environment, with advanced proficiency in our core stack: Okta (including Advanced Server Access &amp; Workflows), Google Workspace, Slack Enterprise, Atlassian, etc.</li>
<li>4+ years of deep experience with n8n, Okta Workflows and/or other leading iPaaS/automation platforms (e.g., Workato, Zapier, BetterCloud, custom integrations).</li>
<li>Expert-level knowledge of IAM principles and protocols: SSO, SAML, OIDC, OAuth 2.0, SCIM, JIT provisioning, SWA, RBAC, ABAC, and access governance best practices.</li>
<li>Strong experience designing and working with APIs for custom integrations, data flows, and automation.</li>
<li>Proficiency in scripting and automation for monitoring, alerting, and operational efficiency (e.g., Google Apps Manager (GAM), Python, Bash, PowerShell, Terraform, or similar); experience building custom solutions is highly valued.</li>
<li>Solid working knowledge and administrative experience in Azure, AWS, and/or GCP cloud platforms.</li>
<li>Exceptional analytical and troubleshooting skills with a proven track record of resolving sophisticated, cross-system incidents under pressure.</li>
<li>Demonstrated ability to deliver measurable business impact, own key deliverables, and drive projects to completion in fast-paced environments with competing priorities.</li>
<li>Comfortable adapting to dynamic requirements, handling time-sensitive escalations, and participating in on-call rotation.</li>
<li>Track record of success as a Senior IT Systems Engineer or equivalent in a fast-moving corporate or tech environment.</li>
<li>Okta certifications (e.g., Okta Certified Professional / Administrator / Consultant) strongly preferred; other relevant certifications (Google Workspace) are a plus.</li>
<li>Bachelor’s degree in Information Technology, Computer Science, or a related field (or equivalent demonstrated experience) is a plus.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$184,000 - $276,000 USD</Salaryrange>
      <Skills>Okta, Google Workspace, Slack, Atlassian, n8n, Okta Workflows, iPaaS/automation platforms, IAM principles and protocols, APIs for custom integrations, data flows, automation, scripting and automation, monitoring, alerting, operational efficiency, Azure, AWS, GCP cloud platforms, analytical and troubleshooting skills</Skills>
      <Category>IT</Category>
      <Industry>Technology</Industry>
      <Employername>xAI</Employername>
      <Employerlogo>https://logos.yubhub.co/xai.com.png</Employerlogo>
      <Employerdescription>xAI creates AI systems to understand the universe and aid humanity in its pursuit of knowledge. The team is small and highly motivated.</Employerdescription>
      <Employerwebsite>https://www.xai.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/xai/jobs/5071895007</Applyto>
      <Location>Palo Alto, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>9b10d521-d50</externalid>
      <Title>Senior Software Engineer, Infrastructure</Title>
      <Description><![CDATA[<p>We are looking for a Senior Software Engineer to join our Network Infrastructure team. As a member of this team, you will be working with talented engineers on cutting-edge technologies of cloud-native network stack from Layer 3 to Layer 7. You will contribute to key infrastructure components that connect all Airbnb users and services across the globe.</p>
<p>You will have the chance to define and influence large infrastructure initiatives such as global traffic load balancing and disaster recovery, next-gen service mesh, cross-region gateways, and edge security. Airbnb is a member of the Cloud Native Computing Foundation (CNCF) end-user community, and we work closely with the open-source community (e.g., k8s, istio) and peer companies to tackle cloud-native engineering challenges at scale.</p>
<p>In this role, you will:</p>
<ul>
<li>Work with open-source communities (e.g., istio) to build the next-generation service mesh for all Airbnb back-end services;</li>
<li>Build cross-region gateways and load balancers for global Airbnb services;</li>
<li>Work with external partners and internal engineering and security teams to deliver edge security systems that protect Airbnb services;</li>
<li>Design the multi-region network architecture on public clouds and build software and operation tools to manage Airbnb&#39;s production network;</li>
<li>Work with product and engineering teams to optimize the network performance for Airbnb services;</li>
</ul>
<p>You will be a full-cycle developer with strong ownership and experience building and operating high-scale, distributed systems across the full software life cycle. You will have excellent communication skills and the ability to work well within your team and with teams across the engineering organization.</p>
<p>You will be passionate about efficiency, availability, technical quality, and system quality. You will have previously led a team that is on-call for production infrastructure.</p>
<p>If you are passionate about building scalable and reliable systems, and you want to make an impact on the industry and open-source communities, then we want to hear from you!</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Virtual network architecture on public cloud providers (e.g., AWS, GCP, Azure), Network service offerings (e.g., VPC, Security Group, PrivateLink and related products.), Large-scale networking systems and software (e.g., Edge proxies, DNS, CDN, network gateways), Istio, Envoy, Full-cycle development, Communication skills, Team leadership, Cloud-native engineering, Open-source community, Peer companies, Cloud Native Computing Foundation (CNCF)</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Airbnb</Employername>
      <Employerlogo>https://logos.yubhub.co/airbnb.com.png</Employerlogo>
      <Employerdescription>Airbnb is a company that allows users to book unique stays and experiences in almost every country across the globe. It has grown to over 5 million hosts who have welcomed over 2 billion guest arrivals.</Employerdescription>
      <Employerwebsite>https://www.airbnb.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/airbnb/jobs/7391864</Applyto>
      <Location>Remote - Brazil</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>62900fcd-562</externalid>
      <Title>Security Engineer - Offensive Security</Title>
      <Description><![CDATA[<p>As an Offensive Security Engineer on the Proactive Threat team at Stripe, you will simulate the tactics, techniques, and procedures (TTPs) of real-world adversaries to uncover security risks across Stripe&#39;s products and infrastructure.</p>
<p>You&#39;ll conduct hands-on penetration testing, lead red team engagements, and collaborate with blue team counterparts to validate and improve detection and response capabilities. Your work will directly influence how Stripe builds, ships, and secures financial infrastructure used by millions of businesses worldwide.</p>
<p>Responsibilities:</p>
<ul>
<li>Conduct comprehensive penetration tests across web applications, APIs, cloud environments (AWS/GCP/Azure), mobile applications, and internal infrastructure.</li>
<li>Plan and execute red team engagements that emulate the TTPs of cyber and criminal threat actors targeting financial services, including initial access, lateral movement, persistence, and data exfiltration scenarios.</li>
<li>Perform assumed-breach and objective-based assessments to test detection and response capabilities in coordination with defensive teams.</li>
<li>Partner with detection engineering, threat intelligence, and incident response teams to validate security controls, identify coverage gaps, and improve detection fidelity.</li>
<li>Contribute adversary tradecraft insights to inform detection rule development, threat hunting hypotheses, and incident response playbooks.</li>
<li>Support incident investigations by providing offensive expertise, log analysis, and root cause analysis when required.</li>
<li>Design, develop, and maintain custom offensive tools, scripts, and automation frameworks to enhance assessment efficiency and coverage.</li>
<li>Build internal platforms and workflows that enable scalable, repeatable offensive operations.</li>
<li>Contribute to internal security tooling repositories and champion engineering best practices within the team.</li>
<li>Automate repetitive testing tasks, payload generation, and reporting workflows using modern development practices.</li>
<li>Produce clear, actionable reports that communicate technical findings, business risk, and remediation guidance to both technical and non-technical stakeholders.</li>
<li>Act as a subject-matter expert and primary point of contact for stakeholder teams engaged in offensive security programs and Stripe-wide security initiatives.</li>
<li>Lead offensive security projects end-to-end, mentor junior team members, and foster a culture of continuous learning and knowledge sharing.</li>
<li>Stay current with emerging threats, vulnerabilities, and attack techniques; share research internally and contribute to the broader security community.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, Go, Web application security, Cloud platforms (AWS, Azure, or GCP), Offensive tooling (Burp Suite, Cobalt Strike, Mythic, Sliver, BloodHound), Adversary tradecraft and frameworks (MITRE ATT&amp;CK), Excellent written and verbal communication skills, Experience conducting offensive security in fintech, financial services, or other highly regulated environments, Background in vulnerability research, exploit development, or CVE discovery, Experience collaborating with threat intelligence, detection engineering, or incident response teams (purple team operations), Familiarity with big data and log analysis tools (Splunk, Databricks, PySpark, osquery, etc.) for threat hunting or investigative support, Proficiency with AI/LLM-assisted development tools (e.g., Claude Code, Cursor, GitHub Copilot) and experience applying them to offensive security workflows</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Stripe</Employername>
      <Employerlogo>https://logos.yubhub.co/stripe.com.png</Employerlogo>
      <Employerdescription>Stripe is a financial infrastructure platform for businesses. It has a large user base, with millions of companies using its services.</Employerdescription>
      <Employerwebsite>https://stripe.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/stripe/jobs/7820898</Applyto>
      <Location>Ireland</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>ba0a936c-9b5</externalid>
      <Title>Partner Solution Architect (pre-sales)</Title>
      <Description><![CDATA[<p>We are looking for a Partner Solutions Architect to lead technical strategy and enablement for our ecosystem in the ANZ region. This is a hands-on builder role. You will be responsible for ensuring our partners are not only articulating Elastic&#39;s value but are technically capable of architecting, building, and validating complex solutions.</p>
<p>As a Partner Solutions Architect, you will:</p>
<ul>
<li>Own Technical Engagement Plans (TEPs) for focus partners, establishing long-term technical roadmaps at the CTO and Practice Lead level.</li>
<li>Guide partners through high-stakes Technical Validation cycles, ensuring Elastic solutions are built to best-practice standards.</li>
<li>Lead &#39;one-to-many&#39; technical &#39;Build-a-thons&#39; and hands-on laboratory sessions that empower partner engineers to lead their own implementations.</li>
<li>Build deep relationships with partner pre-sales teams to guide them through the &#39;how-to&#39; of complex Search AI, Observability, and Security architectures at the configuration level.</li>
<li>Collaborate on &#39;design wins&#39; by developing repeatable technical blueprints.</li>
</ul>
<p>To be successful in this role, you will require:</p>
<ul>
<li>Direct, hands-on experience with the Elastic Stack (ELK) or similar distributed search/analytics technologies (e.g., OpenSearch, Solr, Splunk, Datadog).</li>
<li>8+ years of experience in technical roles.</li>
<li>Proven ability to design and build technical prototypes, ingest complex datasets, and optimize search/indexing performance.</li>
<li>Hands-on experience with Kubernetes, Docker, and Infrastructure as Code (Terraform) on AWS, Azure, or GCP.</li>
<li>3+ years in a partner-facing role, with a focus on building technical practices and enabling third-party engineering teams.</li>
<li>The ability to translate deep technical capabilities into scalable partner-led solutions.</li>
</ul>
<p>If you are a motivated and experienced professional with a passion for technology and partnership development, we encourage you to apply for this exciting opportunity.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Elastic Stack (ELK), OpenSearch, Solr, Splunk, Datadog, Kubernetes, Docker, Infrastructure as Code (Terraform), AWS, Azure, GCP</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Elastic</Employername>
      <Employerlogo>https://logos.yubhub.co/elastic.co.png</Employerlogo>
      <Employerdescription>Elastic is a Search AI company that enables everyone to find the answers they need in real time, using all their data, at scale. Their platform is used by more than 50% of the Fortune 500.</Employerdescription>
      <Employerwebsite>https://www.elastic.co/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/elastic/jobs/7757097</Applyto>
      <Location>Sydney, Australia</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>f329c77b-25d</externalid>
      <Title>Senior Solutions Engineer- West Coast</Title>
      <Description><![CDATA[<p>We are looking for a Senior Solutions Engineer to join our team on the West Coast. As a Senior Solutions Engineer, you will be responsible for collaborating with account executives to develop and execute territory and account strategies to maximise the Okta opportunity in those accounts.</p>
<p>Your duties will include conducting research and discovery to understand customer requirements and communicating the business value of solving technology problems using cloud technology. You will also be responsible for executing the delivery of POCs for customers with complex use cases, collaborating with other Okta engineering teams as needed.</p>
<p>To be successful in this role, you will need to have a strong understanding of Identity &amp; Access Management (IAM) and experience with cloud platforms such as AWS, Azure, and GCP. You will also need to be an elite communicator and able to identify, map, and manage multiple personas.</p>
<p>This is a hybrid role, requiring in-person onboarding and travel to an office in the U.S. during the first week of employment. The OTE range for this position is between $192,000-$323,000 USD, depending on location and experience.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$192,000-$323,000 USD</Salaryrange>
      <Skills>pre-sales engineering experience, Identity &amp; Access Management (IAM), cloud platforms (AWS, Azure, GCP), elite communication skills, ability to identify, map, and manage multiple personas, web development (JavaScript, HTML, frontend frameworks), mobile development (iOS, Android), backend development (Java, C#, Node.js, Python, PHP, Ruby)</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Okta</Employername>
      <Employerlogo>https://logos.yubhub.co/okta.com.png</Employerlogo>
      <Employerdescription>Okta builds the trusted, neutral infrastructure that enables organisations to safely embrace the new era of AI.</Employerdescription>
      <Employerwebsite>https://www.okta.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/okta/jobs/7592210</Applyto>
      <Location>Arizona</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>44ca68dc-996</externalid>
      <Title>Senior Software Engineer - Fullstack</Title>
      <Description><![CDATA[<p>We are seeking a Senior Software Engineer - Fullstack to join our team. As a Full Stack software engineer, you will work with your team and product management to make insights from data simple. We are looking for engineers that are customer obsessed, who can take on the full scope of the product and user experience beyond the technical implementation. You&#39;ll set the foundation for how we build robust, scalable and delightful products.</p>
<p>Some example experiences you&#39;ll create for our customers, spanning the full project lifecycle from loading data and visualizing results to creating statistical models and deploying production artifacts, include:</p>
<ul>
<li>Simple workflows to create, configure, and manage large-scale compute clusters, networks, and data sources.</li>
<li>Creating, deploying, testing, and upgrading complex data pipelines with powerful features to visualize data graphs.</li>
<li>Seamless onboarding and management for all members of an organisation to become data-driven.</li>
<li>A great SQL-centric data exploration and dashboarding experience on Databricks.</li>
<li>An interactive environment for collaborative data projects at massive scale with an easy path to production.</li>
</ul>
<p>What we look for:</p>
<ul>
<li>5+ years of experience with HTML, CSS, and JavaScript.</li>
<li>Passion for user experience and design, with a deep understanding of front-end architecture.</li>
<li>Comfortable working towards a multi-year vision with incremental deliverables.</li>
<li>Motivated by delivering customer value.</li>
<li>Experience with modern JavaScript frameworks (e.g., React, Angular, Vue.js, or Ember).</li>
<li>5+ years of experience with server-side web technologies (e.g., Node.js, Java, Python, Scala, C#, C++, Go).</li>
<li>Good knowledge of SQL.</li>
<li>Experience with cloud technologies, e.g., AWS, Azure, GCP, Docker, or Kubernetes.</li>
<li>Experience developing large-scale distributed systems.</li>
</ul>
<p>Pay Range Transparency:</p>
<p>Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role is listed below and represents the expected salary range for non-commissionable roles or on-target earnings for commissionable roles. Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location. Based on the factors above, Databricks anticipates utilizing the full width of the range. The total compensation package for this position may also include eligibility for annual performance bonus, equity, and benefits.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$157,700-$213,800 USD</Salaryrange>
      <Skills>HTML, CSS, JavaScript, Node.js, Java, Python, Scala, C#, C++, Go, SQL, AWS, Azure, GCP, Docker, Kubernetes</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks builds and runs the world&apos;s best Data Intelligence Platform, serving over 10,000 organisations worldwide.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/6544403002</Applyto>
      <Location>Seattle, Washington</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>e5a3deb2-908</externalid>
      <Title>Senior Software Engineer, Inference</Title>
      <Description><![CDATA[<p>Job Title: Senior Software Engineer, Inference</p>
<p>About the Role:</p>
<p>Our Inference team is responsible for building and maintaining the critical systems that serve Claude to millions of users worldwide. We bring Claude to life by serving our models via the industry&#39;s largest compute-agnostic inference deployments. We are responsible for the entire stack from intelligent request routing to fleet-wide orchestration across diverse AI accelerators.</p>
<p>The team has a dual mandate: maximizing compute efficiency to serve our explosive customer growth, while enabling breakthrough research by giving our scientists the high-performance inference infrastructure they need to develop next-generation models. We tackle complex, distributed systems challenges across multiple accelerator families and emerging AI hardware running in multiple cloud platforms.</p>
<p>Responsibilities:</p>
<ul>
<li>Designing intelligent routing algorithms that optimize request distribution across thousands of accelerators</li>
<li>Autoscaling our compute fleet to dynamically match supply with demand across production, research, and experimental workloads</li>
<li>Building production-grade deployment pipelines for releasing new models to millions of users</li>
<li>Integrating new AI accelerator platforms to maintain our hardware-agnostic competitive advantage</li>
<li>Contributing to new inference features (e.g., structured sampling, prompt caching)</li>
<li>Supporting inference for new model architectures</li>
<li>Analyzing observability data to tune performance based on real-world production workloads</li>
<li>Managing multi-region deployments and geographic routing for global customers</li>
</ul>
<p>Requirements:</p>
<ul>
<li>Significant software engineering experience, particularly with distributed systems</li>
<li>Results-oriented, with a bias towards flexibility and impact</li>
<li>Ability to pick up slack, even if it goes outside your job description</li>
<li>Willingness to learn more about machine learning systems and infrastructure</li>
<li>Thrive in environments where technical excellence directly drives both business results and research breakthroughs</li>
<li>Care about the societal impacts of your work</li>
</ul>
<p>Benefits:</p>
<ul>
<li>Competitive compensation and benefits</li>
<li>Optional equity donation matching</li>
<li>Generous vacation and parental leave</li>
<li>Flexible working hours</li>
<li>Lovely office space in which to collaborate with colleagues</li>
</ul>
<p>Note: The salary range for this role is €235,000-€295,000 EUR per year.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>€235,000-€295,000 EUR per year</Salaryrange>
      <Skills>High-performance, large-scale distributed systems, Implementing and deploying machine learning systems at scale, Load balancing, request routing, or traffic management systems, LLM inference optimization, batching, and caching strategies, Kubernetes and cloud infrastructure (AWS, GCP), Python or Rust</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation that creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/4641822008</Applyto>
      <Location>Dublin, IE</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>eda5b2b8-a68</externalid>
      <Title>Senior Solutions Architect - AI/BI</Title>
      <Description><![CDATA[<p>We are seeking a Senior Solutions Architect - AI/BI to join our Field Engineering team in London. The successful candidate will be responsible for executing Databricks&#39; strategic Product Operating Model, providing an enhanced focus on earlier-stage, highly prioritized product lines to establish product-market fit and set the course for rapid revenue growth.</p>
<p>As a Senior Solutions Architect - AI/BI, you will work in partnership with direct account teams to jointly engage clients, foster the necessary relationships, position the specific product line in depth, and give clients compelling reasons to adopt and grow their usage of that product.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Collaborating with GTM leadership and account teams to design and execute high-impact engagement strategies across your territory.</li>
<li>Serving as a trusted advisor, expert Solutions Architect, and champion, building technical credibility with stakeholders to drive product adoption and vision.</li>
<li>Enabling clients at scale through workshops and developing customer-facing collateral that helps increase technical knowledge and thought leadership.</li>
<li>Influencing product roadmap by translating field-derived, data-driven insights into strategic recommendations for Product and Engineering teams.</li>
</ul>
<p>To succeed in this role, you will need:</p>
<ul>
<li>6+ years in a customer-facing, pre-sales or consulting role influencing technical executives, driving high-level data strategy and product adoption.</li>
<li>Proven ability to co-plan large territories with Account Executives and operate in a highly coordinated, cross-functional effort across GTM and R&amp;D teams.</li>
<li>Experience collaborating with Global System Integrators (GSIs) and third-party consulting organizations to drive customer outcomes.</li>
<li>Proficient in programming, debugging, and problem-solving using SQL and Python.</li>
<li>Hands-on experience building solutions within major public cloud environments (AWS, Azure, or GCP).</li>
<li>Broad experience and understanding across two or more of the following fields: data engineering, data warehousing, AI, ML, governance, transactional systems, app development, and streaming.</li>
</ul>
<p>If you are a motivated and experienced professional with a passion for data and AI, we encourage you to apply for this exciting opportunity.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Experience in designing and delivering cloud-based Data Visualisation and Analytics Solutions, Ability to advise customers in lakehouse analytics architecture, Certification and/or demonstrated competence in data visualisation and analytics systems along with one of Azure, AWS or GCP cloud providers, Demonstrated competence in the Lakehouse architecture including hands-on experience with Apache Spark, Python and SQL</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a data intelligence platform used by over 10,000 organizations worldwide.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8407183002</Applyto>
      <Location>London, United Kingdom</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>85f1f87e-70f</externalid>
      <Title>Resident Solutions Architect - Financial Services</Title>
      <Description><![CDATA[<p>As a Senior Big Data Solutions Architect (Sr Resident Solutions Architect) in our Professional Services team, you will work with clients on short to medium term customer engagements on their big data challenges using the Databricks platform.</p>
<p>You will deliver data engineering, data science, and cloud technology projects that require integrating with client systems, training, and other technical tasks, helping customers get the most value out of their data.</p>
<p>RSAs are billable and know how to complete projects according to specification with excellent customer service.</p>
<p>You will report to the regional Manager/Lead.</p>
<p>The impact you will have:</p>
<ul>
<li>You will work on a variety of impactful customer technical projects, which may include designing and building reference architectures, creating how-to guides, and productionalizing customer use cases</li>
<li>Work with engagement managers to scope a variety of professional services work with input from the customer</li>
<li>Guide strategic customers as they implement transformational big data projects and third-party migrations, including end-to-end design, build, and deployment of industry-leading big data and AI applications</li>
<li>Consult on architecture and design; bootstrap hands-on projects that lead to a customer&#39;s successful understanding, evaluation, and adoption of Databricks</li>
<li>Provide an escalated level of support for customer operational issues</li>
<li>Work with the Databricks technical team, Project Manager, Architect, and customer team to ensure the technical components of the engagement are delivered to meet the customer&#39;s needs</li>
<li>Work with Engineering and Databricks Customer Support to provide product and implementation feedback and to guide rapid resolution of engagement-specific product and support issues</li>
</ul>
<p>What we look for:</p>
<ul>
<li>9+ years of experience in data engineering, data platforms &amp; analytics</li>
<li>Comfortable writing code in either Python or Scala</li>
<li>Working knowledge of two or more common cloud ecosystems (AWS, Azure, GCP), with expertise in at least one</li>
<li>Deep experience with distributed computing with Apache Spark™ and knowledge of Apache Spark™ runtime internals</li>
<li>Familiarity with CI/CD for production deployments</li>
<li>Working knowledge of MLOps</li>
<li>Capable of designing and deploying highly performant end-to-end data architectures</li>
<li>Experience with technical project delivery, including managing scope and timelines</li>
<li>Documentation and white-boarding skills</li>
<li>Experience working with clients and managing conflicts</li>
<li>Experience building scalable streaming and batch solutions using cloud-native components</li>
<li>Willingness to travel to customers up to 20% of the time</li>
</ul>
<p>Nice to have: Databricks Certification</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$180,656-$248,360 USD</Salaryrange>
      <Skills>data engineering, data platforms &amp; analytics, Python, Scala, Cloud ecosystems (AWS, Azure, GCP), Apache Spark, CI/CD for production deployments, MLOps, end-to-end data architectures, technical project delivery, documentation and white-boarding skills, client management, Databricks Certification</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified and democratized data, analytics, and AI platform.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8461327002</Applyto>
      <Location>Austin, Texas</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
  </jobs>
</source>