{"version":"0.1","company":{"name":"YubHub","url":"https://yubhub.co","jobsUrl":"https://yubhub.co/jobs/skill/data-pipeline"},"x-facet":{"type":"skill","slug":"data-pipeline","display":"Data Pipeline","count":100},"x-feed-size-limit":100,"x-feed-sort":"enriched_at desc","x-feed-notice":"This feed contains at most 100 jobs (the most recently enriched). For the full corpus, use the paginated /stats/by-facet endpoint or /search.","x-generator":"yubhub-xml-generator","x-rights":"Free to redistribute with attribution: \"Data by YubHub (https://yubhub.co)\"","x-schema":"Each entry in `jobs` follows https://schema.org/JobPosting. YubHub-native raw fields carry `x-` prefix.","jobs":[{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_6690b2fa-cab"},"title":"(Senior) Team Lead Data Analytics (all genders)","description":"<p>At Holidu, data isn&#39;t just a support function, it&#39;s how we make decisions. The Analytics team builds the products and foundations that keep the whole organisation sharp, from day-to-day operations to long-term strategy.</p>\n<p>This role is on-site in Munich, with two office days per week.</p>\n<p>As a Senior Team Lead Data Analytics, you will lead one of Holidu&#39;s core analytics teams, a function at the intersection of data, strategy, and real business impact. The team has four direct reports and entails collaborating cross-functionally with data engineers and data scientists.</p>\n<p>Engage with senior leadership on strategic projects, providing insights that influence product strategy, internal operations, and revenue growth.</p>\n<p>You and your team will support a range of stakeholders across the company (e.g. Customer Support, Host Experience, Sales and Account Management).</p>\n<p>As a member of the BI leadership team, you will help shape the department strategy and the future of AI-powered data products.</p>\n<p>Understand problems and identify opportunities across a diverse range of stakeholder use cases, translating them into analytical requirements and communicating complex findings clearly to both technical and commercial audiences.</p>\n<p>Lead from the front: this role carries meaningful individual contributor responsibility. You&#39;ll be expected to do real analytical work, diving deep into the data, building solutions, and setting the bar for quality in your team.</p>\n<p>Shape the future of analytics at Holidu by recruiting top talent, setting clear goals, and developing your team personally and professionally.</p>\n<p>The ideal candidate will have 5+ years of data analytics experience, people management experience, a collaborative mindset, a mission-driven mentality, excellent analytical and technical skills, and a genuine commitment to AI enablement.</p>\n<p>Impact: Shape the future of travel with products used by millions of guests and thousands of hosts. At Holidu ideas become products, data drives decisions, and iteration fuels fast learning. Your work matters - and you’ll see the impact.</p>\n<p>Learning: Grow professionally in a culture that thrives on curiosity and feedback. You’ll learn from outstanding colleagues, collaborate across disciplines, and benefit from mentorship, and personal learning budgets - with a strong focus on AI.</p>\n<p>Great People: Join a team of smart, motivated and international colleagues who challenge and support each other. We celebrate wins and keep our culture fun, ambitious and human. 
Our customers are guests and hosts - people we can all relate to - making work meaningful and energizing.</p>\n<p>Technology: Work in a modern tech environment. You’ll experience the pace of a scale-up combined with the stability of a proven business model, enabling you to build, test, and improve continuously.</p>\n<p>Flexibility: Work a hybrid setup with 50% in-office time for collaboration, and spend up to 8 weeks a year from other inspiring locations. You’ll stay connected through regular events and meet-ups across our almost 30 offices.</p>\n<p>Perks on Top: Of course, we also offer travel benefits, gym discounts, and other perks to keep you energized - but what truly sets us apart is the chance to grow in a dynamic industry, alongside amazing people, while having fun along the way.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_6690b2fa-cab","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Holidu Hosts GmbH","sameAs":"https://holidu.jobs.personio.com","logo":"https://logos.yubhub.co/holidu.jobs.personio.com.png"},"x-apply-url":"https://holidu.jobs.personio.com/job/2598226","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"Full-time","x-salary-range":null,"x-skills-required":["Database: AWS Stack (Redshift, Athena, Glue, S3)","Data Pipelines: Airflow, dbt","Data Visualisation: Looker","Data Analytics: SQL, Python","Collaboration: Git, Jira, Confluence, Slack"],"x-skills-preferred":[],"datePosted":"2026-04-18T22:13:28.264Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Munich, Germany"}},"employmentType":"FULL_TIME","occupationalCategory":"Technology","industry":"Travel Technology","skills":"Database: AWS Stack (Redshift, Athena, Glue, S3), Data Pipelines: Airflow, dbt, Data Visualisation: Looker, Data Analytics: SQL, Python, Collaboration: Git, Jira, Confluence, Slack"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_c7e58f60-5fa"},"title":"Software Engineer - Learning Engineering and Data (LEaD) Program","description":"<p>As a member of our Miami-based Learning Engineering and Data (LEaD) program, you will work alongside technology mentors and leaders to develop and maintain applications and tools spanning front-office, middle-office, and back-office functions in a dynamic and fast-paced environment.</p>\n<p>Our technology teams are looking for Software Engineers with C++, Python, or Java to design, implement, and maintain systems supporting our technology business functions.</p>\n<p>Candidate is expected to:</p>\n<ul>\n<li>Work closely with technology teams to develop requirements and specifications for varying projects</li>\n<li>Take part in the development and enhancement of the backend distributed system</li>\n<li>Apply AI/ML (deep learning, natural language processing, large language models) to practical and comprehensive technology solutions</li>\n</ul>\n<p>Qualifications/Skills Required:</p>\n<ul>\n<li>2-5 years of experience working with C++, Python, or Java</li>\n<li>Experience with ML libraries, Pandas, NumPy, FastAPI (Python), Boost (C++), Spring Boot (Java)</li>\n<li>Must be comfortable working in both Unix/Linux and Windows environments</li>\n<li>Good understanding of various design patterns</li>\n<li>Strong analytical and mathematical skills along with an interest/ability to quickly learn 
additional languages and quantitative concepts</li>\n<li>Solid communication skills</li>\n<li>Able to work collaboratively in a fast-paced environment with a passion to solving complex problems</li>\n<li>Detail oriented, organized, demonstrating thoroughness and strong ownership of work</li>\n</ul>\n<p>Desirable Skills/Knowledge:</p>\n<ul>\n<li>Bachelor or Master&#39;s degree in Computer Science, Applied Mathematics, Statistics, Data Science/ML/AI, or a related technical or engineering field</li>\n<li>Demonstrable passion for developing LLM-powered products whether that is through commercial experience or open source/academic projects you have worked on in your own time</li>\n<li>Hands-on experience building ML and data pipeline architectures</li>\n<li>Understanding of distributed messaging systems</li>\n<li>Experience with Docker/Kubernetes, microservices architecture in a cloud environment (AWS, GCP preferred)</li>\n<li>Experience with relational and non-relational database platforms</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_c7e58f60-5fa","directApply":true,"hiringOrganization":{"@type":"Organization","name":"IT LEad Program","sameAs":"https://mlp.eightfold.ai","logo":"https://logos.yubhub.co/mlp.eightfold.ai.png"},"x-apply-url":"https://mlp.eightfold.ai/careers/job/755953879362","x-work-arrangement":"onsite","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["C++","Python","Java","ML libraries","Pandas","NumPy","FastAPI","Boost","Spring Boot"],"x-skills-preferred":["Bachelor or Master's degree in Computer Science, Applied Mathematics, Statistics, Data Science/ML/AI, or a related technical or engineering field","Demonstrable passion for developing LLM-powered products","Hands-on experience building ML and data pipeline architectures","Understanding of distributed messaging systems","Experience with Docker/Kubernetes, microservices architecture in a cloud environment (AWS, GCP preferred)","Experience with relational and non-relational database platforms"],"datePosted":"2026-04-18T22:13:11.242Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Miami, Florida, United States of America"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Finance","skills":"C++, Python, Java, ML libraries, Pandas, NumPy, FastAPI, Boost, Spring Boot, Bachelor or Master's degree in Computer Science, Applied Mathematics, Statistics, Data Science/ML/AI, or a related technical or engineering field, Demonstrable passion for developing LLM-powered products, Hands-on experience building ML and data pipeline architectures, Understanding of distributed messaging systems, Experience with Docker/Kubernetes, microservices architecture in a cloud environment (AWS, GCP preferred), Experience with relational and non-relational database platforms"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_1a20521b-6ce"},"title":"Senior Execution Quantitative Analyst - Fixed Income","description":"<p>We are seeking a Senior Execution Quantitative Analyst to lead the expansion of our central execution capabilities into fixed income markets, covering corporate credit (IG/HY), Treasuries (cash and futures), and interest rate swaps.</p>\n<p>This is a hands-on role requiring deep fixed income market structure knowledge combined 
with strong quantitative and software development skills. The successful candidate will be expected to assess the firm&#39;s existing data and workflow landscape, identify and size near-term P&amp;L opportunities, and lead the build-out of execution and analysis infrastructure.</p>\n<p><strong>Principal Responsibilities</strong></p>\n<ul>\n<li>Assess the firm&#39;s existing fixed income data assets (dealer axes, evaluated pricing, TRACE prints, swap SDR data, futures market data) and design a coherent real-time and historical data layer to support execution and analysis</li>\n<li>Identify and size near-term opportunities in execution quality improvement, transaction cost reduction, and flow internalization across credit, rates, and swaps</li>\n<li>Design, build, and operate internal execution algorithms covering the full fixed income liquidity spectrum, from liquid on-the-run Treasuries to illiquid corporate bonds,using RFQ, click-to-trade, and direct connectivity workflows</li>\n<li>Build transaction cost analysis and pre-trade cost models for fixed income instruments;</li>\n<li>Partner with portfolio managers and traders to understand flow characteristics and communicate execution analytics clearly</li>\n<li>Recruit and mentor junior quants and engineers as the platform scales</li>\n</ul>\n<p><strong>Qualifications / Skills Required</strong></p>\n<ul>\n<li>10+ years of relevant experience in fixed income electronic trading, execution, or quantitative research on the buy side or sell side</li>\n<li>Hands-on experience building execution infrastructure for institutional fixed income: RFQ and/or click-to-trade workflows, FIX protocol connectivity, and integration with major electronic venues</li>\n<li>Experience building TCA or cost models for fixed income instruments, including illiquid and sparsely traded securities</li>\n<li>Strong programming skills; experience with data pipelines and market data APIs</li>\n<li>Solid quantitative background; degree in Mathematics, Computer Science, Engineering, Physics, or a related field</li>\n<li>Demonstrated ability to translate data analysis into actionable P&amp;L estimates and communicate findings to non-technical stakeholders</li>\n<li>Experience as a hands-on development lead, with a track record of taking projects from inception to production</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_1a20521b-6ce","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Electronic Trading Solutions","sameAs":"https://mlp.eightfold.ai","logo":"https://logos.yubhub.co/mlp.eightfold.ai.png"},"x-apply-url":"https://mlp.eightfold.ai/careers/job/755954333818","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["fixed income electronic trading","execution","quantitative research","RFQ and/or click-to-trade workflows","FIX protocol connectivity","integration with major electronic venues","TCA or cost models for fixed income instruments","data pipelines","market data APIs","quantitative background","degree in Mathematics, Computer Science, Engineering, Physics, or a related field"],"x-skills-preferred":[],"datePosted":"2026-04-18T22:13:00.980Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"New York, New York, United States of 
America"}},"employmentType":"FULL_TIME","occupationalCategory":"Finance","industry":"Finance","skills":"fixed income electronic trading, execution, quantitative research, RFQ and/or click-to-trade workflows, FIX protocol connectivity, integration with major electronic venues, TCA or cost models for fixed income instruments, data pipelines, market data APIs, quantitative background, degree in Mathematics, Computer Science, Engineering, Physics, or a related field"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_7275ef33-009"},"title":"Staff Data Engineer","description":"<p>At Bayer, we&#39;re seeking a Staff Data Engineer to join our team. As a Staff Data Engineer, you will design and lead the implementation of data flows to connect operational systems, data for analytics and business intelligence (BI) systems. You will recognize opportunities to reuse existing data flows, lead the build of data streaming systems, optimize the code to ensure processes perform optimally, and lead work on database management.</p>\n<p>Communicating Between Technical and Non-Technical Colleagues</p>\n<p>As a Staff Data Engineer, you will communicate effectively with technical and non-technical stakeholders, support and host discussions within a multidisciplinary team, and be an advocate for the team externally.</p>\n<p>Data Analysis and Synthesis</p>\n<p>You will undertake data profiling and source system analysis, present clear insights to colleagues to support the end use of the data.</p>\n<p>Data Development Process</p>\n<p>You will design, build and test data products that are complex or large scale, build teams to complete data integration services.</p>\n<p>Data Innovation</p>\n<p>You will understand the impact on the organization of emerging trends in data tools, analysis techniques and data usage.</p>\n<p>Data Integration Design</p>\n<p>You will select and implement the appropriate technologies to deliver resilient, scalable and future-proofed data solutions and integration pipelines.</p>\n<p>Data Modeling</p>\n<p>You will produce relevant data models across multiple subject areas, explain which models to use for which purpose, understand industry-recognised data modelling patterns and standards, and when to apply them, compare and align different data models.</p>\n<p>Metadata Management</p>\n<p>You will design an appropriate metadata repository and present changes to existing metadata repositories, understand a range of tools for storing and working with metadata, provide oversight and advice to more inexperienced members of the team.</p>\n<p>Problem Resolution</p>\n<p>You will respond to problems in databases, data processes, data products and services as they occur, initiate actions, monitor services and identify trends to resolve problems, determine the appropriate remedy and assist with its implementation, and with preventative measures.</p>\n<p>Programming and Build</p>\n<p>You will use agreed standards and tools to design, code, test, correct and document moderate-to-complex programs and scripts from agreed specifications and subsequent iterations, collaborate with others to review specifications where appropriate.</p>\n<p>Technical Understanding</p>\n<p>You will understand the core technical concepts related to the role, and apply them with guidance.</p>\n<p>Testing</p>\n<p>You will review requirements and specifications, and define test conditions, identify issues and risks associated with work, analyse and report test 
activities and results.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_7275ef33-009","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Bayer","sameAs":"https://talent.bayer.com","logo":"https://logos.yubhub.co/talent.bayer.com.png"},"x-apply-url":"https://talent.bayer.com/careers/job/562949976928777","x-work-arrangement":"remote","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"$114,400 to $171,600","x-skills-required":["Proficiency in programming language such as Python or Java","Experience with Big Data technologies such as Hadoop, Spark, and Kafka","Familiarity with ETL processes and tools","Knowledge of SQL and NoSQL databases","Strong understanding of relational databases","Experience with data warehousing solutions","Proficiency with cloud platforms","Expertise in data modeling and design","Experience in designing and building scalable data pipelines","Experience with RESTful APIs and data integration"],"x-skills-preferred":["Relevant certifications (e.g., GCP Certified, AWS Certified, Azure Certified)","Bachelor's degree in Computer Science, Data Engineering, Information Technology, or a related field","Strong analytical and communication skills","Ability to work collaboratively in a team environment","High level of accuracy and attention to detail"],"datePosted":"2026-04-18T22:12:56.654Z","jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Healthcare","skills":"Proficiency in programming language such as Python or Java, Experience with Big Data technologies such as Hadoop, Spark, and Kafka, Familiarity with ETL processes and tools, Knowledge of SQL and NoSQL databases, Strong understanding of relational databases, Experience with data warehousing solutions, Proficiency with cloud platforms, Expertise in data modeling and design, Experience in designing and building scalable data pipelines, Experience with RESTful APIs and data integration, Relevant certifications (e.g., GCP Certified, AWS Certified, Azure Certified), Bachelor's degree in Computer Science, Data Engineering, Information Technology, or a related field, Strong analytical and communication skills, Ability to work collaboratively in a team environment, High level of accuracy and attention to detail","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":114400,"maxValue":171600,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_9ca997fb-218"},"title":"Quantitative Developer","description":"<p>We are building a world-class systematic data platform that will power the next generation of our systematic portfolio engines.</p>\n<p>The systematic data group is looking for a Quantitative Developer to join our growing team. 
The team consists of content specialists, data scientists, engineers, and quant developers who are responsible for discovering, maintaining, and analysing sources of alpha for our portfolio managers.</p>\n<p>The role builds on the individual&#39;s knowledge and skills in four key areas of quantitative investing: data, statistics, technology, and financial markets.</p>\n<p>Principal Responsibilities:</p>\n<ul>\n<li>Use finance knowledge and statistical knowledge to analyse potential alpha sources and present findings to portfolio managers and quantitative analysts.</li>\n<li>Build quant tools to help portfolio managers research, evaluate, combine alphas, and understand risks.</li>\n<li>Design and maintain tools to evaluate and monitor data quality and integrity for a wide variety of data sources.</li>\n<li>Engage with vendors and brokers, and perform analytics to understand characteristics of datasets.</li>\n<li>Interact with portfolio managers and quantitative analysts to understand their use cases and recommend datasets to help maximise their profitability.</li>\n</ul>\n<p>Skills Required:</p>\n<ul>\n<li>3+ years of work experience as a financial engineer, data scientist, or quant developer.</li>\n<li>Strong knowledge of Python and/or C++, Java, C#.</li>\n<li>Familiarity with data pipeline engineering, ETL for large datasets, and scheduling tools like Airflow.</li>\n<li>Strong SQL and database experience including PL-SQL or T-SQL.</li>\n<li>Understanding of typical software development lifecycle and familiarity with: Linux, GitHub, CI/CD.</li>\n<li>Ph.D. or Masters in computer science, mathematics, statistics, or other field requiring quantitative analysis.</li>\n</ul>\n<p>Beneficial Skills and Experience:</p>\n<ul>\n<li>Understanding of risk models and performance attribution.</li>\n<li>Experience with financial markets such as equities and futures.</li>\n<li>Knowledge of statistical techniques and their usage.</li>\n</ul>\n<p>The estimated base salary range for this position is $165,000 to $250,000, which is specific to New York and may change in the future. 
Millennium pays a total compensation package which includes a base salary, discretionary performance bonus, and a comprehensive benefits package.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_9ca997fb-218","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Equity IT","sameAs":"https://mlp.eightfold.ai","logo":"https://logos.yubhub.co/mlp.eightfold.ai.png"},"x-apply-url":"https://mlp.eightfold.ai/careers/job/755952876477","x-work-arrangement":"onsite","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":"$165,000 to $250,000","x-skills-required":["Python","C++","Java","C#","data pipeline engineering","ETL","Airflow","SQL","database","Linux","GitHub","CI/CD","Ph.D.","Masters"],"x-skills-preferred":[],"datePosted":"2026-04-18T22:12:44.538Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"New York, New York, United States of America"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Finance","skills":"Python, C++, Java, C#, data pipeline engineering, ETL, Airflow, SQL, database, Linux, GitHub, CI/CD, Ph.D., Masters","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":165000,"maxValue":250000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_90b5ac1d-d16"},"title":"Senior Software Engineer, Backend — Frontier Data","description":"<p>The Frontier Data team builds the data and systems that power Scale&#39;s most advanced Frontier AI use cases. We&#39;re looking for a Senior Backend Engineer who thrives in ambiguity, moves fast, and enjoys tackling daunting challenges.</p>\n<p>As a Senior Backend Engineer, you will own major backend systems for frontier agentic data products, driving projects from early exploration through production deployment. You will build scalable services and pipelines that support agent workflows, architect modular, reusable backend systems, and operate in high-ambiguity environments.</p>\n<p>Key responsibilities include:</p>\n<ul>\n<li>Designing and building scalable systems while partnering closely with research, product, operations, and other engineering teams</li>\n<li>Building scalable services and pipelines that support agent workflows</li>\n<li>Architecting modular, reusable backend systems that adapt to evolving product needs</li>\n<li>Operating in high-ambiguity environments and breaking down open-ended problems</li>\n<li>Partnering cross-functionally with product, research/ML, and infrastructure teams</li>\n</ul>\n<p>Ideal experience includes 5+ years of full-time software engineering experience, strong backend engineering fundamentals, and experience building systems that scale.</p>\n<p>Compensation packages at Scale include base salary, equity, and benefits. 
The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position, determined by work location and additional factors.</p>\n<p>Additional benefits include comprehensive health, dental, and vision coverage, retirement benefits, a learning and development stipend, and generous PTO.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_90b5ac1d-d16","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Frontier Data","sameAs":"https://scale.com/","logo":"https://logos.yubhub.co/scale.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/scaleai/jobs/4648525005","x-work-arrangement":null,"x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$216,000-$270,000 USD","x-skills-required":["Distributed systems","API design","Data modeling","Production reliability","Docker","Containerized development/production environments","SQL","Modern database-backed application development"],"x-skills-preferred":["Async processing","Workflow engines","Data pipelines"],"datePosted":"2026-04-18T16:01:34.567Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA; New York, NY"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Distributed systems, API design, Data modeling, Production reliability, Docker, Containerized development/production environments, SQL, Modern database-backed application development, Async processing, Workflow engines, Data pipelines","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":216000,"maxValue":270000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_b40b693d-a0d"},"title":"Senior Software Engineer, Agentic Data Products","description":"<p>We&#39;re forming a new Agentic Data Products team focused on building the next generation of agent-powered tools that ground AI in real operational workflows. Our goal is to help enterprises demystify their data layers and deploy intelligent, agentic systems that can reason over data, take action, and deliver measurable outcomes.</p>\n<p>This is a 0→1 build team. We’re looking for a sharp, product-minded Senior Engineer who thrives in ambiguity, moves quickly, and enjoys building new systems from scratch alongside customers and cross-functional partners. 
You’ll work closely with product, forward-deployed engineers, data scientists, and applied AI teams to turn real-world problems into scalable, production solutions.</p>\n<p>If you like shipping fast, owning outcomes, and working across the stack,from polished frontends to distributed backends to LLM integrations,this role is for you.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Own major full-stack product areas, driving features from concept and design through production deployment</li>\n<li>Build intuitive, high-performance frontend experiences using React + TypeScript</li>\n<li>Develop reliable backend services in Python, working with distributed systems, data pipelines, and AI/ML infrastructure</li>\n<li>Integrate LLMs, vector databases, and agentic frameworks to power intelligent workflows and decision-making systems</li>\n<li>Ship quickly through tight experimentation loops while maintaining high quality and reliability</li>\n<li>Help define the technical direction and architecture of a brand-new team and product surface</li>\n<li>Adapt across the stack and learn new tools as needed to solve real problems end-to-end</li>\n</ul>\n<p><strong>Ideal Experience</strong></p>\n<ul>\n<li>5+ years of full-time software engineering experience</li>\n<li>0-1 product build experience</li>\n<li>Familiarity with LLMs, embeddings, vector databases, or modern AI data products/tools</li>\n<li>Experience with distributed systems and cloud-based architectures</li>\n<li>Prior experience mentoring or leading team</li>\n</ul>\n<p><strong>What We Value</strong></p>\n<ul>\n<li>Strong product intuition and customer empathy</li>\n<li>Bias toward action and rapid iteration</li>\n<li>Ownership mentality , you see problems through to outcomes</li>\n<li>Comfort collaborating across engineering, product, data science, and applied AI</li>\n<li>Excitement about building agentic systems that make AI genuinely useful in the real world</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_b40b693d-a0d","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Scale","sameAs":"https://scale.com/","logo":"https://logos.yubhub.co/scale.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/scaleai/jobs/4653827005","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$216,000-$270,000 USD","x-skills-required":["React","TypeScript","Python","Distributed systems","Data pipelines","AI/ML infrastructure","LLMs","Vector databases","Agentic frameworks"],"x-skills-preferred":[],"datePosted":"2026-04-18T16:01:14.176Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"React, TypeScript, Python, Distributed systems, Data pipelines, AI/ML infrastructure, LLMs, Vector databases, Agentic frameworks","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":216000,"maxValue":270000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_1bebb6dc-380"},"title":"Staff Software Engineer, Platform","description":"<p>We live in unprecedented times – AI has the potential to exponentially augment human intelligence. 
As the world adjusts to this new reality, leading platform companies are scrambling to build LLMs at billion scale, while large enterprises figure out how to add it to their products.</p>\n<p>At Scale, our products include the Generative AI Data Engine, SGP, Donovan, and others that power the most advanced LLMs and generative models in the world through world-class RLHF, human data generation, model evaluation, safety, and alignment.</p>\n<p>As a Staff Software Engineer, you will define and drive both the architectural roadmap and implementation of core platforms and software systems. You will be responsible for providing high-level vision and driving adoption across the engineering org for orchestration, data abstraction, data pipelines, identity &amp; access management, and underlying cloud infrastructure.</p>\n<p>Impact and Responsibilities:</p>\n<ul>\n<li>Architectural Vision: You will drive the design and implementation of foundational systems, acting as a bridge between high-level business goals and technical goals.</li>\n</ul>\n<ul>\n<li>Cross-Functional Leadership: You will collaborate with cross-functional teams to define and drive adoption of the next generation of features for our AI data infrastructure.</li>\n</ul>\n<ul>\n<li>Technical Ownership: You are responsible for proactively identifying and driving opportunities for organizational growth, driving improvements in programming practices, and upgrading the tools that define our development lifecycle.</li>\n</ul>\n<ul>\n<li>Technical Mentorship: You will serve as a subject matter expert, presenting technical information to stakeholders and providing the guidance to elevate the engineering culture across the company.</li>\n</ul>\n<p>Ideally you’d have:</p>\n<ul>\n<li>8+ years of full-time engineering experience, post-graduation with specialities in back-end systems.</li>\n</ul>\n<ul>\n<li>Extensive experience in software development and a deep understanding of distributed systems and public cloud platforms (AWS preferred).</li>\n</ul>\n<ul>\n<li>Demonstrated a track record of independent ownership and leadership across successful multi-team engineering projects.</li>\n</ul>\n<ul>\n<li>Possess excellent communication and collaboration skills, and the ability to translate complex technical concepts to non-technical stakeholders.</li>\n</ul>\n<ul>\n<li>Experience working fluently with standard containerization &amp; deployment technologies like Kubernetes, Terraform, Docker, etc.</li>\n</ul>\n<ul>\n<li>Experience with orchestration platforms, such as Temporal and AWS Step Functions.</li>\n</ul>\n<ul>\n<li>Experience with NoSQL document databases (MongoDB) and structured databases (Postgres).</li>\n</ul>\n<ul>\n<li>Strong knowledge of software engineering best practices and CI/CD tooling (CircleCI, ArgoCD).</li>\n</ul>\n<p>Nice to haves:</p>\n<ul>\n<li>Experience with data warehouses (Snowflake, Firebolt) and data pipeline/ETL tools (Dagster, dbt).</li>\n</ul>\n<ul>\n<li>Experience scaling products at hyper-growth startups.</li>\n</ul>\n<ul>\n<li>Excitement to work with AI technologies.</li>\n</ul>\n<p>Compensation packages at Scale for eligible roles include base salary, equity, and benefits. 
The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position, determined by work location and additional factors, including job-related skills, experience, interview performance, and relevant education or training.</p>\n<p>For pay transparency purposes, the base salary range for this full-time position in the locations of San Francisco, New York, Seattle is: $252,000-$315,000 USD</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_1bebb6dc-380","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Scale","sameAs":"https://scale.com","logo":"https://logos.yubhub.co/scale.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/scaleai/jobs/4649893005","x-work-arrangement":"onsite","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"$252,000-$315,000 USD","x-skills-required":["Software development","Distributed systems","Public cloud platforms","Containerization & deployment technologies","Orchestration platforms","NoSQL document databases","Structured databases","Software engineering best practices","CI/CD tooling"],"x-skills-preferred":["Data warehouses","Data pipeline/ETL tools","Scaling products at hyper-growth startups","AI technologies"],"datePosted":"2026-04-18T16:00:12.545Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA; New York, NY"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Software development, Distributed systems, Public cloud platforms, Containerization & deployment technologies, Orchestration platforms, NoSQL document databases, Structured databases, Software engineering best practices, CI/CD tooling, Data warehouses, Data pipeline/ETL tools, Scaling products at hyper-growth startups, AI technologies","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":252000,"maxValue":315000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_56e29c57-cd1"},"title":"Robotics Technician","description":"<p>We&#39;re seeking a Robotics Technician to join our team in Mexico City. As a key contributor, you will partner with cross-functional stakeholders to bring up new robots and productionize the maintenance of robots and collection hardware. You will play a critical role in supporting the day-to-day operations of the factory by bringing up and maintaining robots and collection hardware. You will also provide technical support for data collection operations, manage physical inventory, maintain equipment, and coordinate logistics.</p>\n<p>You will become a subject matter expert on all capabilities of the robotics platforms deployed in the factory. You will develop technical domain expertise in areas of 2D and 3D imaging and annotation, multi-sensor fusion and calibration, GPS/INS navigation systems, computer vision, and other autonomy-adjacent concepts.</p>\n<p>You have a Bachelor&#39;s degree or industry experience, an engineering background, preferably in Computer Science, Mathematics, or other Engineering fields. You have 2+ years of experience developing with Python, C++, Java, and/or other scripting languages. You have 1-3 years of experience in hardware labs or a manufacturing environment. 
You have experience managing risk and operating robots safely. You have strong project management and interpersonal skills, high attention to detail, and a strong sense of ownership. You have a high level of comfort communicating effectively across internal and external organizations.</p>\n<p>Nice to have: hands-on experience in Robotics, AI, and/or Computer Vision, intellectual curiosity, empathy, and ability to operate with a high degree of autonomy, experience building and/or maintaining lab networks and data pipelines, experience running large-scale data collection and controlled experiments, experience building out facilities, and experience in logistics.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_56e29c57-cd1","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Scale","sameAs":"https://scale.com/","logo":"https://logos.yubhub.co/scale.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/scaleai/jobs/4635128005","x-work-arrangement":"onsite","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Python","C++","Java","Robotics","AI","Computer Vision","Multi-sensor fusion and calibration","GPS/INS navigation systems"],"x-skills-preferred":["hands-on experience in Robotics, AI, and/or Computer Vision","intellectual curiosity","empathy","ability to operate with a high degree of autonomy","experience building and/or maintaining lab networks and data pipelines","experience running large-scale data collection and controlled experiments","experience building out facilities","experience in logistics"],"datePosted":"2026-04-18T16:00:01.904Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Mexico City, MX"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, C++, Java, Robotics, AI, Computer Vision, Multi-sensor fusion and calibration, GPS/INS navigation systems, hands-on experience in Robotics, AI, and/or Computer Vision, intellectual curiosity, empathy, ability to operate with a high degree of autonomy, experience building and/or maintaining lab networks and data pipelines, experience running large-scale data collection and controlled experiments, experience building out facilities, experience in logistics"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_5b703e8a-47c"},"title":"Robotics Engineer","description":"<p>We&#39;re looking for a talented Robotics Engineer to join our team in San Francisco. 
As a key contributor, you will work to build out our robotics fleet and software systems for collecting data and performing evaluations.</p>\n<p>Your responsibilities will include:</p>\n<ul>\n<li>Developing systems for collecting data from various robotics embodiments and collection modalities</li>\n<li>Designing and building hardware for retrofitting robots and building custom collection modalities</li>\n<li>Contributing to the development of pipelines and tooling to support robotics initiatives</li>\n<li>Owning hardware and software integrations for various robots</li>\n<li>Partnering with cross-functional stakeholders to scale up data services</li>\n<li>Providing technical support for data collection operations and executing on pilots to stand up new workflows</li>\n<li>Becoming a subject matter expert on all capabilities of the robotics labs</li>\n</ul>\n<p>You will have the opportunity to develop technical domain expertise in areas of 2D and 3D imaging and annotation, multi-sensor fusion and calibration, computer vision, machine learning, and other autonomy-adjacent concepts.</p>\n<p>We&#39;re looking for someone with a strong engineering background, preferably in Computer Science, Mathematics, or other Engineering fields. You should have 3+ years of experience developing with Python, C++, Java and/or other scripting language, as well as 1-3 years of experience in hardware labs or a manufacturing environment, 1-3 years of experience in mechanical design and comfort with CAD, hands-on experience in robotics, AI, and computer vision, experience building and/or maintaining lab networks and data pipelines, experience running large-scale data collection and controlled experiments, experience managing risk and operating robots safely, strong project management and interpersonal skills, high attention to detail, and a strong sense of ownership.</p>\n<p>As a Robotics Engineer at Scale, you will have the opportunity to work with a talented team of engineers and researchers to develop cutting-edge robotics solutions. You will be responsible for designing, building, and testing robotics systems, as well as collaborating with cross-functional teams to integrate robotics into our data collection and analysis pipeline.</p>\n<p>We offer a competitive salary range of $208,800-$261,000 USD, as well as a comprehensive benefits package, including health, dental, and vision coverage, retirement benefits, a learning and development stipend, and generous PTO. 
Additionally, this role may be eligible for additional benefits such as a commuter stipend.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_5b703e8a-47c","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Scale","sameAs":"https://www.scale.com/","logo":"https://logos.yubhub.co/scale.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/scaleai/jobs/4655744005","x-work-arrangement":"onsite","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":"$208,800-$261,000 USD","x-skills-required":["Python","C++","Java","Mechanical design","CAD","Robotics","AI","Computer vision","Machine learning","Data pipelines","Lab networks"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:59:13.725Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, C++, Java, Mechanical design, CAD, Robotics, AI, Computer vision, Machine learning, Data pipelines, Lab networks","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":208800,"maxValue":261000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_76c9a01c-58a"},"title":"Data Center Portfolio Planning & Execution Lead","description":"<p>We&#39;re looking for a Data Center Portfolio Planning &amp; Execution Lead to drive the planning and framework that ensures every site moves smoothly from the front-end phases through design, construction, equipment delivery, commissioning, and operational readiness.</p>\n<p>This role owns the portfolio-level operating system: translating capacity supply pipeline into integrated project plans that span every phase of delivery, building the tooling and automation that runs it at scale, and maintaining Anthropic&#39;s datacenter capacity catalog , a lifecycle view of our fleet that supports both execution orchestration and steady-state capacity planning.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Manage the integrated master plan for each site across the portfolio , stitching power ramp, design, construction, sourcing, deployment, and operations readiness into a single coordinated schedule with clear milestones and dependencies</li>\n<li>Develop and maintain Anthropic&#39;s datacenter catalog for deployed and in-progress capacity. 
Manage the portfolio-level view of physical infrastructure &amp; cluster interfaces across all sites and partners to enable planning decisions such as equipment fungibility, accelerator platforms, tech insertion, or workload allocation</li>\n<li>Define and run the stage gates and decision locks for cluster delivery , from lease execution to design lock through procurement, construction, equipment installation, commissioning, and handover</li>\n<li>Drive gate reviews, manage exceptions, and track the downstream impact of deviations across the portfolio</li>\n<li>Manage portfolio reviews and risk tracking for DC Infra leadership and Compute Supply</li>\n</ul>\n<p>Tooling &amp; process:</p>\n<ul>\n<li>Develop tooling and automation to enable cross-functional planning flow-down from datacenter capacity availability dates</li>\n<li>Partner with Design, Supply Chain, Construction, and DC Ops program leads to drive cross-pillar process improvements as portfolio scales</li>\n</ul>\n<p>You may be a good fit if you:</p>\n<ul>\n<li>Are familiar with the full datacenter buildout lifecycle: pipeline → design → sourcing → construction → Cx → deployment</li>\n<li>Have run integrated portfolio or master-schedule planning across a fleet of capital projects (datacenter, energy, fab, or similar) where multiple functional orgs each own a phase</li>\n<li>Have built a stage-gate or decision-lock system from scratch and gotten functional leads to adopt it</li>\n<li>Have re-architected a deployment or delivery process at scale and can point to the cycle-time or throughput result</li>\n<li>Build the tooling yourself using AI-assisted development , stand up planning dashboards, schedule automation, and data pipelines from Smartsheet/P6/partner systems</li>\n<li>Proactively surface schedule risk across functions , comfortable flagging a problem in someone else&#39;s domain before it becomes a slip</li>\n<li>Track record of driving outcomes through influence with cross-functional partners</li>\n</ul>\n<p>Strong candidates may also have:</p>\n<ul>\n<li>Experience building a portfolio planning and execution function from scratch at a hyperscaler or large industrial owner</li>\n<li>Exposure to capacity planning or S&amp;OP processes that connect demand forecast to physical build</li>\n<li>Experience product-managing internal planning, workflow, or scheduling systems</li>\n</ul>\n<p>The annual compensation range for this role is $365,000-$485,000 USD.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_76c9a01c-58a","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5188939008","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$365,000-$485,000 USD","x-skills-required":["data center portfolio planning","execution lead","portfolio-level operating system","capacity supply pipeline","integrated project plans","tooling and automation","datacenter capacity catalog","lifecycle view of fleet","execution orchestration","steady-state capacity planning","stage gates","decision locks","cluster delivery","lease execution","design lock","procurement","construction","equipment installation","commissioning","handover","cross-functional planning","flow-down","datacenter capacity 
availability dates","cross-pillar process improvements","AI-assisted development","planning dashboards","schedule automation","data pipelines","Smartsheet","P6","partner systems","schedule risk","cross-functional partners","portfolio planning","execution function","hyperscaler","large industrial owner","capacity planning","S&OP processes","demand forecast","physical build","internal planning","workflow","scheduling systems"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:59:03.702Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Remote-Friendly, United States"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"data center portfolio planning, execution lead, portfolio-level operating system, capacity supply pipeline, integrated project plans, tooling and automation, datacenter capacity catalog, lifecycle view of fleet, execution orchestration, steady-state capacity planning, stage gates, decision locks, cluster delivery, lease execution, design lock, procurement, construction, equipment installation, commissioning, handover, cross-functional planning, flow-down, datacenter capacity availability dates, cross-pillar process improvements, AI-assisted development, planning dashboards, schedule automation, data pipelines, Smartsheet, P6, partner systems, schedule risk, cross-functional partners, portfolio planning, execution function, hyperscaler, large industrial owner, capacity planning, S&OP processes, demand forecast, physical build, internal planning, workflow, scheduling systems","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":365000,"maxValue":485000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_d5f768d1-df6"},"title":"Full-Stack Engineer, AI Data Platform","description":"<p>Shape the Future of AI</p>\n<p>At Labelbox, we&#39;re building the critical infrastructure that powers breakthrough AI models at leading research labs and enterprises. Since 2018, we&#39;ve been pioneering data-centric approaches that are fundamental to AI development, and our work becomes even more essential as AI capabilities expand exponentially.</p>\n<p>We&#39;re the only company offering three integrated solutions for frontier AI development:</p>\n<ul>\n<li>Enterprise Platform &amp; Tools: Advanced annotation tools, workflow automation, and quality control systems that enable teams to produce high-quality training data at scale</li>\n</ul>\n<ul>\n<li>Frontier Data Labeling Service: Specialized data labeling through Alignerr, leveraging subject matter experts for next-generation AI models</li>\n</ul>\n<ul>\n<li>Expert Marketplace: Connecting AI teams with highly skilled annotators and domain experts for flexible scaling</li>\n</ul>\n<p>Why Join Us</p>\n<ul>\n<li>High-Impact Environment: We operate like an early-stage startup, focusing on impact over process. You&#39;ll take on expanded responsibilities quickly, with career growth directly tied to your contributions.</li>\n</ul>\n<ul>\n<li>Technical Excellence: Work at the cutting edge of AI development, collaborating with industry leaders and shaping the future of artificial intelligence.</li>\n</ul>\n<ul>\n<li>Innovation at Speed: We celebrate those who take ownership, move fast, and deliver impact. 
Our environment rewards high agency and rapid execution.</li>\n</ul>\n<ul>\n<li>Continuous Growth: Every role requires continuous learning and evolution. You&#39;ll be surrounded by curious minds solving complex problems at the frontier of AI.</li>\n</ul>\n<ul>\n<li>Clear Ownership: You&#39;ll know exactly what you&#39;re responsible for and have the autonomy to execute. We empower people to drive results through clear ownership and metrics.</li>\n</ul>\n<p>Role Overview</p>\n<p>We’re looking for a Full-Stack AI Engineer to join our team, where you’ll build the next generation of tools for developing, evaluating, and training state-of-the-art AI systems. You will own features end to end,from user-facing experiences and APIs to backend services, data models, and infrastructure.</p>\n<p>You’ll be at the heart of our applied AI efforts, with a particular focus on human-in-the-loop systems used to generate high-quality training data for Large Language Models (LLMs) and AI agents. This includes building a platform that enables us and our customers to create and evaluate data, as well as systems that leverage LLMs to assist with reviewing, scoring, and improving human submissions.</p>\n<p>Your Impact</p>\n<ul>\n<li>Own End-to-End Product Features</li>\n</ul>\n<p>Design, build, and ship complete workflows spanning frontend UI, APIs, backend services, databases, and production infrastructure.</p>\n<ul>\n<li>Enable Human-in-the-Loop AI Training</li>\n</ul>\n<p>Build systems that allow humans to efficiently create, review, and curate high-quality training and evaluation data used in AI model development.</p>\n<ul>\n<li>Support RLHF and Preference Data Workflows</li>\n</ul>\n<p>Design and implement tooling that supports RLHF-style pipelines, including task generation, human review, scoring, aggregation, and dataset versioning.</p>\n<ul>\n<li>Leverage LLMs in the Review Loop</li>\n</ul>\n<p>Build systems that use LLMs to assist human reviewers,such as automated checks, critiques, ranking suggestions, or quality signals,while maintaining human oversight.</p>\n<ul>\n<li>Advance AI Evaluation</li>\n</ul>\n<p>Design and implement evaluation frameworks and interactive tools for LLMs and AI agents across multiple data modalities (text, images, audio, video).</p>\n<ul>\n<li>Create Intuitive, Reviewer-Focused Interfaces</li>\n</ul>\n<p>Build thoughtful, efficient user interfaces (e.g., in React) optimized for high-throughput human review, quality control, and operational workflows.</p>\n<ul>\n<li>Architect Scalable Data &amp; Service Layers</li>\n</ul>\n<p>Design APIs, backend services, and data schemas that support large-scale data creation, review, and iteration with strong guarantees around correctness and traceability.</p>\n<ul>\n<li>Solve Ambiguous, Real-World Problems</li>\n</ul>\n<p>Translate loosely defined operational and research needs into practical, scalable, end-to-end systems.</p>\n<ul>\n<li>Ensure System Reliability</li>\n</ul>\n<p>Participate in on-call rotations to monitor, troubleshoot, and resolve issues across the full stack.</p>\n<ul>\n<li>Elevate the Team</li>\n</ul>\n<p>Improve engineering practices, development processes, and documentation. 
Share knowledge through technical writing and design discussions.</p>\n<p>What You Bring</p>\n<ul>\n<li>Bachelor’s degree in Computer Science, Data Engineering, or a related field.</li>\n</ul>\n<ul>\n<li>2+ years of experience in a software or machine learning engineering role.</li>\n</ul>\n<ul>\n<li>A proactive, product-focused mindset and a high degree of ownership, with a passion for building solutions that empower users.</li>\n</ul>\n<ul>\n<li>Experience using frontend frameworks like React/Redux and backend systems and technologies like Python, Java, GraphQL; familiarity with NodeJS and NestJS is a plus.</li>\n</ul>\n<ul>\n<li>Knowledge of designing and managing scalable database systems, including relational databases (e.g., PostgreSQL, MySQL), NoSQL stores (e.g., MongoDB, Cassandra), and cloud-native solutions (e.g., Google Spanner, AWS DynamoDB).</li>\n</ul>\n<ul>\n<li>Familiarity with cloud infrastructure like GCP (GCS, PubSub) and containerization (Kubernetes) is a plus.</li>\n</ul>\n<ul>\n<li>Excellent communication and collaboration skills.</li>\n</ul>\n<ul>\n<li>High proficiency in leveraging AI tools for daily development (e.g., Cursor, GitHub Copilot).</li>\n</ul>\n<ul>\n<li>Comfort and enthusiasm for working in a fast-paced, agile environment where rapid problem-solving is key.</li>\n</ul>\n<p>Bonus Points</p>\n<ul>\n<li>Experience building tools for AI/ML applications, particularly for data annotation, monitoring, or agent evaluation.</li>\n</ul>\n<ul>\n<li>Familiarity with data infrastructure components such as data pipelines, streaming systems, and storage architectures (e.g., Cloud Buckets, Key-Value Stores).</li>\n</ul>\n<ul>\n<li>Previous experience with search engines (e.g., ElasticSearch).</li>\n</ul>\n<ul>\n<li>Experience in optimizing databases for performance (e.g., schema design, indexing, query tuning) and integrating them with broader data workflows.</li>\n</ul>\n<p>Engineering at Labelbox</p>\n<p>At Labelbox Engineering, we&#39;re building a comprehensive platform that powers the future of AI development. Our team combines deep technical expertise with a passion for innovation, working at the intersection of AI infrastructure, data systems, and user experience. We believe in pushing technical boundaries while maintaining high standards of code quality and system reliability. Our engineering culture emphasizes autonomous decision-making, rapid iteration, and collaborative problem-solving. We&#39;ve cultivated an environment where engineers can take ownership of significant challenges, experiment with cutting-edge technologies, and see their solutions directly impact how leading AI labs and enterprises build the next generation of AI systems.</p>\n<p>Our Technology Stack</p>\n<p>Our engineering team works with a modern tech stack designed for scalability, performance, and developer efficiency:</p>\n<ul>\n<li>Frontend: React.js with Redux, TypeScript</li>\n</ul>\n<ul>\n<li>Backend: Node.js, TypeScript, Python, some Java &amp; Kotlin</li>\n</ul>\n<ul>\n<li>APIs: GraphQL</li>\n</ul>\n<ul>\n<li>Cloud &amp; Infrastructure: Google Cloud Platform (GCP), Kubernetes</li>\n</ul>\n<ul>\n<li>Databases: MySQL, Spanner, PostgreSQL</li>\n</ul>\n<ul>\n<li>Queueing / Streaming: Kafka, PubSub</li>\n</ul>\n<p>Labelbox strives to ensure pay parity across the organization and discuss compensation transparently. The expected annual base salary range for United States-based candidates is below. This range is not inclusive of any potential equity packages or additional benefits. 
Exact compensation varies based on a variety of factors, including skills and competencies, experience, and geographical location.</p>\n<p>Annual base salary range $130,000-$200,000 USD</p>\n<p>Life at Labelbox</p>\n<ul>\n<li>Location: Join our dedicated tech hubs in San Francisco or Wrocław, Poland</li>\n</ul>\n<ul>\n<li>Work Style: Hybrid model with 2 days per week in office, combining collaboration and flexibility</li>\n</ul>\n<ul>\n<li>Environment: Fast-paced and high-intensity, perfect for ambitious individuals who thrive on ownership and quick decision-making</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_d5f768d1-df6","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Labelbox","sameAs":"https://www.labelbox.com/","logo":"https://logos.yubhub.co/labelbox.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/labelbox/jobs/5019254007","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":"$130,000-$200,000 USD","x-skills-required":["React","Redux","Node.js","TypeScript","Python","Java","GraphQL","MySQL","PostgreSQL","Spanner","Kafka","PubSub","GCP","Kubernetes","Cloud computing","Containerization","Database management","Cloud infrastructure","API design","Backend services","Data models","Infrastructure"],"x-skills-preferred":["AI tools","Cursor","GitHub Copilot","Data annotation","Monitoring","Agent evaluation","Data infrastructure","Data pipelines","Streaming systems","Storage architectures","Search engines","ElasticSearch","Database optimization","Schema design","Indexing","Query tuning"],"datePosted":"2026-04-18T15:57:55.464Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco Bay Area"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"React, Redux, Node.js, TypeScript, Python, Java, GraphQL, MySQL, PostgreSQL, Spanner, Kafka, PubSub, GCP, Kubernetes, Cloud computing, Containerization, Database management, Cloud infrastructure, API design, Backend services, Data models, Infrastructure, AI tools, Cursor, GitHub Copilot, Data annotation, Monitoring, Agent evaluation, Data infrastructure, Data pipelines, Streaming systems, Storage architectures, Search engines, ElasticSearch, Database optimization, Schema design, Indexing, Query tuning","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":130000,"maxValue":200000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_3238a958-3d9"},"title":"AI Product Manager","description":"<p>We&#39;re looking for an AI Product Manager to own one of the Agent &amp; Reinforcement Learning Environments data verticals, with a focus on Computer Using Agent (CUA) data.</p>\n<p>In this role, you&#39;ll oversee the product roadmap for your data vertical, owning &#39;data as a product&#39;, pipelines for data generation and quality, and researcher-facing tools that help labs train and evaluate intelligent agents in complex environments.</p>\n<p>You&#39;ll work directly with Scale&#39;s most important customers and their leading researchers, representing Scale as the technical expert for your products and influencing both internal and external roadmaps.</p>\n<p>The ideal candidate brings together a strong entrepreneurial 
&amp; go-to-market mindset, technical depth, and a sense for AI research, enabling them to get in front of technical stakeholders to drive mission-critical outcomes.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Own the roadmap for the Agent &amp; RL Environment Data vertical, setting product direction and driving execution across engineering, operations, and go-to-market teams.</li>\n</ul>\n<ul>\n<li>Build technical partnerships with research teams at leading AI labs, identifying insights that shape new product lines and competitive strategies for your vertical.</li>\n</ul>\n<ul>\n<li>Design, experiment with, and deliver high-quality data pipelines, tooling, and evaluation frameworks that advance RL and agentic model capabilities.</li>\n</ul>\n<ul>\n<li>Scope out and scale the creation of RL environments that simulate real-world use cases.</li>\n</ul>\n<ul>\n<li>Collaborate cross-functionally, influencing business priorities and diving into the weeds of research, operations, and customer interactions.</li>\n</ul>\n<p>Ideally, You&#39;d Have:</p>\n<ul>\n<li>Entrepreneurial mindset: A builder excited by ambiguity and motivated to create new products from the ground up.</li>\n</ul>\n<ul>\n<li>6+ years of experience in product management or a customer-facing role.</li>\n</ul>\n<ul>\n<li>Technical fluency: Software engineering background (a degree in computer science or equivalent experience).</li>\n</ul>\n<ul>\n<li>Understanding of reinforcement learning, simulation environments, or data pipelines for model training and evaluation.</li>\n</ul>\n<ul>\n<li>Strong customer intuition and the ability to translate technical requirements into impactful product decisions.</li>\n</ul>\n<ul>\n<li>Bias for action and comfort wearing multiple hats and operating in fast-moving environments.</li>\n</ul>\n<p>Compensation packages at Scale for eligible roles include base salary, equity, and benefits.
The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position, determined by work location and additional factors, including job-related skills, experience, interview performance, and relevant education or training.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_3238a958-3d9","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Scale","sameAs":"https://scale.com/","logo":"https://logos.yubhub.co/scale.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/scaleai/jobs/4609736005","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$216,000-$270,000 USD","x-skills-required":["reinforcement learnings","simulation environments","data pipelines","model training","evaluation frameworks"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:57:37.306Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"reinforcement learnings, simulation environments, data pipelines, model training, evaluation frameworks","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":216000,"maxValue":270000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_95c49f85-a98"},"title":"Staff+ Software Engineer, Observability","description":"<p><strong>About the Role</strong></p>\n<p>Anthropic is seeking talented and experienced Software Engineers to join our Observability team within the Infrastructure organization. The Observability team owns the monitoring and telemetry infrastructure that every engineer and researcher at Anthropic depends on,from metrics and logging pipelines to distributed tracing, error analytics, alerting, and the dashboards and query interfaces that make it all actionable.</p>\n<p>As Anthropic scales its infrastructure across massive GPU, TPU, and Trainium clusters, the volume and complexity of operational data is growing by orders of magnitude. 
We’re building next-generation observability systems,high-throughput ingest pipelines, cost-efficient columnar storage, unified query layers across signals, and agentic diagnostic tools,to ensure that engineers can detect, diagnose, and resolve issues in minutes rather than hours, even as the systems they operate become exponentially more complex.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Design and build scalable telemetry ingest and storage pipelines for metrics, logs, traces, and error data across Anthropic’s multi-cluster infrastructure</li>\n</ul>\n<ul>\n<li>Own and evolve core observability platforms, driving migrations and architectural improvements that improve reliability, reduce cost, and scale with organisational growth</li>\n</ul>\n<ul>\n<li>Build instrumentation libraries, SDKs, and integrations that make it easy for engineering teams to emit high-quality telemetry from their services</li>\n</ul>\n<ul>\n<li>Drive alerting and SLO infrastructure that enables teams to define, monitor, and respond to reliability targets with minimal noise</li>\n</ul>\n<ul>\n<li>Reduce mean time to detection and resolution by building cross-signal correlation, unified query interfaces, and AI-assisted diagnostic tooling</li>\n</ul>\n<ul>\n<li>Partner with Research, Inference, Product, and Infrastructure teams to ensure observability solutions meet the unique needs of each organisation</li>\n</ul>\n<p><strong>You May Be a Good Fit If You</strong></p>\n<ul>\n<li>Have 10+ years of relevant industry experience building and operating large-scale observability or monitoring infrastructure</li>\n</ul>\n<ul>\n<li>Have deep experience with at least one observability signal area (metrics, logging, tracing, or error analytics) and familiarity with the others</li>\n</ul>\n<ul>\n<li>Understand high-throughput data pipelines, columnar storage engines, and the tradeoffs involved in ingesting and querying telemetry data at scale</li>\n</ul>\n<ul>\n<li>Have experience operating or building on top of observability platforms such as Prometheus, Grafana, ClickHouse, OpenTelemetry, or similar systems</li>\n</ul>\n<ul>\n<li>Have strong proficiency in at least one of Python, Rust, or Go</li>\n</ul>\n<ul>\n<li>Have excellent communication skills and enjoy partnering with internal teams to improve their operational visibility and incident response capabilities</li>\n</ul>\n<ul>\n<li>Are excited about building foundational infrastructure and are comfortable working independently on ambiguous, high-impact technical challenges</li>\n</ul>\n<p><strong>Strong Candidates May Also Have</strong></p>\n<ul>\n<li>Experience operating metrics systems at very high cardinality (hundreds of millions of active time series or more)</li>\n</ul>\n<ul>\n<li>Experience with log storage migrations or operating columnar databases (ClickHouse, BigQuery, or similar) for analytics workloads</li>\n</ul>\n<ul>\n<li>Experience with OpenTelemetry instrumentation, collector pipelines, and tail-based sampling strategies</li>\n</ul>\n<ul>\n<li>Experience building or operating alerting platforms, on-call tooling, or SLO frameworks at scale</li>\n</ul>\n<ul>\n<li>Experience with Kubernetes-native monitoring, eBPF-based observability, or continuous profiling</li>\n</ul>\n<ul>\n<li>Interest in applying AI/LLMs to operational workflows such as automated root cause analysis, anomaly detection, or intelligent alerting</li>\n</ul>\n<p><strong>Logistics</strong></p>\n<ul>\n<li>Minimum education: Bachelor’s degree or an equivalent combination of 
education, training, and/or experience</li>\n</ul>\n<ul>\n<li>Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience</li>\n</ul>\n<ul>\n<li>Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position</li>\n</ul>\n<ul>\n<li>Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</li>\n</ul>\n<ul>\n<li>Visa sponsorship: We do sponsor visas! However, we aren’t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</li>\n</ul>\n<p><strong>How we&#39;re different</strong></p>\n<p>We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact , advancing our long-term goals of steerable, trustworthy AI , rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We’re an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.</p>\n<p><strong>Come work with us!</strong></p>\n<p>Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_95c49f85-a98","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com/","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5102440008","x-work-arrangement":"hybrid","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"£325,000-£390,000 GBP","x-skills-required":["observability","telemetry","metrics","logging","tracing","error analytics","alerting","SLO infrastructure","cross-signal correlation","unified query interfaces","AI-assisted diagnostic tooling","Python","Rust","Go","Prometheus","Grafana","ClickHouse","OpenTelemetry"],"x-skills-preferred":["high-throughput data pipelines","columnar storage engines","Kubernetes-native monitoring","eBPF-based observability","continuous profiling","AI/LLMs","automated root cause analysis","anomaly detection","intelligent alerting"],"datePosted":"2026-04-18T15:57:27.177Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"London, UK"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"observability, telemetry, metrics, logging, tracing, error analytics, alerting, SLO infrastructure, cross-signal correlation, unified query interfaces, AI-assisted diagnostic tooling, Python, Rust, Go, Prometheus, Grafana, ClickHouse, OpenTelemetry, high-throughput data pipelines, columnar storage engines, Kubernetes-native 
monitoring, eBPF-based observability, continuous profiling, AI/LLMs, automated root cause analysis, anomaly detection, intelligent alerting","baseSalary":{"@type":"MonetaryAmount","currency":"GBP","value":{"@type":"QuantitativeValue","minValue":325000,"maxValue":390000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_af586166-0a0"},"title":"Technical Solutions Specialist, Data Operations","description":"<p>In Data Operations on the Strategic Data Partnerships team at Anthropic, you will support a cross-functional team in implementing partnership strategies to improve Anthropic’s products. You’ll ensure data meets our standards and reaches the right teams, build systems to track compliance and data usage across the portfolio, and coordinate across Research, Product, Legal, and external partners to remove barriers and accelerate impact.</p>\n<p>This role requires operational excellence combined with technical hands-on execution, and is a great fit for someone who wants to apply those skills in a high-impact, fast-growth context.</p>\n<p>Responsibilities:</p>\n<p>Data Opportunity Assessment and Processing</p>\n<ul>\n<li>Analyze and review incoming or prospective data to verify it is useful and strategic for Anthropic</li>\n<li>Own and maintain Python-based ETL pipelines that process large partner datasets, applying filtering criteria and deduplicating against existing data</li>\n<li>Write and optimize SQL queries against large relational databases to support filtering and analysis workflows</li>\n<li>Refine processing logic as requirements evolve across new data types and formats</li>\n</ul>\n<p>Data Delivery Infrastructure, Tooling, and Support</p>\n<ul>\n<li>Own end-to-end data delivery workflows, ensuring data moves seamlessly from partners to internal teams to accelerate time-to-impact</li>\n<li>Manage AWS and GCP resources for receiving and organizing partner data deliveries</li>\n<li>Troubleshoot delivery issues and coordinate with partners on formatting and transfer protocols and resolve technical escalations from partners and internal teams</li>\n<li>Build and maintain internal systems, scripts, and automation that support the team’s workflows</li>\n<li>Support occasional research evaluation tasks as needed</li>\n</ul>\n<p>Data Operations and Governance</p>\n<ul>\n<li>Develop and maintain Anthropic&#39;s preferred standards for receiving, consuming and cataloging data, ensuring alignment with Product and Engineering&#39;s evolving needs</li>\n<li>Contribute to systems for monitoring data usage and compliance with partner agreements</li>\n<li>Partner with teammates and cross-functional stakeholders to build out governance practices as the team scales</li>\n</ul>\n<p>You May Be a Good Fit If You</p>\n<ul>\n<li>Bachelor’s degree in Engineering, Computer Science, a related field, or equivalent practical experience</li>\n<li>5-7+ years of experience with data pipelines or data engineering workflows</li>\n<li>Background in solutions engineering, partner engineering or related role at a large tech company</li>\n<li>5+ years of experience in technical troubleshooting or writing code in one or more programming languages</li>\n<li>Proficiency in Python and SQL, including writing, debugging, and optimizing scripts and queries against large datasets</li>\n<li>Hands-on experience with cloud infrastructure (AWS, GCP, or Azure), including managing storage, configuring access, and working from the 
CLI</li>\n<li>Excellent problem-solving skills with a track record of debugging technical issues, whether at the code level or within a broader system</li>\n<li>Some experience interacting with external third parties delivering data</li>\n</ul>\n<p>Strong Candidates Will Have</p>\n<ul>\n<li>Experience working alongside technical teams (research, engineering, or product) to solve ambiguous problems</li>\n<li>Ability to translate technical concepts into clear, actionable guidance for non-technical stakeholders or external partners</li>\n<li>Experience owning or maintaining a production service or system with uptime expectations</li>\n<li>Familiarity with data governance, compliance, or rights management</li>\n<li>Ability to manage multiple, time-sensitive projects simultaneously and the drive to take a project from an initial idea to full completion</li>\n<li>Experience leveraging AI to automate workflows</li>\n</ul>\n<p>Candidates Need Not Have</p>\n<ul>\n<li>Deep expertise in AI or machine learning</li>\n<li>A pure software engineering background</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_af586166-0a0","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com/","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5056499008","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":"$205,000-$240,000 USD","x-skills-required":["Python","SQL","Cloud infrastructure (AWS, GCP, or Azure)","Data pipelines","Data engineering workflows","Solutions engineering","Partner engineering"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:57:08.396Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA | New York City, NY"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, SQL, Cloud infrastructure (AWS, GCP, or Azure), Data pipelines, Data engineering workflows, Solutions engineering, Partner engineering","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":205000,"maxValue":240000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_1869fa15-51d"},"title":"Software Engineer, Platform","description":"<p>We&#39;re looking for a skilled Software Engineer to join our Platform Engineering team. As a key member of our team, you will support the design and development of shared platforms used across Scale. This includes designing our foundational data platforms and lifecycle, architecting Scale&#39;s core cloud infrastructure and orchestration stack, and redefining how engineers develop, build, test, and deploy software at Scale.</p>\n<p>You will drive the design, and implementation of our foundational platforms and systems, working closely with stakeholders and internal customers to understand and refine requirements. You&#39;ll collaborate with cross-functional teams to define, design, and deliver new features. 
You&#39;ll also proactively identify opportunities for, and drive improvements to, current programming practices, including process enhancements and tool upgrades.</p>\n<p>Ideally, you&#39;d have 3+ years of full-time engineering experience, post-graduation with specialities in back-end systems. You should have extensive experience in software development and a deep understanding of distributed systems and public cloud platforms (AWS preferred). You should show a track record of independent ownership of successful engineering projects. You should possess excellent communication and collaboration skills, and the ability to translate complex technical concepts to non-technical stakeholders.</p>\n<p>You should have experience working fluently with standard containerization &amp; deployment technologies like Kubernetes, Terraform, Docker, etc. You should have experience with orchestration platforms, such as Temporal and AWS Step Functions. You should have experience with NoSQL document databases (MongoDB) and structured databases (Postgres). You should have strong knowledge of software engineering best practices and CI/CD tooling (CircleCI).</p>\n<p>Nice to haves include experience with data warehouses (Snowflake, Firebolt) and data pipeline/ETL tools (Dagster, dbt). Experience with authentication/authorization systems (Zanzibar, Authz, etc.) is also a plus. Experience scaling products at hyper-growth startups is highly valued. Excitement to work with AI technologies is a must.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_1869fa15-51d","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Scale","sameAs":"https://scale.com/","logo":"https://logos.yubhub.co/scale.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/scaleai/jobs/4594879005","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":"$180,000-$225,000 USD","x-skills-required":["software development","distributed systems","public cloud platforms","containerization & deployment technologies","orchestration platforms","NoSQL document databases","structured databases","software engineering best practices","CI/CD tooling"],"x-skills-preferred":["data warehouses","data pipeline/ETL tools","authentication/authorization systems","scaling products at hyper-growth startups","AI technologies"],"datePosted":"2026-04-18T15:57:02.885Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA; New York, NY"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"software development, distributed systems, public cloud platforms, containerization & deployment technologies, orchestration platforms, NoSQL document databases, structured databases, software engineering best practices, CI/CD tooling, data warehouses, data pipeline/ETL tools, authentication/authorization systems, scaling products at hyper-growth startups, AI technologies","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":180000,"maxValue":225000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_f24aa64a-8e9"},"title":"DevOps Engineer, GPS","description":"<p>As a DevOps Engineer, you will design and develop core platforms and software systems, while supporting 
orchestration, data abstraction, data pipelines, identity &amp; access management, security tools, and underlying cloud infrastructure.</p>\n<p>You will:</p>\n<ul>\n<li>Backend Development and System Ownership: Design and implement secure, scalable backend systems for customers using modern, cloud-native AI infrastructure. Own services or systems, define long-term health goals, and improve the health of surrounding components.</li>\n</ul>\n<ul>\n<li>Collaboration and Standards: Collaborate with cross-functional teams to define and execute backend and infrastructure solutions tailored for secure environments. Enhance engineering standards, tooling, and processes to maintain high-quality outputs.</li>\n</ul>\n<ul>\n<li>Infrastructure Automation and Management: Write, maintain, and enhance Infrastructure as Code templates (e.g., Terraform, CloudFormation) for automated provisioning and management. Manage networking architecture, including secure VPCs, VPNs, load balancers, and firewalls, in cloud environments.</li>\n</ul>\n<ul>\n<li>Deployment and Scalability: Design and optimize CI/CD pipelines for efficient testing, building, and deployment processes. Scale and optimize containerized applications using orchestration platforms like Kubernetes to ensure high availability and reliability.</li>\n</ul>\n<ul>\n<li>Disaster Recovery and Hybrid Strategies: Develop and test disaster recovery plans with robust backups and failover mechanisms. Design and implement hybrid and multi-cloud strategies to support workloads across on-premises and multiple cloud providers.</li>\n</ul>\n<p>Our ideal candidate has a strong engineering background, with a Bachelor’s degree in Computer Science, Mathematics, or a related quantitative field (or equivalent practical experience), and 5+ years of post-graduation engineering experience, with a focus on back-end systems and proficiency in at least one of Python, Typescript, Javascript, or C++.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_f24aa64a-8e9","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Scale","sameAs":"https://www.scale.com/","logo":"https://logos.yubhub.co/scale.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/scaleai/jobs/4613839005","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Backend Development","System Ownership","Infrastructure Automation","Deployment and Scalability","Disaster Recovery and Hybrid Strategies","Cloud-Native AI Infrastructure","Terraform","CloudFormation","Kubernetes","Python","Typescript","Javascript","C++"],"x-skills-preferred":["Collaboration and Standards","Networking Architecture","CI/CD Pipelines","Containerized Applications","Orchestration Platforms","Data Abstraction","Data Pipelines","Identity & Access Management","Security Tools"],"datePosted":"2026-04-18T15:56:30.346Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Doha, Qatar"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Backend Development, System Ownership, Infrastructure Automation, Deployment and Scalability, Disaster Recovery and Hybrid Strategies, Cloud-Native AI Infrastructure, Terraform, CloudFormation, Kubernetes, Python, Typescript, Javascript, C++, Collaboration and Standards, Networking Architecture, CI/CD Pipelines, 
Containerized Applications, Orchestration Platforms, Data Abstraction, Data Pipelines, Identity & Access Management, Security Tools"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_baad2598-8bc"},"title":"Staff / Senior Software Engineer, Compute Capacity","description":"<p><strong>About the Role</strong></p>\n<p>Anthropic&#39;s Accelerator Capacity Engineering (ACE) team manages one of the largest and fastest-growing accelerator fleets in the industry. As an engineer on ACE, you will build the production systems that power this work: data pipelines that ingest and normalize telemetry from heterogeneous cloud environments, observability tooling that gives the org real-time visibility into fleet health, and performance instrumentation that measures how efficiently every major workload uses the hardware it’s running on.</p>\n<p><strong>What This Team Owns</strong></p>\n<p>The team’s work spans three functional areas: data infrastructure, fleet observability, and compute efficiency. Depending on your background and interests, you’ll focus primarily in one, but the boundaries are fluid and the problems overlap:</p>\n<p><strong>Data Infrastructure</strong></p>\n<p>Collecting, normalizing, and serving the fleet-wide data that powers everything else. This means building pipelines that ingest occupancy and utilization telemetry from Kubernetes clusters, normalizing billing and usage data across cloud providers, and maintaining the BigQuery layer that the rest of the org queries against.</p>\n<p><strong>Fleet Observability</strong></p>\n<p>Making the state of the accelerator fleet legible and actionable in real time. This means building cluster health tooling, capacity planning platforms, alerting on occupancy drops and allocation problems, and driving systemic improvements to scheduling and fragmentation.</p>\n<p><strong>Compute Efficiency</strong></p>\n<p>Measuring and improving how effectively every major workload uses the hardware it’s running on. 
This means instrumenting utilization metrics across training, inference, and eval systems, building benchmarking infrastructure, establishing per-config baselines, and collaborating directly with system-owning teams to close efficiency gaps.</p>\n<p><strong>What You’ll Do</strong></p>\n<ul>\n<li>Build and operate data pipelines that ingest accelerator occupancy, utilization, and cost data from multiple cloud providers into BigQuery.</li>\n<li>Develop and maintain observability infrastructure (Prometheus recording rules, Grafana dashboards, and alerting systems) that surfaces actionable signals about fleet health, occupancy, and efficiency.</li>\n<li>Instrument and analyze compute efficiency metrics across training, inference, and eval workloads.</li>\n<li>Build internal tooling and platforms that enable capacity planning, workload attribution, and cluster debugging.</li>\n<li>Operate Kubernetes-native systems at scale: deploying data collection agents, managing workload labeling infrastructure, and understanding how taints, reservations, and scheduling affect capacity.</li>\n<li>Normalize and reconcile data across heterogeneous sources, including AWS, GCP, and Azure billing exports, vendor-specific telemetry formats, and internal systems with different schemas and billing arrangements.</li>\n</ul>\n<p><strong>You May Be a Good Fit If You Have</strong></p>\n<ul>\n<li>5+ years of software engineering experience with a strong track record building and operating production systems.</li>\n<li>Kubernetes fluency at operational depth: you’ve operated production K8s at meaningful scale, not just written manifests.</li>\n<li>Data pipeline engineering experience: designing, building, and owning the full lifecycle of production data pipelines.</li>\n<li>Observability tooling experience: Prometheus, PromQL, and Grafana are in the critical path for this team.</li>\n<li>Python and SQL at production quality.</li>\n<li>Familiarity with at least one major cloud provider (AWS, GCP, or Azure) at the infrastructure level: compute, billing, usage APIs, cost management tooling.</li>\n</ul>\n<p><strong>Strong Candidates May Also Have</strong></p>\n<ul>\n<li>Multi-cloud data ingestion experience, especially working with AWS and GCP APIs, billing exports, or vendor-specific telemetry formats.</li>\n<li>Accelerator infrastructure familiarity: GPU metrics (DCGM), TPU utilization, Trainium power and utilization metrics, or experience working with ML training/inference systems at the hardware level.</li>\n<li>Performance engineering and benchmarking experience: building benchmark harnesses, establishing baselines, reasoning about compute efficiency (FLOPs utilization, memory bandwidth, interconnect throughput), and working with system teams to diagnose and improve performance.</li>\n<li>Data-as-product thinking: experience building internal data products with self-service access, schema contracts, API serving, documentation,</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_baad2598-8bc","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.co/","logo":"https://logos.yubhub.co/anthropic.co.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5126702008","x-work-arrangement":"onsite","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Kubernetes","Python","SQL","Prometheus","Grafana","BigQuery","Cloud computing","Data pipeline engineering","Observability tooling"],"x-skills-preferred":["Multi-cloud data ingestion","Accelerator infrastructure","Performance engineering","Data-as-product thinking"],"datePosted":"2026-04-18T15:56:02.706Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA | New York City, NY"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Kubernetes, Python, SQL, Prometheus, Grafana, BigQuery, Cloud computing, Data pipeline engineering, Observability tooling, Multi-cloud data ingestion, Accelerator infrastructure, Performance engineering, Data-as-product thinking"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_946354fd-05b"},"title":"Specialist Solutions Architect - AI Tooling & Platform Management","description":"<p>As a Specialist Solutions Architect (SSA),AI Tooling &amp; System Management, you will build and manage the AI tooling stack and system infrastructure that empowers Field Engineering to deliver customer outcomes with higher velocity.</p>\n<p>These capabilities will be utilized by our Go-To-Market teams, including Solutions Architects and Account Executives, to accelerate technical demos, proofs of concept, and customer engagements.</p>\n<p>You will bring consistency to our internal AI tooling stack, establish standards for AI-driven development practices, and scale these capabilities across the department.</p>\n<p>A critical aspect of this role is building the infrastructure that enables agent networks to perform with high quality and reliability,including context management systems, data integrations, and supporting tooling.</p>\n<p>Additionally, you will develop internal applications and technical tools that enhance the overall lifecycle, track adoption metrics to measure impact, and partner with stakeholders to drive continuous improvement through intelligent automation and AI-augmented workflows.</p>\n<p>The impact you will have:</p>\n<ul>\n<li>Architect production-level AI tooling deployments that meet security, networking, and data integration requirements</li>\n</ul>\n<ul>\n<li>Build and maintain internal AI tooling infrastructure for demos, learning, building POCs, and production workflows across platforms, including AI-assisted development environments, Databricks environments, and cloud-based tooling</li>\n</ul>\n<ul>\n<li>Establish consistency in the AI tooling stack by defining standards, best practices, and reusable patterns that enable Field Engineering to build with AI efficiently and reliably at scale</li>\n</ul>\n<ul>\n<li>Build context management infrastructure for agent networks, including vector databases, knowledge bases, and retrieval systems that ensure AI agents have access to the right information at the right time</li>\n</ul>\n<ul>\n<li>Design and implement system integrations to bring data from enterprise sources into AI 
applications, ensuring secure, scalable, and reliable data flows</li>\n</ul>\n<ul>\n<li>Develop internal applications to streamline Field Engineering workflows, improve demo and builder environments, and accelerate customer engagement velocity</li>\n</ul>\n<ul>\n<li>Track adoption metrics and tooling effectiveness by instrumenting the AI tooling stack, building dashboards, and providing data-driven insights to leadership on adoption rates, productivity gains, and ROI</li>\n</ul>\n<ul>\n<li>Manage AI tooling infrastructure and spend by overseeing cloud costs, monitoring consumption as teams scale, resolving capacity issues, and deploying automation to reduce operational overhead</li>\n</ul>\n<ul>\n<li>Partner with Scale and Technical Enablement teams to develop documentation, AI-powered development patterns, and training materials</li>\n</ul>\n<ul>\n<li>Support Solution Architects with custom proof of concept environments, AI tooling configurations, and technical guidance for customer engagements</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_946354fd-05b","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8409019002","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$180,000-$247,500 USD","x-skills-required":["Cloud Platforms & Architecture","AI Tooling","Context Management & Agent Networks","Application Development","Metrics & Analytics","System Integration & Data Pipelines","Security & Platform Administration","Infrastructure Automation & DevOps"],"x-skills-preferred":["Security","System Integrations & Application Deployment","Developer Experience & AI Tooling"],"datePosted":"2026-04-18T15:55:11.227Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Northeast - United States"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Cloud Platforms & Architecture, AI Tooling, Context Management & Agent Networks, Application Development, Metrics & Analytics, System Integration & Data Pipelines, Security & Platform Administration, Infrastructure Automation & DevOps, Security, System Integrations & Application Deployment, Developer Experience & AI Tooling","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":180000,"maxValue":247500,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_1f2f48ad-46d"},"title":"Senior Analytics Engineer","description":"<p>We&#39;re looking for a dedicated Analytics Engineer to join the AI Group to help us with data platform development, cross-functional collaboration, data strategy &amp; governance, advanced analytics &amp; insights, automation &amp; optimization, innovation in data infrastructure, and strategic influence.</p>\n<p>As an Analytics Engineer, you will design, build, and manage scalable data pipelines and ETL processes to support a robust, analytics-ready data platform. You will partner with AI analysts, ML scientists, engineers, and business teams to understand data needs and ensure accurate, reliable, and ergonomic data solutions. 
You will lead initiatives in data model development, data quality ownership, warehouse management, and production support for critical workflows. You will conduct data analysis and build custom models to support strategic business decisions and performance measurement. You will streamline data collection and reporting processes to reduce manual effort and improve efficiency. You will create scalable solutions like unified data pipelines and access control systems to meet evolving organisational needs. You will work with partner teams to align data collection with long-term analytics and feature development goals.</p>\n<p>We&#39;re looking for someone who writes advanced SQL with a preference for well-architected data models, optimized query performance, and clearly documented code. You should be familiar with the modern data stack, including dbt and Snowflake. You should have a growth mindset and eagerness to learn. You should exhibit great judgment and sharp business and product instincts that allow you to differentiate essential versus nice-to-have and to make good choices about trade-offs. You should practice excellent communication skills, and you should tailor explanations of technical concepts to a variety of audiences.</p>\n<p>Nice to have: exposure to Apache Airflow or other DAG frameworks, worked in Tableau, Looker, or similar visualization/business intelligence platform, experience with operational tools and business systems like Google Analytics, Marketo, Salesforce, Segment, or Stripe, familiarity with Python.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_1f2f48ad-46d","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Intercom","sameAs":"https://www.intercom.com/","logo":"https://logos.yubhub.co/intercom.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/intercom/jobs/7807847","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["advanced SQL","dbt","Snowflake","data pipeline development","ETL process management","data strategy & governance","advanced analytics & insights","automation & optimization","innovation in data infrastructure","strategic influence"],"x-skills-preferred":["Apache Airflow","Tableau","Looker","Google Analytics","Marketo","Salesforce","Segment","Stripe","Python"],"datePosted":"2026-04-18T15:55:10.503Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Dublin, Ireland"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"advanced SQL, dbt, Snowflake, data pipeline development, ETL process management, data strategy & governance, advanced analytics & insights, automation & optimization, innovation in data infrastructure, strategic influence, Apache Airflow, Tableau, Looker, Google Analytics, Marketo, Salesforce, Segment, Stripe, Python"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_3beddc8f-183"},"title":"Staff Data Systems Analyst","description":"<p>At ZoomInfo, we&#39;re looking for a Senior Data Systems Analyst to join our team. As a key member of our data operations team, you&#39;ll be responsible for building deep expertise in our company data pipeline, which ingests, processes, and profiles millions of company records. 
Your primary focus will be on mastering our pipeline architecture, contributing to our infrastructure transition, and leading strategic data improvement initiatives.</p>\n<p>In your first 6-12 months, you&#39;ll work alongside other analysts who have context on our systems, learning the architecture while bringing fresh perspectives and technical depth. As you gain mastery and systems stabilize, you&#39;ll increasingly own pipeline architecture decisions and lead strategic data improvement initiatives.</p>\n<p>Key responsibilities include:</p>\n<ul>\n<li>Mastering our company data pipeline architecture, including how data flows from ingestion through profiling, what transforms are applied at each stage, and how components interconnect</li>\n<li>Reading and analyzing production code to understand data transformations, trace data lineage, and assess how proposed changes would impact the system</li>\n<li>Developing frameworks for evaluating tradeoffs between technical complexity, implementation effort, and customer impact</li>\n<li>Creating clear documentation, system maps, and knowledge resources that capture architecture decisions, dependencies, and design rationale</li>\n</ul>\n<ul>\n<li>Contributing to pipeline evolution and infrastructure improvements by participating in design conversations with Engineering and Product, validating pipeline improvements through rigorous testing, and translating data quality investigations and emerging requirements into system-level improvement opportunities</li>\n</ul>\n<ul>\n<li>Solving complex, ambiguous data challenges by leading or contributing to data improvement initiatives that require both systems thinking and creative problem-solving</li>\n</ul>\n<ul>\n<li>Building partnerships and institutional knowledge by developing strong working relationships with Data Acquisition, Product, Engineering, and fellow data analysts, conducting impact analyses and validation studies, and documenting your learning, approaches, and insights</li>\n</ul>\n<p>We&#39;re looking for a highly skilled individual with a strong background in data analytics, data engineering, or related technical roles. 
You should have experience working with data pipelines, ETL systems, or data processing infrastructure, and be able to read and understand code (Python, Java, SQL, or similar) to analyze data transformations, understand system logic, and assess technical feasibility.</p>\n<p>Required qualifications include:</p>\n<ul>\n<li>Bachelor&#39;s degree in Computer Science, Engineering, Mathematics, Statistics, or related quantitative field</li>\n<li>5+ years of experience in data analytics, data engineering, or related technical roles</li>\n<li>Experience working with data pipelines, ETL systems, or data processing infrastructure</li>\n<li>Ability to read and understand code (Python, Java, SQL, or similar)</li>\n<li>Strong programming skills in Python and SQL for data analysis and manipulation</li>\n<li>Experience solving ambiguous, multi-faceted data problems that required figuring out the approach, not just executing a well-defined analysis</li>\n<li>Demonstrated ability to work effectively with Engineering and/or Product teams, translating between technical implementation and business/customer needs</li>\n<li>Strong analytical skills with ability to investigate complex issues systematically</li>\n<li>Excellent communication skills,able to explain technical concepts clearly to diverse audiences</li>\n<li>Self-directed with strong ownership mentality,you drive your work forward and know when to seek input</li>\n</ul>\n<p>Preferred qualifications include experience with company data, business data, web data acquisition, or data quality initiatives, as well as experience with data profiling, entity resolution, record linkage, or data matching systems.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_3beddc8f-183","directApply":true,"hiringOrganization":{"@type":"Organization","name":"ZoomInfo","sameAs":"https://www.zoominfo.com/","logo":"https://logos.yubhub.co/zoominfo.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/zoominfo/jobs/8408622002","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["data analytics","data engineering","data pipelines","ETL systems","data processing infrastructure","Python","Java","SQL","data transformation","system logic","technical feasibility"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:54:46.937Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Vancouver, Washington, United States; Waltham, Massachusetts, United States"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"data analytics, data engineering, data pipelines, ETL systems, data processing infrastructure, Python, Java, SQL, data transformation, system logic, technical feasibility"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_2a2d718a-f65"},"title":"Senior Software Engineer, AI Platform and Enablement","description":"<p><strong>About the Role</strong></p>\n<p>We&#39;re building a next-generation AI-powered platform and web application for creating audio and video content quickly and easily. This involves developing a revolutionary way to record, transcribe, edit, and mix audio and video on the web using state-of-the-art AI models,a challenge that requires solving complex technical problems. 
We&#39;re hiring a senior engineer to join our AI Platform and Enablement team. The ideal candidate thrives in a fast-moving, high-ownership environment and is comfortable navigating the ambiguity of bringing research work into an established product.</p>\n<p><strong>About the Team</strong></p>\n<p>The team’s objective is to support integrating cutting-edge first-party models (developed by our in-house AI Research team) and third-party/open source AI models into the Descript product.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Build, maintain, and standardize third-party model integrations, including consulting for other engineering teams with AI model integration needs</li>\n</ul>\n<ul>\n<li>Design, implement, and maintain our AI infrastructure supporting our machine learning life cycle, including data ingestion pipelines, training developer experience and infrastructure, evaluation frameworks, and deployments / GPU infrastructure</li>\n</ul>\n<ul>\n<li>Collaborate with Product Managers, Research Engineers, and AI Researchers to understand their infrastructure needs and ensure our AI systems are robust, scalable, and efficient</li>\n</ul>\n<ul>\n<li>Optimise and scale our models and algorithms for efficient inference</li>\n</ul>\n<ul>\n<li>Deploy, monitor, and manage AI models in production</li>\n</ul>\n<p><strong>What You Bring</strong></p>\n<ul>\n<li>Experience in deploying and managing AI models in production</li>\n</ul>\n<ul>\n<li>Experience with the tools of large volume data pipelines like spark, flume, dask, etc.</li>\n</ul>\n<ul>\n<li>Familiarity with cloud platforms (AWS, Google Cloud, Azure) and container technologies (Docker, Kubernetes).</li>\n</ul>\n<ul>\n<li>Knowledge of DevOps and MLOps best practices</li>\n</ul>\n<ul>\n<li>Strong problem-solving abilities and excellent communication skills.</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<ul>\n<li>Generous healthcare package</li>\n</ul>\n<ul>\n<li>401k matching program</li>\n</ul>\n<ul>\n<li>Catered lunches</li>\n</ul>\n<ul>\n<li>Flexible vacation time</li>\n</ul>\n<p><strong>Fun fact about me: I love pineapple on pizza.</strong></p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_2a2d718a-f65","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Descript","sameAs":"https://descript.com/","logo":"https://logos.yubhub.co/descript.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/descript/jobs/7580335003","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$180,000 - $286,000/year","x-skills-required":["Experience in deploying and managing AI models in production","Experience with the tools of large volume data pipelines like spark, flume, dask, etc.","Familiarity with cloud platforms (AWS, Google Cloud, Azure) and container technologies (Docker, Kubernetes)","Knowledge of DevOps and MLOps best practices","Strong problem-solving abilities and excellent communication skills"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:54:12.258Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Experience in deploying and managing AI models in production, Experience with the tools of large volume data pipelines like spark, flume, dask, etc., Familiarity with cloud platforms 
(AWS, Google Cloud, Azure) and container technologies (Docker, Kubernetes), Knowledge of DevOps and MLOps best practices, Strong problem-solving abilities and excellent communication skills","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":180000,"maxValue":286000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_3e12d6b2-155"},"title":"Capital Solutions Manager (Data OS, Insight OS)","description":"<p>Behind many of life&#39;s most important transactions (buying a house, applying for a mortgage, getting a small business loan, or refinancing a credit card) is a network of credit relationships. Setpoint provides critical operational infrastructure for relationships between the world&#39;s largest banks, credit funds and capital markets counterparties. We&#39;re building trust in this system of credit.</p>\n<p>We&#39;re looking for a Capital Solutions Manager to join our team and serve as the bridge between our clients and our engineering organisation. You&#39;ll take ownership of live client portfolios across Data OS and Insight OS, our data management and analytics platforms.</p>\n<p>This isn&#39;t a back-office analytics role. You&#39;ll be client-facing from day one, owning deal relationships, translating complex structured finance requirements into engineering specs, and ensuring that every dashboard, data pipeline, and export meets institutional-grade standards.</p>\n<p>This is an opportunity to be an early owner of fast-growing product lines (Data OS and Insight OS) at a fast-growing platform. You&#39;ll be a co-owner of creating tech solutions for lenders and borrowers in asset-backed credit. We have a strong ethos of promoting from within, and you&#39;ll be given ample opportunities for career development and advancement.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Own client portfolios end-to-end. Serve as the primary point of contact for assigned clients across Data OS and Insight OS, managing onboarding, success, and supporting growth.</li>\n</ul>\n<ul>\n<li>Translate our customer&#39;s structured finance needs for Engineering. Act as the bridge between our clients and our product/engineering organisation for Data OS and Insight OS. Define what needs to be built, flag what may be custom work, write the specs, review the output, and validate that dashboards and data pipelines match analytical intent.</li>\n</ul>\n<ul>\n<li>Own the accuracy of Setpoint&#39;s data layer across your assigned portfolio. Lead data quality assessments on incoming loan tapes and client deliverables, identifying anomalies, missing fields, and population gaps before they reach production.</li>\n</ul>\n<ul>\n<li>Supervise offshore implementation resources. Directly manage a team of offshore analysts supporting data ingestion, validation, and reporting workflows. Set priorities, review work product, and ensure delivery standards are met.</li>\n</ul>\n<ul>\n<li>Leverage AI-powered workflows and internal tooling. Use and help refine our internal AI-assisted deal workflows, from automated data quality checks to metric design and schema mapping, to accelerate delivery and improve consistency across client portfolios.</li>\n</ul>\n<ul>\n<li>Make us better. 
Contribute to product priorities, onboarding playbooks, sector templates, and process documentation that make our delivery engine repeatable as the portfolio grows.</li>\n</ul>\n<p><strong>Requirements</strong></p>\n<ul>\n<li>3–6 years in private credit, asset-backed lending, or structured finance. You&#39;ve worked with loan tapes, servicer reports, borrowing base certificates, or compliance packages, not just in theory, but hands-on. Experience across multiple asset classes (consumer, auto, fund finance, CRE) is a strong plus.</li>\n</ul>\n<ul>\n<li>Demonstrated client management experience. You&#39;ve owned client relationships (running calls, managing expectations, resolving issues) in a professional services, advisory, or platform context. You&#39;re comfortable being the face of the company to institutional investors and lenders.</li>\n</ul>\n<ul>\n<li>Strong analytical and data skills. Expert-level Excel is baseline. Comfort with SQL, Python, data pipelines, or business intelligence tools (Metabase, Tableau, etc.) is highly valued. You don&#39;t need to write production code, but you should be able to read a data schema, trace a metric back to its source field, and spot when something doesn&#39;t add up.</li>\n</ul>\n<ul>\n<li>Experience writing technical specifications or engineering handoff documents. You&#39;ve translated business requirements into structured artifacts (field mappings, data dictionaries, logic definitions, or acceptance criteria) that a technical team can execute against.</li>\n</ul>\n<ul>\n<li>Comfort with AI/LLM tooling and automation. You don&#39;t need to be an AI engineer, but you should be excited about using AI-assisted workflows to accelerate data analysis, quality checks, and specification writing. Familiarity with prompt engineering or AI copilot tools is a plus.</li>\n</ul>\n<ul>\n<li>Team supervision experience. You&#39;ve managed or coordinated the work of junior analysts, offshore teams, or cross-functional workstreams. You can set priorities, review deliverables, and maintain quality without micromanaging.</li>\n</ul>\n<ul>\n<li>Ability to operate independently in ambiguous environments. You can take a vague client request, figure out what&#39;s actually needed, scope the work, and deliver without someone laying out every step. You handle tight timelines and competing priorities without losing quality.</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<p>We offer a comprehensive benefits package that includes competitive salaries, stock options, medical, dental, and vision coverage, 401(k), short-term and long-term disability coverage, and flexible vacation. 
We have offices in Austin, TX, New York City, NY, and Salt Lake City, UT with hybrid roles based in these locations and an expectation of two days a week in office (Tuesdays and Thursdays).</p>\n<p><strong>Compensation</strong></p>\n<p>$140,000 - $160,000 dependent on multiple factors, which may include the successful candidate&#39;s skills, experience and other qualifications.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_3e12d6b2-155","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Setpoint","sameAs":"https://setpoint.com/","logo":"https://logos.yubhub.co/setpoint.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/setpoint/jobs/5106278007","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":"$140,000 - $160,000","x-skills-required":["structured finance","data management","analytics","client relationship management","technical specifications","engineering handoff documents","AI/LLM tooling","automation","team supervision","independent problem-solving"],"x-skills-preferred":["SQL","Python","data pipelines","business intelligence tools","prompt engineering","AI copilot tools"],"datePosted":"2026-04-18T15:53:12.581Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Austin or New York (Hybrid)"}},"employmentType":"FULL_TIME","occupationalCategory":"Finance","industry":"Finance","skills":"structured finance, data management, analytics, client relationship management, technical specifications, engineering handoff documents, AI/LLM tooling, automation, team supervision, independent problem-solving, SQL, Python, data pipelines, business intelligence tools, prompt engineering, AI copilot tools","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":140000,"maxValue":160000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_2a04373f-0ca"},"title":"Engineering Manager (Integrations)","description":"<p>About Dialpad</p>\n<p>Dialpad is the AI-native business communications platform. We unify calling, messaging, meetings, and contact center on a single platform - powered by AI that understands every conversation in real time.</p>\n<p>More than 70,000 companies around the globe, including WeWork, Asana, NASDAQ, AAA Insurance, COMPASS Realty, Uber, Randstad, and Tractor Supply, rely on Dialpad to build stronger customer connections using real-time, AI-driven insights.</p>\n<p>We’re now leading the shift to Agentic AI: intelligent agents that don’t just analyse conversations but take action by automating workflows, resolving customer issues, and accelerating revenue in real time.</p>\n<p>Our DAART initiative (Dialpad Agentic AI in Real Time) is redefining what a communications platform can do. Visit dialpad.com to learn more.</p>\n<p>Being a Dialer</p>\n<p>At Dialpad, AI isn’t just a feature; it’s how our teams do their best work every day. We put powerful AI tools in every employee’s hands so they can move faster, think bigger, and achieve more.</p>\n<p>We believe every conversation matters. And we’ve built the platform that turns those conversations into insight and action, for our customers and ourselves.</p>\n<p>We look for people who are intensely curious and hold themselves to a high bar. 
Our ambition is significant, and achieving it requires a team that operates at the highest level.</p>\n<p>We seek individuals who embody our core traits: Scrappy, Curious, Optimistic, Persistent, and Empathetic.</p>\n<p>About the team</p>\n<p>Dialpad’s Salesforce Integrations team plays an essential role in developing a robust layer of integrations that seamlessly connect Dialpad&#39;s products with external services, in particular Salesforce.</p>\n<p>Our teams are highly collaborative and comprise cross-disciplinary professionals, including Product Managers, Designers, QA specialists, as well as Engineers specialising in Full-Stack Engineering, Data Engineering, Data Science, and Telephony.</p>\n<p>Additionally, the integrations team collaborates closely with Dialpad’s Agentic AI organisation to help expand the ecosystem powering Dialpad’s AI agents.</p>\n<p>Your role</p>\n<p>As an Engineering Manager of the Salesforce Integrations team, you will lead a team of 6+ mid-senior full-stack engineers based in London &amp; India.</p>\n<p>Your role will involve closely collaborating with other engineering managers &amp; teams also focusing on integrations to achieve alignment &amp; efficiency whilst delivering multiple simultaneous projects with cross-functional stakeholders.</p>\n<p>Although this is primarily a leadership position, given the current team size technical IC tasks will also be performed including system design, architecture &amp; code reviews &amp; AI-driven development.</p>\n<p>This team is expected to grow in both size, scope &amp; impact with strong potential for additional career opportunities &amp; responsibilities.</p>\n<p>This position reports to our Director of Engineering who is based in Canada.</p>\n<p>Candidates are expected to be flexible with their working hours, ensuring overlap with IST and PST timezones for team meetings, discussions &amp; escalations.</p>\n<p>What you’ll do</p>\n<ul>\n<li>You’ll have direct reports ranging from mid-level to highly experienced engineers, and will support their performance and career growth through regular one-on-ones, performance reviews, coaching, and mentoring.</li>\n</ul>\n<ul>\n<li>Help to define a 1-3 year roadmap &amp; vision for the Salesforce integrations team</li>\n</ul>\n<ul>\n<li>Consistently work with your direct reports to support their career growth</li>\n</ul>\n<ul>\n<li>Assist in evaluating technical design &amp; architecture documents and proposals on an ongoing basis, in anticipation of increased scale and ever-evolving technology to meet the demands of rapidly growing business needs</li>\n</ul>\n<ul>\n<li>Work with geographically distributed peers including engineering managers, technical leaders, product managers, designers, support engineers and other stakeholders in order to align on engineering-wide priorities</li>\n</ul>\n<ul>\n<li>Own large projects end-to-end including requirements gathering, planning, resource allocation, and sometimes execution</li>\n</ul>\n<ul>\n<li>Drive effective engineering processes and policies</li>\n</ul>\n<ul>\n<li>Scale the team by recruiting candidates from diverse backgrounds</li>\n</ul>\n<ul>\n<li>Get hands-on when necessary and assist with technical implementations</li>\n</ul>\n<ul>\n<li>Assist with emerging Agentic AI technologies &amp; initiatives</li>\n</ul>\n<ul>\n<li>Use AI coding tools for active development, reviews, testing, etc.</li>\n</ul>\n<p>What we’re looking for</p>\n<ul>\n<li>3+ years experience leading a high-performing team of engineers, including managing 
and shipping cross-functional or multi-team projects</li>\n</ul>\n<ul>\n<li>10+ years professional experience as an engineer or engineering leader</li>\n</ul>\n<ul>\n<li>Experience with the development of integrations &amp; APIs</li>\n</ul>\n<ul>\n<li>Hiring &amp; interviewing skills</li>\n</ul>\n<ul>\n<li>Onboarding &amp; mentorship experience</li>\n</ul>\n<ul>\n<li>Agentic AI experience</li>\n</ul>\n<ul>\n<li>Proficiency with ETL data pipelines</li>\n</ul>\n<p>Why Join Dialpad</p>\n<ul>\n<li>Work at the center of the AI transformation in business communications</li>\n</ul>\n<ul>\n<li>Build and ship agentic AI products that are redefining how companies operate</li>\n</ul>\n<ul>\n<li>Join a team where AI amplifies every employee’s impact</li>\n</ul>\n<ul>\n<li>Competitive salary, comprehensive benefits, and real opportunities for growth</li>\n</ul>\n<p>We believe in investing in our people. Dialpad offers competitive benefits and perks, cutting-edge AI tools, and a robust training program that help you reach your full potential.</p>\n<p>We have designed our offices to be inclusive, offering a vibrant environment to cultivate collaboration and connection.</p>\n<p>Our exceptional culture, repeatedly recognised as a Great Place to Work, ensures that every employee feels valued and empowered to contribute to our collective success.</p>\n<p>Don’t meet every single requirement? If you’re excited about this role and possess the fundamental traits, drive, and strong ambition we seek, but your experience doesn’t meet every qualification, we encourage you to apply.</p>\n<p>Dialpad is an equal-opportunity employer. We are dedicated to creating a community of inclusion and an environment free from discrimination or harassment.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_2a04373f-0ca","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Dialpad","sameAs":"https://dialpad.com","logo":"https://logos.yubhub.co/dialpad.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/dialpad/jobs/8421276002","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Agentic AI","APIs","ETL data pipelines","Full-stack engineering","Leadership"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:53:10.903Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"London, UK"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Agentic AI, APIs, ETL data pipelines, Full-stack engineering, Leadership"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_d70a8194-b84"},"title":"Software Engineer, Machine Learning","description":"<p>We are seeking a versatile and experienced Machine Learning / AI Engineer to join our growing AI team, working at the intersection of applied machine learning, infrastructure, and product innovation. Your work will drive user productivity, shape new product experiences, and advance the state of AI at Figma.</p>\n<p>As a Machine Learning / AI Engineer, you will design, build, and productionize ML models for Search, Discovery, Ranking, Retrieval-Augmented Generation (RAG), and generative AI features. 
You will also build and maintain scalable data pipelines to collect high-quality training and evaluation datasets, including annotation systems and human-in-the-loop workflows.</p>\n<p>You will collaborate closely with engineers, researchers, designers, and product managers across multiple teams to deliver high-quality ML-driven features and infrastructure. This is a high-impact, cross-functional role where you will shape both foundational systems and user-facing capabilities.</p>\n<p>Key responsibilities include:</p>\n<ul>\n<li>Design, build, and productionize ML models for Search, Discovery, Ranking, Retrieval-Augmented Generation (RAG), and generative AI features.</li>\n<li>Build and maintain scalable data pipelines to collect high-quality training and evaluation datasets, including annotation systems and human-in-the-loop workflows.</li>\n<li>Collaborate with AI researchers to iterate on datasets, evaluation metrics, and model architectures to improve quality and relevance.</li>\n<li>Work with product engineers to define and deliver impactful AI features across Figma&#39;s platform.</li>\n<li>Partner with infrastructure engineers to develop and optimize systems for training, inference, monitoring, and deployment.</li>\n<li>Explore new ideas at the edge of what&#39;s technically possible and help shape the long-term AI vision at Figma.</li>\n</ul>\n<p>Requirements include:</p>\n<ul>\n<li>5+ years of industry experience in software engineering, with 3+ years focused on applied machine learning or AI.</li>\n<li>Strong experience with end-to-end ML model development, including training, evaluation, deployment, and monitoring.</li>\n<li>Proficiency in Python and familiarity with ML libraries like PyTorch, TensorFlow, Scikit-learn, Spark MLlib, or XGBoost.</li>\n<li>Experience designing and building scalable data and annotation pipelines, as well as evaluation systems for AI model quality.</li>\n<li>Experience mentoring or leading others and contributing to a culture of technical excellence and innovation.</li>\n</ul>\n<p>Preferred qualifications include:</p>\n<ul>\n<li>Familiarity with search relevance, ranking, NLP, or RAG systems.</li>\n<li>Experience with AI infrastructure and MLOps, including observability, CI/CD, and automation for ML workflows.</li>\n<li>Experience working on creative or design-focused ML applications.</li>\n<li>Knowledge of additional languages such as C++ or Go is a plus, but not required.</li>\n<li>A product mindset with the ability to tie technical work to user outcomes and business impact.</li>\n<li>Strong collaboration and communication skills, especially when working across functions (engineering, product, research).</li>\n</ul>\n<p>At Figma, one of our values is Grow as you go. We believe in hiring smart, curious people who are excited to learn and develop their skills. If you&#39;re excited about this role but your past experience doesn&#39;t align perfectly with the points outlined in the job description, we encourage you to apply anyways. 
You may be just the right candidate for this or other roles.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_d70a8194-b84","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Figma","sameAs":"https://www.figma.com/","logo":"https://logos.yubhub.co/figma.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/figma/jobs/5551532004","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$153,000-$376,000 USD","x-skills-required":["Machine Learning","AI","Python","PyTorch","TensorFlow","Scikit-learn","Spark MLlib","XGBoost","Data Pipelines","Annotation Systems","Human-in-the-loop Workflows"],"x-skills-preferred":["Search Relevance","Ranking","NLP","RAG Systems","AI Infrastructure","MLOps","Observability","CI/CD","Automation","Creative or Design-Focused ML Applications"],"datePosted":"2026-04-18T15:53:04.257Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA • New York, NY • United States"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Machine Learning, AI, Python, PyTorch, TensorFlow, Scikit-learn, Spark MLlib, XGBoost, Data Pipelines, Annotation Systems, Human-in-the-loop Workflows, Search Relevance, Ranking, NLP, RAG Systems, AI Infrastructure, MLOps, Observability, CI/CD, Automation, Creative or Design-Focused ML Applications","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":153000,"maxValue":376000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_08d03f20-666"},"title":"Finance Systems Integration Engineer","description":"<p>We are seeking an experienced Finance Systems Integration Engineer to support our finance systems transformation at one of the fastest-growing AI companies. You&#39;ll design and build integrations connecting our ERP platform with critical financial applications and support our ERP implementation initiatives.</p>\n<p>As you master our integration landscape, you&#39;ll have opportunities to expand into Claude-powered AI automation and data pipeline development.</p>\n<p>You&#39;ll build the integration backbone for one of the fastest-growing AI companies, with a front-row seat to how Claude transforms financial operations. 
This is a foundational role where you&#39;ll shape our integration architecture from the ground up, then expand into cutting-edge AI automation as our needs evolve.</p>\n<p>In this role, you will:</p>\n<ul>\n<li>Design, build, and maintain integrations connecting ERP systems with downstream applications, including ZipHQ, Brex, Navan, Clearwater, Payroll systems, Salesforce, and other critical financial platforms using Workato, MuleSoft, or similar iPaaS solutions.</li>\n</ul>\n<ul>\n<li>Support integration development and testing during the ERP implementation projects.</li>\n</ul>\n<ul>\n<li>Develop and maintain REST APIs, webhooks, and OAuth 2.0 authentication flows for secure system-to-system communication.</li>\n</ul>\n<ul>\n<li>Implement real-time and batch integration patterns supporting high-volume financial transactions.</li>\n</ul>\n<ul>\n<li>Establish monitoring, alerting, and error-handling frameworks to ensure integration reliability and data integrity.</li>\n</ul>\n<ul>\n<li>Document integration architectures, data flows, API specifications, and troubleshooting procedures.</li>\n</ul>\n<ul>\n<li>Collaborate with implementation consulting partners and vendors on technical integration requirements.</li>\n</ul>\n<p>Additional scope includes AI automation and data infrastructure, including AI agent development, data pipeline support, governance, and collaboration.</p>\n<p>You may be a good fit if you have 8+ years of experience in integration development, data engineering, or systems engineering roles, possess hands-on experience with iPaaS platforms, and have strong programming skills in Python and/or JavaScript/TypeScript.</p>\n<p>Strong candidates may also have experience with high-growth technology companies, background in AI/ML companies, and hands-on experience with specific platforms, including Workday Financials, Stripe, Salesforce, Zuora RevPro, Zip Procurement, Clearwater treasury systems, Pigment planning tools, Numeric close management, and programming skills in Python/JavaScript.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_08d03f20-666","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://anthropic.com/","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5155195008","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$205,000-$265,000 USD","x-skills-required":["integration development","data engineering","systems engineering","iPaaS platforms","Python","JavaScript/TypeScript","REST APIs","webhooks","OAuth 2.0","secure system-to-system communication","real-time and batch integration patterns","high-volume financial transactions","monitoring","alerting","error-handling frameworks","integration reliability","data integrity","API specifications","troubleshooting procedures"],"x-skills-preferred":["AI automation","data infrastructure","AI agent development","data pipeline support","governance","collaboration","high-growth technology companies","AI/ML companies","specific platforms","Workday Financials","Stripe","Salesforce","Zuora RevPro","Zip Procurement","Clearwater treasury systems","Pigment planning tools","Numeric close management"],"datePosted":"2026-04-18T15:52:53.021Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA | Seattle, 
WA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"integration development, data engineering, systems engineering, iPaaS platforms, Python, JavaScript/TypeScript, REST APIs, webhooks, OAuth 2.0, secure system-to-system communication, real-time and batch integration patterns, high-volume financial transactions, monitoring, alerting, error-handling frameworks, integration reliability, data integrity, API specifications, troubleshooting procedures, AI automation, data infrastructure, AI agent development, data pipeline support, governance, collaboration, high-growth technology companies, AI/ML companies, specific platforms, Workday Financials, Stripe, Salesforce, Zuora RevPro, Zip Procurement, Clearwater treasury systems, Pigment planning tools, Numeric close management","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":205000,"maxValue":265000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_23e0d317-de2"},"title":"GTM Systems Engineer","description":"<p>We&#39;re looking for a GTM Systems Engineer to build the nervous system of our revenue operations. You&#39;ll design systems that handle complex usage-based API pricing, enterprise licenses, self-serve Playground flows, and dual US/German entities. Your focus will be on building the technical architecture of our business, including billing infrastructure, enterprise GTM systems, and analytics infrastructure.</p>\n<p>As a GTM Systems Engineer, you&#39;ll work on building the systems that turn API calls into revenue, transform messy multi-jurisdictional data into clarity, and automate what currently requires 10 people to do manually. You&#39;ll architect integrations between CRM, billing, contracts, and finance, building the workflows that turn enterprise sales from a manual slog into something elegant.</p>\n<p>You&#39;ll be responsible for building the data pipelines and dashboards that show us what&#39;s actually happening: consumption patterns, churn signals, expansion opportunities. Not vanity metrics,the kind of real-time visibility that changes how we make decisions.</p>\n<p>We&#39;re looking for someone with 3-5+ years of experience as a Software Engineer, Systems Engineer, or RevOps Engineer at a B2B AI or SaaS company. 
You should have programming proficiency in Python, JavaScript/TypeScript, and SQL, as well as CRM expertise and experience building with integration platforms such as Workato, Zapier, or Tray.io.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_23e0d317-de2","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Black Forest Labs","sameAs":"https://www.blackforestlabs.com/","logo":"https://logos.yubhub.co/blackforestlabs.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/blackforestlabs/jobs/5045195008","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Python","JavaScript/TypeScript","SQL","CRM expertise","Integration platforms","API development","Data pipelines","Billing systems"],"x-skills-preferred":["Experience at a high-growth startup scaling from $1M to $100M+ ARR","Familiarity with API-first or usage-based products","Experience with data visualization tools"],"datePosted":"2026-04-18T15:52:19.665Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Freiburg (Germany)"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, JavaScript/TypeScript, SQL, CRM expertise, Integration platforms, API development, Data pipelines, Billing systems, Experience at a high-growth startup scaling from $1M to $100M+ ARR, Familiarity with API-first or usage-based products, Experience with data visualization tools"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_ea9aa5d2-721"},"title":"Data Engineer Intern (Summer 2026)","description":"<p>About Us</p>\n<p>At Cloudflare, we are on a mission to help build a better Internet. We run one of the world&#39;s largest networks that powers millions of websites and other Internet properties.</p>\n<p>This internship is targeting students with experience and interest in Data Engineering. The Data Engineer Intern delivers full-stack data solutions across the entire data processing pipeline. 
This role relies on systems engineering principles to design and implement solutions that span the data lifecycle - collect, ingest, process, store, persist, access, and deliver data at scale and at speed.</p>\n<p>Responsibilities</p>\n<ul>\n<li>Work through all stages of a data solution lifecycle – analyse / profile data, create conceptual, logical and physical data model designs, architect and design ETL, reporting and analytics</li>\n<li>Knowledge of modern enterprise data architectures, design patterns, and data tool sets and the ability to apply them</li>\n<li>Identify key metrics and build exec-facing dashboards to track progress of the business and its highest priority initiatives</li>\n<li>Identify key business levers, establish cause &amp; effect, perform analysis, and communicate key findings to various stakeholders to facilitate data driven decision-making</li>\n</ul>\n<p>Requirements</p>\n<ul>\n<li>Currently enrolled in M.S in Computer Science, Engineering or related STEM field</li>\n<li>Experience working with Go, Python, SQL, Java, or equivalent programming languages</li>\n<li>Experience working with distributed systems (Spark etc.)</li>\n<li>Hands-on experience in data pipelines/ frameworks development</li>\n<li>Ability and interest to learn new skills and technologies quickly</li>\n<li>Excellent communication and problem-solving skills</li>\n<li>Ability to commit to a 12 week summer internship</li>\n</ul>\n<p>Bonus Points</p>\n<ul>\n<li>Familiarity with container based deployments such as Docker and Kubernetes</li>\n<li>Experience with JavaScript, Typescript, and React</li>\n</ul>\n<p>What Makes Cloudflare Special?</p>\n<p>We&#39;re not just a highly ambitious, large-scale technology company. We&#39;re a highly ambitious, large-scale technology company with a soul. Fundamental to our mission to help build a better Internet is protecting the free and open Internet.</p>\n<p>Project Galileo: Since 2014, we&#39;ve equipped more than 2,400 journalism and civil society organisations in 111 countries with powerful tools to defend themselves against attacks that would otherwise censor their work, technology already used by Cloudflare’s enterprise customers--at no cost.</p>\n<p>Athenian Project: In 2017, we created the Athenian Project to ensure that state and local governments have the highest level of protection and reliability for free, so that their constituents have access to election information and voter registration. Since the project, we&#39;ve provided services to more than 425 local government election websites in 33 states.</p>\n<p>1.1.1.1: We released 1.1.1.1 to help fix the foundation of the Internet by building a faster, more secure and privacy-centric public DNS resolver. This is available publicly for everyone to use - it is the first consumer-focused service Cloudflare has ever released.</p>\n<p>Here’s the deal - we don’t store client IP addresses never, ever. We will continue to abide by our privacy commitment and ensure that no user data is sold to advertisers or used to target consumers.</p>\n<p>Sound like something you’d like to be a part of? We’d love to hear from you!</p>\n<p>This position may require access to information protected under U.S. export control laws, including the U.S. Export Administration Regulations. Please note that any offer of employment may be conditioned on your authorization to receive software or technology controlled under these U.S. 
export laws without sponsorship for an export license.</p>\n<p>Cloudflare is proud to be an equal opportunity employer. We are committed to providing equal employment opportunity for all people and place great value in both diversity and inclusiveness. All qualified applicants will be considered for employment without regard to their, or any other person&#39;s, perceived or actual race, color, religion, sex, gender, gender identity, gender expression, sexual orientation, national origin, ancestry, citizenship, age, physical or mental disability, medical condition, family care status, or any other basis protected by law.</p>\n<p>We are an AA/Veterans/Disabled Employer. Cloudflare provides reasonable accommodations to qualified individuals with disabilities. Please tell us if you require a reasonable accommodation to apply for a job.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_ea9aa5d2-721","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Cloudflare","sameAs":"https://www.cloudflare.com/","logo":"https://logos.yubhub.co/cloudflare.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/cloudflare/jobs/7374706","x-work-arrangement":"onsite","x-experience-level":"entry","x-job-type":"internship","x-salary-range":null,"x-skills-required":["Go","Python","SQL","Java","Distributed systems","Data pipelines","Frameworks development"],"x-skills-preferred":["JavaScript","Typescript","React","Docker","Kubernetes"],"datePosted":"2026-04-18T15:52:05.982Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"In-Office"}},"employmentType":"INTERN","occupationalCategory":"Engineering","industry":"Technology","skills":"Go, Python, SQL, Java, Distributed systems, Data pipelines, Frameworks development, JavaScript, Typescript, React, Docker, Kubernetes"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_72ebb09d-b37"},"title":"Staff+ Software Engineer, Observability","description":"<p>We&#39;re seeking talented and experienced Software Engineers to join our Observability team within the Infrastructure organization. The Observability team owns the monitoring and telemetry infrastructure that every engineer and researcher at Anthropic depends on,from metrics and logging pipelines to distributed tracing, error analytics, alerting, and the dashboards and query interfaces that make it all actionable.</p>\n<p>As Anthropic scales its infrastructure across massive GPU, TPU, and Trainium clusters, the volume and complexity of operational data is growing by orders of magnitude. 
We&#39;re building next-generation observability systems (high-throughput ingest pipelines, cost-efficient columnar storage, unified query layers across signals, and agentic diagnostic tools) to ensure that engineers can detect, diagnose, and resolve issues in minutes rather than hours, even as the systems they operate become exponentially more complex.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Design and build scalable telemetry ingest and storage pipelines for metrics, logs, traces, and error data across Anthropic&#39;s multi-cluster infrastructure</li>\n<li>Own and evolve core observability platforms, driving migrations and architectural improvements that improve reliability, reduce cost, and scale with organisational growth</li>\n<li>Build instrumentation libraries, SDKs, and integrations that make it easy for engineering teams to emit high-quality telemetry from their services</li>\n<li>Drive alerting and SLO infrastructure that enables teams to define, monitor, and respond to reliability targets with minimal noise</li>\n<li>Reduce mean time to detection and resolution by building cross-signal correlation, unified query interfaces, and AI-assisted diagnostic tooling</li>\n<li>Partner with Research, Inference, Product, and Infrastructure teams to ensure observability solutions meet the unique needs of each organisation</li>\n</ul>\n<p>You May Be a Good Fit If You:</p>\n<ul>\n<li>Have 10+ years of relevant industry experience building and operating large-scale observability or monitoring infrastructure</li>\n<li>Have deep experience with at least one observability signal area (metrics, logging, tracing, or error analytics) and familiarity with the others</li>\n<li>Understand high-throughput data pipelines, columnar storage engines, and the tradeoffs involved in ingesting and querying telemetry data at scale</li>\n<li>Have experience operating or building on top of observability platforms such as Prometheus, Grafana, ClickHouse, OpenTelemetry, or similar systems</li>\n<li>Have strong proficiency in at least one of Python, Rust, or Go</li>\n<li>Have excellent communication skills and enjoy partnering with internal teams to improve their operational visibility and incident response capabilities</li>\n<li>Are excited about building foundational infrastructure and are comfortable working independently on ambiguous, high-impact technical challenges</li>\n</ul>\n<p>Strong Candidates May Also Have:</p>\n<ul>\n<li>Experience operating metrics systems at very high cardinality (hundreds of millions of active time series or more)</li>\n<li>Experience with log storage migrations or operating columnar databases (ClickHouse, BigQuery, or similar) for analytics workloads</li>\n<li>Experience with OpenTelemetry instrumentation, collector pipelines, and tail-based sampling strategies</li>\n<li>Experience building or operating alerting platforms, on-call tooling, or SLO frameworks at scale</li>\n<li>Experience with Kubernetes-native monitoring, eBPF-based observability, or continuous profiling</li>\n<li>Interest in applying AI/LLMs to operational workflows such as automated root cause analysis, anomaly detection, or intelligent alerting</li>\n</ul>\n<p>The annual compensation range for this role is $405,000-$485,000 USD.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_72ebb09d-b37","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com/","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5139910008","x-work-arrangement":"hybrid","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"$405,000-$485,000 USD","x-skills-required":["observability","monitoring","telemetry","metrics","logging","tracing","error analytics","alerting","SLO infrastructure","cross-signal correlation","unified query interfaces","AI-assisted diagnostic tooling","Python","Rust","Go","Prometheus","Grafana","ClickHouse","OpenTelemetry"],"x-skills-preferred":["high-throughput data pipelines","columnar storage engines","operating system administration","cloud computing","containerization","DevOps"],"datePosted":"2026-04-18T15:51:29.494Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA | New York City, NY | Seattle, WA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"observability, monitoring, telemetry, metrics, logging, tracing, error analytics, alerting, SLO infrastructure, cross-signal correlation, unified query interfaces, AI-assisted diagnostic tooling, Python, Rust, Go, Prometheus, Grafana, ClickHouse, OpenTelemetry, high-throughput data pipelines, columnar storage engines, operating system administration, cloud computing, containerization, DevOps","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":405000,"maxValue":485000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_22ff82ac-40b"},"title":"Software Engineer, Research Data Platform","description":"<p>We&#39;re looking for engineers who love working directly with users and who excel at building data products. The Research Data Platform team builds the tools that Anthropic&#39;s researchers use every day to manage, query, and analyze the data that goes into training and evaluating frontier models.</p>\n<p>As a software engineer on this team, you will:</p>\n<ul>\n<li>Build and operate data pipelines that extract data from research training runs and land it in storage systems that are easy and fast to query</li>\n<li>Work closely with researchers to design and build APIs, libraries, and web interfaces that support data management, exploration, and analysis</li>\n<li>Develop dataset management, data cataloging, and provenance tooling that researchers use in their day-to-day work</li>\n<li>Embed with research teams to understand their workflows, identify high-leverage tooling opportunities, and ship solutions quickly</li>\n<li>Collaborate with adjacent teams to build on existing systems rather than reinventing them</li>\n</ul>\n<p>You may be a good fit if you have significant software engineering experience, particularly building data-intensive applications or internal tooling. You should enjoy working directly with users, gathering requirements iteratively, and shipping things that get adopted. 
You should also be results-oriented, with a bias towards flexibility and impact.</p>\n<p>Strong candidates may also have experience with large-scale ETL, columnar storage formats, and query engines, high-volume time series data, data cataloging, lineage, or metadata management systems, ML experiment tracking or metrics platforms, complex data visualization, and full-stack web application development.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_22ff82ac-40b","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com/","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5191226008","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":"$320,000-$405,000 USD","x-skills-required":["software engineering","data-intensive applications","internal tooling","data pipelines","storage systems","APIs","libraries","web interfaces","dataset management","data cataloging","provenance tooling","research workflows","adjacent teams"],"x-skills-preferred":["large-scale ETL","columnar storage formats","query engines","high-volume time series data","lineage","metadata management systems","ML experiment tracking","metrics platforms","complex data visualization","full-stack web application development"],"datePosted":"2026-04-18T15:51:29.293Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA | New York City, NY"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"software engineering, data-intensive applications, internal tooling, data pipelines, storage systems, APIs, libraries, web interfaces, dataset management, data cataloging, provenance tooling, research workflows, adjacent teams, large-scale ETL, columnar storage formats, query engines, high-volume time series data, lineage, metadata management systems, ML experiment tracking, metrics platforms, complex data visualization, full-stack web application development","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":320000,"maxValue":405000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_015afe59-9fd"},"title":"Data Analyst II","description":"<p>Why join us</p>\n<p>Brex is the intelligent finance platform that enables companies to spend smarter and move faster in more than 200 markets. 
By combining global corporate cards and banking with intuitive spend management, bill pay, and travel software, Brex enables founders and finance teams to accelerate operations, gain real-time visibility, and control spend effortlessly.</p>\n<p>Tens of thousands of the world&#39;s best companies run on Brex, including DoorDash, Coinbase, Robinhood, Zoom, Plaid, Reddit, and SeatGeek.</p>\n<p>Working at Brex allows you to push your limits, challenge the status quo, and collaborate with some of the brightest minds in the industry.</p>\n<p>We’re committed to building a diverse team and inclusive culture and believe your potential should only be limited by how big you can dream.</p>\n<p>We make this a reality by empowering you with the tools, resources, and support you need to grow your career.</p>\n<p>Data at Brex</p>\n<p>The Data organization develops insights, models, and data infrastructure for teams across Brex, including Sales, Marketing, Product, Engineering, and Operations.</p>\n<p>Our Data Scientists, Analysts, and Engineers work together to make data, and insights derived from data, a core asset across the company.</p>\n<p>What you’ll do</p>\n<p>As a Data Analyst II (DA), you will play a central role in enhancing the operational tracking and reporting capabilities of different business teams across Brex.</p>\n<p>You will work closely with Data Scientists, Data Engineers, and partner teams to drive meaningful insights for the business through visualizations, self-service tools, and ad-hoc analyses.</p>\n<p>This is a high-impact role in a fast-paced fintech environment where your work will directly influence strategic decisions.</p>\n<p>Where you’ll work</p>\n<p>This role will be based in our New York office.</p>\n<p>We are a hybrid environment that combines the energy and connections of being in the office with the benefits and flexibility of working from home.</p>\n<p>We currently require a minimum of three coordinated days in the office per week: Monday, Wednesday, and Thursday.</p>\n<p>As a perk, we also have up to four weeks per year of fully remote work!</p>\n<p>Responsibilities</p>\n<p>Apply data visualization and storytelling skills in creating business intelligence solutions (such as Looker and/or Hex dashboards) that enable actionable insights.</p>\n<p>Perform ad-hoc analyses and deep dives to investigate business questions, surface trends, and provide data-driven recommendations.</p>\n<p>Develop self-service data tools and processes that empower business stakeholders to independently monitor the performance and health of their respective areas.</p>\n<p>Collaborate closely with Data Scientists and Data Engineers to identify data sources, enable data pipelines, and support the development of analytical data models that operationalize reports and dashboards.</p>\n<p>Implement and maintain rigorous data quality checks to ensure the integrity and robustness of datasets used across dashboards, reports, and analyses.</p>\n<p>Partner with various departments, including Sales, Operations, Product, and Finance, to understand their data needs and deliver tailored analyses and reporting that support strategic planning.</p>\n<p>Contribute to the automation of recurring analyses and reporting workflows using Python.</p>\n<p>Requirements</p>\n<p>3+ years of experience in data analytics or a related role in a professional setting.</p>\n<p>2+ years of experience working directly with Sales, Operations, Product, or equivalent business teams.</p>\n<p>Fluency in SQL to manipulate data and perform 
complex analyses (CTEs, window functions, joins across large datasets).</p>\n<p>Experience with Python for data analysis, automation, or scripting.</p>\n<p>Experience with business intelligence and data visualization tools (Looker, Hex, Tableau, or similar).</p>\n<p>Strong quantitative and analytical skills with a demonstrated ability to translate data into business insights.</p>\n<p>Strong communication skills and the ability to work effectively with stakeholders across different functions and levels of technical fluency.</p>\n<p>Experience with generative AI and LLM-based tools (Claude Code, Cursor, GitHub Copilot) to perform and accelerate analyses, automated reporting, and build self-service data tools.</p>\n<p>Bonus points</p>\n<p>Familiarity with cloud data platforms (e.g., Snowflake, BigQuery, Databricks).</p>\n<p>Familiarity with dbt for data modeling and transformation.</p>\n<p>Exposure to data pipeline orchestration tools (e.g., Airflow).</p>\n<p>Experience in fintech, financial services, or payments.</p>\n<p>Comfort operating in a fast-paced, high-growth environment with evolving priorities.</p>\n<p>Compensation</p>\n<p>The expected salary range for this role is $93,600 - $117,000.</p>\n<p>However, the starting base pay will depend on a number of factors including the candidate’s location, skills, experience, market demands, and internal pay parity.</p>\n<p>Depending on the position offered, equity and other forms of compensation may be provided as part of a total compensation package.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_015afe59-9fd","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Brex","sameAs":"https://brex.com/","logo":"https://logos.yubhub.co/brex.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/brex/jobs/8463702002","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":"$93,600 - $117,000","x-skills-required":["SQL","Python","Business Intelligence","Data Visualization","Generative AI","LLM-based tools"],"x-skills-preferred":["Cloud data platforms","dbt","Data pipeline orchestration tools","Fintech","Financial services","Payments"],"datePosted":"2026-04-18T15:50:50.572Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"New York, New York, United States"}},"employmentType":"FULL_TIME","occupationalCategory":"Finance","industry":"Finance","skills":"SQL, Python, Business Intelligence, Data Visualization, Generative AI, LLM-based tools, Cloud data platforms, dbt, Data pipeline orchestration tools, Fintech, Financial services, Payments","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":93600,"maxValue":117000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_059293a1-afa"},"title":"Systems Engineer, Data","description":"<p>About Us</p>\n<p>At Cloudflare, we are on a mission to help build a better Internet. Today the company runs one of the world’s largest networks that powers millions of websites and other Internet properties for customers ranging from individual bloggers to SMBs to Fortune 500 companies.</p>\n<p>We protect and accelerate any Internet application online without adding hardware, installing software, or changing a line of code. 
Internet properties powered by Cloudflare all have web traffic routed through its intelligent global network, which gets smarter with every request. As a result, they see significant improvement in performance and a decrease in spam and other attacks.</p>\n<p>We were named to Entrepreneur Magazine’s Top Company Cultures list and ranked among the World’s Most Innovative Companies by Fast Company.</p>\n<p>About the Team</p>\n<p>The Core Data team’s mission is building a centralized data platform for Cloudflare that provides secure, democratized access to data for internal customers throughout the company. We operate infrastructure and craft tools to empower both technical and non-technical users to answer their most important questions. We facilitate access to data from federated sources across the company for dashboarding, ad-hoc querying and in-product use cases. We power data pipelines and data products, secure and monitor data, and drive data governance at Cloudflare.</p>\n<p>Our work enables every individual at the company to act with greater information and make more informed decisions.</p>\n<p>About the Role</p>\n<p>We are looking for a systems engineer with a strong background in data to help us expand and maintain our data infrastructure. You’ll contribute to the technical implementation of our scaling data platform, manage access while accounting for privacy and security, build data pipelines, and develop tools to automate accessibility and usefulness of data. You’ll collaborate with teams including Product Growth, Marketing, and Billing to help them make informed decisions and power usage-based invoicing platforms, as well as work with product teams to bring new data-driven solutions to Cloudflare customers.</p>\n<p>Responsibilities</p>\n<ul>\n<li>Contribute to the design and execution of technical architecture for highly visible data infrastructure at the company.</li>\n<li>Design and develop tools and infrastructure to improve and scale our data systems at Cloudflare.</li>\n<li>Build and maintain data pipelines and data products to serve customers throughout the company, including tools to automate delivery of those services.</li>\n<li>Gain deep knowledge of our data platforms and tools to guide and enable stakeholders with their data needs.</li>\n<li>Work across our tech stack, which includes Kubernetes, Trino, Iceberg, Clickhouse, and PostgreSQL, with software built using Go, Javascript/Typescript, Python, and others.</li>\n<li>Collaborate with peers to reinforce a culture of exceptional delivery and accountability on the team.</li>\n</ul>\n<p>Requirements</p>\n<ul>\n<li>3-5+ years of experience as a software engineer with a focus on building and maintaining data infrastructure.</li>\n<li>Experience participating in technical initiatives in a cross-functional context, working with stakeholders to deliver value.</li>\n<li>Practical experience with data infrastructure components, such as Trino, Spark, Iceberg/Delta Lake, Kafka, Clickhouse, or PostgreSQL.</li>\n<li>Hands-on experience building and debugging data pipelines.</li>\n<li>Proficient using backend languages like Go, Python, or Typescript, along with strong SQL skills.</li>\n<li>Strong analytical skills, with a focus on understanding how data is used to drive business value.</li>\n<li>Solid communication skills, with the ability to explain technical concepts to both technical and non-technical audiences.</li>\n</ul>\n<p>Desirable Skills</p>\n<ul>\n<li>Experience with data orchestration and infrastructure platforms like 
Airflow and DBT.</li>\n<li>Experience deploying and managing services in Kubernetes.</li>\n<li>Familiarity with data governance processes, privacy requirements, or auditability.</li>\n<li>Interest in or knowledge of machine learning models and MLOps.</li>\n</ul>\n<p>What Makes Cloudflare Special?</p>\n<p>We’re not just a highly ambitious, large-scale technology company. We’re a highly ambitious, large-scale technology company with a soul. Fundamental to our mission to help build a better Internet is protecting the free and open Internet.</p>\n<p>Project Galileo: Since 2014, we&#39;ve equipped more than 2,400 journalism and civil society organizations in 111 countries with powerful tools to defend themselves against attacks that would otherwise censor their work, technology already used by Cloudflare’s enterprise customers--at no cost.</p>\n<p>Athenian Project: In 2017, we created the Athenian Project to ensure that state and local governments have the highest level of protection and reliability for free, so that their constituents have access to election information and voter registration. Since the project, we&#39;ve provided services to more than 425 local government election websites in 33 states.</p>\n<p>1.1.1.1: We released 1.1.1.1 to help fix the foundation of the Internet by building a faster, more secure and privacy-centric public DNS resolver. This is available publicly for everyone to use - it is the first consumer-focused service Cloudflare has ever released.</p>\n<p>Here’s the deal - we don’t store client IP addresses never, ever. We will continue to abide by our privacy commitment and ensure that no user data is sold to advertisers or used to target consumers.</p>\n<p>Sound like something you’d like to be a part of? We’d love to hear from you!</p>\n<p>This position may require access to information protected under U.S. export control laws, including the U.S. Export Administration Regulations. Please note that any offer of employment may be conditioned on your authorization to receive software or technology controlled under these U.S. export laws without sponsorship for an export license.</p>\n<p>Cloudflare is proud to be an equal opportunity employer. We are committed to providing equal employment opportunity for all people and place great value in both diversity and inclusiveness. All qualified applicants will be considered for employment without regard to their, or any other person&#39;s, perceived or actual race, color, religion, sex, gender, gender identity, gender expression, sexual orientation, national origin, ancestry, citizenship, age, physical or mental disability, medical condition, family care status, or any other basis protected by law. We are an AA/Veterans/Disabled Employer. Cloudflare provides reasonable accommodations to qualified individuals with disabilities. Please tell us if you require a reasonable accommodation to apply for a job. Examples of reasonable accommodations include, but are not limited to, changing the application process, providing documents in an alternate format, using a sign language interpreter, or using specialized equipment. If you require a reasonable accommodation to apply for a job, please contact us via e-mail at hr@cloudflare.com or via mail at 101 Townsend St. 
San Francisco, CA 94107.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_059293a1-afa","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Cloudflare","sameAs":"https://www.cloudflare.com/","logo":"https://logos.yubhub.co/cloudflare.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/cloudflare/jobs/7527453","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["data infrastructure","data pipelines","data products","Kubernetes","Trino","Iceberg","Clickhouse","PostgreSQL","Go","Javascript/Typescript","Python","SQL"],"x-skills-preferred":["data orchestration","infrastructure platforms","Airflow","DBT","machine learning models","MLOps"],"datePosted":"2026-04-18T15:50:12.541Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Hybrid"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"data infrastructure, data pipelines, data products, Kubernetes, Trino, Iceberg, Clickhouse, PostgreSQL, Go, Javascript/Typescript, Python, SQL, data orchestration, infrastructure platforms, Airflow, DBT, machine learning models, MLOps"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_901593ac-ffd"},"title":"Systems Engineer, MAPS","description":"<p>About Us</p>\n<p>At Cloudflare, we are on a mission to help build a better Internet. Today the company runs one of the world’s largest networks that powers millions of websites and other Internet properties for customers ranging from individual bloggers to SMBs to Fortune 500 companies.</p>\n<p>We protect and accelerate any Internet application online without adding hardware, installing software, or changing a line of code. Internet properties powered by Cloudflare all have web traffic routed through its intelligent global network, which gets smarter with every request. As a result, they see significant improvement in performance and a decrease in spam and other attacks.</p>\n<p><strong>Available Location:</strong></p>\n<p>Austin</p>\n<p><strong>About the Department</strong></p>\n<p>Cloudflare’s engineering teams build and maintain the systems and products that power our global platform. A global platform which is within approximately 50 milliseconds of about 95% of the Internet connected population, serving on average, over 46 million HTTP requests per second.</p>\n<p><strong>About the Team</strong></p>\n<p>Cloudflare engineering delivers multiple products and features to production at a tremendous pace, and depends on real time load balancing and long term capacity planning to do so with high performance and efficiency. The MAPS team is responsible for highly granular and large-scale resource usage instrumentation and measurement of Cloudflare&#39;s edge platform. The team builds and runs data pipelines, as well as systems and libraries for measuring and collecting the data, and collaborates closely across the range of teams that build and run services on Cloudflare&#39;s global edge network to ensure consistent, complete, and correct attribution of all resource usage.</p>\n<p><strong>What are we looking for?</strong></p>\n<p>We are looking for highly motivated software engineers to join our MAPS team. 
You’ll have a strong programming background with a deep understanding and experience developing and maintaining distributed systems. You’ll need to be able to communicate effectively with engineers across the company to understand the behaviours of our systems and products in order to deliver tooling to meet their testing needs. You will also work closely with product managers to support our public facing synthetic testing and load testing products for enterprise customers.</p>\n<p><strong>Requirements</strong></p>\n<ul>\n<li>Experience as a software engineer or similar role working on latency and efficiency sensitive server infrastructure.</li>\n<li>Experience working with large-scale data pipelines and processing, including use of distributed column-oriented data storage and processing such as ClickHouse, BigQuery/Dremel, etc.</li>\n<li>Strong knowledge of TCP/IP networking fundamentals and routing basics</li>\n<li>Successful track record of collaborating with many teams concurrently to achieve goals that require alignment across a range of teams and orgs.</li>\n<li>Track record of owning problems, goals, and outcomes - not (just) specific pieces of software.</li>\n<li>Track record of building long-term sustainable, maintainable systems.</li>\n<li>Ability to dive deep into technical specifics of systems and codebases, while always keeping the big picture in mind.</li>\n<li>Experience with one or more of the following programming languages: Go, Rust, C</li>\n</ul>\n<p><strong>Bonuses</strong></p>\n<ul>\n<li>Strong understanding of Linux kernel internals, especially any of: networking, scheduling, resource isolation, virtualization</li>\n<li>Experience troubleshooting and resolving performance issues in large-scale distributed systems.</li>\n<li>Experience with large scale configuration/deployment management.</li>\n</ul>\n<p><strong>What Makes Cloudflare Special?</strong></p>\n<p>We’re not just a highly ambitious, large-scale technology company. We’re a highly ambitious, large-scale technology company with a soul. Fundamental to our mission to help build a better Internet is protecting the free and open Internet.</p>\n<p>Project Galileo: Since 2014, we&#39;ve equipped more than 2,400 journalism and civil society organizations in 111 countries with powerful tools to defend themselves against attacks that would otherwise censor their work, technology already used by Cloudflare’s enterprise customers--at no cost.</p>\n<p>Athenian Project: In 2017, we created the Athenian Project to ensure that state and local governments have the highest level of protection and reliability for free, so that their constituents have access to election information and voter registration. Since the project, we&#39;ve provided services to more than 425 local government election websites in 33 states.</p>\n<p>1.1.1.1: We released 1.1.1.1 to help fix the foundation of the Internet by building a faster, more secure and privacy-centric public DNS resolver. This is available publicly for everyone to use - it is the first consumer-focused service Cloudflare has ever released.</p>\n<p>Here’s the deal - we don’t store client IP addresses never, ever. We will continue to abide by our privacy commitment and ensure that no user data is sold to advertisers or used to target consumers.</p>\n<p>Sound like something you’d like to be a part of? We’d love to hear from you!</p>\n<p>This position may require access to information protected under U.S. export control laws, including the U.S. Export Administration Regulations. 
Please note that any offer of employment may be conditioned on your authorization to receive software or technology controlled under these U.S. export laws without sponsorship for an export license.</p>\n<p>Cloudflare is proud to be an equal opportunity employer. We are committed to providing equal employment opportunity for all people and place great value in both diversity and inclusiveness. All qualified applicants will be considered for employment without regard to their, or any other person&#39;s, perceived or actual race, color, religion, sex, gender, gender identity, gender expression, sexual orientation, national origin, ancestry, citizenship, age, physical or mental disability, medical condition, family care status, or any other basis protected by law. We are an AA/Veterans/Disabled Employer. Cloudflare provides reasonable accommodations to qualified individuals with disabilities. Please tell us if you require a reasonable accommodation to apply for a job. Examples of reasonable accommodations include, but are not limited to, changing the application process, providing documents in an alternate format, using a sign language interpreter, or using specialized equipment. If you require a reasonable accommodation to apply for a job, please contact us via e-mail at hr@cloudflare.com or via mail at 101 Townsend St. San Francisco, CA 94107.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_901593ac-ffd","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Cloudflare","sameAs":"https://www.cloudflare.com/","logo":"https://logos.yubhub.co/cloudflare.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/cloudflare/jobs/7742773","x-work-arrangement":"hybrid","x-experience-level":null,"x-job-type":"full-time","x-salary-range":null,"x-skills-required":["software engineer","distributed systems","large-scale data pipelines","ClickHouse","BigQuery/Dremel","TCP/IP networking fundamentals","routing basics","Linux kernel internals","networking","scheduling","resource isolation","virtualization","Go","Rust","C"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:49:31.302Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Hybrid"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"software engineer, distributed systems, large-scale data pipelines, ClickHouse, BigQuery/Dremel, TCP/IP networking fundamentals, routing basics, Linux kernel internals, networking, scheduling, resource isolation, virtualization, Go, Rust, C"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_6aab7ed8-23a"},"title":"Senior Software Engineer - Data","description":"<p>We are seeking an experienced Senior Software Engineer (Data) to join our fast-paced, collaborative data team. 
In this role, you will have broad authority to drive the direction of our technographic data services, building world-class data pipelines and systems to process billions of signals and data points.</p>\n<p>This is an exciting opportunity to solve challenging problems and make a big impact as we invest in making technographics a first-class offering.</p>\n<p>Key Responsibilities:</p>\n<ul>\n<li>Build and optimize big data pipelines to extract and process signals from the web, job postings, and other sources</li>\n<li>Design and implement data architectures and storage solutions to efficiently handle massive data volumes</li>\n<li>Collaborate closely with data scientists to support and integrate ML models into data workflows</li>\n<li>Continuously improve data quality, performance, and scalability of our technographic data platform</li>\n<li>Drive technical strategy and roadmap for the data processing infrastructure</li>\n</ul>\n<p>Requirements:</p>\n<ul>\n<li>Extensive experience building and scaling big data pipelines and architectures from scratch</li>\n<li>Deep expertise in big data frameworks (Hadoop, Spark) and the JVM stack (Java, Scala)</li>\n<li>Strong software engineering fundamentals and ability to write efficient, high-quality code</li>\n<li>Experience with entity recognition and NLP techniques a plus</li>\n<li>Proven track record delivering results and driving projects in a fast-paced environment</li>\n<li>Excellent collaboration and communication skills to work with data scientists, analysts and product teams</li>\n<li>Passion for leveraging huge datasets to power valuable insights</li>\n</ul>\n<p>Ideal Background:</p>\n<ul>\n<li>8+ years of experience in software engineering roles</li>\n<li>Experience working with very large datasets and distributed systems</li>\n<li>Familiarity building data pipelines at large tech companies or data-driven organisations</li>\n<li>Bachelor&#39;s or advanced degree in Computer Science, Engineering or related technical field</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_6aab7ed8-23a","directApply":true,"hiringOrganization":{"@type":"Organization","name":"ZoomInfo","sameAs":"https://www.zoominfo.com/","logo":"https://logos.yubhub.co/zoominfo.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/zoominfo/jobs/8486808002","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$140,000-$220,000 USD","x-skills-required":["big data pipelines","data architectures","storage solutions","ML models","data quality","performance","scalability","data processing infrastructure","Hadoop","Spark","Java","Scala","entity recognition","NLP techniques"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:49:24.766Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Bethesda, Maryland, United States; Waltham, Massachusetts, United States"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"big data pipelines, data architectures, storage solutions, ML models, data quality, performance, scalability, data processing infrastructure, Hadoop, Spark, Java, Scala, entity recognition, NLP 
techniques","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":140000,"maxValue":220000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_753e9465-6a0"},"title":"Senior Security Software Engineer, eBPF & Security Sensors","description":"<p>We&#39;re seeking an exceptional engineer to join our Detection Platform team to build and scale our next-generation security analytics infrastructure. In this role, you&#39;ll architect and implement data pipelines that process massive amounts of security telemetry, develop ML-powered detection systems, and create innovative solutions that leverage Claude to transform security operations.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Build an AI-powered platform responsible for all aspects of detection and response capabilities, from detection development to incident response</li>\n<li>Design and implement scalable data pipelines for ingesting and processing security telemetry across our rapidly growing infrastructure</li>\n<li>Architect solutions for storing and efficiently querying large volumes of security-relevant data</li>\n<li>Create rapid prototypes and proof-of-concepts for new security tooling and analytics capabilities</li>\n<li>Work closely with security and infrastructure teams to understand requirements and deliver solutions</li>\n<li>Mentor engineers and contribute to hiring and growth of the Security team</li>\n<li>Participate in on-call rotations</li>\n</ul>\n<p>You may be a good fit if you</p>\n<ul>\n<li>Have 7+ years of experience in software engineering with a focus on security, infrastructure, or data pipelines</li>\n<li>Have a track record of building and maintaining internal developer tools or security platforms</li>\n<li>Have a strong understanding of data processing pipelines and experience working with large-scale logging systems</li>\n<li>Have experience with test-driven software development or CI/CD (a plus for direct experience with detection-as-code workflows)</li>\n<li>Have experience with infrastructure-as-code (Terraform, CloudFormation)</li>\n<li>Have experience with query optimization for large datasets</li>\n<li>Have experience building stable and scalable services on cloud infrastructure and serverless architectures</li>\n<li>Can write maintainable and secure code in Python</li>\n<li>Have experience working with security teams and translating requirements into technical solutions</li>\n<li>Can lead technical projects with minimal guidance</li>\n<li>Have a track record of driving engineering excellence through high standards, constructive code reviews, and mentorship</li>\n<li>Can lead cross-functional security initiatives and navigate complex organizational dynamics</li>\n<li>Have strong communication skills with the ability to translate technical concepts effectively across all organizational levels</li>\n<li>Have demonstrated success in bringing clarity and ownership to ambiguous technical problems</li>\n<li>Have strong systems thinking with the ability to identify and mitigate risks in complex environments</li>\n</ul>\n<p>Strong candidates may also have experience with</p>\n<ul>\n<li>Building security tooling from the ground up</li>\n<li>Implementing security monitoring solutions (SIEM, log aggregation, EDR)</li>\n<li>Detection engineering or security operations</li>\n<li>SOAR platform or automation development</li>\n<li>Data lake or database architecture</li>\n<li>API design 
and internal platform creation</li>\n<li>Applying ML/AI to security problems</li>\n<li>Scaling security operations in a high-growth environment</li>\n</ul>\n<p>Logistics</p>\n<ul>\n<li>Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience</li>\n<li>Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience</li>\n<li>Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position</li>\n<li>Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</li>\n<li>Visa sponsorship: We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_753e9465-6a0","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com/","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5108521008","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["software engineering","security","infrastructure","data pipelines","ML-powered detection systems","Claude","Python","test-driven software development","CI/CD","infrastructure-as-code","query optimization","cloud infrastructure","serverless architectures"],"x-skills-preferred":["building security tooling","implementing security monitoring solutions","detection engineering","SOAR platform","automation development","data lake","database architecture","API design","internal platform creation","applying ML/AI to security problems","scaling security operations"],"datePosted":"2026-04-18T15:49:05.488Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Zürich, CH"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"software engineering, security, infrastructure, data pipelines, ML-powered detection systems, Claude, Python, test-driven software development, CI/CD, infrastructure-as-code, query optimization, cloud infrastructure, serverless architectures, building security tooling, implementing security monitoring solutions, detection engineering, SOAR platform, automation development, data lake, database architecture, API design, internal platform creation, applying ML/AI to security problems, scaling security operations"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_67b4ccd7-51d"},"title":"Senior Software Engineer, Observability Insights","description":"<p>Join CoreWeave&#39;s Observability team, where we are building the next-generation insights layer for AI systems.</p>\n<p>Our team empowers internal and external users to understand, troubleshoot, and optimize complex AI workloads by transforming telemetry into actionable insights.</p>\n<p>As a Senior Software Engineer on the Observability Insights team, you will lead the development of agentic interfaces and 
product experiences that sit atop CoreWeave&#39;s telemetry layer.</p>\n<p>You&#39;ll design multi-tenant APIs, managed Grafana experiences, and MCP-based tool servers to help customers and internal teams interact with data in innovative ways.</p>\n<p>Collaborating closely with PMs and engineering leadership, your work will shape the end-to-end observability experience and influence how people engage with cutting-edge AI infrastructure.</p>\n<p><strong>About the role</strong></p>\n<ul>\n<li>6+ years of experience in software or infrastructure engineering building production-grade backend systems and distributed APIs.</li>\n</ul>\n<ul>\n<li>Strong focus on developer-facing infrastructure, with a customer-obsessed approach to SDKs, CLIs, and APIs.</li>\n</ul>\n<ul>\n<li>Proficient in reliability engineering, including fault-tolerant design, SLOs, error budgets, and multi-tenant system resilience.</li>\n</ul>\n<ul>\n<li>Familiar with observability systems such as ClickHouse, Loki, VictoriaMetrics, Prometheus, and Grafana.</li>\n</ul>\n<ul>\n<li>Experienced in agentic applications or LLM-based features, including grounding, tool calling, and operational safety.</li>\n</ul>\n<ul>\n<li>Comfortable writing production code primarily in Go, with the ability to integrate Python components when needed.</li>\n</ul>\n<ul>\n<li>Collaborative experience in agile teams delivering end-to-end telemetry-to-insights pipelines.</li>\n</ul>\n<p><strong>Preferred</strong></p>\n<ul>\n<li>Experience operating Kubernetes clusters at scale, especially for AI workloads.</li>\n</ul>\n<ul>\n<li>Hands-on experience with logging, tracing, and metrics platforms in production, with deep knowledge of cardinality, indexing, and query optimization.</li>\n</ul>\n<ul>\n<li>Experienced in running distributed systems or API services at cloud scale, including event streaming and data pipeline management.</li>\n</ul>\n<ul>\n<li>Familiarity with LLM frameworks, MCP, and agentic tooling (e.g., Langchain, AgentCore).</li>\n</ul>\n<p><strong>Why CoreWeave?</strong></p>\n<p>At CoreWeave, we work hard, have fun, and move fast!</p>\n<p>We&#39;re in an exciting stage of hyper-growth that you will not want to miss out on.</p>\n<p>We&#39;re not afraid of a little chaos, and we&#39;re constantly learning.</p>\n<p>Our team cares deeply about how we build our product and how we work together, which is represented through our core values:</p>\n<ul>\n<li>Be Curious at Your Core</li>\n</ul>\n<ul>\n<li>Act Like an Owner</li>\n</ul>\n<ul>\n<li>Empower Employees</li>\n</ul>\n<ul>\n<li>Deliver Best-in-Class Client Experiences</li>\n</ul>\n<ul>\n<li>Achieve More Together</li>\n</ul>\n<p>We support and encourage an entrepreneurial outlook and independent thinking.</p>\n<p>We foster an environment that encourages collaboration and enables the development of innovative solutions to complex problems.</p>\n<p>As we get set for takeoff, the organization&#39;s growth opportunities are constantly expanding.</p>\n<p>You will be surrounded by some of the best talent in the industry, who will want to learn from you, too.</p>\n<p>Come join us!</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_67b4ccd7-51d","directApply":true,"hiringOrganization":{"@type":"Organization","name":"CoreWeave","sameAs":"https://www.coreweave.com","logo":"https://logos.yubhub.co/coreweave.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/coreweave/jobs/4650163006","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$165,000 to $242,000","x-skills-required":["software engineering","infrastructure engineering","backend systems","distributed APIs","reliability engineering","fault-tolerant design","SLOs","error budgets","multi-tenant system resilience","observability systems","ClickHouse","Loki","VictoriaMetrics","Prometheus","Grafana","agentic applications","LLM-based features","grounding","tool calling","operational safety","Go","Python","Kubernetes","logging","tracing","metrics platforms","cardinality","indexing","query optimization","event streaming","data pipeline management","LLM frameworks","MCP","agent tooling"],"x-skills-preferred":["operating Kubernetes clusters"],"datePosted":"2026-04-18T15:48:46.219Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"New York, NY / Sunnyvale, CA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"software engineering, infrastructure engineering, backend systems, distributed APIs, reliability engineering, fault-tolerant design, SLOs, error budgets, multi-tenant system resilience, observability systems, ClickHouse, Loki, VictoriaMetrics, Prometheus, Grafana, agentic applications, LLM-based features, grounding, tool calling, operational safety, Go, Python, Kubernetes, logging, tracing, metrics platforms, cardinality, indexing, query optimization, event streaming, data pipeline management, LLM frameworks, MCP, agent tooling, operating Kubernetes clusters","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":165000,"maxValue":242000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_8f4ab428-1e7"},"title":"Security Technology Deployment Specialist","description":"<p>As a Security Technology Deployment Specialist at Anthropic, you will own the validation, standardization, and deployment of physical security technology across our rapidly expanding global office portfolio. This role bridges the gap between technology selection and production-ready operation , ensuring that every security platform deployed is rigorously tested, properly integrated with enterprise infrastructure, fully documented, and built for scale.</p>\n<p>You&#39;ll define the installation standards, configuration baselines, and deployment processes that the broader team executes against , from access control migrations and intercom replacements to AI analytics onboarding and new application integrations. You&#39;ll work across InfoSec, IT, Networking, and Identity Management to ensure every security application passes review, integrates with SSO, and is supported within Anthropic&#39;s infrastructure before going live. 
Your work will directly determine whether Anthropic&#39;s security technology stack scales reliably as the company grows from dozens of locations to a global enterprise footprint.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Validate and deploy new and replacement security technology platforms including access control systems, intercom solutions, video management, visitor management, and AI/analytics tools across all Anthropic locations</li>\n</ul>\n<ul>\n<li>Build and maintain staging environments for pre-production testing and validation of all security applications, hardware, firmware, and system configurations</li>\n</ul>\n<ul>\n<li>Define installation standards, configuration baselines, licensing structures, update procedures, and maintenance requirements for every deployed security platform</li>\n</ul>\n<ul>\n<li>Deploy integrations between security applications, validating that platforms communicate and share data correctly before transitioning to production</li>\n</ul>\n<ul>\n<li>Support colleagues&#39; security applications through InfoSec review processes, ensuring new tools meet Anthropic&#39;s information security and compliance requirements</li>\n</ul>\n<ul>\n<li>Coordinate SSO integration for newly deployed security applications with Identity Management and IT teams</li>\n</ul>\n<ul>\n<li>Transition applications requiring custom integration or data pipeline development to the IT Engineering team with documented technical requirements for roadmap inclusion</li>\n</ul>\n<ul>\n<li>Initiate onboarding of deployed hardware and systems into Anthropic&#39;s health monitoring platform to ensure operational visibility from day one</li>\n</ul>\n<ul>\n<li>Develop standardized deployment playbooks, checklists, configuration templates, and handoff documentation that enable repeatable installations across all current and future sites</li>\n</ul>\n<ul>\n<li>Evaluate security platforms for scalability, identifying capacity constraints, single points of failure, and architectural limitations before they impact operations at scale</li>\n</ul>\n<ul>\n<li>Coordinate with Networking, IT Infrastructure, and Facilities teams to ensure all infrastructure prerequisites (network, power, rack space, cloud resources) are met prior to deployment</li>\n</ul>\n<ul>\n<li>Execute structured handoffs to Project Management (for site programming), Break-Fix Support (for maintenance), and Access Control Administration (for ongoing system management), ensuring each team has the standards and documentation to execute independently</li>\n</ul>\n<p>Requirements:</p>\n<ul>\n<li>5+ years of hands-on experience deploying, validating, and managing enterprise physical security technology across a large or rapidly growing organization</li>\n</ul>\n<ul>\n<li>Experience working across InfoSec, IT, Networking, and Identity Management teams to onboard and integrate security applications into enterprise environments</li>\n</ul>\n<ul>\n<li>Strong technical communication skills, with the ability to define standards clearly enough that PMs, integrators, and service teams execute against them without ambiguity</li>\n</ul>\n<ul>\n<li>Experience with IP networking, VLANs, PoE, and infrastructure requirements for security devices</li>\n</ul>\n<ul>\n<li>Comfortable with 25% travel for site deployments, commissioning, and validation</li>\n</ul>\n<p>Preferred Qualifications:</p>\n<ul>\n<li>Previous experience at a hyper-growth technology company or managing security technology programs for high-profile corporate 
environments</li>\n</ul>\n<ul>\n<li>Experience with Anthropic&#39;s specific technology stack: Genetec Security Center, Axis cameras, Wavelynx, Commend Symphony Cloud, Alcatraz.ai, Ambient.ai, SureView, Envoy</li>\n</ul>\n<ul>\n<li>Industry certifications: Genetec, Axis, CCNA, PSP, CPP, or PMP</li>\n</ul>\n<ul>\n<li>Experience with OSDP, modern credential technologies, and encryption protocols for physical security systems</li>\n</ul>\n<ul>\n<li>Familiarity with scripting or automation (Python, PowerShell) for configuration management and deployment automation</li>\n</ul>\n<ul>\n<li>Experience with health monitoring and observability platforms</li>\n</ul>\n<ul>\n<li>Experience with change management, configuration control, and version-controlled infrastructure documentation</li>\n</ul>\n<p>Salary Range: $175,000-$220,000 USD</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_8f4ab428-1e7","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com/","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5123587008","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$175,000-$220,000 USD","x-skills-required":["security technology deployment","physical security technology","access control systems","intercom solutions","video management","visitor management","AI/analytics tools","InfoSec","IT","Networking","Identity Management","SSO integration","custom integration","data pipeline development","health monitoring platform","deployment playbooks","checklists","configuration templates","handoff documentation","scalability analysis","infrastructure prerequisites","structured handoffs"],"x-skills-preferred":["Genetec Security Center","Axis cameras","Wavelynx","Commend Symphony Cloud","Alcatraz.ai","Ambient.ai","SureView","Envoy","OSDP","modern credential technologies","encryption protocols","scripting","automation","Python","PowerShell","health monitoring","observability platforms","change management","configuration control","version-controlled infrastructure documentation"],"datePosted":"2026-04-18T15:48:43.816Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Remote-Friendly (Travel-Required) | San Francisco, CA | Seattle, WA | New York City, NY"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"security technology deployment, physical security technology, access control systems, intercom solutions, video management, visitor management, AI/analytics tools, InfoSec, IT, Networking, Identity Management, SSO integration, custom integration, data pipeline development, health monitoring platform, deployment playbooks, checklists, configuration templates, handoff documentation, scalability analysis, infrastructure prerequisites, structured handoffs, Genetec Security Center, Axis cameras, Wavelynx, Commend Symphony Cloud, Alcatraz.ai, Ambient.ai, SureView, Envoy, OSDP, modern credential technologies, encryption protocols, scripting, automation, Python, PowerShell, health monitoring, observability platforms, change management, configuration control, version-controlled infrastructure 
documentation","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":175000,"maxValue":220000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_bf25e8de-318"},"title":"Director of Engineering (Data Infrastructure)","description":"<p>Job Title: Director of Engineering (Data Infrastructure)</p>\n<p>Location: Bengaluru, India</p>\n<p>We&#39;re looking for a seasoned Director of Engineering to lead our data infrastructure organization in Bengaluru. As a founding technical leader in our fastest-growing engineering hub, you will be responsible for building world-class teams and shaping architectural decisions that ripple across the company.</p>\n<p>About the Role:</p>\n<ul>\n<li>You will build the data infrastructure organization that makes Databricks&#39; continued growth possible.</li>\n<li>Establish foundational teams in Bengaluru owning the bedrock systems that guarantee billing correctness, operational resilience, and zero-downtime recovery across our entire monetization stack.</li>\n<li>Define what world-class infrastructure looks like for the next decade of data platforms.</li>\n</ul>\n<p>Responsibilities:</p>\n<ul>\n<li>Deliver the infrastructure vision for systems processing billions in daily billing transactions with zero tolerance for error.</li>\n<li>Build Bengaluru&#39;s data infrastructure organization by establishing it as the destination for India&#39;s top infrastructure talent.</li>\n<li>Own business-critical systems operating 24/7/365 across 100+ regions where even 99.9% uptime means hours of customer pain.</li>\n<li>Ship platforms that compound engineering leverage across Databricks.</li>\n</ul>\n<p>Requirements:</p>\n<ul>\n<li>14+ years in distributed systems engineering with 6+ years leading infrastructure organizations and 4+ years managing managers at companies where infrastructure failures meant immediate revenue impact, customer escalations, or regulatory consequences.</li>\n<li>Technical depth across petabyte-scale data pipelines and distributed systems reliability.</li>\n<li>Track record defining multi-year infrastructure vision and translating it into sequential deliverables that show value quarterly.</li>\n<li>Experience building 99.999%+ reliable systems with established practices for SLOs/SLIs, chaos engineering, disaster recovery, and sophisticated observability.</li>\n<li>Proven ability to scale infrastructure organizations in high-growth environments.</li>\n<li>Communication skills to make complex infrastructure decisions legible to executives.</li>\n</ul>\n<p>What You&#39;ll Need:</p>\n<ul>\n<li>BS in Computer Science or Engineering; MS or Ph.D. 
preferred.</li>\n<li>Experience with Apache Spark, Delta Lake, large-scale data infrastructure, fintech/billing systems, or leading infrastructure through hypergrowth strongly preferred.</li>\n</ul>\n<p>Benefits:</p>\n<p>At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of our employees.</p>\n<p>Our Commitment to Diversity and Inclusion:</p>\n<p>At Databricks, we are committed to fostering a diverse and inclusive culture where everyone can excel.</p>\n<p>Compliance:</p>\n<p>If access to export-controlled technology or source code is required for performance of job duties, it is within Employer&#39;s discretion whether to grant such access.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_bf25e8de-318","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8290810002","x-work-arrangement":"onsite","x-experience-level":"executive","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["distributed systems engineering","infrastructure organizations","petabyte-scale data pipelines","distributed systems reliability","SLOs/SLIs","chaos engineering","disaster recovery","observability"],"x-skills-preferred":["Apache Spark","Delta Lake","large-scale data infrastructure","fintech/billing systems"],"datePosted":"2026-04-18T15:48:43.683Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Bengaluru, India"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"distributed systems engineering, infrastructure organizations, petabyte-scale data pipelines, distributed systems reliability, SLOs/SLIs, chaos engineering, disaster recovery, observability, Apache Spark, Delta Lake, large-scale data infrastructure, fintech/billing systems"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_beca8c16-9a6"},"title":"Director of Engineering (Data Infrastructure)","description":"<p>Job Title: Director of Engineering (Data Infrastructure)</p>\n<p>In this leadership opportunity, you will build the data infrastructure organization that makes Databricks&#39; continued growth possible. You&#39;ll establish foundational teams in Bengaluru owning the bedrock systems that guarantee billing correctness, operational resilience, and zero-downtime recovery across our entire monetization stack, alongside multi-region data ingestion, developer platforms, and deployment automation that eliminate friction at petabyte scale.</p>\n<p>This isn&#39;t about maintaining what exists; it&#39;s about architecting the infrastructure that enables Databricks to scale while reducing operational burden. 
You&#39;ll define what world-class infrastructure looks like for the next decade of data platforms.</p>\n<p>The impact you&#39;ll have:</p>\n<ul>\n<li>Deliver the infrastructure vision for systems processing billions in daily billing transactions with zero tolerance for error, building disaster recovery that&#39;s provably reliable, testing frameworks that catch what production sees, correctness systems that make billing errors structurally impossible, and observability that predicts failures before they happen</li>\n</ul>\n<ul>\n<li>Build Bengaluru&#39;s data infrastructure organization by establishing it as the destination for India&#39;s top infrastructure talent, hiring multiple engineering managers who become force multipliers, and creating a culture where solving hard distributed systems problems at scale is the daily work</li>\n</ul>\n<ul>\n<li>Own business-critical systems operating 24/7/365 across 100+ regions where even 99.9% uptime means hours of customer pain, driving reliability improvements that prevent millions in revenue loss while eliminating operational toil through frameworks that make systems self-healing, self-tuning, and self-documenting</li>\n</ul>\n<ul>\n<li>Ship platforms that compound engineering leverage across Databricks: correctness frameworks that catch billing errors before customers do, deployment automation that makes regional expansion push-button, data integration systems that process petabyte-scale flows without human intervention, and testing infrastructure where comprehensive coverage is automatic, not heroic</li>\n</ul>\n<ul>\n<li>Position infrastructure as product by treating internal engineering teams as customers with SLAs, measuring adoption and satisfaction, iterating based on feedback, and demonstrating that every dollar invested in infrastructure returns multiplicative gains in product velocity, reliability improvements, or cost reductions</li>\n</ul>\n<p>You&#39;ll need:</p>\n<ul>\n<li>14+ years in distributed systems engineering with 6+ years leading infrastructure organizations and 4+ years managing managers at companies where infrastructure failures meant immediate revenue impact, customer escalations, or regulatory consequences - and you built the systems and teams that made those failures rare</li>\n</ul>\n<ul>\n<li>Technical depth across petabyte-scale data pipelines and distributed systems reliability where you can engage from &#39;how should we architect multi-region disaster recovery&#39; to &#39;why is this Kafka cluster exhibiting this latency pattern&#39; while knowing when to coach versus when to decide</li>\n</ul>\n<ul>\n<li>Track record defining multi-year infrastructure vision and translating it into sequential deliverables that show value quarterly while building toward architectural end states, positioning infrastructure investments as business enablers rather than cost centers, and making build-vs-buy decisions that compound over time</li>\n</ul>\n<ul>\n<li>Experience building 99.999%+ reliable systems with established practices for SLOs/SLIs, chaos engineering, disaster recovery, and sophisticated observability that predicts failures before they happen</li>\n</ul>\n<ul>\n<li>Proven ability to scale infrastructure organizations in high-growth environments where you&#39;ve doubled engineering while maintaining quality bar, developed engineering managers, and created teams where retention is high because the problems are interesting and the culture is strong</li>\n</ul>\n<ul>\n<li>Communication skills to make complex 
infrastructure decisions legible to executives (translating technical investments into business outcomes), influence cross-functional partners without authority, build trust across global teams in different timezones with different working styles, and represent Databricks&#39; technical brand externally</li>\n</ul>\n<p>BS in Computer Science or Engineering; MS or Ph.D. preferred. Experience with Apache Spark, Delta Lake, large-scale data infrastructure, fintech/billing systems, or leading infrastructure through hypergrowth strongly preferred.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_beca8c16-9a6","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8220993002","x-work-arrangement":"onsite","x-experience-level":"executive","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["distributed systems engineering","infrastructure organization","petabyte-scale data pipelines","distributed systems reliability","Apache Spark","Delta Lake","large-scale data infrastructure","fintech/billing systems"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:48:18.029Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Bengaluru, India"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"distributed systems engineering, infrastructure organization, petabyte-scale data pipelines, distributed systems reliability, Apache Spark, Delta Lake, large-scale data infrastructure, fintech/billing systems"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_dcc14ac2-f76"},"title":"Security Software Engineer, Detection & Response Platform","description":"<p><strong>About the role</strong></p>\n<p>We&#39;re seeking an exceptional engineer to join Anthropic&#39;s Detection Platform team to build and scale our next-generation security analytics infrastructure. 
In this role, you&#39;ll architect and implement data pipelines that process massive amounts of security telemetry, develop ML-powered detection systems, and create innovative solutions that leverage Claude to transform security operations.</p>\n<p><strong>Responsibilities:</strong></p>\n<ul>\n<li>Build AI-powered platform responsible for all aspects of D&amp;R capabilities from detection development to incident response</li>\n<li>Design and implement scalable data pipelines for ingesting and processing security telemetry across our rapidly growing infrastructure</li>\n<li>Architect solutions for storing and efficiently querying large volumes of security-relevant data</li>\n<li>Create rapid prototypes and proof-of-concepts for new security tooling and analytics capabilities</li>\n<li>Work closely with security and infrastructure teams to understand requirements and deliver solutions</li>\n<li>Mentor engineers and contribute to hiring and growth of the Security team</li>\n<li>Participate in on-call shifts</li>\n</ul>\n<p><strong>You may be a good fit if you:</strong></p>\n<ul>\n<li>7+ years of experience in software engineering with a focus on security, infrastructure and/or data pipelines</li>\n<li>Track record of building and maintaining internal developer tools or security platforms</li>\n<li>Strong understanding of data processing pipelines and experience working with large-scale logging systems</li>\n</ul>\n<p><strong>Strong candidates may also have experience with:</strong></p>\n<ul>\n<li>Experience building security tooling from the ground up</li>\n<li>Background in implementing security monitoring solutions (SIEM, log aggregation, EDR)</li>\n<li>Background in detection engineering or security operations</li>\n<li>SOAR platform/automation development</li>\n<li>Data lake / Database architecture</li>\n<li>API design and internal platform creation</li>\n<li>Track record of applying ML/AI to security problems</li>\n<li>Experience scaling security operations in a high-growth environment</li>\n</ul>\n<p><strong>Logistics</strong></p>\n<ul>\n<li>Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience</li>\n<li>Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience</li>\n<li>Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position</li>\n<li>Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</li>\n<li>Visa sponsorship: We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</li>\n</ul>\n<p><strong>How we&#39;re different</strong></p>\n<p>We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact, advancing our long-term goals of steerable, trustworthy AI, rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. 
We&#39;re an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.</p>\n<p><strong>Come work with us!</strong></p>\n<p>Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_dcc14ac2-f76","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com/","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/4595463008","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$320,000-$405,000 USD","x-skills-required":["Python","Data pipelines","ML-powered detection systems","Security telemetry","Claude","Security operations","Incident response"],"x-skills-preferred":["Experience building security tooling from the ground up","Background in implementing security monitoring solutions (SIEM, log aggregation, EDR)","Background in detection engineering or security operations","SOAR platform/automation development","Data lake / Database architecture","API design and internal platform creation","Track record of applying ML/AI to security problems","Experience scaling security operations in a high-growth environment"],"datePosted":"2026-04-18T15:47:49.797Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA | New York City, NY | Seattle, WA; Washington, DC"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, Data pipelines, ML-powered detection systems, Security telemetry, Claude, Security operations, Incident response, Experience building security tooling from the ground up, Background in implementing security monitoring solutions (SIEM, log aggregation, EDR), Background in detection engineering or security operations, SOAR platform/automation development, Data lake / Database architecture, API design and internal platform creation, Track record of applying ML/AI to security problems, Experience scaling security operations in a high-growth environment","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":320000,"maxValue":405000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_12b90b60-93b"},"title":"Senior Software Engineer, Service Tools","description":"<p>ROLE DETAILS</p>\n<p>As a Senior Software Engineer on the Service Tools team, you will play a key role in enabling Airbnb&#39;s backend developers to develop, test, and maintain their code quickly and reliably. The Service Tools team is responsible for the standard development lifecycle for service owners, including AI integration, integration tests, and building services for deployment.</p>\n<p>A TYPICAL DAY</p>\n<p>As an engineer on Service Tools, you will work on technologies that help shape an industry-leading end-to-end developer experience for backend developers. 
Your responsibilities will include:</p>\n<ul>\n<li>Building our next-gen build system using the latest technologies (e.g., Bazel)</li>\n<li>Working on integrations between the build system and CI/CD tooling (e.g., merge queues, code coverage, integration testing)</li>\n<li>Improving the editor (e.g., IntelliJ) experience for all backend developers</li>\n<li>Helping to shape the technical strategy that directly moves our core metrics (Developer Experience, Developer Velocity, Debuggability, Resilience and Reliability) while reducing cost</li>\n<li>Partnering with engineering leaders across all Airbnb teams for adoption of the new capabilities</li>\n</ul>\n<p>YOUR EXPERTISE</p>\n<p>To be successful in this role, you will need to have:</p>\n<ul>\n<li>6+ years of industry experience</li>\n<li>Proficiency in one or more back-end server languages (Java/Ruby/Go/C++/etc.)</li>\n<li>Experienced in architectural patterns of a high-scale distributed products/services, such as well-designed APIs, data pipelines and efficient algorithms</li>\n<li>Experience or desire to work collaboratively in cross-functional teams with design, product and data science partners</li>\n<li>Experience working directly on build systems, and even better if you have hands-on experience with Bazel</li>\n<li>Experience working with large monorepos</li>\n<li>Extensive JVM experience</li>\n<li>Want to tackle projects with large open-ended scope and drive significant business impact</li>\n<li>Love collaborating via product reviews, code reviews and architecture discussions</li>\n<li>Are motivated to improve their teammates&#39; productivity</li>\n<li>Are excited to join an impactful infrastructure team</li>\n</ul>\n<p>OUR COMMITMENT TO INCLUSION &amp; BELONGING</p>\n<p>Airbnb is committed to working with the broadest talent pool possible. We believe diverse ideas foster innovation and engagement, and allow us to attract creatively-led people, and to develop the best products, services and solutions.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_12b90b60-93b","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Airbnb","sameAs":"https://www.airbnb.com/","logo":"https://logos.yubhub.co/airbnb.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/airbnb/jobs/7490348","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Bazel","CI/CD","IntelliJ","Java","Ruby","Go","C++","APIs","data pipelines","algorithms"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:47:28.724Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Brazil"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Bazel, CI/CD, IntelliJ, Java, Ruby, Go, C++, APIs, data pipelines, algorithms"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_fb05e37d-811"},"title":"Senior Staff Machine Learning Engineer, Data & Eval","description":"<p>We&#39;re looking for a Senior Staff Machine Learning Engineer to join our Core ML team. 
As a key member of our team, you will be responsible for driving CSxAI (Customer Support x Artificial Intelligence) initiatives by adopting Generative AI technologies to enable an intelligent, scalable, and exceptional service experience.</p>\n<p>In this role, you will set technical direction and lead execution for ML evaluation and the end-to-end data flywheel powering CSxAI products. Your work will define how we measure quality, how we turn feedback into learning signals, and how we continuously improve models and products safely and efficiently.</p>\n<p>You will partner closely with product, engineering, design, and operations to build evaluation systems that are trusted, scalable, and actionable - connecting offline metrics to online outcomes.</p>\n<p>A typical day in this role will involve working with large-scale structured and unstructured data, exploring, experimenting, building, and continuously improving Machine Learning models and pipelines for Airbnb product, business, and operational use cases.</p>\n<p>You will work collaboratively with cross-functional partners, including product managers, operations, and data scientists, to identify opportunities for business impact, understand, refine, and prioritize requirements for machine learning, and drive engineering decisions.</p>\n<p>Hands-on development, productionization, and operation of Machine Learning models and pipelines at scale, including both batch and real-time use cases, will also be a key part of this role.</p>\n<p>You will leverage third-party and in-house Machine Learning tools and infrastructure to develop reusable, highly differentiating, and high-performing Machine Learning systems, enable fast model development, low-latency serving, and ease of model quality upkeep.</p>\n<p>Your expertise will be critical in defining evaluation strategy and success metrics for GenAI systems, aligning offline evaluation with online business and customer experience outcomes.</p>\n<p>You will build and scale evaluation frameworks with strong controls for bias, drift, and reliability, design the data flywheel, and lead cross-functional quality initiatives across product, ops, and engineering.</p>\n<p>You will develop and productionize pipelines for dataset creation, model monitoring, evaluation-at-scale, and continuous testing, and drive technical decisions and architecture for evaluation and data infrastructure.</p>\n<p>Minimum qualifications for this role include a PhD in Computer Science, Mathematics, Statistics, or a related technical field, industry experience of 10+ years building, testing, and shipping ML/AI systems end-to-end, and leadership experience of 5+ years leading large, ambiguous technical initiatives as a senior IC.</p>\n<p>Preferred qualifications include customer support systems experience, infrastructure and quality at scale experience, agile practice for applied AI experience, and continuous learner experience.</p>\n<p>This position is US-Remote Eligible, and the role may include occasional work at an Airbnb office or attendance at offsites, as agreed to with your manager.</p>\n<p>Our job titles may span more than one career level, and the actual base pay is dependent upon many factors, such as training, transferable skills, work experience, business needs, and market demands.</p>\n<p>The base pay range is $244,000-$305,000 USD, and this role may also be eligible for bonus, equity, benefits, and Employee Travel Credits.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_fb05e37d-811","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Airbnb","sameAs":"https://www.airbnb.com/","logo":"https://logos.yubhub.co/airbnb.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/airbnb/jobs/6757302","x-work-arrangement":"remote","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"$244,000-$305,000 USD","x-skills-required":["evaluation methodology","GenAI systems","data pipelines","quality systems","ML fundamentals","best practices"],"x-skills-preferred":["customer support systems","infrastructure and quality at scale","agile practice for applied AI","continuous learner"],"datePosted":"2026-04-18T15:46:15.475Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"United States"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"evaluation methodology, GenAI systems, data pipelines, quality systems, ML fundamentals, best practices, customer support systems, infrastructure and quality at scale, agile practice for applied AI, continuous learner","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":244000,"maxValue":305000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_9661d798-56c"},"title":"Staff Software Engineer (Data Platform)","description":"<p>We are seeking a Staff Software Engineer to join our Data Platform team. As a Staff Software Engineer, you will help build the Data Intelligence Platform for Databricks, allowing us to automate decision-making across the entire company. You will collaborate with Databricks Product Teams, Data Science, Applied AI, and other teams to develop a variety of tools spanning logging, orchestration, data transformation, metric store, governance platforms, and data consumption layers. You will use the latest Databricks product and other tools in the data ecosystem to design and run the Databricks metrics store, cross-company Data Intelligence Platform, and tooling and infrastructure to efficiently manage and run Databricks at scale.</p>\n<p>The impact you will have includes designing and running the Databricks metrics store, cross-company Data Intelligence Platform, and developing tooling and infrastructure to efficiently manage and run Databricks at scale. You will also design the base ETL framework used by all pipelines developed at the company, partner with engineering teams to provide leadership in developing the long-term vision and requirements for the Databricks product, and establish conventions and create new APIs for telemetry, debug, feature, and audit event log data.</p>\n<p>To be successful in this role, you will need 12+ years of industry experience, 4+ years of experience building large-scale distributed systems, and 5+ years of providing technical leadership on large projects similar to the ones described above. 
You will also need experience with ETL frameworks, metrics stores, infrastructure management, data security, and experience building, shipping, and operating reliable multi-geo data pipelines at scale.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_9661d798-56c","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/7652016002","x-work-arrangement":"onsite","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["ETL frameworks","metrics stores","infrastructure management","data security","large-scale distributed systems","technical leadership","data pipelines","workflow or orchestration frameworks","messaging systems"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:45:14.959Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Bengaluru, India"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"ETL frameworks, metrics stores, infrastructure management, data security, large-scale distributed systems, technical leadership, data pipelines, workflow or orchestration frameworks, messaging systems"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_0dce91e4-37f"},"title":"Member of Technical Staff - RL Infrastructure","description":"<p>We&#39;re seeking experienced software engineers to create robust data pipelines, comprehensive evaluations for benchmarking LLMs, and automation frameworks to increase the productivity of researchers and engineers.</p>\n<p>Typical problems you will deal with include designing efficient and robust environments for AI agents, improving evaluations and observability, onboarding new evaluation datasets, standardizing preprocessing pipelines, and creating data augmentation pipelines.</p>\n<p>Responsibilities include creating and maintaining frameworks for agent, data, and model evaluation tasks, building environments for AI agents, tools for automating common workflows, improving alerts, metrics, and error handling on large-scale RL jobs, refactoring existing frameworks for better modularity, and designing operation procedures and coding standards.</p>\n<p>Basic qualifications include experience building and maintaining frameworks, building high-performance sandboxes, virtual machines, and simulations, building full-stack apps for automating workflows and data visualization, rapid iteration of research to production cycles, and test automation, CI/CD.</p>\n<p>Base salary is $180,000 - $440,000 USD, and our total rewards package includes equity, comprehensive medical, vision, and dental coverage, access to a 401(k) retirement plan, short &amp; long-term disability insurance, life insurance, and various other discounts and perks.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_0dce91e4-37f","directApply":true,"hiringOrganization":{"@type":"Organization","name":"xAI","sameAs":"https://www.xai.com/","logo":"https://logos.yubhub.co/xai.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/xai/jobs/4715499007","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$180,000 - $440,000 USD","x-skills-required":["frameworks","data pipelines","evaluation tasks","AI agents","virtual machines","simulations","test automation","CI/CD"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:45:14.486Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Palo Alto, CA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"frameworks, data pipelines, evaluation tasks, AI agents, virtual machines, simulations, test automation, CI/CD","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":180000,"maxValue":440000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_dc923f59-e03"},"title":"Senior Data Engineering Analyst","description":"<p>ZoomInfo is where careers accelerate. We move fast, think boldly, and empower you to do the best work of your life. You&#39;ll be surrounded by teammates who care deeply, challenge each other, and celebrate wins. With tools that amplify your impact and a culture that backs your ambition, you won&#39;t just contribute. You&#39;ll make things happen–fast.</p>\n<p>We&#39;re seeking a Senior Data Systems Analyst to become the expert on our company data pipeline,the system that ingests, processes, and profiles millions of company records that power our customers&#39; go-to-market strategies. In this role, you&#39;ll build deep expertise in how our company data flows from acquisition through profiling and output. You&#39;ll read code to understand data transformations and system dependencies, bring informed opinions to design conversations with Engineering and Product, and help shape the evolution of our next-generation data infrastructure.</p>\n<p>As you build mastery of our systems, you&#39;ll increasingly lead strategic data improvement initiatives that require both systems thinking and creative problem-solving. This isn&#39;t about building dashboards or SQL reports. This is about understanding data systems at an architectural level, solving ambiguous data challenges, and ensuring our pipeline infrastructure continuously evolves to meet customer needs and maintain competitive advantage.</p>\n<p>You&#39;ll work closely with other data analysts during an active infrastructure transition period, and as systems stabilize and your expertise deepens, you&#39;ll progressively own more of the pipeline architecture and strategic initiatives. This is a role with significant growth runway for someone who wants to become the go-to technical expert on company data systems.</p>\n<p><strong>Who You Are</strong></p>\n<p>Systems Thinker with Technical Depth: You understand how data systems work, not just what they produce. You&#39;ve worked with data pipelines, ETL systems, or data processing infrastructure,maybe you&#39;ve improved one, debugged one, or owned components of one. 
You can read code (Python, Java, SQL, or similar) well enough to understand data transformations and trace how data flows through systems.</p>\n<p>Opinionated Technical Contributor: You don&#39;t just execute,you have informed opinions on how things should work. You can assess technical tradeoffs, evaluate whether a proposed solution is feasible, and contribute meaningfully to design conversations with engineers.</p>\n<p>Growth-Oriented Problem Solver: You&#39;re excited to build deep expertise in a complex domain and grow into leading strategic initiatives. You&#39;ve tackled ambiguous problems that required figuring things out as you went, and you want to expand your project leadership capabilities in a systems-focused environment.</p>\n<p>Analytical and Hands-On: You&#39;re equally comfortable writing code to analyze data patterns and manually investigating edge cases to understand what&#39;s really happening. You dig into details when needed and know when to zoom out to see the bigger picture.</p>\n<p>Clear Communicator: You can explain technical complexity to non-technical audiences. You&#39;ve worked effectively with Engineering, Product, or cross-functional teams, translating between technical constraints and business needs.</p>\n<p>Comfortable with Ambiguity: You thrive in evolving environments where priorities shift and problems aren&#39;t always well-defined. You maintain momentum and quality even when the path forward isn&#39;t perfectly clear.</p>\n<p><strong>What You&#39;ll Do</strong></p>\n<p>In your first 6-12 months, your primary focus will be building deep expertise in our pipeline architecture and contributing to our infrastructure transition. You&#39;ll work alongside other analysts who have context on our systems, learning the architecture while bringing fresh perspectives and technical depth.</p>\n<p>As you gain mastery and systems stabilize, you&#39;ll increasingly own pipeline architecture decisions and lead strategic data improvement initiatives.</p>\n<p><strong>Build Deep Pipeline &amp; Systems Expertise</strong></p>\n<ul>\n<li>Master our company data pipeline architecture,how data flows from ingestion through profiling, what transforms are applied at each stage, and how components interconnect</li>\n</ul>\n<ul>\n<li>Read and analyze production code to understand data transformations, trace data lineage, and assess how proposed changes would impact the system</li>\n</ul>\n<ul>\n<li>Develop frameworks for evaluating tradeoffs between technical complexity, implementation effort, and customer impact</li>\n</ul>\n<ul>\n<li>Create clear documentation, system maps, and knowledge resources that capture architecture decisions, dependencies, and design rationale</li>\n</ul>\n<p><strong>Contribute to Pipeline Evolution &amp; Infrastructure Improvements</strong></p>\n<ul>\n<li>Participate actively in design conversations with Engineering and Product about our next-generation pipeline, bringing data quality insights, technical feasibility assessments, and informed opinions on architectural decisions</li>\n</ul>\n<ul>\n<li>Help validate pipeline improvements through rigorous testing, impact analysis, and hands-on verification of data quality</li>\n</ul>\n<ul>\n<li>Translate data quality investigations and emerging requirements into system-level improvement opportunities</li>\n</ul>\n<ul>\n<li>Collaborate with team members to determine when problems should be solved at the pipeline/profiler level versus through downstream approaches</li>\n</ul>\n<p><strong>Solve Complex, 
Ambiguous Data Challenges</strong></p>\n<ul>\n<li>Lead or contribute to data improvement initiatives that require both systems thinking and creative problem-solving,such as improving location verification across international markets, integrating new data sources, or solving novel data extraction challenges</li>\n</ul>\n<ul>\n<li>Tackle problems where the solution isn&#39;t obvious through a blend of code analysis, manual investigation, cross-functional coordination, and iterative problem-solving</li>\n</ul>\n<ul>\n<li>Build and apply repeatable approaches to testing, validation, and root cause analysis</li>\n</ul>\n<p><strong>Build Partnerships &amp; Institutional Knowledge</strong></p>\n<ul>\n<li>Develop strong working relationships with Data Acquisition, Product, Engineering, and fellow data analysts</li>\n</ul>\n<ul>\n<li>Conduct impact analyses and validation studies to ensure proposed changes deliver intended outcomes</li>\n</ul>\n<ul>\n<li>Document your learning, approaches, and insights so knowledge is shared and institutional memory builds across the team</li>\n</ul>\n<ul>\n<li>Serve as a technical resource as you develop expertise, helping bridge immediate data quality needs with long-term pipeline capabilities</li>\n</ul>\n<p><strong>What You&#39;ll Bring</strong></p>\n<ul>\n<li>Bachelor&#39;s degree in Computer Science, Engineering, Mathematics, Statistics, or related quantitative field</li>\n</ul>\n<ul>\n<li>5+ years of experience in data analytics, data engineering, or related technical roles</li>\n</ul>\n<ul>\n<li>Experience working with data pipelines, ETL systems, or data processing infrastructure,you understand how data moves through systems and what can go wrong</li>\n</ul>\n<ul>\n<li>Ability to read and understand code (Python, Java, SQL, or similar) to analyze data transformations, understand system logic, and assess technical feasibility</li>\n</ul>\n<ul>\n<li>Strong programming skills in Python and SQL for data analysis and manipulation</li>\n</ul>\n<ul>\n<li>Experience solving ambiguous, multi-faceted data problems that required figuring out the approach, not just executing a well-defined analysis</li>\n</ul>\n<ul>\n<li>Demonstrated ability to work effectively with Engineering and/or Product teams, translating between technical implementation and business/customer needs</li>\n</ul>\n<ul>\n<li>Strong analytical skills with ability to investigate complex issues systematically</li>\n</ul>\n<ul>\n<li>Excellent communication skills,able to explain technical concepts clearly to diverse audiences</li>\n</ul>\n<ul>\n<li>Self-directed with strong ownership mentality,you drive your work forward and know when to seek input</li>\n</ul>\n<p><strong>Strongly Preferred</strong></p>\n<ul>\n<li>Experience with company data, business data, web data acquisition, or data quality initiatives</li>\n</ul>\n<ul>\n<li>Experience with data profiling, entity resolution, record linkage, or data matching systems</li>\n</ul>\n<ul>\n<li>Background contribution</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_dc923f59-e03","directApply":true,"hiringOrganization":{"@type":"Organization","name":"ZoomInfo","sameAs":"https://www.zoominfo.com/","logo":"https://logos.yubhub.co/zoominfo.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/zoominfo/jobs/8408637002","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["data engineering","data analysis","data pipelines","ETL systems","data processing infrastructure","Python","Java","SQL","data transformation","system dependencies","data quality","data profiling","entity resolution","record linkage","data matching"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:45:06.666Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Vancouver, Washington, United States; Waltham, Massachusetts, United States"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"data engineering, data analysis, data pipelines, ETL systems, data processing infrastructure, Python, Java, SQL, data transformation, system dependencies, data quality, data profiling, entity resolution, record linkage, data matching"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_3d22e39a-bde"},"title":"Data Analyst II","description":"<p>Why join us</p>\n<p>Brex is the intelligent finance platform that enables companies to spend smarter and move faster in more than 200 markets. By combining global corporate cards and banking with intuitive spend management, bill pay, and travel software, Brex enables founders and finance teams to accelerate operations, gain real-time visibility, and control spend effortlessly.</p>\n<p>Tens of thousands of the world&#39;s best companies run on Brex, including DoorDash, Coinbase, Robinhood, Zoom, Plaid, Reddit, and SeatGeek.</p>\n<p>Working at Brex allows you to push your limits, challenge the status quo, and collaborate with some of the brightest minds in the industry.</p>\n<p>We’re committed to building a diverse team and inclusive culture and believe your potential should only be limited by how big you can dream.</p>\n<p>We make this a reality by empowering you with the tools, resources, and support you need to grow your career.</p>\n<p>Data at Brex</p>\n<p>The Data organization develops insights, models, and data infrastructure for teams across Brex, including Sales, Marketing, Product, Engineering, and Operations.</p>\n<p>Our Data Scientists, Analysts, and Engineers work together to make data,and insights derived from data,a core asset across the company.</p>\n<p>What you’ll do</p>\n<p>As a Data Analyst II (DA), you will play a central role in enhancing the operational tracking and reporting capabilities of different business teams across Brex.</p>\n<p>You will work closely with Data Scientists, Data Engineers, and partner teams to drive meaningful insights for the business through visualizations, self-service tools, and ad-hoc analyses.</p>\n<p>This is a high-impact role in a fast-paced fintech environment where your work will directly influence strategic decisions.</p>\n<p>Where you’ll work</p>\n<p>This role will be based in our San Francisco office.</p>\n<p>We are a hybrid environment that combines the energy and connections of being in the office with the benefits and flexibility of working from home.</p>\n<p>We currently require a minimum of three 
coordinated days in the office per week, Monday, Wednesday and Thursday.</p>\n<p>As a perk, we also have up to four weeks per year of fully remote work!</p>\n<p>Responsibilities</p>\n<p>Apply data visualization and storytelling skills in creating business intelligence solutions (such as Looker and/or Hex dashboards) that enable actionable insights.</p>\n<p>Perform ad-hoc analyses and deep dives to investigate business questions, surface trends, and provide data-driven recommendations.</p>\n<p>Develop self-service data tools and processes that empower business stakeholders to independently monitor the performance and health of their respective areas.</p>\n<p>Collaborate closely with Data Scientists and Data Engineers to identify data sources, enable data pipelines, and support the development of analytical data models that operationalize reports and dashboards.</p>\n<p>Implement and maintain rigorous data quality checks to ensure the integrity and robustness of datasets used across dashboards, reports, and analyses.</p>\n<p>Partner with various departments,including Sales, Operations, Product, and Finance,to understand their data needs and deliver tailored analyses and reporting that support strategic planning.</p>\n<p>Contribute to the automation of recurring analyses and reporting workflows using Python.</p>\n<p>Requirements</p>\n<p>3+ years of experience in data analytics or a related role in a professional setting.</p>\n<p>2+ years of experience working directly with Sales, Operations, Product, or equivalent business teams.</p>\n<p>Fluency in SQL to manipulate data and perform complex analyses (CTEs, window functions, joins across large datasets).</p>\n<p>Experience with Python for data analysis, automation, or scripting.</p>\n<p>Experience with business intelligence and data visualization tools (Looker, Hex, Tableau, or similar).</p>\n<p>Strong quantitative and analytical skills with a demonstrated ability to translate data into business insights.</p>\n<p>Strong communication skills and the ability to work effectively with stakeholders across different functions and levels of technical fluency.</p>\n<p>Experience with generative AI and LLM-based tools (Claude Code, Cursor, GitHub Copilot) to perform and accelerate analyses, automated reporting, and build self-service data tools.</p>\n<p>Bonus points</p>\n<p>Familiarity with cloud data platforms (e.g., Snowflake, BigQuery, Databricks).</p>\n<p>Familiarity with dbt for data modeling and transformation.</p>\n<p>Exposure to data pipeline orchestration tools (e.g., Airflow).</p>\n<p>Experience in fintech, financial services, or payments.</p>\n<p>Comfort operating in a fast-paced, high-growth environment with evolving priorities.</p>\n<p>Compensation</p>\n<p>The expected salary range for this role is $93,600 - $117,000.</p>\n<p>However, the starting base pay will depend on a number of factors including the candidate’s location, skills, experience, market demands, and internal pay parity.</p>\n<p>Depending on the position offered, equity and other forms of compensation may be provided as part of a total compensation package.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_3d22e39a-bde","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Brex","sameAs":"https://brex.com/","logo":"https://logos.yubhub.co/brex.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/brex/jobs/8463696002","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":"$93,600 - $117,000","x-skills-required":["SQL","Python","Business Intelligence","Data Visualization","Generative AI","LLM-based tools"],"x-skills-preferred":["Cloud data platforms","dbt","Data pipeline orchestration tools","Fintech","Financial services","Payments"],"datePosted":"2026-04-18T15:44:50.317Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, California, United States"}},"employmentType":"FULL_TIME","occupationalCategory":"Finance","industry":"Finance","skills":"SQL, Python, Business Intelligence, Data Visualization, Generative AI, LLM-based tools, Cloud data platforms, dbt, Data pipeline orchestration tools, Fintech, Financial services, Payments","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":93600,"maxValue":117000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_c4cc3bc0-a5d"},"title":"Senior Analytics Engineer","description":"<p><strong>Job Title: Senior Analytics Engineer</strong></p>\n<p>You&#39;ll be part of a team that empowers you to do the best work of your life. As a Senior Analytics Engineer at ZoomInfo, you&#39;ll be responsible for building deep expertise in our company data pipeline architecture.</p>\n<p><strong>Key Responsibilities:</strong></p>\n<ul>\n<li>Master our company data pipeline architecture,how data flows from ingestion through profiling, what transforms are applied at each stage, and how components interconnect</li>\n<li>Read and analyze production code to understand data transformations, trace data lineage, and assess how proposed changes would impact the system</li>\n<li>Develop frameworks for evaluating tradeoffs between technical complexity, implementation effort, and customer impact</li>\n<li>Create clear documentation, system maps, and knowledge resources that capture architecture decisions, dependencies, and design rationale</li>\n</ul>\n<p><strong>What You&#39;ll Do:</strong></p>\n<p>In your first 6-12 months, your primary focus will be building deep expertise in our pipeline architecture and contributing to our infrastructure transition. 
You&#39;ll work alongside other analysts who have context on our systems, learning the architecture while bringing fresh perspectives and technical depth.</p>\n<p>As you gain mastery and systems stabilize, you&#39;ll increasingly own pipeline architecture decisions and lead strategic data improvement initiatives.</p>\n<p><strong>Requirements:</strong></p>\n<ul>\n<li>Bachelor&#39;s degree in Computer Science, Engineering, Mathematics, Statistics, or related quantitative field</li>\n<li>5+ years of experience in data analytics, data engineering, or related technical roles</li>\n<li>Experience working with data pipelines, ETL systems, or data processing infrastructure,you understand how data moves through systems and what can go wrong</li>\n<li>Ability to read and understand code (Python, Java, SQL, or similar) to analyze data transformations, understand system logic, and assess technical feasibility</li>\n<li>Strong programming skills in Python and SQL for data analysis and manipulation</li>\n<li>Experience solving ambiguous, multi-faceted data problems that required figuring out the approach, not just executing a well-defined analysis</li>\n<li>Demonstrated ability to work effectively with Engineering and/or Product teams, translating between technical implementation and business/customer needs</li>\n<li>Strong analytical skills with ability to investigate complex issues systematically</li>\n<li>Excellent communication skills,able to explain technical concepts clearly to diverse audiences</li>\n<li>Self-directed with strong ownership mentality,you drive your work forward and know when to seek input</li>\n</ul>\n<p><strong>Preferred Qualifications:</strong></p>\n<ul>\n<li>Experience with company data, business data, web data acquisition, or data quality initiatives</li>\n<li>Experience with data profiling, entity resolution, record linkage, or data matching systems</li>\n<li>Background contributing to</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_c4cc3bc0-a5d","directApply":true,"hiringOrganization":{"@type":"Organization","name":"ZoomInfo","sameAs":"https://www.zoominfo.com/","logo":"https://logos.yubhub.co/zoominfo.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/zoominfo/jobs/8408633002","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["data pipeline architecture","data transformation","ETL systems","data processing infrastructure","Python","SQL","data analysis","data manipulation","ambiguous data problems","data quality initiatives"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:44:11.964Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Vancouver, Washington, United States; Waltham, Massachusetts, United States"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"data pipeline architecture, data transformation, ETL systems, data processing infrastructure, Python, SQL, data analysis, data manipulation, ambiguous data problems, data quality initiatives"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_36c9df64-9b2"},"title":"Senior Software Engineer, App Foundation (Backend)","description":"<p>Join Airbnb&#39;s App Foundation team, a cross-platform team that builds high-quality and performant capabilities 
that power almost all features in the Guest and Host ecosystem.</p>\n<p>As a Senior Software Engineer, you will be responsible for exploring, shaping, and developing new product experiences alongside cross-functional partners (design and product), from ideation to implementation at scale.</p>\n<p>You will build efficient, reusable, and high-quality backend capabilities while maintaining performant and scalable systems.</p>\n<p>Lead initiatives that measurably improve the Guest and Host experience by improving app responsiveness, scaling efficiency, and reliability across key backend paths that impact millions.</p>\n<p>Drive a performance roadmap: identifying bottlenecks, prioritizing work by impact, and delivering improvements across services, data access patterns, and infrastructure.</p>\n<p>Raise the bar on performance engineering by building tooling, benchmarks, and guardrails that prevent regressions and make performance a first-class part of how teams ship.</p>\n<p>Influence architecture and standards across Airbnb’s backend ecosystem, making systems more observable, more efficient, and easier to evolve.</p>\n<p>Millions of users across the world engage with the Airbnb app in multiple languages every day. As an engineer on the App Foundation team, you would be critical to the continued success and broad appeal of Airbnb.</p>\n<p>In this role, you will have an opportunity to:</p>\n<p>Work collaboratively in cross-functional teams with design, product, and data science partners to define and ship impactful features.</p>\n<p>Propose architectural patterns for high-scale applications, such as well-designed APIs, data pipelines, and efficient algorithms.</p>\n<p>Write unit and integration tests and review others’ code.</p>\n<p>Review service-level performance metrics and triage anomalies or regressions.</p>\n<p>Profile and debug performance issues across service boundaries and implement fixes (e.g., query optimization, caching strategies, concurrency improvements, payload reduction).</p>\n<p>Partner with engineers across teams to improve critical request flows - aligning on SLOs, rollout plans, and measurement strategies.</p>\n<p>Participate in code reviews and architecture discussions with a performance lens, helping teams ship changes safely and efficiently.</p>\n<p>Document learnings and create playbooks so performance improvements scale beyond a single service or team.</p>\n<p>Your Expertise:</p>\n<p>5+ years of software development experience</p>\n<p>Strong expertise in one or more back-end server languages (Java/Kotlin/C++/etc.)</p>\n<p>Experience in building and scaling high-quality and high-traffic products (or systems) in a distributed manner.</p>\n<p>Deep backend expertise, including proficiency with databases, cloud technologies, and asynchronous messaging systems.</p>\n<p>End-to-end ownership mentality that transcends team boundaries.</p>\n<p>Passion for building strong collaborative relationships with other engineering &amp; product partners</p>\n<p>Desire to tackle projects with large, open-ended scope and drive significant business impact</p>\n<p>Able to self-serve on data analysis and make data-driven decisions</p>\n<p>Rigorous attention to detail and the ability to tackle ambiguous problems</p>\n<p>Embrace the ever-changing culture, prioritizing breadth over depth while still going in-depth when needed.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_36c9df64-9b2","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Airbnb","sameAs":"https://www.airbnb.com/","logo":"https://logos.yubhub.co/airbnb.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/airbnb/jobs/7717198","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$191,000-$223,000 USD","x-skills-required":["Java","Kotlin","C++","databases","cloud technologies","asynchronous messaging systems","APIs","data pipelines","efficient algorithms","unit testing","integration testing","code review","architecture discussion"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:43:53.762Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"United States"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Java, Kotlin, C++, databases, cloud technologies, asynchronous messaging systems, APIs, data pipelines, efficient algorithms, unit testing, integration testing, code review, architecture discussion","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":191000,"maxValue":223000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_2a2686d2-290"},"title":"Staff Analytics Engineer","description":"<p>At Twilio, we&#39;re shaping the future of communications, all from the comfort of our homes. We deliver innovative solutions to hundreds of thousands of businesses and empower millions of developers worldwide to craft personalized customer experiences.</p>\n<p>Our Data Science and Analytics team seeks to empower R&amp;D to make data-backed decisions that accelerate innovation and improve product performance. You will work closely within our team and across Product &amp; Engineering to design and maintain a robust analytics data layer that enables trusted reporting on R&amp;D metrics.</p>\n<p>In this role, you&#39;ll:</p>\n<ul>\n<li>Design and implement a formal analytics data layer using AWS Glue, Presto, and LookML</li>\n<li>Collaborate within the Data Science &amp; Analytics team and across Product &amp; Engineering to define, document, and maintain alignment on metric definition and data lineage</li>\n<li>Develop and maintain automated data reconciliation and quality checks to proactively identify and resolve discrepancies, ensuring accuracy and consistency of critical reports and dashboards</li>\n<li>Lead investigations into complex data anomalies, conduct root cause analysis, and communicate findings and solutions effectively to both technical and non-technical audiences</li>\n<li>Mentor and guide members of the data science and analytics team, establishing and enforcing best practices around data modeling, testing, documentation, and code review</li>\n</ul>\n<p>Twilio values diverse experiences from all kinds of industries, and we encourage everyone who meets the required qualifications to apply.</p>\n<p>If your career is just starting or hasn&#39;t followed a traditional path, don&#39;t let that stop you from considering Twilio. 
We are always looking for people who will bring something new to the table!</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_2a2686d2-290","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Twilio","sameAs":"https://www.twilio.com/","logo":"https://logos.yubhub.co/twilio.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/twilio/jobs/7551660","x-work-arrangement":"remote","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"$155,520 - $194,400 (Colorado, Hawaii, Illinois, Maryland, Massachusetts, Minnesota, Vermont or Washington D.C.)\n$164,640 - $205,800 (New York, New Jersey, Washington State, or California (outside of the San Francisco Bay area))\n$182,960 - $228,700 (San Francisco Bay area, California)","x-skills-required":["AWS Glue","Presto","LookML","SQL","data modeling","data pipelines","data reconciliation","data quality checks"],"x-skills-preferred":["Python","distributed computing technologies","Hive","Spark","dashboarding tools","Looker","Tableau"],"datePosted":"2026-04-18T15:43:20.940Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Remote - US"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"AWS Glue, Presto, LookML, SQL, data modeling, data pipelines, data reconciliation, data quality checks, Python, distributed computing technologies, Hive, Spark, dashboarding tools, Looker, Tableau","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":155520,"maxValue":228700,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_22596c5b-465"},"title":"Forward Deployed Data Engineer","description":"<p>We&#39;re looking for a Forward Deployed Data Engineer to join our team. As a Forward Deployed Data Engineer, you will be responsible for defining how our Forward Deployed Engineering function operates, including how engagements run, what the deliverables are, how the playbook works, what scales and what doesn&#39;t.</p>\n<p>You will embed directly with our strategic accounts, large enterprises with complex data needs, often in financial services, insurance, and other regulated industries. You will work alongside their teams to understand their go-to-market challenges, then design and deliver bespoke intelligence applications that combine our third-party data with the customer&#39;s first-party data to drive real business outcomes.</p>\n<p>You will own the engagement end-to-end: from discovery through deployment, from executive presentation through production code. You will work closely with our data and product teams to bring the full breadth of our data foundation to bear - company intelligence, contact data, buying signals, intent data, and specialized vertical datasets - assembled into purpose-built applications tailored to each customer&#39;s specific personas and workflows.</p>\n<p>You will have access to incredible data, powerful infrastructure, and our most important customer relationships. 
What you build with them - and how you build it - will define the model going forward.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Build the FDE Playbook</li>\n<li>Document what works: onboarding sequences, deployment patterns, integration frameworks, success metrics</li>\n<li>Feed field learnings back to product, engineering, and data teams to inform product direction and dataset priorities</li>\n<li>Own Strategic Customer Engagements End-to-End</li>\n<li>Serve as the primary technical point of contact for assigned strategic accounts</li>\n<li>Run discovery sessions, scope use cases, design solutions, build applications, and deploy them in the customer&#39;s environment</li>\n<li>You own the outcome - not just the delivery</li>\n<li>Deliver Custom Intelligence Applications</li>\n<li>Build bespoke &#39;single pane of glass&#39; applications that unify our data with the customer&#39;s proprietary data</li>\n<li>These are purpose-built for specific personas and workflows - not configured off-the-shelf products</li>\n<li>Bridge Technical and Business Audiences</li>\n<li>Sit with the sales team</li>\n<li>Present to executive leadership</li>\n<li>Synthesize complex go-to-market data needs into clear, actionable proposals - then deliver the solution</li>\n<li>You&#39;re equally comfortable whiteboarding architecture with a data engineering team and presenting ROI to a CRO</li>\n<li>Drive Stickiness and Expansion</li>\n<li>Every application you build makes our data more deeply embedded in the customer&#39;s daily workflows</li>\n<li>Identify expansion opportunities as they emerge - new use cases, new personas, new datasets</li>\n</ul>\n<p>Requirements:</p>\n<ul>\n<li>High Ownership, High Ambiguity Tolerance</li>\n<li>This role doesn&#39;t exist yet at ZoomInfo. You take ownership of outcomes - not tasks - and you&#39;re comfortable making judgment calls with incomplete information, building process where there is none, and figuring things out as you go</li>\n<li>Strong Software Engineering Fundamentals</li>\n<li>You write production-quality code</li>\n<li>You&#39;re proficient in Python, SQL, and modern web frameworks</li>\n<li>You&#39;re comfortable with APIs, data pipelines, cloud platforms (AWS, GCP, or Azure), and building applications that real users depend on daily</li>\n<li>Familiarity with API tooling - GraphQL, REST, Postman, authentication patterns (JWT, OAuth) - is a plus; deep expertise isn&#39;t required, but you should be comfortable navigating and integrating against APIs quickly</li>\n<li>You work fluently in LLM-based development environments like Claude Code or Codex - these are core tools in how we build, not a nice-to-have</li>\n<li>Customer-Facing Communication</li>\n<li>You&#39;ve worked directly with customers in a technical capacity - solutions engineering, consulting, technical account management, or a previous FDE role</li>\n<li>You can synthesize complex data needs for an executive audience and discuss architecture with an engineering team in the same meeting</li>\n<li>You&#39;re comfortable navigating enterprise environments with competing stakeholders and priorities</li>\n<li>Go-to-Market Data Familiarity (Preferred)</li>\n<li>Experience working with B2B data, CRM systems, sales/marketing tech stacks, or similar go-to-market infrastructure</li>\n<li>You understand what it means to operationalize data in a revenue context</li>\n</ul>\n<p>Why This Role</p>\n<ul>\n<li>The mandate is clear. 
Driving data consumption and growth across ZoomInfo&#39;s strategic accounts is a top company priority</li>\n<li>We already have working prototypes, validated customer demand, and executive sponsorship</li>\n<li>The data team, product team, and infrastructure are in place</li>\n<li>What&#39;s needed is the person who executes</li>\n<li>Enterprise customer access. ZoomInfo&#39;s customer base includes market-leading GTM organizations at some of the largest enterprises in the world</li>\n<li>You&#39;ll be embedded with these teams, solving real problems with meaningful budgets and complex data needs</li>\n<li>Best-in-class data and infrastructure. ZoomInfo&#39;s data foundation, GTM Data Store, query infrastructure, and vertical dataset catalog give you the raw materials to build custom intelligence applications that aren&#39;t possible anywhere else</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_22596c5b-465","directApply":true,"hiringOrganization":{"@type":"Organization","name":"ZoomInfo","sameAs":"https://www.zoominfo.com/","logo":"https://logos.yubhub.co/zoominfo.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/zoominfo/jobs/8498600002","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$171,500-$269,500 USD","x-skills-required":["Python","SQL","Modern web frameworks","APIs","Data pipelines","Cloud platforms (AWS, GCP, or Azure)","LLM-based development environments (Claude Code or Codex)"],"x-skills-preferred":["GraphQL","REST","Postman","Authentication patterns (JWT, OAuth)","B2B data","CRM systems","Sales/marketing tech stacks"],"datePosted":"2026-04-18T15:43:09.819Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Remote"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, SQL, Modern web frameworks, APIs, Data pipelines, Cloud platforms (AWS, GCP, or Azure), LLM-based development environments (Claude Code or Codex), GraphQL, REST, Postman, Authentication patterns (JWT, OAuth), B2B data, CRM systems, Sales/marketing tech stacks","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":171500,"maxValue":269500,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_0594b3f5-9a0"},"title":"Software Engineer","description":"<p>Join the Voice &amp; Video Postflight team as Twilio&#39;s next Senior Software Engineer.</p>\n<p>This position is needed to build and evolve next-generation distributed systems that empower our customers through high-performance APIs. You will be tasked with solving the complex challenges inherent in supporting the massive scale of Twilio Voice, ensuring our infrastructure remains robust as we expand our capabilities.</p>\n<p>As a Software Engineer, you will focus on the intersection of large-scale API development and advanced data systems. 
You will work on designing and implementing low-latency, highly scalable architectures that leverage modern database technologies to provide customers with seamless access to large-scale data.</p>\n<p>Responsibilities:</p>\n<p>Architect and implement next-generation distributed systems capable of handling the immense throughput and concurrency requirements of Twilio Voice.</p>\n<p>Design low-latency, high-scale APIs that empower customers with real-time access to their data and communications infrastructure.</p>\n<p>Optimize and manage distributed database environments, ensuring high availability and performance across high-volume data stores.</p>\n<p>Own the full development lifecycle, from initial system design and prototyping to the continuous operation of 24x7 production services.</p>\n<p>Collaborate across engineering teams to solve &#39;hard&#39; distributed systems problems, ensuring our API layer is both resilient and developer-friendly.</p>\n<p>Qualifications:</p>\n<p>A Master&#39;s or Bachelor&#39;s degree and 5+ years of experience in software engineering, with a focus on backend or infrastructure systems.</p>\n<p>Expertise in Distributed Systems: A deep understanding of consistency models, partition tolerance, and the challenges of scaling stateful services.</p>\n<p>Core Languages: Proficiency in Java, Spring, Dropwizard and a strong grasp of building RESTful APIs at scale.</p>\n<p>Database Fundamentals: Practical experience working with and tuning PostgreSQL, Aurora or similar relational databases.</p>\n<p>Cloud Infrastructure: Familiarity with deploying and managing large-scale services on AWS or GCP.</p>\n<p>Operational Excellence: Comfortable operating in an agile environment with a &#39;you build it, you run it&#39; mentality.</p>\n<p>Desired:</p>\n<p>OLAP &amp; Big Data: Experience with ClickHouse or other column-oriented databases for high-performance analytical queries.</p>\n<p>Infrastructure as a code: Familiarity with tools such as Terraform, Harness for managing systems.</p>\n<p>Data Pipelines: Prior exposure to technologies like Kafka or Spark for moving and processing data between distributed systems.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_0594b3f5-9a0","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Twilio","sameAs":"https://www.twilio.com/","logo":"https://logos.yubhub.co/twilio.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/twilio/jobs/7785202","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Distributed Systems","Java","Spring","Dropwizard","PostgreSQL","Aurora","AWS","GCP","Operational Excellence"],"x-skills-preferred":["OLAP & Big Data","Infrastructure as a code","Data Pipelines"],"datePosted":"2026-04-18T15:43:04.531Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Remote - Ireland"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Distributed Systems, Java, Spring, Dropwizard, PostgreSQL, Aurora, AWS, GCP, Operational Excellence, OLAP & Big Data, Infrastructure as a code, Data Pipelines"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_64176983-af0"},"title":"Research Engineer, Reward Models Platform","description":"<p>You 
will work as a Research Engineer on Anthropic&#39;s Reward Models Platform. Your primary responsibility will be to design and build infrastructure that enables researchers to rapidly iterate on reward signals. This includes tools for rubric development, human feedback data analysis, and reward robustness evaluation. You will also develop systems for automated quality assessment of rewards, including detection of reward hacks and other pathologies. Additionally, you will create tooling that allows researchers to easily compare different reward methodologies and understand their effects. You will collaborate with researchers to translate science requirements into platform capabilities and optimize existing systems for performance, reliability, and ease of use.</p>\n<p>You will have the opportunity to contribute directly to research projects yourself and have a direct impact on our ability to scale reward development across domains. You will work closely with researchers and translate ambiguous requirements into well-scoped engineering projects.</p>\n<p>To be successful in this role, you should have prior research experience and be excited to work closely with researchers. You should have strong Python skills and experience with ML workflows and data pipelines, and building related infrastructure/tooling/platforms. You should be comfortable working across the stack, ranging from data pipelines to experiment tracking to user-facing tooling.</p>\n<p>Strong candidates may also have experience with ML research, building internal tooling and platforms for ML researchers, data quality assessment and pipeline optimization, experiment tracking, evaluation frameworks, or MLOps tooling. They may also have experience with large-scale data processing, Kubernetes, distributed systems, or cloud infrastructure, and familiarity with reinforcement learning or fine-tuning workflows.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_64176983-af0","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com/","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5024831008","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":"$350,000-$500,000 USD","x-skills-required":["Python","ML workflows","data pipelines","infrastructure/tooling/platforms","rubric development","human feedback data analysis","reward robustness evaluation","automated quality assessment","reward hacks","pathologies","experiment tracking","evaluation frameworks","MLOps tooling"],"x-skills-preferred":["ML research","building internal tooling and platforms for ML researchers","data quality assessment and pipeline optimization","Kubernetes","distributed systems","cloud infrastructure","reinforcement learning","fine-tuning workflows"],"datePosted":"2026-04-18T15:42:43.065Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Remote-Friendly (Travel-Required) | San Francisco, CA | Seattle, WA | New York City, NY"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, ML workflows, data pipelines, infrastructure/tooling/platforms, rubric development, human feedback data analysis, reward robustness evaluation, automated quality assessment, reward 
hacks, pathologies, experiment tracking, evaluation frameworks, MLOps tooling, ML research, building internal tooling and platforms for ML researchers, data quality assessment and pipeline optimization, Kubernetes, distributed systems, cloud infrastructure, reinforcement learning, fine-tuning workflows","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":350000,"maxValue":500000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_3b359ef2-6f8"},"title":"Machine Learning Systems Engineer, Research Tools","description":"<p>We are seeking an experienced Machine Learning Systems Engineer to join our Encodings and Tokenization team at Anthropic. This cross-functional role will be instrumental in developing and optimizing the encodings and tokenization systems used throughout our Finetuning workflows. As a bridge between our Pretraining and Finetuning teams, you&#39;ll build critical infrastructure that directly impacts how our models learn from and interpret data.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Design, develop, and maintain tokenization systems used across Pretraining and Finetuning workflows</li>\n<li>Optimize encoding techniques to improve model training efficiency and performance</li>\n<li>Collaborate closely with research teams to understand their evolving needs around data representation</li>\n<li>Build infrastructure that enables researchers to experiment with novel tokenization approaches</li>\n<li>Implement systems for monitoring and debugging tokenization-related issues in the model training pipeline</li>\n<li>Create robust testing frameworks to validate tokenization systems across diverse languages and data types</li>\n<li>Identify and address bottlenecks in data processing pipelines related to tokenization</li>\n<li>Document systems thoroughly and communicate technical decisions clearly to stakeholders across teams</li>\n</ul>\n<p>You May Be a Good Fit If You:</p>\n<ul>\n<li>Have significant software engineering experience with demonstrated machine learning expertise</li>\n<li>Are comfortable navigating ambiguity and developing solutions in rapidly evolving research environments</li>\n<li>Can work independently while maintaining strong collaboration with cross-functional teams</li>\n<li>Are results-oriented, with a bias towards flexibility and impact</li>\n<li>Have experience with machine learning systems, data pipelines, or ML infrastructure</li>\n<li>Are proficient in Python and familiar with modern ML development practices</li>\n<li>Have strong analytical skills and can evaluate the impact of engineering changes on research outcomes</li>\n<li>Pick up slack, even if it goes outside your job description</li>\n<li>Enjoy pair programming (we love to pair!)</li>\n<li>Care about the societal impacts of your work and are committed to developing AI responsibly</li>\n</ul>\n<p>Strong Candidates May Also Have Experience With:</p>\n<ul>\n<li>Working with machine learning data processing pipelines</li>\n<li>Building or optimizing data encodings for ML applications</li>\n<li>Implementing or working with BPE, WordPiece, or other tokenization algorithms</li>\n<li>Performance optimization of ML data processing systems</li>\n<li>Multi-language tokenization challenges and solutions</li>\n<li>Research environments where engineering directly enables scientific progress</li>\n<li>Distributed systems and parallel computing for ML 
workflows</li>\n<li>Large language models or other transformer-based architectures (not required)</li>\n</ul>\n<p>The annual compensation range for this role is $320,000-$405,000 USD.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_3b359ef2-6f8","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com/","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/4952079008","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$320,000-$405,000 USD","x-skills-required":["Machine Learning","Software Engineering","Python","Data Pipelines","ML Infrastructure"],"x-skills-preferred":["BPE","WordPiece","Tokenization Algorithms","Performance Optimization","Distributed Systems"],"datePosted":"2026-04-18T15:42:42.125Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA | New York City, NY | Seattle, WA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Machine Learning, Software Engineering, Python, Data Pipelines, ML Infrastructure, BPE, WordPiece, Tokenization Algorithms, Performance Optimization, Distributed Systems","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":320000,"maxValue":405000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_eaf69b14-56c"},"title":"Senior Data Scientist, Platform - Identity/Algorithms","description":"<p>Job Title: Senior Data Scientist, Platform - Identity/Algorithms</p>\n<p>We are seeking a highly motivated and talented Full-Stack Data Scientist to join our Identity Data Science team. 
As a Senior Data Scientist, you will play a key role in designing and implementing cutting-edge identity verification systems and defending against emerging threats.</p>\n<p>The Ideal Candidate:</p>\n<ul>\n<li>Has a strong background in machine learning and statistical modeling</li>\n<li>Is experienced in working with large datasets and developing scalable data pipelines</li>\n<li>Has excellent communication and collaboration skills</li>\n<li>Is passionate about using data to drive business decisions</li>\n</ul>\n<p>Responsibilities:</p>\n<ul>\n<li>Improve on industry standards in identity verification by leveraging biometrics, NFC chips, Apple/Google integrations, and other advancements in identity verification technologies</li>\n<li>Build high-performing statistical models for detecting identity fraud, such as computer vision models for identifying fake or tampered images, LLMs for surfacing suspicious user account attributes, or graph-based models for uncovering hidden clusters of bad actors</li>\n<li>Automate and optimize human-in-the-loop ML processes for classifying fraud and generating other labels of interest for model training and evaluation</li>\n<li>Deploy a real-time anomaly detection system for quickly identifying emerging threats across regions, cohorts, and platforms</li>\n<li>Design intelligent sampling jobs for estimating rare events prevalence and other hard-to-measure metrics like recall and false positive/negative rates</li>\n</ul>\n<p>A Typical Day:</p>\n<ul>\n<li>AI/ML: Build and deploy production AI/ML models for detecting identity fraud and improving Airbnb’s identity verification systems (feature engineering, model development + evaluation, threshold selection, error analysis, model lifecycle management)</li>\n<li>Inference: Conduct experiments and lead quantitative analyses for measuring impact, surfacing critical gaps, and identifying opportunities for improvement</li>\n<li>Optimization: Develop methodologies and frameworks for analyzing the tradeoffs associated with new interventions and propose strategies for optimizing impact</li>\n<li>Communication &amp; Collaboration: Deliver robust research reports with effective data visualizations, clear storytelling, and bullet-proof accuracy to drive forward impact in collaboration with cross-functional partners in product, engineering, and operations</li>\n<li>Empowerment: Think strategically about opportunities to improve and scale our identity verification processes and defenses</li>\n</ul>\n<p>Required Qualifications:</p>\n<ul>\n<li>5+ years of industry experience in a quantitative analysis role with a Master’s degree in a quantitative field (computer science, statistics, economics, etc.), or 2+ years of experience with a Ph.D.</li>\n<li>State-of-the-art knowledge of AI/ML models</li>\n<li>Strong knowledge of causal inference</li>\n<li>Skilled in statistical programming (Python or R) and database usage (SQL)</li>\n<li>Proven ability to communicate clearly and effectively to audiences of varying technical levels</li>\n<li>Ability to translate complex findings and results into compelling narratives that drive impact</li>\n<li>Excellent project management, communication, and collaboration skills</li>\n<li>Trust &amp; Safety experience is a plus</li>\n</ul>\n<p>Your Location:</p>\n<p>This position is US - Remote Eligible. The role may include occasional work at an Airbnb office or attendance at offsites, as agreed to with your manager. While the position is Remote Eligible, you must live in a state where Airbnb, Inc. 
has a registered entity. Click here for the up-to-date list of excluded states.</p>\n<p>Our Commitment To Inclusion &amp; Belonging:</p>\n<p>Airbnb is committed to working with the broadest talent pool possible. We believe diverse ideas foster innovation and engagement, and allow us to attract creatively-led people, and to develop the best products, services and solutions. All qualified individuals are encouraged to apply. We strive to also provide a disability inclusive application and interview process. If you are a candidate with a disability and require reasonable accommodation in order to submit an application, please contact us at: reasonableaccommodations@airbnb.com.</p>\n<p>How We&#39;ll Take Care of You:</p>\n<p>Our job titles may span more than one career level. The actual base pay is dependent upon many factors, such as: training, transferable skills, work experience, business needs and market demands. The base pay range is subject to change and may be modified in the future. This role may also be eligible for bonus, equity, benefits, and Employee Travel Credits.</p>\n<p>Pay Range: $177,000-$208,000 USD</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_eaf69b14-56c","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Airbnb","sameAs":"https://www.airbnb.com/","logo":"https://logos.yubhub.co/airbnb.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/airbnb/jobs/7526372","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$177,000-$208,000 USD","x-skills-required":["Machine Learning","Statistical Modeling","Data Pipelines","Communication","Collaboration","Python","R","SQL"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:42:30.103Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"United States"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Machine Learning, Statistical Modeling, Data Pipelines, Communication, Collaboration, Python, R, SQL","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":177000,"maxValue":208000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_cc9d92de-913"},"title":"Research Engineer / Research Scientist, Vision","description":"<p>We&#39;re looking for research engineers with a strong computer vision background to work on research, development, and evaluation for state-of-the-art Claude models. In this role, you&#39;ll run experiments to evaluate architectural variants, data strategies, and SL and RL techniques to improve Claude&#39;s vision. You&#39;ll also develop and test tools, skills, and agentic infrastructure that enable Claude to reason over visual inputs. Additionally, you&#39;ll create evaluations and benchmarks that measure progress on multimodal capabilities across training and deployment.</p>\n<p>As a research engineer, you&#39;ll partner with the product org to ensure that the vision improvements you deliver impact Claude&#39;s performance on real-world tasks. 
You&#39;ll also work with our product org to find solutions to our most vexing API customer challenges related to vision and spatial reasoning.</p>\n<p>Strong candidates may also have experience with large-scale pretraining, SL, and RL on language models, deep learning research on images, video, or other modalities, developing complex agentic systems using LLMs, high-performance ML systems (GPUs, TPUs, JAX, PyTorch), and large-scale ETL and data pipeline development.</p>\n<p>The annual compensation range for this role is $350,000-$850,000 USD.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_cc9d92de-913","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com/","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5074217008","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$350,000-$850,000 USD","x-skills-required":["computer vision","ML","software engineering","large vision language models","synthetic and real-world visual training datasets","systematic prompting, finetuning, or evaluation"],"x-skills-preferred":["large-scale pretraining","SL","RL","deep learning research","agentic systems","high-performance ML systems","ETL and data pipeline development"],"datePosted":"2026-04-18T15:42:18.530Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"New York City, NY; San Francisco, CA; Seattle, WA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"computer vision, ML, software engineering, large vision language models, synthetic and real-world visual training datasets, systematic prompting, finetuning, or evaluation, large-scale pretraining, SL, RL, deep learning research, agentic systems, high-performance ML systems, ETL and data pipeline development","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":350000,"maxValue":850000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_11099543-51f"},"title":"Software Engineer L3 Phone Numbers","description":"<p>At Twilio, we&#39;re shaping the future of communications, all from the comfort of our homes. We deliver innovative solutions to hundreds of thousands of businesses and empower millions of developers worldwide to craft personalized customer experiences.</p>\n<p>Join the team as Twilio&#39;s next Software Engineer L3. This position is that of a Senior Software Engineer to join Twilio&#39;s Messaging Compliance Onboarding team. Programmable Messaging is Twilio&#39;s biggest product. To keep pace with the evolving messaging compliance ecosystem, we need strong engineers that can create innovative solutions to ensure compliance with Twilio partners.</p>\n<p>In this role, you&#39;ll build and maintain multiple compliance program workflows, carrier/ecosystem integrations and customer interactions in the Compliance domain. You will design and develop elegant and scalable solutions across a wide variety of compliance program types including frontend UI experiences and backend APIs, that are highly available and responsive.</p>\n<p>You will work through ambiguity, deliver quickly and with a high quality. 
Build towards achieving the next generation of architecture vision that empowers expansion of Compliance programs. Interact cross functionally across engineering teams within Twilio to align and build the product and architecture vision.</p>\n<p>Twilio values diverse experiences from all kinds of industries, and we encourage everyone who meets the required qualifications to apply. If your career is just starting or hasn&#39;t followed a traditional path, don&#39;t let that stop you from considering Twilio.</p>\n<p>We are always looking for people who will bring something new to the table!</p>\n<p>*Required:</p>\n<ul>\n<li>5+ years of experience and a strong fundamental knowledge of software development using JVM languages.</li>\n</ul>\n<ul>\n<li>Experience building web services incorporating best practices for external systems integration, including defensive and hardened approaches to mitigate downstream issues.</li>\n</ul>\n<ul>\n<li>Experience working with highly scalable APIs, high volume data pipelines and large distributed systems.</li>\n</ul>\n<ul>\n<li>Maintaining and operating cloud services.</li>\n</ul>\n<ul>\n<li>An unwillingness to settle for &#39;good enough&#39;, instead staying focused on longevity through well-tested code and continuous improvement.</li>\n</ul>\n<ul>\n<li>Demonstrated commitment to seeking diverse viewpoints and acting with intention to create an inclusive team environment.</li>\n</ul>\n<ul>\n<li>Excellent written and verbal communication skills. Ability to write down and present designs and decisions throughout the development lifecycle, collaborating with engineering and non-engineering roles.</li>\n</ul>\n<p>Desired:</p>\n<ul>\n<li>5+ years of Engineering experience, developing and maintaining high traffic services.</li>\n</ul>\n<ul>\n<li>Familiarity with DynamoDB, SQS, and data integration services like AWS glue</li>\n</ul>\n<ul>\n<li>Familiarity with LLMs, prompt optimizations to improve model accuracy, setting up evaluation pipelines</li>\n</ul>\n<ul>\n<li>Familiarity with Kubernetes, Temporal or similar workflow orchestration</li>\n</ul>\n<ul>\n<li>Experience working with frontend libraries like React or similar.</li>\n</ul>\n<p>Compensation:</p>\n<p>*Please note this role is open to candidates outside of California, Colorado, Hawaii, Illinois, Maryland, Massachusetts, Minnesota, New Jersey, New York, Vermont, Washington D.C., and Washington State.</p>\n<p>The estimated pay ranges for this role are as follows:</p>\n<ul>\n<li>Based in Colorado, Hawaii, Illinois, Maryland, Massachusetts, Minnesota, Vermont or Washington D.C.: $138,700 - $173,400</li>\n</ul>\n<ul>\n<li>Based in New York, New Jersey, Washington State, or California (outside of the San Francisco Bay area): $146,800 - $183,600</li>\n</ul>\n<ul>\n<li>Based in the San Francisco Bay area, California: $163,100 - $203,900.</li>\n</ul>\n<p>This role may be eligible to participate in Twilio&#39;s equity plan and corporate bonus plan. All roles are generally eligible for the following benefits: health care insurance, 401(k) retirement account, paid sick time, paid personal time off, paid parental leave.</p>\n<p>Applications for this role are intended to be accepted until 4/10/2026, but may change based on business needs.</p>\n<p>Twilio thinks big. Do you? We like to solve problems, take initiative, pitch in when needed, and are always up for trying new things. That&#39;s why we seek out colleagues who embody our values , something we call Twilio Magic. 
Additionally, we empower employees to build positive change in their communities by supporting their volunteering and donation efforts. So, if you&#39;re ready to unleash your full potential, do your best work, and be the best version of yourself, apply now!</p>\n<p>If this role isn&#39;t what you&#39;re looking for, please consider other open positions.</p>\n<p>Twilio is proud to be an equal opportunity employer. We do not discriminate based upon race, religion, color, national origin, sex (including pregnancy, childbirth, reproductive health decisions, or related medical conditions), sexual orientation, gender identity, gender expression, age, status as a protected veteran, status as an individual with a disability, genetic information, political views or activity, or other applicable legally protected characteristics. We also consider qualified applicants with criminal histories, consistent with applicable federal, state and local law. Qualified applicants with arrest or conviction records will be considered for employment in accordance with the Los Angeles County Fair Chance Ordinance for Employers and the California Fair Chance Act. Additionally, Twilio participates in the E-Verify program in certain locations, as required by law.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_11099543-51f","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Twilio","sameAs":"https://www.twilio.com/","logo":"https://logos.yubhub.co/twilio.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/twilio/jobs/7724877","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["software development using JVM languages","web services","external systems integration","highly scalable APIs","high volume data pipelines","large distributed systems","cloud services","well-tested code","continuous improvement","inclusive team environment","written and verbal communication skills"],"x-skills-preferred":["DynamoDB","SQS","AWS glue","LLMs","prompt optimizations","evaluation pipelines","Kubernetes","Temporal","workflow orchestration","React"],"datePosted":"2026-04-18T15:41:58.222Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Remote - US"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"software development using JVM languages, web services, external systems integration, highly scalable APIs, high volume data pipelines, large distributed systems, cloud services, well-tested code, continuous improvement, inclusive team environment, written and verbal communication skills, DynamoDB, SQS, AWS glue, LLMs, prompt optimizations, evaluation pipelines, Kubernetes, Temporal, workflow orchestration, React"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_18ae1499-b22"},"title":"Research Engineer, Discovery","description":"<p>As a Research Engineer on our team, you will work end-to-end across the whole model stack, identifying and addressing key infra blockers on the path to scientific AGI. 
Strong candidates should have familiarity with elements of language model training, evaluation, and inference and eagerness to quickly dive and get up to speed in areas they are not yet an expert on.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Design and implement large-scale infrastructure systems to support AI scientist training, evaluation, and deployment across distributed environments</li>\n<li>Identify and resolve infrastructure bottlenecks impeding progress toward scientific capabilities</li>\n<li>Develop robust and reliable evaluation frameworks for measuring progress towards scientific AGI</li>\n<li>Build scalable and performant VM/sandboxing/container architectures to safely execute long-horizon AI tasks and scientific workflows</li>\n<li>Collaborate to translate experimental requirements into production-ready infrastructure</li>\n<li>Develop large scale data pipelines to handle advanced language model training requirements</li>\n<li>Optimize large scale training and inference pipelines for stable and efficient reinforcement learning</li>\n</ul>\n<p>You may be a good fit if you:</p>\n<ul>\n<li>Have 6+ years of highly-relevant experience in infrastructure engineering with demonstrated expertise in large-scale distributed systems</li>\n<li>Are a strong communicator and enjoy working collaboratively</li>\n<li>Possess deep knowledge of performance optimization techniques and system architectures for high-throughput ML workloads</li>\n<li>Have experience with containerization technologies (Docker, Kubernetes) and orchestration at scale</li>\n<li>Have proven track record of building large-scale data pipelines and distributed storage systems</li>\n<li>Excel at diagnosing and resolving complex infrastructure challenges in production environments</li>\n<li>Can work effectively across the full ML stack from data pipelines to performance optimization</li>\n<li>Have experience collaborating with other researchers to scale experimental ideas</li>\n<li>Thrive in fast-paced environments and can rapidly iterate from experimentation to production</li>\n</ul>\n<p>Strong candidates may also have:</p>\n<ul>\n<li>Experience with language model training infrastructure and distributed ML frameworks (PyTorch, JAX, etc.)</li>\n<li>Background in building infrastructure for AI research labs or large-scale ML organizations</li>\n<li>Knowledge of GPU/TPU architectures and language model inference optimization</li>\n<li>Experience with cloud platforms (AWS, GCP) at enterprise scale</li>\n<li>Familiarity with VM and container orchestration</li>\n<li>Experience with workflow orchestration tools and experiment management systems</li>\n<li>History working with large scale reinforcement learning</li>\n<li>Comfort with large scale data pipelines (Beam, Spark, Dask, …)</li>\n</ul>\n<p>The annual compensation range for this role is $350,000-$850,000 USD.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_18ae1499-b22","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com/","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/4669581008","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$350,000-$850,000 USD","x-skills-required":["large-scale distributed systems","containerization technologies (Docker, 
Kubernetes)","performance optimization techniques","system architectures for high-throughput ML workloads","data pipelines","distributed storage systems","ML frameworks (PyTorch, JAX, etc.)","GPU/TPU architectures","cloud platforms (AWS, GCP)","VM and container orchestration","workflow orchestration tools","experiment management systems","reinforcement learning","large scale data pipelines (Beam, Spark, Dask, …)"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:41:42.408Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"large-scale distributed systems, containerization technologies (Docker, Kubernetes), performance optimization techniques, system architectures for high-throughput ML workloads, data pipelines, distributed storage systems, ML frameworks (PyTorch, JAX, etc.), GPU/TPU architectures, cloud platforms (AWS, GCP), VM and container orchestration, workflow orchestration tools, experiment management systems, reinforcement learning, large scale data pipelines (Beam, Spark, Dask, …)","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":350000,"maxValue":850000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_faa865dc-a1d"},"title":"Senior Data Engineer, BizTech","description":"<p>We&#39;re seeking a hands-on expert to provide technical leadership in addressing BizTech&#39;s diverse data engineering needs and driving long-term strategies and best practices.</p>\n<p>As a Senior Data Engineer, you&#39;ll lead the design, implementation, and testing of data systems, from architecture to production. You&#39;ll build batch and real-time data systems that support business needs and critical products, ensuring data systems&#39; quality, performance, and stability through rigorous monitoring and quality assurance practices.</p>\n<p>You&#39;ll collaborate with cross-functional teams, including product managers, data scientists, and engineers, to develop scalable systems and drive data-driven decisions. You&#39;ll maintain strong partnerships with backend, data science, and machine learning teams to ensure seamless integration of data systems.</p>\n<p>Key responsibilities include:</p>\n<ul>\n<li>Leading the design, implementation, and testing of data systems, from architecture to production</li>\n<li>Building batch and real-time data systems that support business needs and critical products</li>\n<li>Ensuring data systems&#39; quality, performance, and stability through rigorous monitoring and quality assurance practices</li>\n<li>Collaborating with cross-functional teams to develop scalable systems and drive data-driven decisions</li>\n<li>Maintaining strong partnerships with backend, data science, and machine learning teams to ensure seamless integration of data systems</li>\n</ul>\n<p>We&#39;re looking for someone with 9+ years of relevant experience, a Bachelor&#39;s/Master&#39;s degree in CS/EE, and extensive experience in designing, building, and operating distributed data platforms. You should be proficient in Java, Scala, or Python, with strong skills in data processing and SQL querying. 
Proven track record of designing and optimizing batch and real-time data pipelines is a must.</p>\n<p>In addition to technical expertise, we&#39;re looking for someone with excellent written and verbal communication skills, with the ability to influence stakeholders and convey complex technical concepts. You should be a strong leader and mentor, with experience guiding teams on best practices and technical strategies.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_faa865dc-a1d","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Airbnb","sameAs":"https://www.airbnb.com/","logo":"https://logos.yubhub.co/airbnb.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/airbnb/jobs/7640881","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Java","Scala","Python","data processing","SQL querying","distributed data platforms","batch and real-time data pipelines"],"x-skills-preferred":["machine learning","data science","backend development"],"datePosted":"2026-04-18T15:40:41.162Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Bangalore, India"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Java, Scala, Python, data processing, SQL querying, distributed data platforms, batch and real-time data pipelines, machine learning, data science, backend development"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_7f904cf7-7bd"},"title":"Data Analyst II","description":"<p>Join us at Brex, the intelligent finance platform that empowers companies to spend smarter and move faster in over 200 markets. As a Data Analyst II, you will play a central role in enhancing the operational tracking and reporting capabilities of different business teams across Brex.</p>\n<p>As a member of our Data organization, you will work closely with Data Scientists, Data Engineers, and partner teams to drive meaningful insights for the business through visualizations, self-service tools, and ad-hoc analyses. 
This is a high-impact role in a fast-paced fintech environment where your work will directly influence strategic decisions.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Apply data visualization and storytelling skills in creating business intelligence solutions (such as Looker and/or Hex dashboards) that enable actionable insights.</li>\n<li>Perform ad-hoc analyses and deep dives to investigate business questions, surface trends, and provide data-driven recommendations.</li>\n<li>Develop self-service data tools and processes that empower business stakeholders to independently monitor the performance and health of their respective areas.</li>\n<li>Collaborate closely with Data Scientists and Data Engineers to identify data sources, enable data pipelines, and support the development of analytical data models that operationalize reports and dashboards.</li>\n<li>Implement and maintain rigorous data quality checks to ensure the integrity and robustness of datasets used across dashboards, reports, and analyses.</li>\n<li>Partner with various departments,including Sales, Operations, Product, and Finance,to understand their data needs and deliver tailored analyses and reporting that support strategic planning.</li>\n<li>Contribute to the automation of recurring analyses and reporting workflows using Python.</li>\n</ul>\n<p>Requirements:</p>\n<ul>\n<li>4+ years of experience in data analytics or a related role in a professional setting.</li>\n<li>3+ years of experience working directly with Sales, Operations, Product, or equivalent business teams.</li>\n<li>Fluency in SQL to manipulate data and perform complex analyses (CTEs, window functions, joins across large datasets).</li>\n<li>Proficiency in Python for data analysis, automation, and scripting (Pandas, NumPy, and similar libraries).</li>\n<li>Experience with business intelligence and data visualization tools (Looker, Hex, Tableau, or similar).</li>\n<li>Strong quantitative and analytical skills with a demonstrated ability to translate data into business insights.</li>\n<li>Strong communication skills and the ability to work effectively with stakeholders across different functions and levels of technical fluency.</li>\n<li>Experience with generative AI and LLM-based tools (Claude Code, Cursor, GitHub Copilot) to perform and accelerate analyses, automated reporting, and build self-service data tools.</li>\n</ul>\n<p>Bonus points:</p>\n<ul>\n<li>Familiarity with cloud data platforms (e.g., Snowflake, BigQuery, Databricks).</li>\n<li>Familiarity with dbt for data modeling and transformation.</li>\n<li>Exposure to data pipeline orchestration tools (e.g., Airflow).</li>\n<li>Experience in fintech, financial services, or payments.</li>\n<li>Comfort operating in a fast-paced, high-growth environment with evolving priorities.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_7f904cf7-7bd","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Brex LLC","sameAs":"https://brex.com/","logo":"https://logos.yubhub.co/brex.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/brex/jobs/8463703002","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["SQL","Python","Business Intelligence","Data Visualization","Generative AI","LLM-based tools"],"x-skills-preferred":["Cloud data platforms","dbt","Data pipeline orchestration tools","Fintech, 
financial services, or payments"],"datePosted":"2026-04-18T15:39:28.984Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"São Paulo, São Paulo, Brazil"}},"employmentType":"FULL_TIME","occupationalCategory":"Finance","industry":"Finance","skills":"SQL, Python, Business Intelligence, Data Visualization, Generative AI, LLM-based tools, Cloud data platforms, dbt, Data pipeline orchestration tools, Fintech, financial services, or payments"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_5eb1737d-7a1"},"title":"GRC Engineering Manager","description":"<p>We are seeking a GRC Engineering Manager to join our GRC organization and build the technical foundation for how we scale our risk and compliance programs.</p>\n<p>In this role, you will lead the team that designs and implements automated workflows, data pipelines, and integrations that transform manual compliance processes into scalable engineering systems. This is a greenfield opportunity to establish the team, architecture, and integrations that will define how we approach governance, risk, and compliance at Anthropic.</p>\n<p>The core challenge is a data problem: compliance information lives across dozens of systems,cloud infrastructure, identity providers, HR platforms, ticketing tools, code repositories,and your job is to design systems that bring it together, normalize it, and make it actionable.</p>\n<p>Success in this role comes from understanding how systems connect and how data flows between them, not from writing code yourself. At Anthropic, you&#39;ll also have a unique advantage: the ability to design AI-powered workflows where Claude acts as an extension of your team, handling tasks that would traditionally require additional headcount or manual effort.</p>\n<p>You&#39;ll need ingenuity to identify where agentic AI can accelerate evidence collection, interpret unstructured data, triage compliance gaps, and augment human judgment in risk assessments. 
Working closely with Security, IT, and Engineering teams, you&#39;ll translate compliance and regulatory requirements into solutions that support audit programs including SOC 2, ISO, HIPAA, and FedRAMP, building systems that combine traditional automation with AI capabilities to achieve scale that wouldn&#39;t otherwise be possible.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Lead the team that establishes foundational GRC processes and architecture.</li>\n<li>Design and build automated workflows for risk management and compliance, creating scalable systems that enable continuous monitoring as Anthropic grows.</li>\n<li>Build data pipelines that aggregate risk, control, and asset information from across our technology stack.</li>\n<li>Inform GRC platform strategy and implementation: in partnership with other programs, evaluate, select, and deploy tooling that meets our compliance requirements.</li>\n<li>Translate written policies and compliance requirements into policy-as-code,working with Engineering and Security teams to express requirements as enforceable rules, automated checks, and continuous validation rather than static documents.</li>\n<li>Establish feedback loops between policy and implementation: surface where technical controls diverge from written requirements, identify where policies need to evolve based on infrastructure realities, and ensure that compliance requirements are expressed in terms engineers can act on.</li>\n<li>Design and deploy agentic AI workflows that extend team capacity, using Claude to serve as a virtual GRC analyst to automate evidence analysis, monitor control effectiveness, draft audit responses, interpret policy documents, and handle other tasks that require reasoning over unstructured information.</li>\n<li>Design and maintain integrations connecting GRC tooling with cloud infrastructure, identity management systems, HRIS platforms, ticketing systems, version control, and CI/CD pipelines,working with engineers to implement integrations that enable automated evidence collection and continuous compliance validation.</li>\n<li>Build and lead an AI-forward GRC engineering function as we scale: hiring team members, establishing practices, and defining the technical roadmap for governance and compliance automation at Anthropic.</li>\n</ul>\n<p><strong>Requirements:</strong></p>\n<ul>\n<li>12+ years of total experience and 3-4+ years of experience managing technical individual contributors or systems-focused teams, with a proven track record of building or scaling small teams (2-5 people) in security, compliance, automation, or operations functions.</li>\n<li>A systems thinker first. 
You understand how complex environments work: how data flows between systems, where integration points exist, what breaks when systems don&#39;t talk to each other.</li>\n<li>5+ years of experience designing automated workflows, data pipelines, or system integrations, whether through traditional development, low-code platforms, GRC tools, or process automation.</li>\n<li>A relentless focus on data integration: you understand how to pull data from multiple sources, normalize it, join it meaningfully, and surface insights.</li>\n<li>Strong analytical and problem-solving skills with attention to detail necessary for compliance work, balanced with pragmatism about risk-based prioritization in fast-paced environments.</li>\n</ul>\n<p><strong>Nice to Have:</strong></p>\n<ul>\n<li>Experience designing or implementing AI-powered automation, agentic workflows, or LLM-based tooling in operational contexts.</li>\n<li>Experience with GRC platforms such as ServiceNow GRC, Vanta, Drata, OneTrust, RSA Archer, or similar tools including configuration, customization, and integration capabilities.</li>\n<li>Familiarity with scripting languages (Python or similar) for automation tasks, API interactions, and data transformation.</li>\n<li>Prior experience in high-growth startup environments demonstrating ability to build scalable processes and adapt quickly to changing requirements and priorities.</li>\n<li>Familiarity with Infrastructure as Code tools (Terraform, CloudFormation, Ansible) and DevSecOps practices including CI/CD pipeline integration and policy-as-code implementations.</li>\n<li>Familiarity with cloud platforms (AWS, GCP, Azure) and an understanding of how compliance-relevant data can be extracted from their APIs and logging systems.</li>\n</ul>\n<p><strong>Deadline to Apply:</strong> None, applications will be received on a rolling basis.</p>\n<p><strong>Annual Compensation Range:</strong> $405,000-$405,000 USD</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_5eb1737d-7a1","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com/","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/4980335008","x-work-arrangement":"hybrid","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"$405,000-$405,000 USD","x-skills-required":["GRC","Automation","Data Pipelines","System Integrations","Compliance","Risk Management","Audit Programs","Agentic AI","Policy-as-Code","DevSecOps","Cloud Platforms","APIs","Logging Systems"],"x-skills-preferred":["AI-Powered Automation","LLM-Based Tooling","GRC Platforms","Scripting Languages","Infrastructure as Code","CI/CD Pipeline Integration"],"datePosted":"2026-04-18T15:38:27.414Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA | New York City, NY | Seattle, WA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"GRC, Automation, Data Pipelines, System Integrations, Compliance, Risk Management, Audit Programs, Agentic AI, Policy-as-Code, DevSecOps, Cloud Platforms, APIs, Logging Systems, AI-Powered Automation, LLM-Based Tooling, GRC Platforms, Scripting Languages, Infrastructure as Code, CI/CD Pipeline 
Integration","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":405000,"maxValue":405000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_e043c9b2-f13"},"title":"Engineering Manager, Safeguards Data Infrastructure","description":"<p>Job Title: Engineering Manager, Safeguards Data Infrastructure\\n\\nAbout the Role:\\n\\nAnthropic&#39;s Safeguards team is responsible for the systems that allow us to deploy powerful AI models responsibly , and the data infrastructure underneath those systems is foundational to getting that right. The Safeguards Data Infrastructure team owns the offline data stack that underpins our safeguards work: the storage layer for sensitive user data, the tooling built on top of it, and the interfaces that let the rest of the Safeguards organization access that data safely and ergonomically.\\n\\nAs Engineering Manager of this team, you&#39;ll be responsible for ensuring full portability of our safeguards data stack across an expanding set of deployment environments, building privacy-preserving data interfaces that enable ML and training workflows, and driving compliance with data regulations including HIPAA. This is a role at the intersection of infrastructure engineering, data privacy, and enterprise product requirements , and it sits at a critical juncture as Anthropic scales into new cloud environments and geographies\\n\\nResponsibilities:\\n\\n<em> Lead and grow a team of engineers delivering the data infrastructure and tooling that powers Anthropic&#39;s safeguards capabilities\\n\\n</em> Own the strategy and execution for porting the safeguards offline data stack , including PII storage and tooling , across new cloud and deployment environments as Anthropic expands\\n\\n<em> Build and maintain privacy-safe data APIs and interfaces that enable ML and training workflows while respecting data retention and access constraints\\n\\n</em> Drive tooling and architecture decisions that maximize data retention within the bounds of our privacy and compliance requirements\\n\\n<em> Manage privacy incident response processes and partner with compliance teams on regulatory requirements (e.g. 
HIPAA, EU privacy regulations)\\n\\n</em> Collaborate closely with enterprise customers and product teams on zero data retention offerings, working balancing safety needs with robust enterprise data contracts\\n\\n<em> Independently own and drive multiple workstreams, including planning, execution, and cross-team coordination\\n\\n</em> Coach, mentor, and support the career development of your direct reports, helping them set and achieve their professional goals\\n\\n<em> Partner with recruiting to attract, hire, and retain strong engineering talent\\n\\nYou may be a good fit if you:\\n\\n</em> Have 4+ years of front-line engineering management experience\\n\\n<em> Have a track record of leading teams that build and operate data infrastructure at scale\\n\\n</em> Have hands-on software engineering experience as an individual contributor prior to moving into management\\n\\n<em> Have a strong understanding of data privacy principles, PII handling, and compliance frameworks\\n\\n</em> Are comfortable driving technical decisions in an ambiguous, fast-moving environment with competing priorities\\n\\n<em> Have experience working cross-functionally across infrastructure, product, and compliance or security teams\\n\\n</em> Are clear and persuasive communicators, both in writing and in person\\n\\nStrong candidates may also:\\n\\n<em> Have experience with multi-cloud or multi-region data portability, particularly in regulated environments\\n\\n</em> Have built privacy-preserving data pipelines or interfaces for ML workloads\\n\\n<em> Have experience with enterprise data contracts or zero data retention architectures\\n\\n</em> Have explored novel approaches to data processing under strict access constraints, such as in-memory storage and compute for sensitive data\\n\\n* Have a passion for building diverse and inclusive teams\\n\\nAnnual Compensation Range:\\n\\nFor sales roles, the range provided is the role’s On Target Earnings (&quot;OTE&quot;) range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.\\n\\nAnnual Salary:\\n\\n$405,000-$485,000 USD\\n\\nThe annual compensation range for this role is listed below.\\n\\nFor sales roles, the range provided is the role’s On Target Earnings (&quot;OTE&quot;) range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.\\n\\nAnnual Salary:\\n\\n£325,000-£390,000 GBP\\n\\nLogistics:\\n\\nMinimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience\\n\\nRequired field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience\\n\\nMinimum years of experience: Years of experience required will correlate with the internal job level requirements for the position\\n\\nLocation-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.\\n\\nVisa sponsorship: We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.\\n\\nWe encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. 
Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you&#39;re interested in this work.\\n\\nYour safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you&#39;re ever unsure about a communication, don&#39;t click any links,visit anthropic.com/careers directly for confirmed position openings.\\n\\nHow we&#39;re different:\\n\\nWe believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact , advancing our long-term goals of steerable, trustworthy AI , rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We&#39;re an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.\\n\\nThe easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI &amp; Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.\\n\\nCome work with us!\\n\\nAnthropic is a public benefit corporation headquartered in San Francisco. 
We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_e043c9b2-f13","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com/","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5103078008","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$405,000-$485,000 USD","x-skills-required":["data infrastructure","data privacy","compliance frameworks","software engineering","team leadership","cross-functional collaboration","communication skills"],"x-skills-preferred":["multi-cloud data portability","privacy-preserving data pipelines","enterprise data contracts","novel approaches to data processing","diverse and inclusive teams"],"datePosted":"2026-04-18T15:37:54.881Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"London, UK; New York City, NY"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"data infrastructure, data privacy, compliance frameworks, software engineering, team leadership, cross-functional collaboration, communication skills, multi-cloud data portability, privacy-preserving data pipelines, enterprise data contracts, novel approaches to data processing, diverse and inclusive teams","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":405000,"maxValue":485000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_25c64073-7f8"},"title":"Data Center Portfolio Planning & Execution Lead","description":"<p>About the role</p>\n<p>Anthropic is rapidly scaling our compute infrastructure across a portfolio of datacenter builds with multiple developer, neocloud, and cloud partnerships. We&#39;re looking for a DC Portfolio Planning &amp; Execution Lead to drive the planning and framework that ensures every site moves smoothly from the front-end phases through design, construction, equipment delivery, commissioning, and operational readiness.</p>\n<p>This role owns the portfolio-level operating system: translating capacity supply pipeline into integrated project plans that span every phase of delivery, building the tooling and automation that runs it at scale, and maintaining Anthropic&#39;s datacenter capacity catalog , a lifecycle view of our fleet that supports both execution orchestration and steady-state capacity planning. You will build this function from the ground up.</p>\n<p>Responsibilities</p>\n<p>Portfolio schedule &amp; catalog</p>\n<p>Manage the integrated master plan for each site across the portfolio , stitching power ramp, design, construction, sourcing, deployment, and operations readiness into a single coordinated schedule with clear milestones and dependencies</p>\n<p>Develop and maintain Anthropic&#39;s datacenter catalog for deployed and in-progress capacity. 
Manage the portfolio-level view of physical infrastructure &amp; cluster interfaces across all sites and partners to enable planning decisions such as equipment fungibility, accelerator platforms, tech insertion, or workload allocation</p>\n<p>Stage gates &amp; execution tracking</p>\n<p>Define and run the stage gates and decision locks for cluster delivery , from lease execution to design lock through procurement, construction, equipment installation, commissioning, and handover</p>\n<p>Drive gate reviews, manage exceptions, and track the downstream impact of deviations across the portfolio</p>\n<p>Manage portfolio reviews and risk tracking for DC Infra leadership and Compute Supply</p>\n<p>Tooling &amp; process</p>\n<p>Develop tooling and automation to enable cross-functional planning flow-down from datacenter capacity availability dates</p>\n<p>Partner with Design, Supply Chain, Construction, and DC Ops program leads to drive cross-pillar process improvements as portfolio scales</p>\n<p>You may be a good fit if you</p>\n<p>Are familiar with the full datacenter buildout lifecycle: pipeline → design → sourcing → construction → Cx → deployment</p>\n<p>Have run integrated portfolio or master-schedule planning across a fleet of capital projects (datacenter, energy, fab, or similar) where multiple functional orgs each own a phase</p>\n<p>Have built a stage-gate or decision-lock system from scratch and gotten functional leads to adopt it</p>\n<p>Have re-architected a deployment or delivery process at scale and can point to the cycle-time or throughput result</p>\n<p>Build the tooling yourself using AI-assisted development , stand up planning dashboards, schedule automation, and data pipelines from Smartsheet/P6/partner systems</p>\n<p>Proactively surface schedule risk across functions , comfortable flagging a problem in someone else&#39;s domain before it becomes a slip</p>\n<p>Track record of driving outcomes through influence with cross-functional partners</p>\n<p>Strong candidates may also have</p>\n<p>Experience building a portfolio planning and execution function from scratch at a hyperscaler or large industrial owner</p>\n<p>Exposure to capacity planning or S&amp;OP processes that connect demand forecast to physical build</p>\n<p>Experience product-managing internal planning, workflow, or scheduling systems</p>\n<p>The annual compensation range for this role is listed below.</p>\n<p>For sales roles, the range provided is the role’s On Target Earnings (“OTE”) range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.</p>\n<p>Annual Salary: $365,000-$485,000 USD</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_25c64073-7f8","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5188939008","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$365,000-$485,000 USD","x-skills-required":["datacenter buildout lifecycle","portfolio or master-schedule planning","stage-gate or decision-lock system","AI-assisted development","schedule automation","data 
pipelines"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:37:19.717Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Remote-Friendly, United States"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"datacenter buildout lifecycle, portfolio or master-schedule planning, stage-gate or decision-lock system, AI-assisted development, schedule automation, data pipelines","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":365000,"maxValue":485000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_540ce49c-271"},"title":"Member of Technical Staff - Multimodal Understanding","description":"<p><strong>About the Role</strong></p>\n<p>You will join the multimodal team to push toward superhuman multimodal intelligence. Advance understanding and generation across modalities,image, video, audio, and text,spanning the full stack: data curation/acquisition, tokenizer training, large-scale pre-training, post-training/alignment, infrastructure/scaling, evaluation, tooling/demos, and end-to-end product experiences.</p>\n<p>Collaborate cross-functionally with pre-training, post-training, reasoning, data, applied, and product teams to deliver frontier capabilities in multimodal reasoning, world modeling, tool use, agentic behaviors, and interactive human-AI collaboration. Contribute to building models that can see, hear, reason about, and interact with the world in real time at unprecedented levels.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Design, build, and optimize large-scale distributed systems for multimodal pre-training, post-training, inference, data processing, and tokenization at web/petabyte scale.</li>\n<li>Develop high-throughput pipelines for data acquisition, preprocessing, filtering, generation, decoding, loading, crawling, visualization, and management (images, videos, audio + text).</li>\n<li>Advance multimodal capabilities including spatial-temporal compression, cross-modal alignment, world modeling, reasoning, emergent abilities, audio/image/video understanding &amp; generation, real-time video processing, and noisy data handling.</li>\n<li>Drive data quality and studies: curation (human/synthetic), filtering techniques, analysis, and scalable pipelines to support trillion-parameter models.</li>\n<li>Create evaluation frameworks, internal benchmarks, reward models, and metrics that capture real-world usage, failure modes, interactive dynamics, and human-AI synergy.</li>\n<li>Innovate on algorithms, modeling approaches, hardware/software/algorithm co-design, and scaling paradigms for state-of-the-art performance.</li>\n<li>Build research tooling, user-friendly interfaces, prototypes/demos, full-stack applications, and enable rapid iteration based on feedback.</li>\n<li>Work across the stack (pre-training → SFT/RL/post-training) to enable reasoning, tool calling, agentic behaviors, orchestration, and seamless real-time interactions.</li>\n</ul>\n<p><strong>Basic Qualifications</strong></p>\n<ul>\n<li>Hands-on experience with multimodal pre-training, post-training, or fine-tuning (vision, audio, video, or cross-modal).</li>\n<li>Expert-level proficiency in Python (core language), with strong experience in at least one of: JAX / PyTorch / XLA.</li>\n<li>Proven track record building or optimizing large-scale 
distributed ML systems (training/inference optimization, GPU utilization, multi-GPU/TPU setups, hardware co-design).</li>\n<li>Deep experience designing and running data pipelines at scale: curation, filtering, generation, quality studies, especially for noisy/real-world multimodal data.</li>\n<li>Strong fundamentals in evaluation design, benchmarks, reward modeling, or RL techniques (particularly for interactive/agentic behaviors).</li>\n<li>Proactive self-starter who thrives in high-intensity environments and is passionate about pushing multimodal AI frontiers.</li>\n<li>Willingness to own end-to-end initiatives and do whatever it takes to deliver breakthrough user experiences.</li>\n</ul>\n<p><strong>Preferred Skills and Experience</strong></p>\n<ul>\n<li>Experience leading major improvements in model capabilities through better data, modeling, algorithms, or scaling.</li>\n<li>Familiarity with state-of-the-art in multimodal LLMs, scaling laws, tokenizers, compression techniques, reasoning, or agentic systems.</li>\n<li>Proficiency in Rust and/or C++ for performance-critical components.</li>\n<li>Hands-on work with large-scale orchestration tools such as Spark, Ray, or Kubernetes.</li>\n<li>Background building full-stack tooling: performant interfaces, real-time research demos/apps, or end-to-end product ownership.</li>\n<li>Passion for end-to-end user experience in interactive, real-time multimodal AI systems.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_540ce49c-271","directApply":true,"hiringOrganization":{"@type":"Organization","name":"xAI","sameAs":"https://www.xai.com","logo":"https://logos.yubhub.co/xai.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/xai/jobs/5111374007","x-work-arrangement":"onsite","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"$180,000 - $440,000 USD","x-skills-required":["Multimodal pre-training","Post-training","Fine-tuning","Python","JAX","PyTorch","XLA","Large-scale distributed ML systems","Data pipelines","Evaluation design","Benchmarks","Reward modeling","RL techniques"],"x-skills-preferred":["State-of-the-art in multimodal LLMs","Scaling laws","Tokenizers","Compression techniques","Reasoning","Agentic systems","Rust","C++","Spark","Ray","Kubernetes","Full-stack tooling"],"datePosted":"2026-04-18T15:23:05.119Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Palo Alto, CA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Multimodal pre-training, Post-training, Fine-tuning, Python, JAX, PyTorch, XLA, Large-scale distributed ML systems, Data pipelines, Evaluation design, Benchmarks, Reward modeling, RL techniques, State-of-the-art in multimodal LLMs, Scaling laws, Tokenizers, Compression techniques, Reasoning, Agentic systems, Rust, C++, Spark, Ray, Kubernetes, Full-stack tooling","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":180000,"maxValue":440000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_facf5d80-7bd"},"title":"Solutions Engineer, Delivery & Automation","description":"<p>We&#39;re looking for a Solutions Engineer who gets energized by solving gnarly technical problems and making customers wildly successful. 
As the technical quarterback for new customer onboardings, you&#39;ll translate their vision into working integrations, navigate the chaos of healthcare data standards, and ensure they extract real value from day one.</p>\n<p>Key responsibilities:</p>\n<p>Own the technical journey - Lead end-to-end onboarding for new customers,from authentication setup to data mart configuration</p>\n<p>Integrate customer systems with Zus (APIs, SFTP, HL7, FHIR,the whole interoperability stack)</p>\n<p>Translate messy business requirements into clean technical architectures</p>\n<p>Build and maintain automated workflows that make implementations faster and more reliable</p>\n<p>Drive customer success through technical excellence - Be the trusted technical advisor customers call when things get complicated</p>\n<p>Run technical deep dives and implementation reviews that actually move the needle</p>\n<p>Identify integration risks before they become blockers and solve them proactively</p>\n<p>Train customers on best practices so they become power users, not support tickets</p>\n<p>Innovate on process - Use AI tools (LLMs, automation platforms, scripting) to eliminate manual work and scale your impact</p>\n<p>Build templates, scripts, and tooling that make the 10th implementation faster than the 1st</p>\n<p>Document learnings and create repeatable playbooks through automation that make the whole team better</p>\n<p>Collaborate with R&amp;D - Partner closely with Product and Engineering to surface integration challenges and opportunities for platform improvement</p>\n<p>Translate real-world customer integration patterns into product feedback and roadmap insights</p>\n<p>Collaborate with R&amp;D teams on emerging capabilities around AI, data pipelines, and developer tooling</p>\n<p>Act as the voice of the customer when identifying opportunities to improve developer experience and reduce integration friction</p>\n<p>You&#39;ll enjoy solving messy integration challenges, building automation that eliminates manual work, and partnering closely with Product and Engineering to continuously improve the platform.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_facf5d80-7bd","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Zus","sameAs":"https://zus.com/","logo":"https://logos.yubhub.co/zus.com.png"},"x-apply-url":"https://jobs.lever.co/zushealth/fbe45c72-4269-4c7f-b88c-6df3349c2479","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$125,000-165,000 per year","x-skills-required":["healthcare data standards (FHIR, HL7, CCD)","major EMRs (Epic, Cerner, athenahealth)","API and data pipeline experience (ETL, REST APIs, JSON, CSV ingestion)","data platforms (Snowflake, SQL databases) including schema design and query optimization","Python scripting skills and SQL fluency","secure environments and compliance (HIPAA, SOC2)"],"x-skills-preferred":["AI tools (LLMs, automation platforms, scripting)","data pipelines","developer tooling"],"datePosted":"2026-04-17T13:12:29.884Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"United States"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Healthcare","skills":"healthcare data standards (FHIR, HL7, CCD), major EMRs (Epic, Cerner, athenahealth), API and data pipeline experience (ETL, REST 
APIs, JSON, CSV ingestion), data platforms (Snowflake, SQL databases) including schema design and query optimization, Python scripting skills and SQL fluency, secure environments and compliance (HIPAA, SOC2), AI tools (LLMs, automation platforms, scripting), data pipelines, developer tooling","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":125000,"maxValue":165000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_211bf97f-a24"},"title":"Software Engineer, Payment Operations","description":"<p>As a Software Engineer on the Payment Operations team, you will be responsible for the execution layer that ensures every dollar on Wingspan&#39;s platform is accounted for, reconciled, and moved accurately on time.</p>\n<p>This role reports to the Head of Payments &amp; Compliance Operations and is based in Warsaw, Poland, with a remote work model.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Design, develop, and ship internal systems and automation that eliminate entire categories of operational toil, owning every problem end-to-end from initial diagnosis to permanent fix</li>\n<li>Build and maintain reconciliation infrastructure that keeps Wingspan&#39;s ledger, bank records, and platform transaction data in continuous alignment, automatically and at scale</li>\n<li>Develop monitoring and alerting systems that surface funding health issues and payment anomalies in real time, ensuring problems are caught and resolved before they ever reach a customer</li>\n<li>Collaborate with Engineering, Product, and Finance to identify recurring operational patterns and translate them into platform-level improvements that raise the reliability ceiling for the entire system</li>\n<li>Contribute to the growth of our engineering culture by sharing knowledge, participating in code reviews, and proactively identifying opportunities to improve how the team builds, observes, and automates</li>\n</ul>\n<p>Qualifications &amp; Requirements:</p>\n<ul>\n<li>3+ years of experience in a software engineering or engineering-adjacent role with exposure to payment systems, backend services, or data pipelines</li>\n<li>Strong SQL skills, comfortable writing standalone scripts and using AI tools such as Claude Code, Open AI, etc</li>\n<li>Familiarity with RESTful APIs and backend services, with Node.js and TypeScript experience as a plus</li>\n<li>High autonomy and high accountability, you thrive in fast-moving environments and default to action over escalation</li>\n</ul>\n<p>Compensation:</p>\n<p>We tailor compensation packages based on expertise, years of experience, certifications, and other relevant factors. 
Our comprehensive benefits and rewards are designed to help you thrive both professionally and personally.</p>\n<p>Perks &amp; Benefits:</p>\n<ul>\n<li>Unlimited vacation</li>\n<li>Competitive stock option package</li>\n<li>$300 one time WFH stipend</li>\n<li>Top of the line 14&quot; MacBook Pro</li>\n<li>Travel stipend for team off sites</li>\n<li>Medicover Sports Membership</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_211bf97f-a24","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Wingspan","sameAs":"https://www.wingspan.com/","logo":"https://logos.yubhub.co/wingspan.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/wingspan/jobs/7701241003","x-work-arrangement":"remote","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["SQL","Node.js","TypeScript","RESTful APIs","Backend services","Data pipelines"],"x-skills-preferred":[],"datePosted":"2026-04-17T13:09:50.786Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Warsaw, Poland"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"SQL, Node.js, TypeScript, RESTful APIs, Backend services, Data pipelines"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_6a67b196-237"},"title":"Ground Control Station (GCS) Software Engineer","description":"<p>We&#39;re looking for a Software Engineer to join our team developing next-generation Ground Control Station (GCS) applications for unmanned aerial systems (UAS). This role is ideal for engineers with a strong foundation in real-time, performance-sensitive software who also bring experience with modern web technologies.</p>\n<p>You&#39;ll work on mission-critical software that bridges responsive user interfaces with robust backend systems, enabling human operators to plan, command, and monitor autonomous assets with precision and reliability.</p>\n<p><strong>Responsibilities:</strong></p>\n<ul>\n<li>Design, develop, and optimize ground control software that enables low-latency communication with UAVs and other autonomous platforms.</li>\n</ul>\n<ul>\n<li>Build high-performance client and server applications that support telemetry processing, mission planning, and real-time control.</li>\n</ul>\n<ul>\n<li>Implement responsive user interfaces with React and TypeScript for operator workflows and visualization of spatial data, sensor feeds, and mission state.</li>\n</ul>\n<ul>\n<li>Collaborate closely with teams across autonomy, embedded systems, backend, and UX to deliver integrated, field-ready solutions.</li>\n</ul>\n<ul>\n<li>Contribute to architectural decisions and system designs that ensure responsiveness, scalability, and fault-tolerance.</li>\n</ul>\n<ul>\n<li>Lead development efforts on key features or subsystems, from early design through deployment and iteration.</li>\n</ul>\n<ul>\n<li>Write high-quality, well-tested code and participate in peer design/code reviews.</li>\n</ul>\n<p><strong>Requirements:</strong></p>\n<ul>\n<li>Senior: Bachelor&#39;s degree with 5+ years of relevant experience, or Masters with 4+ years, or PhD with 2+ years.</li>\n</ul>\n<ul>\n<li>Staff: Bachelor&#39;s degree with 7+ years of relevant experience, or Masters with 6+ years, or PhD with 4+ years.</li>\n</ul>\n<ul>\n<li>Senior 
Staff: Bachelor&#39;s degree with 10+ years of relevant experience, or Masters with 9+ years, or PhD with 7+ years.</li>\n</ul>\n<ul>\n<li>Demonstrated experience building real-time or performance-sensitive applications,preferably for UAVs, robotics, autonomous vehicles, or simulation environments.</li>\n</ul>\n<ul>\n<li>Proficiency in a strongly typed programming language (e.g. C#, TypeScript, Java, C++) with exposure to lower-level systems or protocol integration.</li>\n</ul>\n<ul>\n<li>Experience with web technologies, especially React, TypeScript/JavaScript, and Node.js.</li>\n</ul>\n<ul>\n<li>Strong software engineering fundamentals including version control, testing, debugging, and performance profiling.</li>\n</ul>\n<ul>\n<li>Proven ability to deliver high-quality software as part of a collaborative engineering team.</li>\n</ul>\n<p><strong>Preferences:</strong></p>\n<ul>\n<li>Experience with GCS software, mission planning tools, or real-time visualization platforms.</li>\n</ul>\n<ul>\n<li>Familiarity with API-driven systems using REST or gRPC, and communication protocols like WebSocket or custom telemetry formats.</li>\n</ul>\n<ul>\n<li>Knowledge of standards such as STANAG 4586, Cursor on Target (CoT), or MAVLink.</li>\n</ul>\n<ul>\n<li>Familiarity with containerized environments (e.g., Docker, Kubernetes) and CI/CD practices.</li>\n</ul>\n<ul>\n<li>Exposure to distributed systems and cloud integration for telemetry data pipelines.</li>\n</ul>\n<ul>\n<li>Understanding of security best practices in control systems and networked applications.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_6a67b196-237","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Shield AI","sameAs":"https://www.shield.ai","logo":"https://logos.yubhub.co/shield.ai.png"},"x-apply-url":"https://jobs.lever.co/shieldai/cf148e69-2ca0-4bfd-a4cc-af214bfcce8a","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$130,000 - $250,000 a year","x-skills-required":["real-time software development","performance-sensitive software","modern web technologies","strongly typed programming languages","lower-level systems or protocol integration","web technologies","React","TypeScript/JavaScript","Node.js","software engineering fundamentals","version control","testing","debugging","performance profiling"],"x-skills-preferred":["GCS software","mission planning tools","real-time visualization platforms","API-driven systems","REST or gRPC","WebSocket or custom telemetry formats","STANAG 4586","Cursor on Target (CoT)","MAVLink","containerized environments","Docker","Kubernetes","CI/CD practices","distributed systems","cloud integration","telemetry data pipelines","security best practices"],"datePosted":"2026-04-17T13:04:00.400Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Dallas"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"real-time software development, performance-sensitive software, modern web technologies, strongly typed programming languages, lower-level systems or protocol integration, web technologies, React, TypeScript/JavaScript, Node.js, software engineering fundamentals, version control, testing, debugging, performance profiling, GCS software, mission planning tools, real-time visualization platforms, API-driven systems, REST 
or gRPC, WebSocket or custom telemetry formats, STANAG 4586, Cursor on Target (CoT), MAVLink, containerized environments, Docker, Kubernetes, CI/CD practices, distributed systems, cloud integration, telemetry data pipelines, security best practices","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":130000,"maxValue":250000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_d5b743bb-d8f"},"title":"Product Manager, AI Platforms","description":"<p>The AI Platform Product Manager will drive the strategy and execution of Shield AI&#39;s next-generation autonomy intelligence stack. This PM owns the product vision and roadmap for the Hivemind AI Platform, ensuring we can manufacture, govern, and field advanced world models, robotics foundation models, and vision-language-action systems safely and at scale.</p>\n<p>This role sits at the intersection of AI/ML, autonomy, model lifecycle, infrastructure, and product strategy. The PM partners closely with engineering, AI research, Hivemind Solutions, and field teams to deliver the tooling that enables sovereign autonomy, AI Factories at the edge, and continuous learning,capabilities that are central to Shield AI&#39;s strategic direction.</p>\n<p>This is a high-impact role for an experienced product leader excited to define how foundation models are trained, validated, governed, and deployed across thousands of autonomous systems in highly contested environments.</p>\n<p><strong>Responsibilities:</strong></p>\n<ul>\n<li>AI Model Development &amp; Training Platform</li>\n</ul>\n<p>Own the roadmap for foundation model training workflows, including dataset ingestion, curation, labeling, synthetic data generation, domain model training, and distillation pipelines. Define requirements for world models, robotics models, and VLA-based training, evaluation, and specialization. Lead the evolution of MLOps capabilities in Forge, including data lineage, experiment tracking, model versioning, and scalable evaluation suites.</p>\n<ul>\n<li>Data, Simulation &amp; Synthetic Data Factory</li>\n</ul>\n<p>Define product requirements for synthetic data generation, simulation-integrated data flywheels, and automated scenario generation. Partner with Digital Twin, Simulation, and autonomy teams to convert natural-language mission inputs into data needs, training procedures, and model variants.</p>\n<ul>\n<li>Safe Deployment &amp; Model Governance</li>\n</ul>\n<p>Lead the development of model governance and auditability tooling, including model cards, dataset rights, lineage tracking, safety gates, and compliance evidence. Build guardrails and workflows to safely deploy models onto edge hardware in disconnected, GPS- or comms-denied environments. Partner with Safety, Certification, Cyber, and Engineering teams to ensure traceability and evaluation pipelines meet operational and accreditation requirements.</p>\n<ul>\n<li>Edge Deployment &amp; AI Factory Integration</li>\n</ul>\n<p>Partner with Pilot, EdgeOS, and hardware teams to integrate foundation-model-based perception and reasoning into autonomy behaviors. Define requirements for distillation, quantization, and inference tooling as part of the “three-computer” development and deployment model. 
Ensure closed-loop workflows between cloud model training and edge-native execution.</p>\n<ul>\n<li>Cross-Functional Leadership</li>\n</ul>\n<p>Collaborate with Engineering, Research, Product, Customer Engagement, and Solutions teams to ensure model outputs meet mission and platform constraints. Translate advanced AI capabilities into intuitive workflows that platform OEMs and partner nations can use to build sovereign AI factories. Sequence foundational capabilities that unblock autonomy, simulation, and customer-facing product teams.</p>\n<ul>\n<li>User &amp; Customer Impact</li>\n</ul>\n<p>Develop deep empathy for ML engineers, autonomy developers, and Solutions engineers who rely on the platform. Capture operational data gaps, mission-driven model needs, and domain-specific specialization requirements. Lead demos and onboarding for model-development capabilities across internal and external teams.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_d5b743bb-d8f","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Shield AI","sameAs":"https://www.shield.ai","logo":"https://logos.yubhub.co/shield.ai.png"},"x-apply-url":"https://jobs.lever.co/shieldai/7886f437-2d5e-4616-8dcb-3dc488f1f585","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$190,000 - $290,000 a year","x-skills-required":["AI Model Development & Training Platform","Data, Simulation & Synthetic Data Factory","Safe Deployment & Model Governance","Edge Deployment & AI Factory Integration","Cross-Functional Leadership","User & Customer Impact","Strong engineering background","Deep understanding of foundation models, robotics models, multimodal models, MLOps, and training infrastructure","Experience managing complex products spanning data pipelines, cloud training clusters, model governance, and edge deployments","Proven success partnering with research teams to transition ML innovations into stable, production-grade workflows"],"x-skills-preferred":["Experience working on autonomy, robotics, embedded AI, or mission-critical systems","Hands-on familiarity with GPU infrastructure, distributed training, or data lakehouse architectures","Experience supporting defense, dual-use, or safety-critical AI systems","Background designing or operating AI Factory–style pipelines (data → training → evaluation → distillation → edge deployment)","Advanced degree in engineering, ML/AI, robotics, or a related field"],"datePosted":"2026-04-17T13:02:54.419Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Diego"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"AI Model Development & Training Platform, Data, Simulation & Synthetic Data Factory, Safe Deployment & Model Governance, Edge Deployment & AI Factory Integration, Cross-Functional Leadership, User & Customer Impact, Strong engineering background, Deep understanding of foundation models, robotics models, multimodal models, MLOps, and training infrastructure, Experience managing complex products spanning data pipelines, cloud training clusters, model governance, and edge deployments, Proven success partnering with research teams to transition ML innovations into stable, production-grade workflows, Experience working on autonomy, robotics, embedded AI, or mission-critical systems, Hands-on familiarity with GPU 
infrastructure, distributed training, or data lakehouse architectures, Experience supporting defense, dual-use, or safety-critical AI systems, Background designing or operating AI Factory–style pipelines (data → training → evaluation → distillation → edge deployment), Advanced degree in engineering, ML/AI, robotics, or a related field","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":190000,"maxValue":290000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_2041c850-8e6"},"title":"Lead FP&A Analyst","description":"<p>Hivemind Finance seeks a proactive, detail-oriented Lead FP&amp;A Analyst to join our dynamic Finance team. This role will be integral in forecasting orders, revenue, and cash flow, driving executive-level reporting, and providing analytical support for business unit budget tracking.</p>\n<p>The ideal candidate will bring strong experience in the U.S. defense sector, including familiarity with government contracting dynamics, while also operating effectively in a fast-paced, software-driven environment.</p>\n<p>Key responsibilities include:\nForecasting company orders, revenue, and cash flow with accuracy and insight, incorporating government contract structures, funding profiles, and award timing.\nPreparing and delivering executive-level reports, clearly articulating financial performance, program health, and strategic implications.\nSupporting preparation and review of executive and board-level materials, including program-level performance reporting.\nMonitoring and tracking budget performance at the Business Unit and program/contract level, highlighting key variances and actionable insights.\nPartnering closely with program finance, program managers, business development, and operational leaders to provide forward-looking insights on contract performance and pipeline conversion.\nSupporting financial planning processes tied to government contracts (e.g., cost-plus, fixed-price, T&amp;M) and commercial software revenue models (e.g., SaaS, licensing, and usage-based pricing).\nLeveraging FP&amp;A tools (e.g., Anaplan or similar platforms) to enhance forecasting, reporting, and scenario modeling.</p>\n<p>Required qualifications include:\n5+ years of FP&amp;A experience within the U.S. 
defense industry, aerospace &amp; defense, or government contracting environment.\nStrong understanding of government contract structures, including cost-plus, fixed-price, and time &amp; materials.\nFamiliarity with FAR (Federal Acquisition Regulations) and financial implications of compliance in forecasting and reporting.\nProven expertise in top-down and bottom-up forecasting, including enterprise budgeting, cost pool planning, and management of indirect rates and wrap rates (e.g., G&amp;A, overhead, fringe) within a program-based environment.\nExperience supporting program finance, contract performance analysis, and backlog/pipeline forecasting.\nKnowledge of ASC 606 revenue recognition principles and their application in forecasting and financial analysis.\nProficient in delivering executive management-level reporting and analysis.</p>\n<p>Preferred qualifications include:\nExperience operating in organizations at the intersection of defense and commercial technology/software.\nExposure to SaaS or software business models, including knowledge of revenue recognition in software business models (ASC 606 application in SaaS, licensing, and usage-based revenue).\nExperience with financial systems automation, data pipelines, or advanced FP&amp;A tooling.\nWorking knowledge or hands-on experience with automation technologies, machine learning, or agentic AI tools to enhance financial analysis, forecasting, reporting, and decision-making.\nCustomer-facing or operational experience (e.g., restaurants, retail, etc.).\nDemonstrated curiosity and openness to innovative, AI-enabled approaches that improve financial analysis, forecasting, reporting, and decision-making.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_2041c850-8e6","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Shield AI","sameAs":"https://www.shield.ai","logo":"https://logos.yubhub.co/shield.ai.png"},"x-apply-url":"https://jobs.lever.co/shieldai/6224def5-5489-459e-a4bc-4a795d662762","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$110,000 - $160,000 a year","x-skills-required":["FP&A experience","Government contract structures","FAR (Federal Acquisition Regulations)","Top-down and bottom-up forecasting","Enterprise budgeting","Cost pool planning","Indirect rates and wrap rates","ASC 606 revenue recognition principles"],"x-skills-preferred":["SaaS or software business models","Financial systems automation","Data pipelines","Advanced FP&A tooling","Automation technologies","Machine learning","Agentic AI tools"],"datePosted":"2026-04-17T13:01:05.689Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Diego, California / Washington, DC / San Francisco, California"}},"employmentType":"FULL_TIME","occupationalCategory":"Finance","industry":"Defense","skills":"FP&A experience, Government contract structures, FAR (Federal Acquisition Regulations), Top-down and bottom-up forecasting, Enterprise budgeting, Cost pool planning, Indirect rates and wrap rates, ASC 606 revenue recognition principles, SaaS or software business models, Financial systems automation, Data pipelines, Advanced FP&A tooling, Automation technologies, Machine learning, Agentic AI 
tools","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":110000,"maxValue":160000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_2324ce80-532"},"title":"Data Scientist - Network Value","description":"<p>We believe that the way people interact with their finances will drastically improve in the next few years. We&#39;re dedicated to empowering this transformation by building the tools and experiences that thousands of developers use to create their own products.</p>\n<p>Plaid powers the tools millions of people rely on to live a healthier financial life. We work with thousands of companies like Venmo, SoFi, several of the Fortune 500, and many of the largest banks to make it easy for people to connect their financial accounts to the apps and services they want to use.</p>\n<p>The Network Value Data Science team is helping Plaid build an industry leading fintech consumer network by increasing access to, authorization for, and usability of Plaid&#39;s User&#39;s financial footprints. We embed within product teams to support OKRs and help execute on product roadmaps. We translate ambiguous product questions into tractable analysis, serve as analytical thought partners throughout the org, identify opportunities to build better products, and champion a data-first decision making approach everywhere we go.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Perform ad-hoc and strategic analyses to uncover opportunities for improved business outcomes and translate complex questions into actionable analytics projects.</li>\n<li>Design and maintain scalable data models and dashboards that increase visibility into core systems and drive operational excellence.</li>\n<li>Build and iterate on machine learning prototypes to power insight-driven products and unlock new sources of customer and business value.</li>\n<li>Define and track OKRs that quantify progress toward key business goals, ensuring alignment and accountability across teams.</li>\n<li>Design and analyze experiments to guide product decisions and optimize feature launches.</li>\n<li>Champion a data-first culture by promoting analytical rigor and evidence-based decision-making across the organization.</li>\n</ul>\n<p><strong>Qualifications</strong></p>\n<ul>\n<li>2+ years of experience as a Data Scientist or in a related analytics or data-focused role</li>\n<li>Strong track record of turning complex data into strategic insights and measurable business impact</li>\n<li>Proven ability to use experimentation, advanced analytics, and data storytelling to uncover opportunities that drive key product and business outcomes</li>\n<li>Strong technical foundation in SQL and Python for large-scale analysis, data modeling, and ML prototyping</li>\n<li>Experience developing and maintaining data pipelines and metrics frameworks using tools such as Airflow and dbt</li>\n<li>Background working with complex backend systems, ensuring data integrity, scalability, and operational reliability across platforms</li>\n<li>Skilled at partnering cross-functionally with product, engineering, and business teams to influence prioritization and strategy through clear, data-driven communication</li>\n</ul>\n<p><strong>Additional Information</strong></p>\n<p>Our mission at Plaid is to unlock financial freedom for everyone. 
To support that mission, we seek to build a diverse team of driven individuals who care deeply about making the financial ecosystem more equitable. We recognize that strong qualifications can come from both prior work experiences and lived experiences. We encourage you to apply to a role even if your experience doesn&#39;t fully match the job description.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_2324ce80-532","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Plaid","sameAs":"https://plaid.com/","logo":"https://logos.yubhub.co/plaid.com.png"},"x-apply-url":"https://jobs.lever.co/plaid/18503c02-17a0-4c47-98c8-155b0b6ccc2a","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":"$176,400-$243,600 per year","x-skills-required":["SQL","Python","Machine Learning","Data Modeling","Data Pipelines","Metrics Frameworks","Airflow","dbt"],"x-skills-preferred":[],"datePosted":"2026-04-17T12:52:02.474Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Finance","skills":"SQL, Python, Machine Learning, Data Modeling, Data Pipelines, Metrics Frameworks, Airflow, dbt","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":176400,"maxValue":243600,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_586b9fef-509"},"title":"Senior Software Engineer - Network Enablement (Applied ML)","description":"<p>We believe that the way people interact with their finances will drastically improve in the next few years. We&#39;re dedicated to empowering this transformation by building the tools and experiences that thousands of developers use to create their own products.</p>\n<p>On this team, you will build and operate the ML infrastructure and product services that enable trust and intelligence across Plaid&#39;s network. 
You&#39;ll own feature engineering, offline training and batch scoring, online feature serving, and real-time inference so model outputs directly power partner-facing fraud &amp; trust products and bank intelligence features.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Embed model inference into Network Enablement product flows and decision logic (APIs, feature flags, backend flows).</li>\n<li>Define and instrument product + ML success metrics (fraud reduction, retention lift, false positives, downstream impact).</li>\n<li>Design and run experiments and rollout plans (backtesting, shadow scoring, A/B tests, feature-flagged releases) to validate product hypotheses.</li>\n<li>Build and operate offline training pipelines and production batch scoring for bank intelligence products.</li>\n<li>Ship and maintain online feature serving and low-latency model inference endpoints for real-time partner/bank scoring.</li>\n<li>Implement model CI/CD, model/version registry, and safe rollout/rollback strategies.</li>\n<li>Monitor model/data health: drift/regression detection, model-quality dashboards, alerts, and SLOs targeted to partner product needs.</li>\n<li>Ensure offline and online parity, data lineage, and automated validation / data contracts to reduce regressions.</li>\n<li>Optimize inference performance and cost for real-time scoring (batching, caching, runtime selection).</li>\n<li>Ensure fairness, explainability and PII-aware handling for partner-facing ML features; maintain auditability for compliance.</li>\n<li>Partner with platform and cross-functional teams to scale the ML/data foundation (graph features, sequence embeddings, unified pipelines).</li>\n<li>Mentor engineers and document team standards for ML productization and operations.</li>\n</ul>\n<p><strong>Qualifications</strong></p>\n<ul>\n<li>Must-haves:</li>\n<li>Strong software engineering skills including systems design, APIs, and building reliable backend services (Go or Python preferred).</li>\n<li>Production experience with batch and streaming data pipelines and orchestration tools such as Airflow or Spark.</li>\n<li>Experience building or operating real-time scoring and online feature-serving systems, including feature stores and low-latency model inference.</li>\n<li>Experience integrating model outputs into product flows (APIs, feature flags) and measuring impact through experiments and product metrics.</li>\n<li>Experience with model lifecycle and operations: model registries, CI/CD for models, reproducible training, offline &amp; online parity, monitoring and incident response.</li>\n<li>Nice to have:</li>\n<li>Experience in fraud, risk, or marketing intelligence domains.</li>\n<li>Experience with feature-store products (Tecton / Chronon / Feast / internal) and unified pipelines.</li>\n<li>Experience with graph frameworks, graph feature engineering, or sequence embeddings.</li>\n<li>Experience optimizing inference at scale (Triton/ONNX/quantization, batching, caching).</li>\n</ul>\n<p><strong>Additional Information</strong></p>\n<p>Our mission at Plaid is to unlock financial freedom for everyone. 
To support that mission, we seek to build a diverse team of driven individuals who care deeply about making the financial ecosystem more equitable.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_586b9fef-509","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Plaid","sameAs":"https://plaid.com/","logo":"https://logos.yubhub.co/plaid.com.png"},"x-apply-url":"https://jobs.lever.co/plaid/43b1374d-5c5e-4b63-b710-a95e3cb76bbe","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$190,800-$286,800 per year","x-skills-required":["software engineering","systems design","APIs","backend services","Go","Python","batch and streaming data pipelines","orchestration tools","Airflow","Spark","real-time scoring","online feature-serving systems","feature stores","low-latency model inference","model outputs","product flows","experiments","product metrics","model lifecycle","operations","model registries","CI/CD","reproducible training","offline & online parity","monitoring","incident response"],"x-skills-preferred":["fraud","risk","marketing intelligence","feature-store products","unified pipelines","graph frameworks","graph feature engineering","sequence embeddings","inference at scale","Triton","ONNX","quantization","batching","caching"],"datePosted":"2026-04-17T12:51:26.228Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"software engineering, systems design, APIs, backend services, Go, Python, batch and streaming data pipelines, orchestration tools, Airflow, Spark, real-time scoring, online feature-serving systems, feature stores, low-latency model inference, model outputs, product flows, experiments, product metrics, model lifecycle, operations, model registries, CI/CD, reproducible training, offline & online parity, monitoring, incident response, fraud, risk, marketing intelligence, feature-store products, unified pipelines, graph frameworks, graph feature engineering, sequence embeddings, inference at scale, Triton, ONNX, quantization, batching, caching","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":190800,"maxValue":286800,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_21b84b4c-3f3"},"title":"Senior Robotics Engineer","description":"<p>About Mistral</p>\n<p>At Mistral AI, we believe in the power of AI to simplify tasks, save time, and enhance learning and creativity. 
Our technology is designed to integrate seamlessly into daily working life.</p>\n<p>We are a global company with teams distributed between France, the USA, the UK, Germany, and Singapore.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Deploy state-of-the-art AI models for mobile manipulation, built in-house, on real robots</li>\n</ul>\n<ul>\n<li>Architect and optimise data pipelines for cutting-edge robotics model training on massive datasets</li>\n</ul>\n<ul>\n<li>Set up and maintain fleets of robots of various types</li>\n</ul>\n<ul>\n<li>Conduct experiments and validate robotic systems in real-world environments</li>\n</ul>\n<ul>\n<li>Interact and learn from all Mistral&#39;s engineers and researchers</li>\n</ul>\n<p><strong>Requirements</strong></p>\n<ul>\n<li>You have hands-on experience developing software for real-world robotics</li>\n</ul>\n<ul>\n<li>Mastery of Python and proven experience as a developer</li>\n</ul>\n<ul>\n<li>You have high engineering competence. This means being able to design complex software and make it usable in production</li>\n</ul>\n<ul>\n<li>You are a self-starter, autonomous and a team player</li>\n</ul>\n<ul>\n<li>You have a proactive approach with a &#39;get things done&#39; spirit</li>\n</ul>\n<ul>\n<li>You are flexible and adaptable to collaborate effectively with other engineers and researchers</li>\n</ul>\n<p><strong>Nice to Have</strong></p>\n<ul>\n<li>Experience in building and deploying AI systems on real physical robots</li>\n</ul>\n<ul>\n<li>Experience with hardware development, mechanical and CAD design, and 3D printing</li>\n</ul>\n<ul>\n<li>Experience with robotics simulators</li>\n</ul>\n<ul>\n<li>Experience with maintaining large, high-quality code bases</li>\n</ul>\n<ul>\n<li>Proficiency in coding for robotic control, such as ROS</li>\n</ul>\n<ul>\n<li>Hands-on experience with sensor integration and actuator control</li>\n</ul>\n<ul>\n<li>Knowledge of control theory, machine learning, or computer vision as applied to robotics</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<ul>\n<li>Competitive cash salary and equity</li>\n</ul>\n<ul>\n<li>Food: Daily lunch vouchers</li>\n</ul>\n<ul>\n<li>Sport: Monthly contribution to a Gympass subscription</li>\n</ul>\n<ul>\n<li>Transportation: Monthly contribution to a mobility pass</li>\n</ul>\n<ul>\n<li>Health: Full health insurance for you and your family</li>\n</ul>\n<ul>\n<li>Parental: Generous parental leave policy</li>\n</ul>\n<ul>\n<li>Visa sponsorship</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_21b84b4c-3f3","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Mistral AI","sameAs":"https://mistral.ai","logo":"https://logos.yubhub.co/mistral.ai.png"},"x-apply-url":"https://jobs.lever.co/mistral/ef744f52-3ceb-42f1-84f6-1c8bde220eb1","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Python","AI","Robotics","Software Development","Data Pipelines","ROS"],"x-skills-preferred":["Hardware Development","Mechanical Design","CAD","3D Printing","Robotics Simulators","Sensor Integration","Actuator Control","Control Theory","Machine Learning","Computer 
Vision"],"datePosted":"2026-04-17T12:48:00.221Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Paris"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, AI, Robotics, Software Development, Data Pipelines, ROS, Hardware Development, Mechanical Design, CAD, 3D Printing, Robotics Simulators, Sensor Integration, Actuator Control, Control Theory, Machine Learning, Computer Vision"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_d2256e99-10a"},"title":"Research Engineer, Machine Learning","description":"<p>About Mistral AI</p>\n<p>Mistral AI is a pioneering company shaping the future of AI. They believe in the power of AI to simplify tasks, save time, and enhance learning and creativity.</p>\n<p>Role Summary</p>\n<p>The Research Engineering team at Mistral AI spans Platform (shared infra &amp; clean code) and Embedded (inside research squads). Engineers can move along the research↔production spectrum as needs or interests evolve. As a Research Engineer – ML track, you’ll build and optimise the large-scale learning systems that power their open-weight models.</p>\n<p>Responsibilities</p>\n<ul>\n<li>Accelerate researchers by taking on the heavy parts of large-scale ML pipelines and building robust tools.</li>\n<li>Interface cutting-edge research with production: integrate checkpoints, streamline evaluation, and expose APIs.</li>\n<li>Conduct experiments on the latest deep-learning techniques (sparsified 70 B + runs, distributed training on thousands of GPUs).</li>\n<li>Design, implement and benchmark ML algorithms; write clear, efficient code in Python.</li>\n<li>Deliver prototypes that become production-grade components for Le Chat and their enterprise API.</li>\n</ul>\n<p>Requirements</p>\n<ul>\n<li>Master’s or PhD in Computer Science (or equivalent proven track record).</li>\n<li>4 + years working on large-scale ML codebases.</li>\n<li>Hands-on with PyTorch, JAX or TensorFlow; comfortable with distributed training (DeepSpeed / FSDP / SLURM / K8s).</li>\n<li>Experience in deep learning, NLP or LLMs; bonus for CUDA or data-pipeline chops.</li>\n<li>Strong software-design instincts: testing, code review, CI/CD.</li>\n<li>Self-starter, low-ego, collaborative.</li>\n</ul>\n<p>What we offer</p>\n<ul>\n<li>Competitive salary and equity.</li>\n<li>Healthcare: Medical/Dental/Vision covered for you and your family.</li>\n<li>Pension: 401K (6% matching)</li>\n<li>PTO: 18 days</li>\n<li>Transportation: Reimburse office parking charges, or $120/month for public transport</li>\n<li>Sport: $120/month reimbursement for gym membership</li>\n<li>Meal stipend: $400 monthly allowance for meals (solution might evolve as they grow bigger)</li>\n<li>Visa sponsorship</li>\n<li>Coaching: they offer BetterUp coaching on a voluntary basis</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_d2256e99-10a","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Mistral AI","sameAs":"https://mistral.ai/careers","logo":"https://logos.yubhub.co/mistral.ai.png"},"x-apply-url":"https://jobs.lever.co/mistral/bada0014-0f32-4370-b55f-81c5595c7339","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["PyTorch","JAX","TensorFlow","Distributed 
training","Deep learning","NLP","LLMs","CUDA","Data pipeline"],"x-skills-preferred":[],"datePosted":"2026-04-17T12:47:41.659Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Palo Alto"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"PyTorch, JAX, TensorFlow, Distributed training, Deep learning, NLP, LLMs, CUDA, Data pipeline"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_c354cb79-c49"},"title":"Staff Software Engineer - Fraud","description":"<p>Every new business that applies to Mercury is like a new star appearing in the night sky. On its own, it’s a single point of light. But when we look closer, patterns emerge,data trails from partners, filings, founders, and financial histories,all connecting to form a larger constellation.</p>\n<p>That’s what our Risk product engineering teams do at Mercury. We guide thousands of business applications through our systems,each one unique, each one needing a smooth and trustworthy landing. The challenge: keep everything moving fast without compromising safety. Every day, our work helps founders open their first account, launch their next idea, and accelerate their growth like rocketships.</p>\n<p>Our mission is to build the intelligent, automated systems and operational tools that make this possible,where machine learning, AI, and human judgment work seamlessly together to power the next generation of business banking*. We use intelligence to detect risks earlier, make real-time decisions with confidence, and enable instant, delightful account approvals that keep pace with the builders we serve.</p>\n<p>When we do it right, the result is quiet brilliance: onboarding that feels effortless, even though it’s powered by galaxies of data, precision, and care.</p>\n<p>We’re looking for a Staff Software Engineer to contribute with building the systems and tools that make it all happen,from application approvals to ongoing and enhanced due diligence,ensuring every business that joins Mercury is both safe and their experience is delightful.</p>\n<p>As part of this role, you will:</p>\n<ul>\n<li>Lead the architecture, implementation, and long-term roadmap for core systems which support multiple fraud prevention use cases.</li>\n<li>Own the end-to-end delivery of large cross-function projects, translating ambiguous high impact problems into strategy and execution, make pragmatic tradeoffs, and drive teams to measurable outcomes.</li>\n<li>Design, build, and operate highly available, low-latency, backend systems that enable real-time scoring and decisioning for fraud prevention.</li>\n<li>Partner with Data Science and ML teams to productionize models, build reliable ML data pipelines, and enable real-time model serving.</li>\n<li>Establish and elevate department level best practices, review designs, drive engineering quality, and act as a trusted advisor on architectural tradeoffs.</li>\n<li>Mentor and grow engineers, documenting learnings and sharing technical direction through writing, 1:1s, and team meetings</li>\n<li>Navigate and influence multiple stakeholders, help align teams, communicate tradeoffs to technical and non-technical partners, and independently resolve cross team issues.</li>\n</ul>\n<p>The ideal candidate for the role:</p>\n<ul>\n<li>Has 7-10+ years of software development experience, with a strong focus on the backend, with a knowledge of or excitement to learn 
Haskell.</li>\n<li>Has been an experienced technical lead making architectural decisions in the past and seen the impact of those decisions over time. You should be able to clearly articulate your technical opinions and lay out tradeoffs.</li>\n<li>Is passionately product-minded and has experience building and shipping new products balancing reliability and velocity.</li>\n<li>Is someone comfortable driving discussions in areas with ambiguous ownership, approaches them with empathy, and delights in getting outcomes. Our work touches many other teams and areas of the product; you’ll have a lot of autonomy and the expectation is you’ll use that to seek out ways to have an impact.</li>\n<li>Is ridiculously helpful, taking initiative to make your coworkers’ lives easier by investing time to mentor and proactively share your knowledge and learnings through writings, 1:1s, and team meetings.</li>\n<li>Experience in fintech, fraud systems, or the broader risk domain is a strong plus.</li>\n</ul>\n<p>If this role interests you, we invite you to explore our public demo at personal-demo.mercury.com .</p>\n<p>The total rewards package at Mercury includes base salary, equity (stock options), and benefits. Our salary and equity ranges are highly competitive within the SaaS and fintech industry and are updated regularly using the most reliable compensation survey data for our industry. New hire offers are made based on a candidate’s experience, expertise, geographic location, and internal pay equity relative to peers.</p>\n<p>Our target new hire base salary ranges for this role are the following:</p>\n<ul>\n<li>US employees (any location): $239,000 - $298,800</li>\n<li>Canadian employees (any location): CAD 225,900 - 282,400</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_c354cb79-c49","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Mercury","sameAs":"https://www.mercury.com/","logo":"https://logos.yubhub.co/mercury.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/mercury/jobs/5847987004","x-work-arrangement":"remote","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"$239,000 - $298,800 (US) or CAD 225,900 - 282,400 (Canada)","x-skills-required":["Haskell","Backend development","Fraud prevention","Machine learning","AI","Data science","ML data pipelines","Real-time model serving"],"x-skills-preferred":[],"datePosted":"2026-04-17T12:47:27.741Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA, New York, NY, Portland, OR, or Remote within Canada or United States"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Finance","skills":"Haskell, Backend development, Fraud prevention, Machine learning, AI, Data science, ML data pipelines, Real-time model serving","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":225900,"maxValue":298800,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_3b11932f-d81"},"title":"Senior Software Engineer - Banking Integration Platform","description":"<p>When the Space Shuttle approached the International Space Station, two vehicles built by different teams, in different countries, with fundamentally different engineering philosophies and 
systems, had to connect perfectly. The Rendezvous, Proximity Operations, and Docking (RPOD) subsystems were engineered to handle complex mismatches such as different power systems, communication protocols, and technical architectures. Get it wrong, and you have an expensive and potentially catastrophic problem in low Earth orbit.</p>\n<p>Mercury is building a bank and will be connecting our modern, product-focused engineering systems to enterprise core banking systems and payment networks built in a different era, with different assumptions and different interfaces. Our Banking Integration Platform as a Service team is like NASA’s RPOD team, building our integration subsystems that are technically correct and operationally trustworthy.</p>\n<p>This is some of the most consequential infrastructure work at Mercury. Every account opening, every monetary transaction, and every balance call will flow through the systems you build. Product teams across the company will depend on clean abstractions that hide the complexity underneath. You&#39;ll be one of the few engineers at Mercury who truly understands the full depth of our Bank Core* and all its internal and external integrations.</p>\n<p>In this role, you will:</p>\n<ul>\n<li>Build Mercury’s integration with an FFIEC-approved bank core and the connections to payment networks.</li>\n<li>Design internal APIs that give product teams simple, consistent interfaces to complex external systems.</li>\n<li>Handle the messy realities of enterprise integrations such as retries, failures, format mismatches, and downtime.</li>\n<li>Build data pipelines that keep Mercury&#39;s systems in sync with our bank core.</li>\n<li>Own monitoring, alerting, and recovery for our most critical external connections.</li>\n<li>Partner with many other teams at Mercury to define clean boundaries and reliable contracts.</li>\n<li>Help shape the technical architecture of Mercury Bank*.</li>\n</ul>\n<p>You should:</p>\n<ul>\n<li>Have direct experience with either a bank core that has achieved FFIEC-compliance (such as FIS) or that of a US-based Global Systemically Important Bank (G-SIB).</li>\n<li>Understand how core banking systems work: accounts, transactions, ledgers, and the data models underneath.</li>\n<li>Be a product-minded engineer who thinks about the developers consuming your APIs, not just the systems you’re connecting to.</li>\n<li>Thrive in environments where you&#39;re building something new rather than maintaining something established.</li>\n<li>Be comfortable with our tech stack (Haskell and TypeScript) or ready to learn.</li>\n<li>Have strong opinions about building reliable, maintainable systems.</li>\n</ul>\n<p>The total rewards package at Mercury includes base salary, equity, and benefits.</p>\n<p>Our salary and equity ranges are highly competitive within the SaaS and fintech industry and are updated regularly using the most reliable compensation survey data for our industry. 
New hire offers are made based on a candidate’s experience, expertise, geographic location, and internal pay equity relative to peers.</p>\n<p>Our target new hire base salary ranges for this role are the following:</p>\n<ul>\n<li>US employees (any location): $166,600 - $250,900</li>\n<li>Canadian employees (any location): CAD 157,400 - 237,100</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_3b11932f-d81","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Mercury","sameAs":"https://www.mercury.com/","logo":"https://logos.yubhub.co/mercury.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/mercury/jobs/5791111004","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$166,600 - $250,900 (US employees), CAD 157,400 - 237,100 (Canadian employees)","x-skills-required":["bank core","FFIEC-compliance","Haskell","TypeScript","API design","data pipelines","monitoring","alerting","recovery"],"x-skills-preferred":[],"datePosted":"2026-04-17T12:46:21.374Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA, New York, NY, Portland, OR, or Remote within Canada or United States"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Finance","skills":"bank core, FFIEC-compliance, Haskell, TypeScript, API design, data pipelines, monitoring, alerting, recovery","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":157400,"maxValue":250900,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_f5ef2a17-622"},"title":"Senior Product Manager - Ledger","description":"<p>We&#39;re looking for a Senior Product Manager to own Mercury&#39;s Ledger platform, a double-entry bookkeeping engine that records every financial event as balanced, auditable entries. The Ledger team sits at the intersection of financial infrastructure, regulatory accountability, and org-wide platform adoption.</p>\n<p>As part of the journey, we would expect you to:</p>\n<ul>\n<li>Own the Ledger roadmap: Drive strategy and execution for Mercury&#39;s Double Entry Ledger (DEL),the financial system of record.</li>\n<li>Drive org-wide ledger adoption: Partner with payment rails teams to enable their flows on the ledger platform, ensuring event-to-ledger integration is clean, correct, and scalable.</li>\n<li>Build next generation financial infrastructure: Own the implementation of our financial processes across charts of accounts, transfer code definitions, and GL mapping frameworks that connect to our core systems of record.</li>\n<li>Partner with Finance and Accounting: Translate accounting requirements into product architecture, and translate product decisions back to stakeholders who think in debits, credits, journal entries, and regulatory reports.</li>\n<li>Set the financial data standard: Define what clean, reconcilable financial data looks like at Mercury. 
Work with the Reconciliation team to ensure every ledger entry is accurate, traceable, and exception-free.</li>\n<li>Navigate a complex, high-stakes stakeholder environment: Coordinate across banking platform teams, payment rails teams, Finance, Compliance, Risk, all of whom depend on your platform and have a voice in how it evolves.</li>\n</ul>\n<p>Some things that might make you successful in a role like this:</p>\n<ul>\n<li>7+ years of product management experience in fintech, financial services, or platform/infrastructure product roles.</li>\n<li>Real technical fluency: you can engage credibly with engineers on event-sourced architectures, database consistency models, double-entry accounting data models, and financial data pipelines, not as a generalist, but as a peer.</li>\n<li>Meaningful background in accounting, finance, or banking operations: you understand chart of accounts, GL mapping, journal entries, reconciliation workflows, and why they matter to regulators.</li>\n<li>Experience building and scaling internal platform products with broad adoption requirements: you know how to work with adopter teams, reduce onboarding friction, and drive org-wide standardization across a distributed set of engineers.</li>\n<li>A highly structured, data-driven approach to product decisions: you think in systems, model second and third-order effects, and communicate tradeoffs in a way that lands with both engineering and finance audiences.</li>\n<li>Exceptional cross-functional communication: you earn credibility in rooms with accountants, engineers, compliance officers, and regulators, and you hold your own in all of them.</li>\n</ul>\n<p>The total rewards package at Mercury includes base salary, equity, and benefits.</p>\n<p>Our salary and equity ranges are highly competitive within the SaaS and fintech industry and are updated regularly using the most reliable compensation survey data for our industry. 
New hire offers are made based on a candidate’s experience, expertise, geographic location, and internal pay equity relative to peers.</p>\n<p>Our target new hire base salary ranges for this role are the following:</p>\n<ul>\n<li>US employees (any location): $200,700 - $250,900</li>\n<li>Canadian employees (any location): CAD 189,700 - 237,100</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_f5ef2a17-622","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Mercury","sameAs":"https://www.mercury.com/","logo":"https://logos.yubhub.co/mercury.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/mercury/jobs/5832762004","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$200,700 - $250,900 (US employees) or CAD 189,700 - 237,100 (Canadian employees)","x-skills-required":["product management","fintech","financial services","platform/infrastructure product roles","event-sourced architectures","database consistency models","double-entry accounting data models","financial data pipelines","accounting","finance","banking operations","chart of accounts","GL mapping","journal entries","reconciliation workflows"],"x-skills-preferred":[],"datePosted":"2026-04-17T12:45:21.199Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA, New York, NY, Portland, OR, or Remote within Canada or United States"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Finance","industry":"Fintech","skills":"product management, fintech, financial services, platform/infrastructure product roles, event-sourced architectures, database consistency models, double-entry accounting data models, financial data pipelines, accounting, finance, banking operations, chart of accounts, GL mapping, journal entries, reconciliation workflows","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":189700,"maxValue":250900,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_6e92655b-cbb"},"title":"Senior Data Scientist - Banking","description":"<p>We&#39;re looking for a full-stack Data Scientist to support our Cards &amp; Credit roadmap, partnering closely with Product, Engineering, Design, Underwriting, and Operations to shape how our card and credit products evolve and scale.</p>\n<p>In this role, you&#39;ll apply strong analytical judgment and product intuition to help us understand customer behaviour, evaluate trade-offs, and make smart investment decisions across the cards and lending lifecycles , from eligibility and activation to spend, retention, incentives, and credit performance. 
You&#39;ll help build a data-informed culture across Mercury so teams can move quickly, measure what matters, and invest intelligently.</p>\n<p>Key responsibilities include:</p>\n<ul>\n<li>Bringing impeccable communication and complete ownership , independently identifying opportunities, developing strong points of view, and influencing executives, Cards &amp; Credit leaders, and cross-functional partners through clear, concise, and persuasive storytelling.</li>\n<li>Developing a nuanced understanding of cardholder behaviour and economics, helping teams reason about trade-offs between growth, engagement, risk, and unit economics.</li>\n<li>Defining, owning, and analysing metrics that inform both tactical decisions and long-term strategy across the cards and credit lifecycle (e.g., eligibility, activation, spend, utilisation, rewards, retention, loss signals).</li>\n<li>Designing and evaluating experiments using rigorous statistical approaches, including A/B testing, cohort analysis, causal inference techniques, and trend analysis.</li>\n<li>Building and improving data pipelines and tools to streamline data collection, processing, and analysis workflows, ensuring the integrity, reliability, and security of data assets.</li>\n<li>Building and deploying predictive models to forecast key outcomes, inform product treatments, and deepen understanding of causal drivers.</li>\n</ul>\n<p>Requirements include:</p>\n<ul>\n<li>7+ years of experience working with large datasets to drive product or business impact in data science or analytics roles.</li>\n<li>Fluency in SQL and comfort with Python.</li>\n<li>Strong judgment in defining and analysing product metrics, running experiments, and translating ambiguous questions into structured analyses.</li>\n<li>Exceptional proactivity and independence , identifying opportunities, forming strong points of view, and making your case to stakeholders.</li>\n<li>Experience with ETL processes and modern data modelling (e.g., dbt, dimensional models, Airflow), with a solid understanding of how data is produced and consumed.</li>\n<li>Experience in analytical approaches ranging from behavioural modelling to experimentation to optimisation , and, importantly, know when simpler approaches are the right answer.</li>\n<li>Apply AI tools to accelerate analytical and business workflows, improving scalability, decision quality, and reducing manual or repetitive work across teams.</li>\n</ul>\n<p>Nice to have:</p>\n<ul>\n<li>Experience working on cards or credit products, with familiarity in card economics and lifecycle concepts (e.g., spend behaviour, interchange, rewards and incentives, utilisation, credit limits, retention).</li>\n<li>Experience developing quantitative pricing models or engines (e.g., dynamic pricing, incentive optimisation, or marketplace pricing systems).</li>\n<li>Experience applying optimisation techniques to resource allocation or decision systems (e.g., customer operations, capacity planning, or policy optimisation).</li>\n<li>Experience building or supporting credit models, including probability of default modelling, cashflow modelling, or dynamic credit limit setting.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_6e92655b-cbb","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Mercury","sameAs":"https://www.mercury.com/","logo":"https://logos.yubhub.co/mercury.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/mercury/jobs/5799320004","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$200,700 - $250,900 USD","x-skills-required":["SQL","Python","ETL processes","modern data modelling","A/B testing","cohort analysis","causal inference techniques","trend analysis","data pipelines","predictive models"],"x-skills-preferred":["cardholder behaviour and economics","quantitative pricing models","optimisation techniques","credit models","probability of default modelling","cashflow modelling","dynamic credit limit setting"],"datePosted":"2026-04-17T12:45:16.180Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA, New York, NY, Portland, OR, or Remote within Canada or United States"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Finance","industry":"Finance","skills":"SQL, Python, ETL processes, modern data modelling, A/B testing, cohort analysis, causal inference techniques, trend analysis, data pipelines, predictive models, cardholder behaviour and economics, quantitative pricing models, optimisation techniques, credit models, probability of default modelling, cashflow modelling, dynamic credit limit setting","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":200700,"maxValue":250900,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_18fad01e-942"},"title":"Salesforce Developer","description":"<p>We&#39;re hiring a Salesforce Developer to deepen Mercury&#39;s technical bench. This role fills a critical gap today: hands-on engineering capacity to implement platform capabilities that already exist on paper , and reduce tool sprawl by building stronger foundations directly into Salesforce and adjacent systems.</p>\n<p>As a Salesforce Developer, you&#39;ll work closely with Architecture, Data, TPM, and Systems Experience to turn intent into reality. 
Your responsibilities will include:</p>\n<ul>\n<li>Building and maintaining Salesforce functionality (flows, automation, objects, permissions)</li>\n<li>Implementing architectural designs without diverging from intent</li>\n<li>Improving reliability, performance, and maintainability of GTM systems</li>\n<li>Reducing tech debt and replacing fragile workarounds with durable solutions</li>\n<li>Partnering with Data Strategy to ensure clean data generation</li>\n<li>Supporting integrations and tooling across the revenue stack</li>\n<li>Participating in incident response and platform debugging</li>\n<li>Helping migrate functionality into core platforms rather than adding new tools</li>\n</ul>\n<p>To succeed in this role, you&#39;ll need:</p>\n<ul>\n<li>8+ years experience in Salesforce development or platform engineering roles</li>\n<li>Strong hands-on experience with Salesforce automation, flows, object models, permissions, and integrations</li>\n<li>Excited to own and maintain API-based integrations between Salesforce and downstream/upstream systems</li>\n<li>Demonstrated ability to build and refactor systems with durability, performance, and maintainability in mind</li>\n<li>Experience partnering with cross-functional teams to implement technical solutions</li>\n<li>Strong debugging and problem-solving skills in production environments</li>\n<li>Clear communication skills and comfort explaining technical tradeoffs</li>\n</ul>\n<p>Preferred qualifications include experience with Salesforce Data Cloud, familiarity with GTM workflows, revenue operations, or customer lifecycle systems, and exposure to data pipelines, ETL processes, or downstream analytics usage.</p>\n<p>The total rewards package at Mercury includes base salary, equity (stock options), and benefits. Our salary and equity ranges are highly competitive within the SaaS and fintech industry and are updated regularly using the most reliable compensation survey data for our industry. 
New hire offers are made based on a candidate&#39;s experience, expertise, geographic location, and internal pay equity relative to peers.</p>\n<p>Our target new hire base salary ranges for this role are:</p>\n<ul>\n<li>US employees in New York City, Los Angeles, Seattle, or the San Francisco Bay Area: $158,400 - 198,000</li>\n<li>US employees outside of New York City, Los Angeles, Seattle, or the San Francisco Bay Area: $142,600 - 178,200</li>\n<li>Canadian employees (any location): CAD $149,700 - $187,100</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_18fad01e-942","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Mercury","sameAs":"https://www.mercury.com/","logo":"https://logos.yubhub.co/mercury.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/mercury/jobs/5857783004","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$142,600 - 198,000","x-skills-required":["Salesforce development","Platform engineering","Automation","Flows","Object models","Permissions","Integrations","API-based integrations","Data strategy","GTM systems","Revenue stack","Incident response","Platform debugging"],"x-skills-preferred":["Salesforce Data Cloud","GTM workflows","Revenue operations","Customer lifecycle systems","Data pipelines","ETL processes","Downstream analytics usage"],"datePosted":"2026-04-17T12:45:15.149Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA, New York, NY, Portland, OR, or Remote within Canada or United States"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Finance","skills":"Salesforce development, Platform engineering, Automation, Flows, Object models, Permissions, Integrations, API-based integrations, Data strategy, GTM systems, Revenue stack, Incident response, Platform debugging, Salesforce Data Cloud, GTM workflows, Revenue operations, Customer lifecycle systems, Data pipelines, ETL processes, Downstream analytics usage","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":142600,"maxValue":198000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_3048ccd4-7de"},"title":"Data Analyst","description":"<p>We are seeking a Data Analyst to join our growing data team. As a Data Analyst at LayerZero, you will be at the forefront of shaping a rich data foundation for a company making a real impact in the web3 space. You will work closely with teams and leaders to uncover insights, drive decision-making, and fuel our next-generation products and services.</p>\n<p>The successful candidate will dive headfirst into the world of crypto data, exploring on-chain wallets and contracts, block and transaction data, insights from in-house systems, and third-party intelligence. 
Your mission will be to combine these diverse datasets into rich, actionable data products for a broad group of stakeholders.</p>\n<p>Key responsibilities include:\nLeveraging and expanding our ever-growing Kimball dimensional model.\nWriting SQL to create and expand insights in our in-house reporting solutions.\nCollaborating with stakeholders across the organization to conduct ad-hoc explorations and analytics.\nBeing a key owner of data quality, building out insights that serve the data team itself.\nComposing pipelines by writing SQL code to clean, combine, refine, and aggregate data into the insights the organization needs.\nCollaborating on new datasets to ingest into our Snowflake data warehouse, working closely with data engineers on your team.\nNot afraid of pushing code that supports tens of billions of dollars in daily transaction volume.</p>\n<p>We are looking for someone with previous data analyst experience, likely with a bachelor&#39;s degree in Computer Science, Statistics, Mathematics, Physics or related field, but we also consider and highly value equivalent practical experience.</p>\n<p>Required skills include strong SQL knowledge and experience, proven track record in data modeling, statistics, and analytics, experience working with a broad range of stakeholders, and strong convictions weakly held.\nNice to have skills include experience with general programming, experience with Snowflake, experience building DAG-based data pipelines, experience with streaming real-time data pipelines, previous experience with blockchain technologies, smart contracts, and decentralized finance, experience with Kimball dimensional modeling, and working on a mid-to-large scale data stacks.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_3048ccd4-7de","directApply":true,"hiringOrganization":{"@type":"Organization","name":"LayerZero","sameAs":"https://layerzero.com/","logo":"https://logos.yubhub.co/layerzero.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/layerzerolabs/jobs/5787956004","x-work-arrangement":"onsite","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["SQL","data modeling","statistics","analytics","Snowflake","Kimball dimensional modeling"],"x-skills-preferred":["general programming","DAG-based data pipelines","streaming real-time data pipelines","blockchain technologies","smart contracts","decentralized finance"],"datePosted":"2026-04-17T12:41:37.110Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Vancouver, BC"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"SQL, data modeling, statistics, analytics, Snowflake, Kimball dimensional modeling, general programming, DAG-based data pipelines, streaming real-time data pipelines, blockchain technologies, smart contracts, decentralized finance"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_62efca6f-b6f"},"title":"Senior AI Engineer","description":"<p>We&#39;re looking for a Senior AI Engineer who is obsessed with building AI systems that actually work in production: reliable, observable, cost-efficient, and genuinely useful. This is not a research role. 
You will ship AI-powered features that process real financial data for real businesses.</p>\n<p>LLM &amp; AI Pipeline Engineering - Design, build, and maintain production-grade LLM integration pipelines , including retrieval-augmented generation (RAG), prompt engineering, output parsing, and chain orchestration.</p>\n<p>Develop and operate AI features within Jeeves&#39;s core financial products: spend categorization, document extraction, anomaly detection, financial Q&amp;A, and automated reconciliation.</p>\n<p>Implement structured output validation, fallback handling, and confidence scoring to ensure AI decisions meet reliability standards for financial use cases.</p>\n<p>Evaluate and integrate AI frameworks and tools (LangChain, LlamaIndex, OpenAI API, Anthropic API, HuggingFace, vector databases) and advocate for the right tool for the job.</p>\n<p>Establish prompt versioning and evaluation practices to ensure AI outputs remain accurate and consistent as models and data evolve.</p>\n<p>Retrieval &amp; Vector Search - Design and maintain vector search pipelines using databases such as Pinecone, Weaviate, or pgvector to power semantic search and RAG-based features.</p>\n<p>Build document ingestion and chunking pipelines for Jeeves&#39;s financial data , processing invoices, receipts, policy documents, and transaction records.</p>\n<p>Optimize retrieval quality through embedding model selection, chunk strategy, metadata filtering, and re-ranking techniques.</p>\n<p>ML Model Serving &amp; Operations - Collaborate with data scientists to take trained ML models from experimental notebooks to production serving infrastructure.</p>\n<p>Build and maintain model serving endpoints with appropriate latency SLOs, input validation, and output monitoring.</p>\n<p>Implement model performance monitoring and data drift detection to ensure production models remain accurate over time.</p>\n<p>Support model retraining workflows by designing clean data pipelines and feature engineering that can be continuously updated.</p>\n<p>Backend Integration &amp; Reliability - Integrate AI services cleanly with Jeeves&#39;s backend microservices , designing clear API contracts, circuit breakers, and graceful degradation patterns.</p>\n<p>Write high-quality, testable backend code in Python or Go/Node.js to power AI-integrated features.</p>\n<p>Instrument AI components with structured logging, distributed tracing, latency dashboards, and alerting to ensure operational visibility.</p>\n<p>Collaboration &amp; Growth - Partner with Product, Backend Engineering, and Data Science to define the AI roadmap and translate requirements into reliable systems.</p>\n<p>Contribute to a culture of quality by writing design docs, reviewing peers&#39; AI system designs, and sharing learnings openly.</p>\n<p>Help grow the AI engineering practice at Jeeves by establishing patterns, tooling, and best practices that the broader team can build on.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_62efca6f-b6f","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Jeeves","sameAs":"https://www.jeeves.com/","logo":"https://logos.yubhub.co/jeeves.com.png"},"x-apply-url":"https://jobs.lever.co/tryjeeves/ded9e04e-f18e-4d4c-ae43-4b7882c6200b","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["LLM","AI","Python","LangChain","LlamaIndex","OpenAI API","Anthropic API","HuggingFace","vector databases","Pinecone","Weaviate","pgvector","semantic search","RAG-based features","document ingestion","chunking pipelines","embedding model selection","chunk strategy","metadata filtering","re-ranking techniques","model serving infrastructure","latency SLOs","input validation","output monitoring","model performance monitoring","data drift detection","clean data pipelines","feature engineering","API contracts","circuit breakers","graceful degradation patterns","structured logging","distributed tracing","latency dashboards","alerting"],"x-skills-preferred":[],"datePosted":"2026-04-17T12:39:23.341Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"India"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Finance","skills":"LLM, AI, Python, LangChain, LlamaIndex, OpenAI API, Anthropic API, HuggingFace, vector databases, Pinecone, Weaviate, pgvector, semantic search, RAG-based features, document ingestion, chunking pipelines, embedding model selection, chunk strategy, metadata filtering, re-ranking techniques, model serving infrastructure, latency SLOs, input validation, output monitoring, model performance monitoring, data drift detection, clean data pipelines, feature engineering, API contracts, circuit breakers, graceful degradation patterns, structured logging, distributed tracing, latency dashboards, alerting"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_58df2f04-af4"},"title":"Data Engineer","description":"<p>We are looking for a Data Engineer to join our Data Platform team to partner with our product and business stakeholders across risk, operations, and other domains. As a Data Engineer, you will be responsible for building robust data pipelines and engineering foundations by ingesting data from disparate sources, ensuring data quality and consistency, and enabling better business decisions through reliable data infrastructure across core product areas.</p>\n<p>Your primary focus will be on building scalable data pipelines using Airflow to orchestrate data workflows that ingest, transform, and deliver data from various sources into Snowflake and Databricks. You will also design and implement data models in Snowflake that support analytics, reporting, and ML use cases with a focus on performance, reliability, and scalability.</p>\n<p>In addition, you will develop infrastructure as code using Terraform to automate and manage cloud resources in AWS, ensuring consistent and reproducible deployments. You will monitor data pipeline health and implement data quality checks to ensure accuracy, completeness, and timeliness of data as business needs evolve.</p>\n<p>You will also optimize data processing workflows to improve performance, reduce costs, and handle growing data volumes efficiently. 
Troubleshooting and resolving data pipeline issues, working through ambiguity to get to the root cause and implementing long-term fixes will be a key part of your role.</p>\n<p>As a Data Engineer, you will bridge gaps between data and the business by working with cross-functional teams across the US and India office to understand requirements and translate them into robust technical solutions. You will create comprehensive documentation on data pipelines, data models, and infrastructure, keeping documentation up to date and facilitating knowledge transfer across the team.</p>\n<p><strong>Requirements:</strong></p>\n<ul>\n<li>2+ years of data engineering experience with strong technical skills and the ability to architect scalable data solutions.</li>\n</ul>\n<ul>\n<li>Hands-on experience with Python for data processing, automation, and building data pipelines.</li>\n</ul>\n<ul>\n<li>Proficiency with workflow orchestration tools, preferably Airflow, including DAG development, task dependencies, and monitoring.</li>\n</ul>\n<ul>\n<li>Strong SQL skills and experience with cloud data warehouses like Snowflake, including performance optimization and data modeling.</li>\n</ul>\n<ul>\n<li>Experience with cloud platforms, preferably AWS (S3, Lambda, EC2, IAM, etc.), and understanding of cloud-based data architectures.</li>\n</ul>\n<ul>\n<li>Experience working cross-functionally with data analysts, analytics engineers, data scientists, and business stakeholders to understand requirements and deliver solutions.</li>\n</ul>\n<ul>\n<li>An ownership mentality – this engineer will be responsible for the reliability and performance of their data pipelines and expected to fully understand data flows, dependencies, and their implications on downstream users.</li>\n</ul>\n<p><strong>Nice to have:</strong></p>\n<ul>\n<li>Experience with dbt for transformation logic and analytics engineering workflows integrated with data pipelines.</li>\n</ul>\n<ul>\n<li>Familiarity with Databricks for large-scale data processing, including Spark optimization and Delta Lake.</li>\n</ul>\n<ul>\n<li>Experience with Infrastructure as Code (IaC) tools like Terraform for managing cloud resources and data infrastructure.</li>\n</ul>\n<ul>\n<li>Knowledge of data modeling concepts (e.g., dimensional modeling, star/snowflake schemas, slowly changing dimensions).</li>\n</ul>\n<ul>\n<li>Experience with CI/CD practices for data pipelines and automated testing frameworks.</li>\n</ul>\n<ul>\n<li>Experience with streaming data and real-time processing frameworks</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_58df2f04-af4","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Greenlight","sameAs":"https://www.greenlight.com/","logo":"https://logos.yubhub.co/greenlight.com.png"},"x-apply-url":"https://jobs.lever.co/greenlight/e98d9733-8b8c-4ce4-997d-6cf14e35b2f3","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Airflow","Python","SQL","Snowflake","Databricks","AWS","Terraform","data engineering","data pipelines","data modeling"],"x-skills-preferred":["dbt","Infrastructure as Code","CI/CD","streaming data","real-time 
processing"],"datePosted":"2026-04-17T12:36:30.660Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Bengaluru"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Finance","skills":"Airflow, Python, SQL, Snowflake, Databricks, AWS, Terraform, data engineering, data pipelines, data modeling, dbt, Infrastructure as Code, CI/CD, streaming data, real-time processing"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_eb26af8f-c1a"},"title":"Data Scientist","description":"<p>We are seeking a pragmatic, end-to-end Data Scientist who can operate across the full data lifecycle, from ingestion and modeling to productionizing key data systems. This is a high-impact, high-agency role which reports directly to the CTO. Modern AI-assisted development tools make this role possible, where the data scientist can now do real engineering, too.</p>\n<p><strong>Responsibilities:</strong></p>\n<ul>\n<li>Collaborate closely with other teams (Sales, Finance, Product, Marketing, and more) to translate problems and needs into action-oriented data solutions</li>\n<li>Design, build, and maintain data pipelines for reliable ingestion and transformation</li>\n<li>Rapidly prototype and iterate using AI coding tools to accelerate development and reduce toil</li>\n<li>Drive rigor and best practices, with a focus on data quality, consistency, and transparency</li>\n<li>Develop and deploy statistical models and machine learning, where appropriate</li>\n<li>Build clear, decision-oriented visualizations and dashboards for stakeholders across multiple departments</li>\n<li>Own selected production data systems: selection, orchestration, monitoring, and tuning</li>\n<li>Communicate and shepherd key data results and analysis to executives</li>\n</ul>\n<p><strong>Requirements:</strong></p>\n<ul>\n<li>Experience with B2B SaaS-relevant data, including Salesforce and financial metrics</li>\n<li>Strong communication skills and ability to work effectively across multiple departments and stakeholder groups</li>\n<li>Ownership mindset and ability to deliver end-to-end outcomes independently; must be a &quot;startup type&quot;</li>\n<li>Demonstrated ability to design data pipelines and work with imperfect, evolving data sources</li>\n<li>Sharp attention to data quality, including validation, anomaly detection, and root-cause analysis of inconsistencies</li>\n<li>Strong proficiency in Python and SQL; experience with modern data stack tools (e.g., dbt, Airflow, Spark, or equivalents, a plus)</li>\n<li>Experience with data visualization tools (e.g., Tableau, Looker, or similar)</li>\n<li>Some familiarity with infrastructure and related setup (databases, data warehouses, VMs)</li>\n<li>Knowledge of core machine learning concepts and when to apply them pragmatically</li>\n</ul>\n<p><strong>Initial Projects:</strong></p>\n<ul>\n<li>Build a likelihood-of-close model for Salesforce opportunities, which factors in relevant metadata and history</li>\n<li>Create a framework and initial implementation for an executive operational dashboard, working with a broad set of teams</li>\n<li>Define, validate, and implement key SaaS product-usage metrics</li>\n</ul>\n<p>As we grow, you will, too, with the broad scope of a software startup.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_eb26af8f-c1a","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Forward Networks","sameAs":"https://www.forward.net/","logo":"https://logos.yubhub.co/forward.net.png"},"x-apply-url":"https://job-boards.greenhouse.io/forwardnetworks/jobs/7695301003","x-work-arrangement":"onsite","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":"$170,000 - $190,000","x-skills-required":["Python","SQL","data visualization","machine learning","data pipelines","data quality"],"x-skills-preferred":["dbt","Airflow","Spark","Tableau","Looker"],"datePosted":"2026-04-17T12:34:58.040Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Santa Clara, CA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, SQL, data visualization, machine learning, data pipelines, data quality, dbt, Airflow, Spark, Tableau, Looker","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":170000,"maxValue":190000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_f1dd2777-187"},"title":"Sr/Staff Software Engineer - Payments","description":"<p>We are seeking a skilled Software Engineer to join our Engineering team in San Francisco. The successful candidate will help design and build the next generation of usage-based billing systems that integrate tightly with Stripe and Orb, power real-time usage tracking, and deliver accurate, flexible billing experiences for customers.</p>\n<p>As a Sr/Staff Software Engineer, you will work cross-functionally with Product, Finance, and Infrastructure teams to ensure our billing system is robust, accurate, and capable of supporting new pricing models as our product grows.</p>\n<p>Key Responsibilities:</p>\n<ul>\n<li>Design and build event-driven billing systems that process real-time usage data.</li>\n<li>Integrate with Orb for usage metering and Stripe for payments and invoicing.</li>\n<li>Build Python-based microservices running on Kubernetes to handle billing workflows.</li>\n<li>Develop data storage and processing flows for downstream analysis in BigQuery.</li>\n<li>Collaborate with product engineers to build Next.js dashboards and admin tools for billing insights and reconciliation.</li>\n<li>Ensure billing systems are accurate, auditable, and scalable to support new product launches and pricing models.</li>\n<li>Partner with Finance to automate reporting, reconciliation, and revenue analytics.</li>\n</ul>\n<p>Requirements:</p>\n<ul>\n<li>Experience with usage-based billing systems or event-driven architectures.</li>\n<li>Strong Python skills for backend microservices.</li>\n<li>Familiarity with Stripe (payments, invoicing) and Orb (usage metering) APIs.</li>\n<li>Experience with Postgres for transactional data and BigQuery for analytics.</li>\n<li>Experience with Kubernetes and containerized deployments.</li>\n<li>Ability to build admin interfaces or customer dashboards using Next.js.</li>\n<li>Comfort working with event-driven data pipelines (e.g., Kafka, Pub/Sub, or similar).</li>\n<li>Strong cross-functional collaboration skills with Finance, Product, and Data teams.</li>\n</ul>\n<p>Nice to Have:</p>\n<ul>\n<li>Experience with FinTech, SaaS, or cloud usage billing at scale.</li>\n<li>Familiarity with cloud providers (AWS, GCP) and their billing 
models.</li>\n<li>Knowledge of pricing experimentation or monetization platforms.</li>\n</ul>\n<p>Compensation:</p>\n<ul>\n<li>$160,000 - $200,000 + equity + comprehensive benefits package</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_f1dd2777-187","directApply":true,"hiringOrganization":{"@type":"Organization","name":"fal","sameAs":"https://fal.com","logo":"https://logos.yubhub.co/fal.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/fal/jobs/4063798009","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$160,000 - $200,000","x-skills-required":["Python","Stripe","Orb","Postgres","BigQuery","Kubernetes","Next.js","event-driven data pipelines"],"x-skills-preferred":["FinTech","SaaS","cloud usage billing","cloud providers","pricing experimentation or monetization platforms"],"datePosted":"2026-04-17T12:32:10.513Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, Stripe, Orb, Postgres, BigQuery, Kubernetes, Next.js, event-driven data pipelines, FinTech, SaaS, cloud usage billing, cloud providers, pricing experimentation or monetization platforms","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":160000,"maxValue":200000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_2a88ee59-dc6"},"title":"Full Stack Engineer (Serverless)","description":"<p>We&#39;re building the fastest and most scalable infrastructure for AI inference. As a Full Stack Engineer on Serverless, you will build the core product across frontend and backend that powers our Serverless platform. This is a deeply product-focused role where you will work side-by-side with Product and Infrastructure to design and ship reusable, scalable systems that enterprise customers rely on in production every day.</p>\n<p>You will be a foundational technical owner of our Serverless product as it scales to thousands of enterprise customers, with real responsibility, autonomy, and impact. This is a chance to help build a new product vertical from the ground up inside a company that is already scaling at rocket-ship speed.</p>\n<p>Your responsibilities will include:</p>\n<ul>\n<li>Building and maintaining core Serverless UI features (dashboards, logs, observability, configuration, usage)</li>\n<li>Designing and implementing backend APIs that power the Serverless product experience</li>\n<li>Improving performance, reliability, and scalability of customer-facing systems</li>\n<li>Working closely with Infrastructure to ensure product features align with platform capabilities</li>\n<li>Owning features end-to-end, from design through production and iteration</li>\n</ul>\n<p>We&#39;re looking for a strong experience working across both frontend and backend, proficiency with TypeScript, Python, Postgres, and Next.js, and experience owning features end-to-end in production systems. 
Ability to context switch between UI, backend, and performance work, product-minded engineer who values clean abstractions and long-term maintainability, comfortable working in a fast-moving, low-process environment.</p>\n<p>Nice to have experience building developer platforms or infrastructure-adjacent products, familiarity with observability tooling (logging, metrics, tracing) in production environments, background in distributed systems, container orchestration, or cloud-native architectures, experience with real-time systems, streaming logs, or high-throughput data pipelines, exposure to technologies such as Kubernetes, Prometheus, Datadog, gRPC, or similar systems, entrepreneurial mindset and strong ownership mentality.</p>\n<p>We offer interesting and challenging work, competitive salary and equity, a lot of learning and growth opportunities, visa sponsorship and relocation assistance, health, dental, and vision insurance, regular team events and offsite.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_2a88ee59-dc6","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Fal","sameAs":"https://www.fal.com/","logo":"https://logos.yubhub.co/fal.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/fal/jobs/4112697009","x-work-arrangement":"onsite","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":"$150,000 - $230,000 + equity + comprehensive benefits package","x-skills-required":["TypeScript","Python","Postgres","Next.js","serverless","backend APIs","frontend development"],"x-skills-preferred":["observability tooling","distributed systems","container orchestration","cloud-native architectures","real-time systems","streaming logs","high-throughput data pipelines","Kubernetes","Prometheus","Datadog","gRPC"],"datePosted":"2026-04-17T12:32:02.355Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"TypeScript, Python, Postgres, Next.js, serverless, backend APIs, frontend development, observability tooling, distributed systems, container orchestration, cloud-native architectures, real-time systems, streaming logs, high-throughput data pipelines, Kubernetes, Prometheus, Datadog, gRPC","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":150000,"maxValue":230000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_6f1b9b08-689"},"title":"Product Manager, Internal Tools & Player Insights","description":"<p>What&#39;s the one game you couldn&#39;t put down? The game that connected you with friends, and made you feel like you belonged? If a game has ever defined a chapter of your life, then you already know the spark we&#39;re chasing.</p>\n<p>Our mission is to ignite that same feeling for players; the thrill of competition, the joy of community, and the belonging of finding your own corner of a larger world.</p>\n<p>Great games begin with people who dare to dream big. If that sounds exciting, you might be exactly who we&#39;re looking for.</p>\n<p>Bonfire is a group of experienced and ambitious developers, proud to be creating our first original IP: Arkheron. 
It is a fast-paced, competitive PVP game set in a surreal dark fantasy world where 15 teams of three battle their way up the Tower. In a world built from memories, you will loot powerful items to create and adapt a unique build-out that will change your strategy and combat experience with every Ascension.</p>\n<p>At Bonfire, we believe great games are built on more than features; they&#39;re built on deep player understanding, thoughtful decisions, and the tools that help teams act on both. Our Platform group builds the systems that connect players to the studio: analytics that reveal how people play, community tools that shape how they connect, and marketing and messaging infrastructure that determines when, where, and how we show up for them. This team owns the foundation behind Arkheron and future projects, from data pipelines and experimentation frameworks to community and martech systems that power real player-facing experiences.</p>\n<p>This role focuses on the shared platform systems and tools that enable teams to make better decisions and deliver better player experiences at scale. (Note: responsibilities such as owning game feature development, monetization design, or franchise strategy fall within other roles at Bonfire.)</p>\n<p>If you&#39;re energized by shaping what gets built, why it matters to players, and how teams make better decisions through strong systems and tooling, this could be a great role for you.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Owning product direction for platform systems that enable and support player-facing experiences, shaping roadmap, priorities, and long-term vision.</li>\n<li>Translating player behavior into action, partnering closely with analytics and insights to turn telemetry, experimentation, and qualitative signals into clear product opportunities.</li>\n<li>Prioritizing ruthlessly by weighing player value, studio leverage, and technical investment to decide what to build, what to buy, and what not to do.</li>\n<li>Defining requirements for platform capabilities (such as messaging, campaigns, triggers, segmentation, and experimentation) used by game and publishing teams to deliver customer-facing experiences.</li>\n<li>Shaping how the studio interacts with players at scale through shared systems and tooling, without setting game design or monetization strategy.</li>\n<li>Connecting platform investments to measurable outcomes, such as improved onboarding, healthier communities, and better retention, by enabling teams with better data, tools, and workflows.</li>\n<li>Partnering deeply with engineering and technical leads to scope solutions, evaluate tradeoffs, and ensure systems are scalable, maintainable, and fit for long-term use.</li>\n<li>Collaborating with Arkheron&#39;s development team by advocating for platform capabilities that unlock options and flexibility, without prescribing creative or gameplay decisions.</li>\n<li>Designing for internal users by working hand-in-hand with design and UX to build tools that are intuitive, efficient, and enjoyable for developers to use.</li>\n</ul>\n<p><strong>Requirements</strong></p>\n<ul>\n<li>5+ years of experience in product, analytics, or a data-driven role, with 2+ years shaping platform, infrastructure, or internal tooling products that support customer-facing outcomes.</li>\n<li>Think naturally in player journeys and customer interactions, while working on systems rather than direct game features.</li>\n<li>Use data to guide decisions, with strong instincts around 
metrics, telemetry, experimentation, and identifying meaningful signals versus noise.</li>\n<li>Bring hands-on experience with martech and player engagement systems (campaign engines, triggers, segmentation, attribution, messaging workflows).</li>\n<li>Understand modern community ecosystems (Discord integrations, bots, role systems) and how tooling choices shape player behavior and culture.</li>\n<li>Are fluent in engineering concepts and tradeoffs, comfortable discussing APIs, data pipelines, telemetry, and build-vs-buy decisions with technical partners.</li>\n<li>Balance competing inputs from game teams, publishing, analytics, and design, and can prioritize clearly when everything feels important.</li>\n<li>Communicate with clarity and empathy, building strong relationships across disciplines.</li>\n</ul>\n<p><strong>Nice to Have</strong></p>\n<ul>\n<li>Experience with data visualization tools and implementing data-driven decision-making processes.</li>\n<li>Strong knowledge of game development principles and practices.</li>\n<li>Familiarity with cloud-based services and Infrastructure-as-a-Service (IaaS) providers.</li>\n<li>Excellent problem-solving skills and ability to work collaboratively in a fast-paced environment.</li>\n</ul>\n<p><strong>What We Offer</strong></p>\n<ul>\n<li>Competitive salary range: $196,500 - $247,133.</li>\n<li>Equity in the company.</li>\n<li>Full benefits package.</li>\n<li>Extra perks to make work (and life) better.</li>\n</ul>\n<p><strong>About Us</strong></p>\n<p>We&#39;re a game development company creating the original IP Arkheron. We&#39;re passionate about building a game that we&#39;re proud to play every day. We keep fun at the core and stay truly independent, with decisions driven by the team, not by investors or a board. 
We thrive in a culture of passion, trust, and shared ownership; where transparency matters and egos don&#39;t.</p>\n<p><strong>Get a Feel for What It&#39;s Like to Work Here</strong></p>\n<p>You can check out more about our culture, team, benefits, and perks at www.bonfirestudios.com.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_6f1b9b08-689","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Bonfire Studios","sameAs":"https://www.bonfirestudios.com","logo":"https://logos.yubhub.co/bonfirestudios.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/bonfirestudiosinc/jobs/4075212009","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$196,500 - $247,133","x-skills-required":["product management","analytics","data-driven decision-making","martech","player engagement systems","community ecosystems","engineering concepts","APIs","data pipelines","telemetry","build-vs-buy decisions"],"x-skills-preferred":[],"datePosted":"2026-04-17T12:27:17.459Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Hybrid, Irvine, California, United States"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"product management, analytics, data-driven decision-making, martech, player engagement systems, community ecosystems, engineering concepts, APIs, data pipelines, telemetry, build-vs-buy decisions","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":196500,"maxValue":247133,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_e5ecff17-84f"},"title":"Senior Forward Deployed Engineer (AI Agent) - UK","description":"<p>Join us on this thrilling journey to revolutionise the workforce with AI.</p>\n<p>The AI Agent team at Cresta is on a mission to create state-of-the-art AI Agents that solve practical problems for our customers. We are focused on leveraging the latest technologies in Large Language Models (LLMs) and AI Agent systems, while ensuring that the solutions we develop are cost-effective, secure, and reliable.</p>\n<p>As an AI Agent Engineer, you&#39;ll be at the forefront of deploying AI agents that address real-world challenges. In this role, you will work closely with customers as well as our software and machine learning engineers, ensuring high-impact AI Agent deployments and contributing to the continuous improvement of our core AI platform. You’ll develop intelligent AI agents, integrate them seamlessly with external systems and offer hands-on technical expertise to ensure successful deployments.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Develop, configure, deploy, and optimise AI agents using Cresta’s AI platform and tools.</li>\n<li>Build AI agent integrations with external systems (APIs, databases, CRMs, etc.) to ensure seamless workflow integration.</li>\n<li>Optimise AI agent performance (e.g. 
fine-tune prompts and configurations) and troubleshoot issues in complex enterprise environments.</li>\n<li>Collaborate with customers and internal stakeholders to gather technical requirements and translate business needs into AI Agent solutions.</li>\n<li>Conduct interactive demos and present compelling proof-of-concepts to prospective customers, proactively gather feedback, and iteratively refine solutions to meet objectives.</li>\n<li>Define project milestones, create implementation plans, and coordinate execution with internal teams to ensure on-time delivery. Provide a tight feedback loop to our product and engineering teams, identifying gaps, building custom tooling, and influencing the roadmap through real-world deployment learnings.</li>\n<li>Collaborate with PMs to define agent goals, iterate rapidly based on customer feedback, and shape product capabilities that maximise customer ROI.</li>\n<li>Serve as a trusted technical advisor for the customer, guiding best practices for AI agent adoption and usage. Provide technical guidance on AI agent best practices, including architecture design, security considerations, and scalability planning.</li>\n</ul>\n<p>Requirements:</p>\n<ul>\n<li>Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.</li>\n<li>3+ years of full-time working experience in software development/consulting, AI/ML engineering, or system integration, or as an FDE.</li>\n<li>Proficiency in Python and Golang, with the ability to write clean, efficient code.</li>\n<li>Familiarity with AI/ML concepts. Hands-on experience with large language models (LLMs) and prompt engineering techniques are strongly preferred.</li>\n<li>Strong understanding of general AI agent frameworks, function calling, and retrieval-augmented generation (RAG). 
Hands-on experience of building such a system is strongly preferred.</li>\n<li>Experience with cloud platforms (AWS, GCP, or Azure) and DevOps practices (CI/CD, containerisation, monitoring).</li>\n<li>Hands-on experience with integrating systems via APIs, webhooks, and data pipelines.</li>\n<li>Excellent communication and project management skills.</li>\n<li>Ability to use data-driven decision-making, including A/B testing and performance monitoring, to refine solutions.</li>\n<li>You thrive in cross-functional environments, working hand-in-hand with PMs and engineers to turn real customer problems into scalable AI solutions.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_e5ecff17-84f","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Cresta","sameAs":"https://www.cresta.ai/","logo":"https://logos.yubhub.co/cresta.ai.png"},"x-apply-url":"https://job-boards.greenhouse.io/cresta/jobs/5097513008","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Python","Golang","AI/ML","Large Language Models (LLMs)","Prompt engineering","General AI agent frameworks","Function calling","Retrieval-augmented generation (RAG)","Cloud platforms","DevOps practices","APIs","Webhooks","Data pipelines"],"x-skills-preferred":[],"datePosted":"2026-04-17T12:26:09.346Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"United Kingdom (Remote)"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, Golang, AI/ML, Large Language Models (LLMs), Prompt engineering, General AI agent frameworks, Function calling, Retrieval-augmented generation (RAG), Cloud platforms, DevOps practices, APIs, Webhooks, Data pipelines"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_39c59172-1f6"},"title":"Senior Forward Deployed Engineer (AI Agent) - Germany","description":"<p>Join us on this thrilling journey to revolutionize the workforce with AI.\nThe AI Agent team at Cresta is on a mission to create state-of-the-art AI Agents that solve practical problems for our customers. We are focused on leveraging the latest technologies in Large Language Models (LLMs) and AI Agent systems, while ensuring that the solutions we develop are cost-effective, secure, and reliable.</p>\n<p>As an AI Agent Engineer, you&#39;ll be at the forefront of deploying AI agents that address real-world challenges. In this role, you will work closely with customers as well as our software and machine learning engineers, ensuring high-impact AI Agent deployments and contributing to the continuous improvement of our core AI platform. You’ll develop intelligent AI agents, integrate them seamlessly with external systems and offer hands-on technical expertise to ensure successful deployments.</p>\n<p>Key Responsibilities:</p>\n<ul>\n<li>Develop, configure, deploy, and optimize AI agents using Cresta’s AI platform and tools.</li>\n<li>Build AI agent integrations with external systems (APIs, databases, CRMs, etc.) to ensure seamless workflow integration.</li>\n<li>Optimize AI agent performance (e.g. 
fine-tune prompts and configurations) and troubleshoot issues in complex enterprise environments.</li>\n<li>Collaborate with customers and internal stakeholders to gather technical requirements and translate business needs into AI Agent solutions.</li>\n<li>Conduct interactive demos and present compelling proof-of-concepts to prospective customers, proactively gather feedback, and iteratively refine solutions to meet objectives.</li>\n<li>Define project milestones, create implementation plans, and coordinate execution with internal teams to ensure on-time delivery. Provide a tight feedback loop to our product and engineering teams, identifying gaps, building custom tooling, and influencing the roadmap through real-world deployment learnings.</li>\n<li>Collaborate with PMs to define agent goals, iterate rapidly based on customer feedback, and shape product capabilities that maximize customer ROI.</li>\n<li>Serve as a trusted technical advisor for the customer, guiding best practices for AI agent adoption and usage. Provide technical guidance on AI agent best practices, including architecture design, security considerations, and scalability planning.</li>\n</ul>\n<p>Requirements:</p>\n<ul>\n<li>Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.</li>\n<li>3+ years of full-time working experience in software development/consulting, AI/ML engineering, or system integration, or as an FDE.</li>\n<li>Proficiency in Python and Golang, with the ability to write clean, efficient code.</li>\n<li>Familiarity with AI/ML concepts. Hands-on experience with large language models (LLMs) and prompt engineering techniques are strongly preferred.</li>\n<li>Strong understanding of general AI agent frameworks, function calling, and retrieval-augmented generation (RAG). Hands-on experience of building such a system is strongly preferred.</li>\n<li>Experience with cloud platforms (AWS, GCP, or Azure) and DevOps practices (CI/CD, containerization, monitoring).</li>\n<li>Hands-on experience with integrating systems via APIs, webhooks, and data pipelines.</li>\n<li>Excellent communication and project management skills.</li>\n<li>Ability to use data-driven decision-making, including A/B testing and performance monitoring, to refine solutions.</li>\n<li>You thrive in cross-functional environments, working hand-in-hand with PMs and engineers to turn real customer problems into scalable AI solutions.</li>\n</ul>\n<p>Compensation for this position includes a base salary, equity, and a variety of benefits. 
Actual base salaries will be based on candidate-specific factors, including experience, skillset, and location, and local minimum pay requirements as applicable.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_39c59172-1f6","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Cresta","sameAs":"https://www.cresta.ai/","logo":"https://logos.yubhub.co/cresta.ai.png"},"x-apply-url":"https://job-boards.greenhouse.io/cresta/jobs/5137369008","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Python","Golang","Large Language Models (LLMs)","AI/ML concepts","AI agent frameworks","function calling","retrieval-augmented generation (RAG)","cloud platforms","DevOps practices","APIs","webhooks","data pipelines"],"x-skills-preferred":[],"datePosted":"2026-04-17T12:25:47.242Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Berlin, Germany (Hybird)"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, Golang, Large Language Models (LLMs), AI/ML concepts, AI agent frameworks, function calling, retrieval-augmented generation (RAG), cloud platforms, DevOps practices, APIs, webhooks, data pipelines"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_5a8a5ce4-a54"},"title":"Senior Forward Deployed Engineer (AI Agent)","description":"<p>Join us on this thrilling journey to revolutionise the workforce with AI.</p>\n<p>At Cresta, the AI Agent team is on a mission to create state-of-the-art AI Agents that solve practical problems for our customers. We are focused on leveraging the latest technologies in Large Language Models (LLMs) and AI Agent systems, while ensuring that the solutions we develop are cost-effective, secure, and reliable.</p>\n<p>As an AI Agent Engineer, you&#39;ll be at the forefront of deploying AI agents that address real-world challenges. In this role, you will work closely with customers as well as our software and machine learning engineers, ensuring high-impact AI Agent deployments and contributing to the continuous improvement of our core AI platform. You’ll develop intelligent AI agents, integrate them seamlessly with external systems and offer hands-on technical expertise to ensure successful deployments.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Develop, configure, deploy, and optimise AI agents using Cresta’s AI platform and tools.</li>\n<li>Build AI agent integrations with external systems (APIs, databases, CRMs, etc.) to ensure seamless workflow integration.</li>\n<li>Optimise AI agent performance (e.g. fine-tune prompts and configurations) and troubleshoot issues in complex enterprise environments.</li>\n<li>Collaborate with customers and internal stakeholders to gather technical requirements and translate business needs into AI Agent solutions.</li>\n<li>Conduct interactive demos and present compelling proof-of-concepts to prospective customers, proactively gather feedback, and iteratively refine solutions to meet objectives.</li>\n<li>Define project milestones, create implementation plans, and coordinate execution with internal teams to ensure on-time delivery. 
Provide a tight feedback loop to our product and engineering teams, identifying gaps, building custom tooling, and influencing the roadmap through real-world deployment learnings.</li>\n<li>Collaborate with PMs to define agent goals, iterate rapidly based on customer feedback, and shape product capabilities that maximise customer ROI.</li>\n<li>Serve as a trusted technical advisor for the customer, guiding best practices for AI agent adoption and usage. Provide technical guidance on AI agent best practices, including architecture design, security considerations, and scalability planning.</li>\n</ul>\n<p>What We&#39;re Looking For:</p>\n<ul>\n<li>Education: Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.</li>\n<li>Experience: 3+ years of full-time working experience in software development/consulting, AI/ML engineering, or Forward Deployed Engineering.</li>\n<li>Programming Skills: Proficiency in Python and Golang, with the ability to write clean, efficient code.</li>\n<li>AI/ML Knowledge: Familiarity with AI/ML concepts. Hands-on experience with large language models (LLMs), and prompt engineering techniques are strongly preferred.</li>\n<li>AI Agent Frameworks: Strong understanding of general AI agent frameworks, function calling, and retrieval-augmented generation (RAG). Hands-on experience of building such a system is strongly preferred.</li>\n<li>Cloud &amp; DevOps: Experience with cloud platforms (AWS, GCP, or Azure) and DevOps practices (CI/CD, containerisation, monitoring).</li>\n<li>Integration Expertise: Hands-on experience with integrating systems via APIs, webhooks, and data pipelines.</li>\n<li>Communication: Excellent communication and project management skills.</li>\n<li>Analytical Approach: Ability to use data-driven decision-making, including A/B testing and performance monitoring, to refine solutions.</li>\n<li>Collaborative Builder: You thrive in cross-functional environments, working hand-in-hand with PMs and engineers to turn real customer problems into scalable AI solutions.</li>\n</ul>\n<p>Perks &amp; Benefits:</p>\n<ul>\n<li>We offer Cresta employees a variety of medical, dental, and vision plans, designed to fit you and your family’s needs.</li>\n<li>Paid parental leave to support you and your family.</li>\n<li>Monthly Health &amp; Wellness allowance.</li>\n<li>Work from home office stipend to help you succeed in a remote environment.</li>\n<li>Lunch reimbursement for in-office employees.</li>\n<li>PTO: 3 weeks in Canada.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_5a8a5ce4-a54","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Cresta","sameAs":"https://www.cresta.ai/","logo":"https://logos.yubhub.co/cresta.ai.png"},"x-apply-url":"https://job-boards.greenhouse.io/cresta/jobs/4595480008","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Python","Golang","Large Language Models (LLMs)","AI Agent Systems","Cloud Platforms (AWS, GCP, or Azure)","DevOps Practices (CI/CD, containerisation, monitoring)","APIs","Databases","CRMs","Data Pipelines"],"x-skills-preferred":["Prompt Engineering Techniques","Retrieval-Augmented Generation (RAG)","Function Calling","General AI Agent 
Frameworks"],"datePosted":"2026-04-17T12:25:28.216Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Canada (Remote)"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, Golang, Large Language Models (LLMs), AI Agent Systems, Cloud Platforms (AWS, GCP, or Azure), DevOps Practices (CI/CD, containerisation, monitoring), APIs, Databases, CRMs, Data Pipelines, Prompt Engineering Techniques, Retrieval-Augmented Generation (RAG), Function Calling, General AI Agent Frameworks"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_62f166c0-970"},"title":"Senior Forward Deployed Engineer (AI Agent)","description":"<p>Join us on this thrilling journey to revolutionize the workforce with AI.</p>\n<p>At Cresta, the AI Agent team is on a mission to create state-of-the-art AI Agents that solve practical problems for our customers. We are focused on leveraging the latest technologies in Large Language Models (LLMs) and AI Agent systems, while ensuring that the solutions we develop are cost-effective, secure, and reliable.</p>\n<p>As an AI Agent Engineer, you&#39;ll be at the forefront of deploying AI agents that address real-world challenges. In this role, you will work closely with customers as well as our software and machine learning engineers, ensuring high-impact AI Agent deployments and contributing to the continuous improvement of our core AI platform. You’ll develop intelligent AI agents, integrate them seamlessly with external systems and offer hands-on technical expertise to ensure successful deployments.</p>\n<p>This position requires strong engineering skills, adaptability, and customer engagement. If you are self-driven, analytical, and eager to leverage AI in practical applications, this role is for you.</p>\n<p>Our team is looking for someone with a strong background in software development, AI/ML engineering, or forward deployed engineering. You should have experience with cloud platforms (AWS, GCP, or Azure) and DevOps practices (CI/CD, containerization, monitoring). Additionally, you should have hands-on experience with integrating systems via APIs, webhooks, and data pipelines.</p>\n<p>In this role, you will:</p>\n<ul>\n<li>Develop, configure, deploy, and optimize AI agents using Cresta’s AI platform and tools.</li>\n<li>Build AI agent integrations with external systems (APIs, databases, CRMs, etc.) to ensure seamless workflow integration.</li>\n<li>Optimize AI agent performance (e.g. fine-tune prompts and configurations) and troubleshoot issues in complex enterprise environments.</li>\n<li>Collaborate with customers and internal stakeholders to gather technical requirements and translate business needs into AI Agent solutions.</li>\n<li>Conduct interactive demos and present compelling proof-of-concepts to prospective customers, proactively gather feedback, and iteratively refine solutions to meet objectives.</li>\n<li>Define project milestones, create implementation plans, and coordinate execution with internal teams to ensure on-time delivery. 
Provide a tight feedback loop to our product and engineering teams, identifying gaps, building custom tooling, and influencing the roadmap through real-world deployment learnings.</li>\n<li>Collaborate with PMs to define agent goals, iterate rapidly based on customer feedback, and shape product capabilities that maximize customer ROI.</li>\n<li>Serve as a trusted technical advisor for the customer, guiding best practices for AI agent adoption and usage. Provide technical guidance on AI agent best practices, including architecture design, security considerations, and scalability planning.</li>\n</ul>\n<p>We offer a comprehensive and people-first benefits package to support you at work and in life:</p>\n<ul>\n<li>We offer Cresta employees a variety of medical benefits designed to fit your stage of life</li>\n<li>Flexible vacation time to promote a healthy work-life blend</li>\n<li>Paid parental leave to support you and your family</li>\n</ul>\n<p>Cresta’s approach to compensation is simple: recognize impact, reward excellence, and invest in our people. We offer competitive, location-based pay that reflects the market and what each individual brings to the table.</p>\n<p>The posted base salary range represents what we expect to pay for this role in a given location. Final offers are shaped by factors like experience, skills, education, and geography. In addition to base pay, total compensation includes equity and a comprehensive benefits package for you and your family.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_62f166c0-970","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Cresta","sameAs":"https://www.cresta.ai/","logo":"https://logos.yubhub.co/cresta.ai.png"},"x-apply-url":"https://job-boards.greenhouse.io/cresta/jobs/5107283008","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Python","Golang","AI/ML","Large Language Models","AI Agent systems","Cloud platforms","DevOps practices","APIs","webhooks","data pipelines"],"x-skills-preferred":[],"datePosted":"2026-04-17T12:25:15.087Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Australia (Remote)"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, Golang, AI/ML, Large Language Models, AI Agent systems, Cloud platforms, DevOps practices, APIs, webhooks, data pipelines"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_6dcdf4da-523"},"title":"Financial Data Analyst","description":"<p>We&#39;re looking for a Financial Data Analyst to help us unlock the full potential of our financial data by improving reporting, analytics, and automation.</p>\n<p>As a Financial Data Analyst, you will be responsible for pulling, analysing, and structuring financial data from various sources to generate actionable insights. Over time, this role will evolve from report generation to building automation and integrations between Belong&#39;s management system and accounting system.</p>\n<p>This role is ideal for someone who loves working with data, has a strong analytical mindset, and enjoys solving problems through data engineering and automation. 
You don&#39;t just pull reports; you understand the story behind the numbers and can translate raw data into meaningful business insights.</p>\n<p>In this role, you will:</p>\n<ul>\n<li>Extract and consolidate financial data from sources like BigQuery, RDS, Excel, Google Sheets, and other internal systems.</li>\n<li>Build actionable reports and dashboards in Looker, Metabase, Google Sheets, and Excel.</li>\n<li>Develop and maintain SQL queries to efficiently retrieve financial data.</li>\n<li>Analyse financial metrics, including revenue categorisation, cohort analysis, and gross profit calculations.</li>\n<li>Identify trends, anomalies, and insights to support strategic decision-making.</li>\n<li>Automate data retrieval processes and reporting workflows over time.</li>\n<li>Build and improve integrations between Belong&#39;s management and accounting systems.</li>\n<li>Partner with Finance &amp; Accounting to enhance financial reporting and reconciliation processes.</li>\n<li>Provide ad hoc financial analysis and data support for forecasting and planning.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_6dcdf4da-523","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Belong","sameAs":"https://www.belong.com/","logo":"https://logos.yubhub.co/belong.com.png"},"x-apply-url":"https://jobs.lever.co/belong/f00d7d9d-02fb-46d1-a523-9012c2a7a569","x-work-arrangement":"remote","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["SQL","Excel/Google Sheets","Python","Looker/Metabase","BigQuery/RDS"],"x-skills-preferred":["Experience automating financial workflows and data pipelines","Knowledge of accounting systems and ERP platforms","Familiarity with AI-driven data automation and analytics"],"datePosted":"2026-04-17T12:23:30.145Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Buenos Aires"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Finance","industry":"Finance","skills":"SQL, Excel/Google Sheets, Python, Looker/Metabase, BigQuery/RDS, Experience automating financial workflows and data pipelines, Knowledge of accounting systems and ERP platforms, Familiarity with AI-driven data automation and analytics"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_0b8499f8-a97"},"title":"Member of Compliance, Financial Crimes Compliance Data Analytics","description":"<p>At Anchorage Digital, we are seeking a highly motivated and intellectually curious Member of Compliance, Financial Crimes Compliance Data Analytics with a strong data analysis background.</p>\n<p>As a vital member of the Compliance team, you will have the opportunity to support the design, implementation, and optimization of compliance programs across all applicable Anchorage Digital legal entities.</p>\n<p>You will work closely with various compliance functions, particularly Financial Crimes Compliance, to drive efficiency and effectiveness within the program.</p>\n<p>Your expertise will be critical in transforming raw data into actionable insights, driving process improvements, and leveraging technology to enhance our overall compliance posture.</p>\n<p>This role is ideal for a proactive and &#39;young and hungry&#39; technologist who thrives on solving complex problems in a dynamic regulatory 
environment.</p>\n<p>You&#39;ll gain deep exposure to diverse compliance domains and have the chance to apply your data expertise to strengthen Anchorage Digital&#39;s global compliance function through analytics, automation, and the strategic use of AI tools.</p>\n<p>It is important that you are well-organized, have a strong analytical background, can effectively manage competing priorities, and can adapt to rapid change in a fast-paced environment.</p>\n<p>If you thrive under uncertainty and are motivated to excel in a dynamic environment with competing priorities, this role is designed for you.</p>\n<p>Anchorage Digital values individuals who are proactive, detail-oriented, and innovative.</p>\n<p><strong>Technical Skills:</strong></p>\n<ul>\n<li>Expert in Compliance-related data tables and models, understanding the nuances of the data, the underlying codes, and the limitations of the models.</li>\n</ul>\n<ul>\n<li>Work with stakeholders to drive the automation of key compliance processes and workflows using internal tools, such as Know-Your-Customer, sanctions screening, suspicious activity identification and reporting to improve efficiency and reduce manual effort.</li>\n</ul>\n<ul>\n<li>Experience experimenting with and deploying AI solutions.</li>\n</ul>\n<p><strong>Complexity and Impact of Work:</strong></p>\n<ul>\n<li>Support the development and enhancement of BSA/AML models, including transaction monitoring, sanctions screening, customer risk rating, blockchain analytics, and other relevant BSA/AML tools.</li>\n</ul>\n<ul>\n<li>Contribute to process improvements and best practices for the FCC Analytics team, including code review, testing frameworks, project management, and documentation.</li>\n</ul>\n<ul>\n<li>Conduct ad hoc analyses and respond to time-sensitive data requests from auditors, regulators, or other Compliance teams with accuracy and speed.</li>\n</ul>\n<p><strong>Organizational Knowledge:</strong></p>\n<ul>\n<li>Develop deep understanding of Anchorage Digital&#39;s business model across custody, staking, stablecoins, and trading, and how data flows through these domains and the compliance models.</li>\n</ul>\n<ul>\n<li>Coordinate with cross-functional teams (Product, Engineering, etc.) 
to implement and improve tools and processes within the broader Compliance department.</li>\n</ul>\n<ul>\n<li>Understand how the Compliance and FCC team fits within the broader organizational structure, align work with team and company priorities.</li>\n</ul>\n<p><strong>Communication and Influence:</strong></p>\n<ul>\n<li>Manage competing priorities across strategic projects and urgent ad-hoc requests; ask clarifying questions to scope ambiguous requests and push back constructively when requirements are unclear, proposing alternative approaches.</li>\n</ul>\n<ul>\n<li>Present findings and data insights with appropriate context, visual aids, and tailored communication style</li>\n</ul>\n<p><strong>You may be a fit for this role if you have:</strong></p>\n<ul>\n<li>Experience: 2–3 years of experience in a data analytics or data science role, with a strong understanding of blockchain, cryptocurrency, and the financial services industry.</li>\n</ul>\n<ul>\n<li>Technical Proficiency: Demonstrated ability to operate autonomously to manage multiple competing priorities.</li>\n</ul>\n<ul>\n<li>Automation &amp; AI Aptitude: Experience with or a strong interest in automation principles, leveraging existing AI tools, and exploring new AI tools (such as AI Agents) to enhance productivity.</li>\n</ul>\n<ul>\n<li>Technologist Mindset: While not necessarily an engineer, a strong understanding of how systems, data flows, and technologies interact is essential.</li>\n</ul>\n<ul>\n<li>Eagerness to learn and apply new technologies.</li>\n</ul>\n<p><strong>Although not a requirement, bonus points if:</strong></p>\n<ul>\n<li>Prior experience in the financial service industry, crypto industry, start-up, or a fast-paced, evolving environment where you&#39;ve worn multiple hats and adapted quickly;</li>\n</ul>\n<ul>\n<li>General understanding of regulatory compliance, financial crimes, and risk management.</li>\n</ul>\n<ul>\n<li>Experience with data transformation tools like dbt or similar; familiarity with version control (Git) and software engineering best practices for analytics</li>\n</ul>\n<ul>\n<li>Python or other scripting languages for data analysis, automation, or data pipeline work</li>\n</ul>\n<ul>\n<li>Excellent communication skills with experience presenting technical findings to both technical and non-technical stakeholders</li>\n</ul>\n<ul>\n<li>Proven ability to manage competing priorities and deliver accurate results under tight deadlines, especially for compliance or audit-related requests</li>\n</ul>\n<ul>\n<li>Detail-oriented and quality-focused with a commitment to data accuracy, testing, and documentation</li>\n</ul>\n<ul>\n<li>You were emotionally moved by the soundtrack to Hamilton, which chronicles the founding of a new financial system.</li>\n</ul>\n<p><strong>Additional Information About Anchorage Digital:</strong></p>\n<p>Who we are</p>\n<p>The Anchorage Village, what we call our team, brings together the brightest minds from platform security, financial services, and distributed ledger technology.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_0b8499f8-a97","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anchorage 
Digital","sameAs":"https://anchorage.com","logo":"https://logos.yubhub.co/anchorage.com.png"},"x-apply-url":"https://jobs.lever.co/anchorage/7a3e7cce-a01b-419a-a3f1-753466ae8bf3","x-work-arrangement":"remote","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Compliance-related data tables and models","Know-Your-Customer","sanctions screening","suspicious activity identification and reporting","BSA/AML models","transaction monitoring","customer risk rating","blockchain analytics","data transformation tools","version control","software engineering best practices","Python","scripting languages","data analysis","automation","data pipeline work"],"x-skills-preferred":[],"datePosted":"2026-04-17T12:18:32.047Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"United States"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Finance","industry":"Finance","skills":"Compliance-related data tables and models, Know-Your-Customer, sanctions screening, suspicious activity identification and reporting, BSA/AML models, transaction monitoring, customer risk rating, blockchain analytics, data transformation tools, version control, software engineering best practices, Python, scripting languages, data analysis, automation, data pipeline work"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_8d242bab-985"},"title":"Technical Program Manager, Agentic Development Platform (Modeling & Evals)","description":"<p>We are looking for a Technical Program Manager to lead critical initiatives across Modeling, Data, Evaluations, and User Signals for the Antigravity team. You will play a key role in enhancing our models and product by managing the end-to-end lifecycle of data contributions, model development, evaluation processes, and feedback loops.</p>\n<p>This role involves close collaboration with research teams, managing custom model pipelines, analyzing user signals from multiple sources, and overseeing vendor-based testing.</p>\n<p><strong>Key Responsibilities</strong></p>\n<ul>\n<li>Drive the roadmap on data, evaluations, and modeling improvements to core models, features, and new use cases in collaboration with the Antigravity research teams.</li>\n<li>Manage the evaluation process for new and existing models, and provide feedback to the modeling and research teams.</li>\n<li>Partner with modeling teams to ensure seamless handoffs and coordination of data and evaluation analysis.</li>\n<li>Manage approval processes working closely with the research and engineering teams as well as cross-functional stakeholders to successfully develop and launch models.</li>\n<li>Establish and refine systems for collecting, triaging, and analyzing both internal and external user feedback to ensure resolution of high-priority issues.</li>\n<li>Coordinate with vendors for product testing and report on key findings to the engineering and product teams.</li>\n<li>Manage compute resources for modeling efforts and support team infrastructure needs.</li>\n<li>Act as a point of contact for resolving technical issues for the team.</li>\n</ul>\n<p><strong>About You</strong></p>\n<p>To be successful as a Technical Program Manager at DeepMind, we look for the following skills and experience:</p>\n<ul>\n<li>Bachelor&#39;s degree in Computer Science, a related technical field, or equivalent practical experience.</li>\n<li>5 years of experience in a 
technical program management role in a research environment.</li>\n<li>Experience working with machine learning models, data pipelines, and evaluation processes.</li>\n<li>Strong analytical skills and experience with data analysis.</li>\n</ul>\n<p>In addition, the following would be an advantage:</p>\n<ul>\n<li>Master’s degree or PhD in Computer Science or a related technical field.</li>\n<li>8+ years of relevant work experience in a technical environment.</li>\n<li>Experience working on end-to-end model flywheel processes, including data collection strategies, model evaluation techniques, and metrics.</li>\n<li>Experience working with modeling research teams, including managing model training and deployment processes.</li>\n<li>Proven ability to lead complex projects with cross-team stakeholders, influencing and leading without managerial authority.</li>\n<li>Excellent interpersonal and communication skills, with a demonstrated ability to work effectively in ambiguous, fast-paced R&amp;D environments.</li>\n</ul>\n<p>The US base salary range for this full-time position is between $156,000 - $229,000 + bonus + equity + benefits.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_8d242bab-985","directApply":true,"hiringOrganization":{"@type":"Organization","name":"DeepMind","sameAs":"https://deepmind.com/","logo":"https://logos.yubhub.co/deepmind.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/deepmind/jobs/7477606","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$156,000 - $229,000 + bonus + equity + benefits","x-skills-required":["Bachelor's degree in Computer Science, a related technical field, or equivalent practical experience","5 years of experience in a technical program management role in a research environment","Experience working with machine learning models, data pipelines, and evaluation processes","Strong analytical skills and experience with data analysis","Master’s degree or PhD in Computer Science or a related technical field"],"x-skills-preferred":[],"datePosted":"2026-03-31T18:27:13.970Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Mountain View, California, US"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Bachelor's degree in Computer Science, a related technical field, or equivalent practical experience, 5 years of experience in a technical program management role in a research environment, Experience working with machine learning models, data pipelines, and evaluation processes, Strong analytical skills and experience with data analysis, Master’s degree or PhD in Computer Science or a related technical field","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":156000,"maxValue":229000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_f19254b6-7fd"},"title":"SWE - Grids - Fixed Term Contract - 6 Months - London, UK","description":"<p>We are seeking an experienced Software Engineer for a fixed-term contract to join the Energy Grids team at Google DeepMind. 
You will work at the cutting edge of power systems and machine learning, developing and deploying innovative AI solutions to optimize the operation of electrical power grids.</p>\n<p>Your key responsibilities will include:</p>\n<p>Designing, implementing, and maintaining robust and reliable systems and workflows for generating large-scale synthetic and real datasets of power grid optimization problems.</p>\n<p>Designing and implementing rigorous unit, integration, and system tests to ensure the reliability, accuracy, and maintained performance of our models and software, with a focus on data pipelines.</p>\n<p>Maintaining and contributing to our machine learning codebase, ensuring efficient data structures and seamless integration with our power system models and optimization solvers.</p>\n<p>Ensuring the codebase supports ongoing experimentation, while simultaneously increasing scalability, robustness, and reliability via improved integration testing and performance benchmarking.</p>\n<p>Working closely and collaboratively with a team of engineers, research scientists, and product managers to deliver real-world impact.</p>\n<p>To be successful in this role, you will need:</p>\n<p>A Bachelor&#39;s degree in Computer Science, Software Engineering, or equivalent practical experience.</p>\n<p>Excellent proficiency in C++, Python, or Jax.</p>\n<p>Demonstrated experience developing or utilizing solutions for robustness or quality assurance within software and/or ML systems.</p>\n<p>Experience processing, generating, and analyzing large-scale data, e.g. for ML applications.</p>\n<p>Proven ability to discuss technical ideas effectively and collaborate in interdisciplinary teams.</p>\n<p>Motivated by the prospect of real-world impact and focused on excellence in software development.</p>\n<p>Preferred qualifications include experience with Google&#39;s technical stack and/or Google Cloud Platform (GCP), familiarity with modern hardware accelerators (GPU / TPU), experience with modern ML training frameworks, such as Jax, and experience in developing software in a translational research or production setting.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_f19254b6-7fd","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Google DeepMind","sameAs":"https://deepmind.com/","logo":"https://logos.yubhub.co/deepmind.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/deepmind/jobs/7750738","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"contract","x-salary-range":null,"x-skills-required":["C++","Python","Jax","Machine Learning","Software Development","Data Analysis","Data Pipelines"],"x-skills-preferred":["Google Cloud Platform (GCP)","Modern Hardware Accelerators (GPU / TPU)","Modern ML Training Frameworks (Jax)","Translational Research or Production Setting"],"datePosted":"2026-03-31T18:25:47.178Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"London, UK"}},"employmentType":"CONTRACTOR","occupationalCategory":"Engineering","industry":"Technology","skills":"C++, Python, Jax, Machine Learning, Software Development, Data Analysis, Data Pipelines, Google Cloud Platform (GCP), Modern Hardware Accelerators (GPU / TPU), Modern ML Training Frameworks (Jax), Translational Research or Production 
Setting"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_e311ebae-d0f"},"title":"Product Manager - User Understanding","description":"<p>Unlock the potential of human creativity by giving a million creative artists the opportunity to live off their art and billions of fans the chance to enjoy and be passionate about these creators.</p>\n<p>The Subscriptions Mission builds and evolves Spotify&#39;s subscription products and marketplace experiences to drive sustainable user and revenue growth globally. We focus on awareness, acquisition, activation, retention, and monetization strategies that help users unlock the full value of Spotify while enabling the business to scale efficiently and responsibly.</p>\n<p>User Understanding sits within Subscriptions and focuses on building the intelligence layer that makes our growth efforts smarter,through decisioning, data signals, and ML models that power personalization across surfaces and lifecycle moments. You&#39;ll work at the intersection of machine learning innovation and commercial strategy, partnering closely with ML engineers, data scientists, data platform teams, product insights, and business stakeholders to shape high-impact growth initiatives.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Define the strategy and roadmap for key subscriber growth initiatives, focusing on applying AI/ML in the right places balancing business goals, ROI and technical feasibility</li>\n<li>Deeply understand business metrics and user behaviour, leading to well-researched hypotheses for testing and future AI/ML development</li>\n<li>Lead end-to-end product development from problem definition through system deployment and adoption, working across research, engineering, and business teams</li>\n<li>Drive experimentation programs that rigorously test ML-powered features against baselines, making data-informed decisions about when to scale AI-driven experiences</li>\n<li>Collaborate with engineers and data scientists on technical decisions to drive innovation and impact</li>\n<li>Build products at scale that impact hundreds of millions of users across diverse markets, considering localization, infrastructure constraints, and varying user contexts</li>\n<li>Stay current with AI and ML technology trends, looking for opportunities to match with key business objectives</li>\n</ul>\n<p><strong>Requirements</strong></p>\n<ul>\n<li>5+ years product management experience with proven track record launching ML/data-driven products at scale</li>\n<li>Strong technical fluency in data science, recommender systems, machine learning concepts, model evaluation, A/B testing, and data pipelines. You can review and challenge technical decisions.</li>\n<li>Strategic judgment about when to apply ML vs. GenAI vs. 
other tools and you understand the cost-benefit tradeoffs of complex systems</li>\n<li>Understand subscription business and e-commerce fundamentals in consumer products</li>\n<li>Excellent communication skills, able to collaborate cross-functionally and communicate complex ideas to stakeholders at all levels</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_e311ebae-d0f","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Spotify","sameAs":"https://www.spotify.com","logo":"https://logos.yubhub.co/spotify.com.png"},"x-apply-url":"https://jobs.lever.co/spotify/d30af7c7-67aa-404a-a1a5-b8b3838e08c3","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["data science","recommender systems","machine learning","model evaluation","A/B testing","data pipelines"],"x-skills-preferred":[],"datePosted":"2026-03-31T18:24:00.717Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"London"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"data science, recommender systems, machine learning, model evaluation, A/B testing, data pipelines"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_68526074-073"},"title":"Software Engineer, Fee Insights","description":"<p>As a Software Engineer on Fee Insights, you&#39;ll work on crafting billing-related experiences for both internal teams and external merchants. You&#39;ll partner closely with product teams to build reporting, dashboards, insights, and guided user journeys that help users interact with pricing and understand fees.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Building and iterating on fee explainability experiences that empower users to find answers independently</li>\n<li>Developing agentic systems with strong accuracy guarantees, building evaluation frameworks and ensuring reliability at scale</li>\n<li>Creating performant data aggregations and optimizations to handle complex pricing scenarios</li>\n<li>Working across the stack: backend services, frontend experiences, data pipelines, and AI agents</li>\n<li>Collaborating with stakeholders across engineering, product, operations, finance, accounting, and sales</li>\n<li>Independently identifying solutions to user pain points and executing against roadmaps</li>\n<li>Mentoring junior engineers and raising the technical bar for the team</li>\n</ul>\n<p>Requirements:</p>\n<ul>\n<li>5+ years of professional experience in a software development role</li>\n<li>Strong coding skills in any programming language</li>\n<li>Ability to drive and lead large-scale initiatives that solve critical business challenges</li>\n<li>Strong collaboration skills across teams and workstreams</li>\n<li>Track record of developing and delivering high-quality software using industry best practices</li>\n<li>Ability to make effective tradeoffs that balance business priorities, user experience, and technical sustainability</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_68526074-073","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Stripe","sameAs":"https://stripe.com/","logo":"https://logos.yubhub.co/stripe.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/stripe/jobs/7436194","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["software development","programming languages","agentic systems","data aggregations","optimizations","backend services","frontend experiences","data pipelines","AI agents"],"x-skills-preferred":[],"datePosted":"2026-03-31T18:20:57.451Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Bengaluru"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"software development, programming languages, agentic systems, data aggregations, optimizations, backend services, frontend experiences, data pipelines, AI agents"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_d9dee25d-ca8"},"title":"Regulatory Reporting Program Manager, Stablecoin","description":"<p>As a Regulatory Reporting Program Manager, you will support the Global Regulatory Reporting team by partnering with Legal, Compliance, Accounting, Business/Product, and Data Analytics teams across Stripe to maintain Stripe&#39;s NORAM regulatory reporting program.</p>\n<p>This may include understanding and documenting the applicable regulatory reporting requirements in the region, implementing systems and processes for comprehensively tracking those requirements for each of Stripe&#39;s North American entities, maintaining the end-to-end processes for the collation of data, production of reports, and continuously monitoring compliance to meet the expectations of Stripe&#39;s regulators.</p>\n<p>You will need to be comfortable straddling both the technology and financial services worlds every day, enjoying the puzzle of dealing with that and seeking creative solutions and moving quickly, often in the face of ambiguity.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Own end-to-end U.S. regulatory reporting program for digital assets including stablecoin related financial activities, including defining reporting scope, governance, timelines, and accountability across required regulatory filings.</li>\n<li>Interpret U.S. 
regulatory reporting requirements applicable to stablecoins, digital assets, payments, and custody, and translate them into clear reporting specifications, data definitions, and execution plans in partnership with Legal and Compliance.</li>\n<li>Manage the full regulatory reporting lifecycle, from data sourcing and aggregation through validation, internal review, sign-off, and timely submission to regulators.</li>\n<li>Ensure regulatory reports accurately reflect stablecoin-specific activities and risks, including issuance, redemption, circulation, reserves, custody arrangements, and transaction flows across on-chain and off-chain systems.</li>\n<li>Design and maintain a robust regulatory reporting control framework, including data quality checks, reconciliations, documentation, and issue remediation to support audit and exam readiness.</li>\n<li>Partner with Engineering, Data, Finance, Compliance and Legal to improve data lineage, transparency, and automation across regulatory reporting processes as the business scales.</li>\n<li>Own regulatory reporting change management, including assessing the impact of new or evolving stablecoin regulations, product launches, and system changes on reporting scope, data requirements, and controls.</li>\n<li>Develop and maintain regulator-ready documentation, including reporting methodologies, assumptions, data lineage, and process documentation to support supervisory reviews and examinations.</li>\n<li>Serve as the primary point of contact for regulatory reporting matters during U.S. regulatory exams, audits, and regulatory inquiries.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_d9dee25d-ca8","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Stripe","sameAs":"https://stripe.com/","logo":"https://logos.yubhub.co/stripe.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/stripe/jobs/7650177","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["U.S. compliance and regulatory obligations","Stablecoin issuance","Payments","Custody-related activities","Regulatory reporting requirements","Data sourcing and aggregation","Validation","Internal review","Sign-off","Timely submission to regulators","Data quality checks","Reconciliations","Documentation","Issue remediation","Data lineage","Transparency","Automation","Regulatory reporting change management","Regulator-ready documentation"],"x-skills-preferred":["Stablecoins","Digital assets","Fintech platforms","Regulatory reporting for banks","Trust companies","Payment institutions","Money services businesses","Reporting automation","Data pipelines","Reporting tools"],"datePosted":"2026-03-31T18:17:10.717Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"SEA, SF"}},"employmentType":"FULL_TIME","occupationalCategory":"Finance","industry":"Finance","skills":"U.S. 
compliance and regulatory obligations, Stablecoin issuance, Payments, Custody-related activities, Regulatory reporting requirements, Data sourcing and aggregation, Validation, Internal review, Sign-off, Timely submission to regulators, Data quality checks, Reconciliations, Documentation, Issue remediation, Data lineage, Transparency, Automation, Regulatory reporting change management, Regulator-ready documentation, Stablecoins, Digital assets, Fintech platforms, Regulatory reporting for banks, Trust companies, Payment institutions, Money services businesses, Reporting automation, Data pipelines, Reporting tools"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_8e45304c-f00"},"title":"Engineering Manager, Connect","description":"<p>We&#39;re looking for an experienced Engineering Manager to lead our Connect team. As an Engineering Manager at Stripe, you will play a pivotal role in driving the success of your team and the broader organisation. You will lead a distributed full-stack engineering team working across APIs, frontend surfaces, and data pipelines that directly impact platform satisfaction and retention.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Leading engineering and design processes, coaching, mentoring, and supporting your team as they build innovative solutions.</li>\n<li>Collaborating with stakeholders across product, design, infrastructure, marketing, and operations to ensure alignment on projects and objectives and shaping the team’s strategy and roadmap.</li>\n<li>Owning problems from end-to-end by managing diverse systems, processes, and technologies, while continuously seeking opportunities to optimise efficiency and user experience.</li>\n<li>Upholding high engineering standards, ensuring consistency across codebases, and implementing best practices throughout your team.</li>\n<li>Recruiting top engineering talent in partnership with Stripe’s recruiting team to build a diverse and high-performing team.</li>\n<li>Supporting your team members&#39; career growth by providing insights and development opportunities tailored to their goals.</li>\n</ul>\n<p>Minimum Requirements:</p>\n<ul>\n<li>5+ years of engineering management experience, directly managing and growing teams of 5+ engineers focused on building and shipping products at scale.</li>\n<li>8+ years of full-time software development experience.</li>\n<li>Experience building extensible, leveraged software solutions that scale to different kinds of user needs, with the ability to empathise with users and advocate for exceptional user experiences.</li>\n<li>Strong problem-solving skills, with a history of creatively tackling complex challenges.</li>\n<li>Background in dealing with ambiguity and executing on multiple high-impact work streams simultaneously.</li>\n<li>Commitment to fostering a supportive, feedback-driven, and challenging work environment.</li>\n<li>Technical proficiency to engage thoughtfully with engineers about architecture and product design, offering valuable insights when necessary.</li>\n<li>A mindset of autonomy and entrepreneurship, where you&#39;re excited to take responsibility and drive initiatives forward.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_8e45304c-f00","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Stripe","sameAs":"https://stripe.com/","logo":"https://logos.yubhub.co/stripe.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/stripe/jobs/7762324","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["engineering management","full-stack engineering","APIs","frontend surfaces","data pipelines","problem-solving","team leadership","recruitment","career development"],"x-skills-preferred":[],"datePosted":"2026-03-31T18:10:17.515Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Seattle"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"engineering management, full-stack engineering, APIs, frontend surfaces, data pipelines, problem-solving, team leadership, recruitment, career development"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_8b762fdc-fb5"},"title":"Data Analyst, Intern (Master's degree)","description":"<p>We are seeking a Data Analyst Intern to join our team in Toronto, Ontario, Canada. As a Data Analyst Intern, you will work on meaningful business initiatives that will grow the GDP of the internet. You will partner closely with Data Scientists, Data Analysts, and business partners to drive business impact through rigorous analytical solutions.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Apply machine learning, causal inference, or advanced analytics on large datasets to measure results and outcomes, identify causal impact and attribution, and predict the future performance of users or products to drive business success.</li>\n<li>Influence business actions and strategy by developing actionable insights through metrics and dashboards.</li>\n<li>Drive the collection of new data and the refinement of existing data sources.</li>\n<li>Learn quickly by asking great questions, finding how to work with your mentor and teammates effectively, and communicating the status of your work clearly.</li>\n<li>Present your work to the Data Science team, partner teams, and fellow interns.</li>\n</ul>\n<p>Requirements:</p>\n<ul>\n<li>Enrolled in a quantitative Master&#39;s degree program (e.g. Data Analytics, Statistics, Economics, Mathematics, etc.) 
with the expectation of graduating in winter 2026 or spring/summer 2027.</li>\n<li>Experience with a scientific computing language (such as Python, R, etc) and SQL.</li>\n<li>Experience communicating and collaborating with multidisciplinary stakeholders in a team environment.</li>\n</ul>\n<p>Preferred Qualifications:</p>\n<ul>\n<li>Experience writing and debugging data pipelines.</li>\n<li>Demonstrated ability to evaluate and receive feedback from mentors, peers, and stakeholders via experience from previous internships or other multi-person projects.</li>\n<li>Ability to learn new systems and form an understanding of those systems, through independent research and working with a mentor and subject matter experts.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_8b762fdc-fb5","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Stripe","sameAs":"https://stripe.com/","logo":"https://logos.yubhub.co/stripe.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/stripe/jobs/7285986","x-work-arrangement":"onsite","x-experience-level":"entry","x-job-type":"internship","x-salary-range":null,"x-skills-required":["Python","R","SQL","Machine Learning","Causal Inference","Advanced Analytics"],"x-skills-preferred":["Data Pipelines","Feedback Evaluation","System Learning"],"datePosted":"2026-03-31T18:09:43.225Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Toronto, Ontario, Canada"}},"employmentType":"INTERN","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, R, SQL, Machine Learning, Causal Inference, Advanced Analytics, Data Pipelines, Feedback Evaluation, System Learning"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_5f31c15e-6f9"},"title":"Data Analyst","description":"<p>Job Title: Data Analyst</p>\n<p>Role Overview:</p>\n<p>As a Data Analyst at Stripe, you will partner with teams across the company to ensure that our users, products, and business have the models, data products, and insights needed to make decisions and grow responsibly.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Work closely with partners to extract insights from Stripe&#39;s rich and complex data</li>\n<li>Translate business needs into data problems</li>\n<li>Build metrics, scalable data pipelines, dashboards, and reports to inform and run the business</li>\n<li>Deliver actionable business recommendations through analyses and data storytelling</li>\n</ul>\n<p>Requirements:</p>\n<ul>\n<li>MS/MA + 2 years or BS/BA + 3 years of full-time experience in Business Intelligence Engineering, Data Analyst, Business Analyst roles</li>\n<li>Proficiency in SQL</li>\n<li>Proven ability to manage and deliver on multiple projects with great attention to detail</li>\n<li>Ability to clearly communicate results and drive impact</li>\n<li>Experience collaborating with cross-functional teams to deliver strategic insights, benchmarks, and analyses that provide recommendations</li>\n</ul>\n<p>Benefits:</p>\n<ul>\n<li>Opportunity to work with a vibrant community of data analysts and data scientists</li>\n<li>Variety of Data Analytics roles and teams across Stripe</li>\n<li>Alignment with the most relevant team based on background</li>\n</ul>\n<p>Note: The preferred qualifications are a bonus, not a requirement.</p>\n<p 
style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_5f31c15e-6f9","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Stripe","sameAs":"https://stripe.com/","logo":"https://logos.yubhub.co/stripe.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/stripe/jobs/5416444","x-work-arrangement":"remote","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["SQL","data pipelines","dashboards","reports","data storytelling"],"x-skills-preferred":["distributed data frameworks like Spark","Python","statistical knowledge","development processes and best practices"],"datePosted":"2026-03-31T18:07:44.053Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Canada"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"SQL, data pipelines, dashboards, reports, data storytelling, distributed data frameworks like Spark, Python, statistical knowledge, development processes and best practices"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_187c9aa5-4ac"},"title":"Backend / API Engineer, Privacy Products","description":"<p>We&#39;re seeking a skilled Backend / API Engineer to join our Privacy Products team. As a key member of this team, you&#39;ll design, build, and extend external-facing privacy products, including the data access tool and privacy portal. You&#39;ll also collaborate with cross-functional teams to extend our privacy systems and ensure compliance with industry regulations.</p>\n<p>Responsibilities:\nDesign, build, and extend external-facing privacy products such as the data access tool and privacy portal\nBuild API products for customers to help them manage their privacy requirements\nBuild internal tools for other Stripe teams to help manage their privacy requirements\nBuild and extend our data access and deletion pipelines\nCollaborate with our users and on cross-functional teams to extend our privacy systems</p>\n<p>Requirements:\n6+ years of professional hands-on software development experience\nEmpathy, strong communication skills, and a deep respect for the power of collaboration\nAble to work well individually, cross-team, and cross-functionally\nThe ability to drive clear next steps when encountering ambiguous spaces without clear lines of ownership\nExcellent problem-solving skills and attention to detail\nHigh standards for code quality and a constructive attitude to help others raise the bar</p>\n<p>Preferred qualifications:\nExperience with privacy regulations (GDPR, CCPA, etc.) 
and implementing technical solutions to address them\nExperience with complex data pipelines over large datasets\nExperience designing and building user-facing privacy tools\nExperience with Ruby or Java in production environments</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_187c9aa5-4ac","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Stripe","sameAs":"https://stripe.com/","logo":"https://logos.yubhub.co/stripe.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/stripe/jobs/7579264","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["API","Backend","Software Development","Privacy Regulations","Data Pipelines","Ruby","Java"],"x-skills-preferred":["GDPR","CCPA","Complex Data Pipelines","User-Facing Privacy Tools"],"datePosted":"2026-03-31T18:05:28.416Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, Seattle"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"API, Backend, Software Development, Privacy Regulations, Data Pipelines, Ruby, Java, GDPR, CCPA, Complex Data Pipelines, User-Facing Privacy Tools"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_75433770-6b1"},"title":"Technical Program Manager, Agentic Development Platform (Modeling & Evals)","description":"<p>At DeepMind, we&#39;re seeking a Technical Program Manager to lead critical initiatives across Modeling, Data, Evaluations, and User Signals for the Antigravity team. 
As a Technical Program Manager, you will play a key role in enhancing our models and product by managing the end-to-end lifecycle of data contributions, model development, evaluation processes, and feedback loops.</p>\n<p>Key responsibilities include:</p>\n<ul>\n<li>Driving the roadmap on data, evaluations, and modeling improvements to core models, features, and new use cases in collaboration with the Antigravity research teams.</li>\n<li>Managing the evaluation process for new and existing models, and providing feedback to the modeling and research teams.</li>\n<li>Partnering with modeling teams to ensure seamless handoffs and coordination of data and evaluation analysis.</li>\n<li>Managing approval processes working closely with the research and engineering teams as well as cross-functional stakeholders to successfully develop and launch models.</li>\n<li>Establishing and refining systems for collecting, triaging, and analyzing both internal and external user feedback to ensure resolution of high-priority issues.</li>\n<li>Coordinating with vendors for product testing and reporting on key findings to the engineering and product teams.</li>\n<li>Managing compute resources for modeling efforts and supporting team infrastructure needs.</li>\n<li>Acting as a point of contact for resolving technical issues for the team.</li>\n</ul>\n<p>To succeed as a Technical Program Manager at DeepMind, we look for the following skills and experience:</p>\n<ul>\n<li>A Bachelor&#39;s degree in Computer Science, a related technical field, or equivalent practical experience.</li>\n<li>5 years of experience in a technical program management role in a research environment.</li>\n<li>Experience working with machine learning models, data pipelines, and evaluation processes.</li>\n<li>Strong analytical skills and experience with data analysis.</li>\n</ul>\n<p>In addition, the following would be an advantage:</p>\n<ul>\n<li>A Master&#39;s degree or PhD in Computer Science or a related technical field.</li>\n<li>8+ years of relevant work experience in a technical environment.</li>\n<li>Experience working on end-to-end model flywheel processes, including data collection strategies, model evaluation techniques, and metrics.</li>\n<li>Experience working with modeling research teams, including managing model training and deployment processes.</li>\n<li>Proven ability to lead complex projects with cross-team stakeholders, influencing and leading without managerial authority.</li>\n<li>Excellent interpersonal and communication skills, with a demonstrated ability to work effectively in ambiguous, fast-paced R&amp;D environments.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_75433770-6b1","directApply":true,"hiringOrganization":{"@type":"Organization","name":"DeepMind","sameAs":"https://www.deepmind.com/","logo":"https://logos.yubhub.co/deepmind.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/deepmind/jobs/7477606","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$156,000 - $229,000 + bonus + equity + benefits","x-skills-required":["Computer Science","Machine Learning","Data Pipelines","Evaluation Processes","Analytical Skills","Data Analysis"],"x-skills-preferred":[],"datePosted":"2026-03-16T14:42:56.303Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Mountain View, California, 
US"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Computer Science, Machine Learning, Data Pipelines, Evaluation Processes, Analytical Skills, Data Analysis","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":156000,"maxValue":229000,"unitText":"YEAR"}}}]}