{"version":"0.1","company":{"name":"YubHub","url":"https://yubhub.co","jobsUrl":"https://yubhub.co/jobs/skill/data-streaming"},"x-facet":{"type":"skill","slug":"data-streaming","display":"Data Streaming","count":9},"x-feed-size-limit":100,"x-feed-sort":"enriched_at desc","x-feed-notice":"This feed contains at most 100 jobs (the most recently enriched). For the full corpus, use the paginated /stats/by-facet endpoint or /search.","x-generator":"yubhub-xml-generator","x-rights":"Free to redistribute with attribution: \"Data by YubHub (https://yubhub.co)\"","x-schema":"Each entry in `jobs` follows https://schema.org/JobPosting. YubHub-native raw fields carry `x-` prefix.","jobs":[{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_2895081b-eab"},"title":"Sr. Specialist Solutions Architect","description":"<p>As a Sr. Specialist Solutions Architect, you will guide customers in building big data solutions on Databricks that span a large variety of use cases. 
You will be in a customer-facing role, working with and supporting Solution Architects, that requires hands-on production experience with Apache Spark and expertise in other data technologies.</p>\n<p>Your responsibilities will include providing technical leadership to guide strategic customers to successful implementations on big data projects, architecting production-level data pipelines, becoming a technical expert in an area such as data lake technology, big data streaming, or big data ingestion and workflows, assisting Solution Architects with more advanced aspects of the technical sale, and contributing to the Databricks Community.</p>\n<p>To succeed in this role, you will need to have a strong background in software engineering and data engineering, with expertise in at least one of the following areas: software engineering/data engineering, data applications engineering, or deep specialty expertise in areas such as scaling big data workloads, migrating Hadoop workloads to the public cloud, or experience with large-scale data ingestion pipelines and data migrations.</p>\n<p>You will also need to have a bachelor&#39;s degree in computer science, information systems, engineering, or equivalent experience through work experience, production programming experience in SQL and Python, Scala, or Java, and 2 years of professional experience with Big Data technologies and architectures.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_2895081b-eab","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8499576002","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Apache Spark","Big Data 
technologies","Data engineering","Data lake technology","Data streaming","Data ingestion and workflows","Python","Scala","Java","SQL"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:57:18.553Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Sao Paulo, Brazil"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Apache Spark, Big Data technologies, Data engineering, Data lake technology, Data streaming, Data ingestion and workflows, Python, Scala, Java, SQL"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_7f80914c-588"},"title":"Distributed Systems Engineer - Data Platform (Delivery, Database, Retrieval)","description":"<p>About Us</p>\n<p>At Cloudflare, we are on a mission to help build a better Internet. Today the company runs one of the world’s largest networks that powers millions of websites and other Internet properties for customers ranging from individual bloggers to SMBs to Fortune 500 companies.</p>\n<p>We protect and accelerate any Internet application online without adding hardware, installing software, or changing a line of code. Internet properties powered by Cloudflare all have web traffic routed through its intelligent global network, which gets smarter with every request. As a result, they see significant improvement in performance and a decrease in spam and other attacks.</p>\n<p>We were named to Entrepreneur Magazine’s Top Company Cultures list and ranked among the World’s Most Innovative Companies by Fast Company.</p>\n<p>About Role</p>\n<p>We are looking for experienced and highly motivated engineers to join our DATA Org and help build the future of data at Cloudflare. 
Our organisation is responsible for the entire data lifecycle - from ingestion and processing to storage and retrieval - powering the critical logs and analytics that provide our customers with real-time visibility into the health and performance of their online properties.</p>\n<p>Our mission is to empower customers to leverage their data to drive better outcomes for their business. We build and maintain a suite of high-performance, scalable systems that handle more than a billion events in a second.</p>\n<p>As an engineer in our organisation, you will have the opportunity to work on complex distributed systems challenges across different parts of our data stack.</p>\n<p><strong>Responsibilities</strong></p>\n<p>As a Software Engineer in our Data Organisation depending on the team you join, you will focus on a subset of the following areas:</p>\n<ul>\n<li>Design, develop, and maintain scalable and reliable distributed systems across the entire data lifecycle.</li>\n</ul>\n<ul>\n<li>Build and optimise key components of our high-throughput data delivery platform to ensure data integrity and low-latency delivery.</li>\n</ul>\n<ul>\n<li>Develop new and improve existing components for the Cloudflare Analytical Platform to extend functionality and performance.</li>\n</ul>\n<ul>\n<li>Scale, monitor, and maintain the performance of our large-scale database clusters to accommodate the growing volume of data.</li>\n</ul>\n<ul>\n<li>Develop and enhance our customer-facing GraphQL APIs, log delivery, and alerting solutions, focusing on performance, reliability, and user experience.</li>\n</ul>\n<ul>\n<li>Work to identify and remove bottlenecks across our data platforms, from streamlining data ingestion processes to optimizing query performance.</li>\n</ul>\n<ul>\n<li>Collaborate with other teams across Cloudflare to understand their data needs and build solutions that empower them to make data-driven decisions.</li>\n</ul>\n<ul>\n<li>Collaborate with the ClickHouse 
open-source community to add new features and contribute to the upstream codebase.</li>\n</ul>\n<ul>\n<li>Participate in the development of the next generation of our data platforms, including researching and evaluating new technologies and approaches.</li>\n</ul>\n<p><strong>Key Qualifications</strong></p>\n<ul>\n<li>3+ years of experience working in software development covering distributed systems and databases.</li>\n</ul>\n<ul>\n<li>Strong programming skills (Golang is preferable), as well as a deep understanding of software development best practices and principles.</li>\n</ul>\n<ul>\n<li>Hands-on experience with modern observability stacks, including Prometheus, Grafana, and a strong understanding of handling high-cardinality metrics at scale.</li>\n</ul>\n<ul>\n<li>Strong knowledge of SQL and database internals, including experience with database design, optimisation, and performance tuning.</li>\n</ul>\n<ul>\n<li>A solid foundation in computer science, including algorithms, data structures, distributed systems, and concurrency.</li>\n</ul>\n<ul>\n<li>Strong analytical and problem-solving skills, with a willingness to debug, troubleshoot, and learn about complex problems at high scale.</li>\n</ul>\n<ul>\n<li>Ability to work collaboratively in a team environment and communicate effectively with other teams across Cloudflare.</li>\n</ul>\n<ul>\n<li>Experience with ClickHouse is a plus.</li>\n</ul>\n<ul>\n<li>Experience with data streaming technologies (e.g., Kafka, Flink) is a plus.</li>\n</ul>\n<ul>\n<li>Experience developing and scaling APIs, particularly GraphQL, is a plus.</li>\n</ul>\n<ul>\n<li>Experience with Infrastructure as Code tools like SALT or Terraform is a plus.</li>\n</ul>\n<ul>\n<li>Experience with Linux container technologies, such as Docker and Kubernetes, is a plus.</li>\n</ul>\n<p>If you&#39;re passionate about building scalable and performant data platforms using cutting-edge technologies and want to work with a world-class team of 
engineers, then we want to hear from you!</p>\n<p>Join us in our mission to help build a better Internet for everyone!</p>\n<p>This role requires flexibility to be on-call outside of standard working hours to address technical issues as needed.</p>\n<p>What Makes Cloudflare Special?</p>\n<p>We’re not just a highly ambitious, large-scale technology company. We’re a highly ambitious, large-scale technology company with a soul.</p>\n<p>Fundamental to our mission to help build a better Internet is protecting the free and open Internet.</p>\n<p>Project Galileo: Since 2014, we&#39;ve equipped more than 2,400 journalism and civil society organisations in 111 countries with powerful tools to defend themselves against attacks that would otherwise censor their work, technology already used by Cloudflare’s enterprise customers--at no cost.</p>\n<p>Athenian Project: In 2017, we created the Athenian Project to ensure that state and local governments have the highest level of protection and reliability for free, so that their constituents have access to election information and voter registration.</p>\n<p>Since then, we&#39;ve provided services to more than 425 local government election websites in 33 states.</p>\n<p>1.1.1.1: We released 1.1.1.1 to help fix the foundation of the Internet by building a faster, more secure and privacy-centric public DNS resolver.</p>\n<p>This is available publicly for everyone to use - it is the first consumer-focused service Cloudflare has ever released.</p>\n<p>Here’s the deal - we never, ever store client IP addresses.</p>\n<p>We will continue to abide by our privacy commitment and ensure that no user data is sold to advertisers.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_7f80914c-588","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Cloudflare","sameAs":"https://www.cloudflare.com/","logo":"https://logos.yubhub.co/cloudflare.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/cloudflare/jobs/7267602","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Golang","Distributed systems","SQL","Database internals","Prometheus","Grafana","ClickHouse","Linux container technologies","Docker","Kubernetes"],"x-skills-preferred":["Data streaming technologies","API development","Infrastructure as Code tools","Graphql"],"datePosted":"2026-04-18T15:53:23.310Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Hybrid"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Golang, Distributed systems, SQL, Database internals, Prometheus, Grafana, ClickHouse, Linux container technologies, Docker, Kubernetes, Data streaming technologies, API development, Infrastructure as Code tools, Graphql"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_a1ba5c28-9ce"},"title":"Senior Software Engineer, Observability","description":"<p>Join CoreWeave&#39;s Observability team, responsible for building the systems that give our customers and internal teams unparalleled visibility into complex AI workloads.</p>\n<p>Our team empowers engineers to understand, troubleshoot, and optimize high-performance infrastructure at massive scale.</p>\n<p>As a Senior Software Engineer on the Observability team, you will design, build, and maintain core observability infrastructure spanning metrics, logging, tracing, and telemetry pipelines.</p>\n<p>Your day-to-day will involve developing highly reliable and scalable systems, collaborating with 
internal engineering teams to embed observability best practices, and tackling performance and reliability challenges across clusters of thousands of GPUs.</p>\n<p>You&#39;ll also contribute to platform strategy and participate in on-call rotations to ensure critical production systems remain robust and operational.</p>\n<p>The base salary range for this role is $139,000 to $220,000.</p>\n<p>In addition to base salary, our total rewards package includes a discretionary bonus, equity awards, and a comprehensive benefits program (all based on eligibility).</p>\n<p>We offer a variety of benefits to support your needs, including medical, dental, and vision insurance, 100% paid for by CoreWeave, company-paid Life Insurance, voluntary supplemental life insurance, short and long-term disability insurance, flexible Spending Account, Health Savings Account, tuition reimbursement, ability to participate in Employee Stock Purchase Program (ESPP), mental wellness benefits through Spring Health, family-forming support provided by Carrot, paid parental leave, flexible, full-service childcare support with Kinside, 401(k) with a generous employer match, flexible PTO, catered lunch each day in our office and data center locations, a casual work environment, and a work culture focused on innovative disruption.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_a1ba5c28-9ce","directApply":true,"hiringOrganization":{"@type":"Organization","name":"CoreWeave","sameAs":"https://www.coreweave.com","logo":"https://logos.yubhub.co/coreweave.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/coreweave/jobs/4554201006","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$139,000 to $220,000","x-skills-required":["Go","Python","Kubernetes","containerization","microservices architectures","Helm","YAML-based 
configurations","automated testing","progressive release strategies","on-call rotations"],"x-skills-preferred":["designing, operating, or scaling logging, metrics, or tracing platforms","data streaming systems for observability pipelines","automating infrastructure provisioning","OpenTelemetry for unified telemetry collection and instrumentation","exposure to modern AI workloads and GPU-based infrastructure"],"datePosted":"2026-04-18T15:51:55.238Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"New York, NY / Sunnyvale, CA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Go, Python, Kubernetes, containerization, microservices architectures, Helm, YAML-based configurations, automated testing, progressive release strategies, on-call rotations, designing, operating, or scaling logging, metrics, or tracing platforms, data streaming systems for observability pipelines, automating infrastructure provisioning, OpenTelemetry for unified telemetry collection and instrumentation, exposure to modern AI workloads and GPU-based infrastructure","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":139000,"maxValue":220000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_60aae9e8-e8b"},"title":"Software Engineer, Observability","description":"<p>We&#39;re looking for a skilled Software Engineer to join our Observability team. As a member of this team, you will be responsible for designing and evolving logging, metrics, and tracing pipelines to handle massive data volumes. 
You will also evaluate and integrate new technologies to enhance Airtable&#39;s observability posture.</p>\n<p>Your responsibilities will include guiding and mentoring a growing team of infrastructure engineers, defining and upholding coding standards, partnering with other teams to embed observability throughout the development lifecycle, and owning end-to-end reliability for observability tools.</p>\n<p>You will also extend observability to LLM and AI features by instrumenting prompts, model calls, and RAG pipelines to capture latency, reliability, cost, and safety signals. You will design online and offline evaluation loops for LLM quality, build dashboards and alerts for token usage, error rates, and model performance, and connect these signals to tracing for prompt lineage.</p>\n<p>To succeed in this role, you will need 6+ years of software engineering experience, with 3+ years focused on observability or infrastructure at scale. You will also need demonstrated success implementing and running production-grade logging, metrics, or tracing systems, proficiency in distributed systems concepts, data streaming pipelines, and container orchestration, and deep hands-on knowledge of tools such as Prometheus, Grafana, Datadog, OpenTelemetry, ELK Stack, Loki, or ClickHouse.</p>\n<p>This is a high-impact role that will allow you to lead the modernization of Airtable&#39;s observability stack, influence how every engineer monitors and debugs mission-critical systems, and drive major projects across the engineering organization to build platforms and services that solve observability problems.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_60aae9e8-e8b","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Airtable","sameAs":"https://airtable.com/","logo":"https://logos.yubhub.co/airtable.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/airtable/jobs/8400374002","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Distributed systems concepts","Data streaming pipelines","Container orchestration","Prometheus","Grafana","Datadog","OpenTelemetry","ELK Stack","Loki","ClickHouse"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:47:22.779Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA; New York, NY; Remote (Seattle, WA only)"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Distributed systems concepts, Data streaming pipelines, Container orchestration, Prometheus, Grafana, Datadog, OpenTelemetry, ELK Stack, Loki, ClickHouse"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_cbeabfab-916"},"title":"Software Engineer, Observability","description":"<p>As a Software Engineer on the Observability team, you will design, build, and maintain scalable systems that process and surface telemetry data across distributed environments.</p>\n<p>You&#39;ll contribute production-quality code in languages like Go and Python, while improving system reliability through enhanced monitoring, alerting, and incident response practices.</p>\n<p>Day to day, you&#39;ll collaborate with cross-functional engineering teams to implement observability best practices, support production systems, and help optimize performance across large-scale infrastructure.</p>\n<p>You will also participate in on-call rotations and contribute to continuous 
improvements based on real-world system behavior.</p>\n<p>The ideal candidate will have experience with Go and Python, as well as a strong understanding of system reliability and observability best practices.</p>\n<p>In addition to your technical skills, you should be able to collaborate effectively with cross-functional teams and communicate complex technical concepts to non-technical stakeholders.</p>\n<p>If you&#39;re passionate about building scalable systems and improving system reliability, we&#39;d love to hear from you!</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_cbeabfab-916","directApply":true,"hiringOrganization":{"@type":"Organization","name":"CoreWeave","sameAs":"https://www.coreweave.com","logo":"https://logos.yubhub.co/coreweave.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/coreweave/jobs/4587675006","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":"$109,000 to $145,000","x-skills-required":["Go","Python","Kubernetes","containerization","microservices architectures","observability systems","metrics","logging","tracing"],"x-skills-preferred":["ClickHouse","Elastic","Loki","VictoriaMetrics","Prometheus","Thanos","OpenTelemetry","Grafana","Terraform","modern testing frameworks","deployment strategies","data streaming technologies","AI/ML infrastructure"],"datePosted":"2026-04-18T15:46:41.788Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"New York, NY / Sunnyvale, CA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Go, Python, 
Kubernetes, containerization, microservices architectures, observability systems, metrics, logging, tracing, ClickHouse, Elastic, Loki, VictoriaMetrics, Prometheus, Thanos, OpenTelemetry, Grafana, Terraform, modern testing frameworks, deployment strategies, data streaming technologies, AI/ML infrastructure","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":109000,"maxValue":145000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_5aacaad3-05b"},"title":"Senior Machine Learning Engineer, Payments","description":"<p>Job Title: Senior Machine Learning Engineer, Payments</p>\n<p>Location: Remote-USA</p>\n<p>The Payments team at Airbnb is responsible for everything related to settling money in Airbnb&#39;s global marketplace. As a Senior Machine Learning Engineer for Payments, you will be the catalyst that transforms bold AI innovation into production systems that make Airbnb Payment experience feel effortless and secure.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Spearhead LLM agents, real-time anomaly detectors, and other breakthrough solutions that solve real-world problems and create product magic.</li>\n</ul>\n<ul>\n<li>Collaborate with product, engineering, ops, and data science to spot high-leverage opportunities, refine AI/ML requirements, make principled architecture choices, and measure business value with clear, data-driven metrics.</li>\n</ul>\n<ul>\n<li>Design, train, deploy, and operate large-scale AI applications for both batch and streaming workloads, ensuring low latency, high reliability, and continuous improvement via automated monitoring and retraining loops.</li>\n</ul>\n<ul>\n<li>Mentor and inspire teammates, fostering a collaborative, experimentation-driven environment where cutting-edge research meets production excellence and every engineer is empowered to push AI boundaries at 
Airbnb.</li>\n</ul>\n<p>Your Expertise:</p>\n<ul>\n<li>5+ years of industry experience in applied AI/ML, inclusive of an MS or PhD in relevant fields.</li>\n</ul>\n<ul>\n<li>Strong programming (Python/Java) and data engineering skills.</li>\n</ul>\n<ul>\n<li>Proven mastery of modern AI/LLM workflows: prompt engineering, fine-tuning (LoRA, RLHF), hallucination mitigation, safety guardrails, and rigorous online/offline testing to minimize training/inference drift and ensure reliable outcomes.</li>\n</ul>\n<ul>\n<li>Hands-on experience with at least three of the following: PyTorch/TensorFlow, scalable inference stacks, vector search, orchestration/MLOps platforms (Kubeflow, Airflow), large-scale data streaming &amp; processing (Spark, Ray, Kafka).</li>\n</ul>\n<ul>\n<li>Demonstrated success designing, deploying, and monitoring production AI systems (e.g., personalization engines, generative content services), complete with drift/cost/latency monitoring, automated retraining triggers, and cross-functional collaboration that translates ambiguous business needs into measurable AI impact.</li>\n</ul>\n<ul>\n<li>Prior knowledge of AI/ML applications in the Payments domain is highly desirable.</li>\n</ul>\n<p>Our Commitment To Inclusion &amp; Belonging:</p>\n<p>Airbnb is committed to working with the broadest talent pool possible. We believe diverse ideas foster innovation and engagement, and allow us to attract creatively led people, and to develop the best products, services, and solutions.</p>\n<p>How We&#39;ll Take Care of You:</p>\n<p>Our job titles may span more than one career level. The actual base pay is dependent upon many factors, such as training, transferable skills, work experience, business needs, and market demands. The base pay range is subject to change and may be modified in the future. 
This role may also be eligible for bonus, equity, benefits, and Employee Travel Credits.</p>\n<p>Pay Range: $191,000-$223,000 USD</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_5aacaad3-05b","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Airbnb","sameAs":"https://www.airbnb.com/","logo":"https://logos.yubhub.co/airbnb.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/airbnb/jobs/7755758","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$191,000-$223,000 USD","x-skills-required":["Python","Java","PyTorch","TensorFlow","scalable inference stacks","vector search","orchestration/MLOps platforms","large-scale data streaming & processing"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:43:02.517Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Remote-USA"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, Java, PyTorch, TensorFlow, scalable inference stacks, vector search, orchestration/MLOps platforms, large-scale data streaming & processing","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":191000,"maxValue":223000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_5579e8fb-227"},"title":"Senior AI Engineer","description":"<p>We&#39;re looking for a Senior AI Engineer who is obsessed with building AI systems that actually work in production: reliable, observable, cost-efficient, and genuinely useful. This is not a research role. 
You will ship AI-powered features that process real financial data for real businesses.</p>\n<p>LLM &amp; AI Pipeline Engineering - Design, build, and maintain production-grade LLM integration pipelines, including retrieval-augmented generation (RAG), prompt engineering, output parsing, and chain orchestration.</p>\n<p>Develop and operate AI features within Jeeves&#39;s core financial products: spend categorization, document extraction, anomaly detection, financial Q&amp;A, and automated reconciliation.</p>\n<p>Implement structured output validation, fallback handling, and confidence scoring to ensure AI decisions meet reliability standards for financial use cases.</p>\n<p>Evaluate and integrate AI frameworks and tools (LangChain, LlamaIndex, OpenAI API, Anthropic API, HuggingFace, vector databases) and advocate for the right tool for the job.</p>\n<p>Establish prompt versioning and evaluation practices to ensure AI outputs remain accurate and consistent as models and data evolve.</p>\n<p>Retrieval &amp; Vector Search - Design and maintain vector search pipelines using databases such as Pinecone, Weaviate, or pgvector to power semantic search and RAG-based features.</p>\n<p>Build document ingestion and chunking pipelines for Jeeves&#39;s financial data, processing invoices, receipts, policy documents, and transaction records.</p>\n<p>Optimize retrieval quality through embedding model selection, chunk strategy, metadata filtering, and re-ranking techniques.</p>\n<p>ML Model Serving &amp; Operations - Collaborate with data scientists to take trained ML models from experimental notebooks to production serving infrastructure.</p>\n<p>Build and maintain model serving endpoints with appropriate latency SLOs, input validation, and output monitoring.</p>\n<p>Implement model performance monitoring and data drift detection to ensure production models remain accurate over time.</p>\n<p>Support model retraining workflows by designing clean data pipelines and feature 
engineering that can be continuously updated.</p>\n<p>Backend Integration &amp; Reliability - Integrate AI services cleanly with Jeeves&#39;s backend microservices, designing clear API contracts, circuit breakers, and graceful degradation patterns.</p>\n<p>Write high-quality, testable backend code in Python or Go/Node.js to power AI-integrated features.</p>\n<p>Instrument AI components with structured logging, distributed tracing, latency dashboards, and alerting to ensure operational visibility.</p>\n<p>Build human-in-the-loop review workflows for AI decisions that require oversight, particularly for high-value financial actions.</p>\n<p>Collaboration &amp; Growth - Partner with Product, Backend Engineering, and Data Science to define the AI roadmap and translate requirements into reliable systems.</p>\n<p>Contribute to a culture of quality by writing design docs, reviewing peers&#39; AI system designs, and sharing learnings openly.</p>\n<p>Help grow the AI engineering practice at Jeeves by establishing patterns, tooling, and best practices that the broader team can build on.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_5579e8fb-227","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Jeeves","sameAs":"https://www.jeeves.com/","logo":"https://logos.yubhub.co/jeeves.com.png"},"x-apply-url":"https://jobs.lever.co/tryjeeves/2f00206f-6091-4eed-8b5f-1325afdbfe30","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["LLM pipeline engineering","RAG architecture","ML system operation","Python programming","AI orchestration framework","ML model serving infrastructure","Observability tooling"],"x-skills-preferred":["Fintech experience","Prompt evaluation frameworks","ML lifecycle management tools","Real-time data 
streaming"],"datePosted":"2026-04-17T12:38:27.085Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Brazil"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Finance","skills":"LLM pipeline engineering, RAG architecture, ML system operation, Python programming, AI orchestration framework, ML model serving infrastructure, Observability tooling, Fintech experience, Prompt evaluation frameworks, ML lifecycle management tools, Real-time data streaming"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_1378ad18-3c9"},"title":"Data Team Leader","description":"<p>We are looking for an outstanding Data Team Leader to join our motivated engineering team. As a Data Team Leader, you will lead a dedicated group of data engineers, ensuring the successful implementation of innovative data pipelines and architectures.</p>\n<p>Key Responsibilities:</p>\n<ul>\n<li>Lead and coordinate a team of data engineers, ensuring delivery across multiple projects.</li>\n<li>Architect and implement scalable, high-performance data pipelines using Snowflake, dbt, and Airflow.</li>\n<li>Apply and guide others in using distributed systems and queueing technologies such as Celery, Redis, or equivalents.</li>\n<li>Own the end-to-end data lifecycle: ingestion, modeling, transformation, and delivery.</li>\n<li>Partner with cross-functional teams (product, analytics, DevOps) to meet business data needs.</li>\n<li>Enforce engineering guidelines, code quality, and performance standards.</li>\n<li>Conduct regular 1:1s, technical reviews, and provide mentorship to team members.</li>\n<li>Take initiative in capacity planning, hiring, and team scaling decisions.</li>\n</ul>\n<p>Requirements:</p>\n<ul>\n<li><p>5+ years of hands-on experience in data engineering.</p>\n</li>\n<li><p>2+ years of formal team leadership experience, including 
people management and project ownership.</p>\n</li>\n<li><p>Advanced knowledge of:</p>\n<ul>\n<li>Snowflake for warehousing and performance tuning.</li>\n<li>dbt for modular data modeling and testing.</li>\n<li>Apache Airflow (or similar workflow orchestrators).</li>\n<li>Distributed task and caching systems such as Celery, Redis, or similar technologies.</li>\n<li>Python, SQL, and shell scripting.</li>\n</ul>\n</li>\n<li><p>Experience with cloud platforms such as AWS, Azure, or GCP.</p>\n</li>\n<li><p>Strong grasp of software development guidelines, CI/CD, and data observability.</p>\n</li>\n</ul>\n<p>Preferred Qualifications:</p>\n<ul>\n<li>Experience with real-time data streaming (e.g., Kafka).</li>\n<li>Familiarity with Terraform or other infrastructure-as-code tools.</li>\n<li>Prior experience in startup or high-growth environments.</li>\n<li>Exposure to BI platforms (e.g., Power BI, Looker, Tableau).</li>\n</ul>\n<p>Why Aristocrat?</p>\n<p>Aristocrat is a world leader in gaming content and technology, and a top-tier publisher of free-to-play mobile games. 
We deliver great performance for our B2B customers and bring joy to the lives of the millions of people who love to play our casino and mobile games.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_1378ad18-3c9","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Aristocrat","sameAs":"https://www.aristocrat.com/","logo":"https://logos.yubhub.co/aristocrat.com.png"},"x-apply-url":"https://aristocrat.wd3.myworkdayjobs.com/en-US/AristocratExternalCareersSite/job/Noida-UP-IN/Data-Team-Leader_R0020618","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Snowflake","dbt","Apache Airflow","Celery","Redis","Python","SQL","shell scripting","AWS","Azure","GCP","software development guidelines","CI/CD","data observability"],"x-skills-preferred":["real-time data streaming","Terraform","infrastructure-as-code tools","BI platforms"],"datePosted":"2026-03-10T12:13:54.011Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Noida, UP, IN"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Snowflake, dbt, Apache Airflow, Celery, Redis, Python, SQL, shell scripting, AWS, Azure, GCP, software development guidelines, CI/CD, data observability, real-time data streaming, Terraform, infrastructure-as-code tools, BI platforms"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_7e974f1a-211"},"title":"Sr Data Engineer","description":"<p>Join our team as a Senior Data Engineer. 
You&#39;ll develop and maintain data pipelines for our innovative gaming products.</p>\n<p><strong>What you&#39;ll do</strong></p>\n<ul>\n<li>Design, develop, and maintain batch and streaming data pipelines, ensuring seamless data flow and integrity.</li>\n<li>Implement scalable data transformations using dbt and orchestrate workflows via Airflow or equivalent tools.</li>\n</ul>\n<p><strong>What you need</strong></p>\n<ul>\n<li>5-7 years of hands-on experience in data engineering.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_7e974f1a-211","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Aristocrat","sameAs":"https://aristocrat.wd3.myworkdayjobs.com","logo":"https://logos.yubhub.co/aristocrat.com.png"},"x-apply-url":"https://aristocrat.wd3.myworkdayjobs.com/en-US/AristocratExternalCareersSite/job/Noida-UP-IN/Sr-Data-Engineer_R0019621","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["data engineering","dbt","Airflow"],"x-skills-preferred":["data streaming tools","infrastructure-as-code tools","BI tools"],"datePosted":"2026-03-01T05:05:41.750Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Noida, UP, IN"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"data engineering, dbt, Airflow, data streaming tools, infrastructure-as-code tools, BI tools"}]}