{"version":"0.1","company":{"name":"YubHub","url":"https://yubhub.co","jobsUrl":"https://yubhub.co/jobs/skill/distributed-data"},"x-facet":{"type":"skill","slug":"distributed-data","display":"Distributed Data","count":41},"x-feed-size-limit":100,"x-feed-sort":"enriched_at desc","x-feed-notice":"This feed contains at most 100 jobs (the most recently enriched). For the full corpus, use the paginated /stats/by-facet endpoint or /search.","x-generator":"yubhub-xml-generator","x-rights":"Free to redistribute with attribution: \"Data by YubHub (https://yubhub.co)\"","x-schema":"Each entry in `jobs` follows https://schema.org/JobPosting. YubHub-native raw fields carry `x-` prefix.","jobs":[{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_c2e7ae82-8ff"},"title":"Sr. Delivery Solutions Architect","description":"<p>As a Senior Delivery Solutions Architect at Databricks, you will play a crucial role in empowering customers to solve the world&#39;s toughest data problems using the Databricks Data Intelligence Platform. You will collaborate with sales and field engineering teams to accelerate the adoption and growth of the Databricks platform in your customers. Your primary goal will be to ensure customer success by increasing focus and technical accountability to our most complex customers who need guidance to accelerate usage on Databricks workloads that they have already selected.</p>\n<p>This is a hybrid technical and commercial role, requiring you to drive growth in your assigned customers and use cases through leading your customers&#39; stakeholders, building executive relationships, and creating and driving plans and strategies for Databricks colleagues to build upon. You will also be responsible for becoming the post-sale technical lead across all Databricks products, using your skills and technical credibility to engage and communicate at all levels with an organisation.</p>\n<p>Your impact will be significant, as you will be engaged with Solutions Architects to understand the full use case demand plan for prioritised customers, lead the post-technical win technical account strategy and execution plan for the majority of Databricks use cases within our most strategic accounts, and be the accountable technical leader assigned to specific use cases and customer(s) across multiple selling teams and internal stakeholders.</p>\n<p>Key responsibilities include:</p>\n<ul>\n<li>Engaging with Solutions Architects to understand the full use case demand plan for prioritised customers</li>\n<li>Leading the post-technical win technical account strategy and execution plan for the majority of Databricks use cases within our most strategic accounts</li>\n<li>Being the accountable technical leader assigned to specific use cases and customer(s) across multiple selling teams and internal stakeholders</li>\n<li>Creating, owning, and executing a point-of-view as to how key use cases can be accelerated into production, coordinating with Professional Services (PS) resources on the delivery of PS Engagement proposals</li>\n<li>Navigating Databricks Product and Engineering teams for new product innovations, private previews, and upgrade needs</li>\n<li>Developing an execution plan that covers all activities of all customer-facing technical roles and teams to cover main use cases moving from &#39;win&#39; to production, enablement/user growth plan, product adoption, organic needs for current investment, executive and operational governance, and providing internal and external updates</li>\n</ul>\n<p>To succeed in this role, you will need to have 10+ years of experience in technical project or program delivery within the domain of Data and AI, with a strong understanding of solution architecture related distributed data systems, programming experience in Python, SQL, or Scala, and experience in a customer-facing pre-sales, technical architecture, customer success, or consulting role.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_c2e7ae82-8ff","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8342273002","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Python","SQL","Scala","Solution architecture","Distributed data systems","Customer-facing pre-sales","Technical architecture","Customer success","Consulting"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:58:05.768Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"London, United Kingdom"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, SQL, Scala, Solution architecture, Distributed data systems, Customer-facing pre-sales, Technical architecture, Customer success, Consulting"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_230b25df-0f4"},"title":"Senior Software Engineer- Database Infrastructure","description":"<p>We are seeking a senior software engineer to join our Database Infrastructure team. As a member of this team, you will build and operate large-scale, reliable, and performant data systems using ScyllaDB, PostgreSQL, ElasticSearch, Linux, and Rust.</p>\n<p>You will collaborate with product and infrastructure teams to develop storage primitives enabling all of Discord. You will exercise &#39;First Principles Thinking&#39; to always deliver what matters most to our users.</p>\n<p>You will work with a talented team of engineers who have built one of the largest communication platforms in the world.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Build and operate large-scale, reliable, and performant data systems with ScyllaDB, PostgreSQL, ElasticSearch, Linux, and Rust.</li>\n<li>Collaborate with product and infrastructure teams to develop storage primitives enabling all of Discord.</li>\n<li>Exercise &#39;First Principles Thinking&#39; to always deliver what matters most to our users.</li>\n<li>Work with a talented team of engineers who have built one of the largest communication platforms in the world.</li>\n</ul>\n<p>Requirements:</p>\n<ul>\n<li>4+ years of experience with building distributed systems and datastore infrastructure.</li>\n<li>Experience with highly-available and distributed databases: e.g. ScyllaDB, Cassandra, BigTable, DynamoDB, CockroachDB, Postgres w/HA, etc.</li>\n<li>Proficiency with at least one statically-typed programming language: e.g. Rust, Go, Java, C, C++</li>\n<li>Strong operating systems, distributed systems, and concurrency control fundamentals.</li>\n<li>Familiarity with Linux internals.</li>\n<li>Comfortable working in fast-paced environments.</li>\n</ul>\n<p>Bonus Points:</p>\n<ul>\n<li>Experience with Cassandra or Scylla.</li>\n<li>Experience with Rust.</li>\n<li>Knowledge of DevOps tools like Salt, Terraform, or Kubernetes.</li>\n</ul>\n<p>Why Discord?</p>\n<p>Discord plays a uniquely important role in the future of gaming. We&#39;re a multi-platform, multi-generational, and multiplayer platform that helps people deepen their friendships around games and shared interests.</p>\n<p>We believe games give us a way to have fun with our favorite people, whether listening to music together or grinding in competitive matches for diamond rank.</p>\n<p>Join us in our mission!</p>\n<p>Your future is just a click away!</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_230b25df-0f4","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Discord","sameAs":"https://discord.com/","logo":"https://logos.yubhub.co/discord.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/discord/jobs/8200328002","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$196,000 to $220,500 + equity + benefits","x-skills-required":["ScyllaDB","PostgreSQL","ElasticSearch","Linux","Rust","Distributed systems","Datastore infrastructure","Highly-available and distributed databases","Operating systems","Concurrency control fundamentals","Linux internals"],"x-skills-preferred":["Cassandra","Go","Java","C","C++","DevOps tools","Salt","Terraform","Kubernetes"],"datePosted":"2026-04-18T15:57:32.475Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco Bay Area"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"ScyllaDB, PostgreSQL, ElasticSearch, Linux, Rust, Distributed systems, Datastore infrastructure, Highly-available and distributed databases, Operating systems, Concurrency control fundamentals, Linux internals, Cassandra, Go, Java, C, C++, DevOps tools, Salt, Terraform, Kubernetes","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":196000,"maxValue":220500,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_cba88898-896"},"title":"Research Engineer, Infrastructure, Kernels","description":"<p>We&#39;re looking for an infrastructure research engineer to design, optimize, and maintain the compute foundations that power large-scale language model training. You will develop high-performance ML kernels (e.g., CUDA, CuTe, Triton), enable efficient low-precision arithmetic, and improve the distributed compute stack that makes training large models possible.</p>\n<p>This role is perfect for an engineer who enjoys working close to the metal and across the research boundary. You&#39;ll collaborate with researchers and systems architects to bridge algorithmic design with hardware efficiency. You&#39;ll prototype new kernel implementations, profile performance across hardware generations, and help define the numerical and parallelism strategies that determine how we scale next-generation AI systems.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Design and implement custom ML kernels (e.g., CUDA, CuTe, Triton) for core LLM operations such as attention, matrix multiplication, gating, and normalization, optimized for modern GPU and accelerator architectures.</li>\n<li>Design and think through compute primitives to reduce memory bandwidth bottlenecks and improve kernel compute efficiency.</li>\n<li>Collaborate with research teams to align kernel-level optimizations with model architecture and algorithmic goals.</li>\n<li>Develop and maintain a library of reusable kernels and performance benchmarks that serve as the foundation for internal model training.</li>\n<li>Contribute to infrastructure stability and scalability, ensuring reproducibility, consistency across precision formats, and high utilization of compute resources.</li>\n<li>Document and share insights through internal talks, technical papers, or open-source contributions to strengthen the broader ML systems community.</li>\n</ul>\n<p><strong>Skills and Qualifications</strong></p>\n<p>Minimum qualifications:</p>\n<ul>\n<li>Bachelor’s degree or equivalent experience in computer science, electrical engineering, statistics, machine learning, physics, robotics, or similar.</li>\n<li>Strong engineering skills, ability to contribute performant, maintainable code and debug in complex codebases</li>\n<li>Understanding of deep learning frameworks (e.g., PyTorch, JAX) and their underlying system architectures.</li>\n<li>Thrive in a highly collaborative environment involving many, different cross-functional partners and subject matter experts.</li>\n<li>A bias for action with a mindset to take initiative to work across different stacks and different teams where you spot the opportunity to make sure something ships.</li>\n<li>Proficiency in CUDA, CuTe, Triton, or other GPU programming frameworks.</li>\n<li>Demonstrated ability to analyze, profile, and optimize compute-intensive workloads.</li>\n</ul>\n<p>Preferred qualifications:</p>\n<ul>\n<li>Experience training or supporting large-scale language models with tens of billions of parameters or more.</li>\n<li>Track record of improving research productivity through infrastructure design or process improvements.</li>\n<li>Experience developing or tuning kernels for deep learning frameworks such as PyTorch, JAX, or custom accelerators.</li>\n<li>Familiarity with tensor parallelism, pipeline parallelism, or distributed data processing frameworks.</li>\n<li>Experience implementing low-precision formats (FP8, INT8, block floating point) or contributing to related compiler stacks (e.g., XLA, TVM).</li>\n<li>Contributions to open-source GPU, ML systems, or compiler optimization projects.</li>\n<li>Prior research or engineering experience in numerical optimization, communication-efficient training, or scalable AI infrastructure.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_cba88898-896","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Thinking Machines Lab","sameAs":"https://thinkingmachines.ai/","logo":"https://logos.yubhub.co/thinkingmachines.ai.png"},"x-apply-url":"https://job-boards.greenhouse.io/thinkingmachines/jobs/5013934008","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$350,000 - $475,000 USD","x-skills-required":["CUDA","CuTe","Triton","GPU programming frameworks","Deep learning frameworks (e.g., PyTorch, JAX)","Computer science","Electrical engineering","Statistics","Machine learning","Physics","Robotics"],"x-skills-preferred":["Experience training or supporting large-scale language models with tens of billions of parameters or more","Track record of improving research productivity through infrastructure design or process improvements","Experience developing or tuning kernels for deep learning frameworks such as PyTorch, JAX, or custom accelerators","Familiarity with tensor parallelism, pipeline parallelism, or distributed data processing frameworks","Experience implementing low-precision formats (FP8, INT8, block floating point) or contributing to related compiler stacks (e.g., XLA, TVM)","Contributions to open-source GPU, ML systems, or compiler optimization projects","Prior research or engineering experience in numerical optimization, communication-efficient training, or scalable AI infrastructure"],"datePosted":"2026-04-18T15:54:38.498Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"CUDA, CuTe, Triton, GPU programming frameworks, Deep learning frameworks (e.g., PyTorch, JAX), Computer science, Electrical engineering, Statistics, Machine learning, Physics, Robotics, Experience training or supporting large-scale language models with tens of billions of parameters or more, Track record of improving research productivity through infrastructure design or process improvements, Experience developing or tuning kernels for deep learning frameworks such as PyTorch, JAX, or custom accelerators, Familiarity with tensor parallelism, pipeline parallelism, or distributed data processing frameworks, Experience implementing low-precision formats (FP8, INT8, block floating point) or contributing to related compiler stacks (e.g., XLA, TVM), Contributions to open-source GPU, ML systems, or compiler optimization projects, Prior research or engineering experience in numerical optimization, communication-efficient training, or scalable AI infrastructure","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":350000,"maxValue":475000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_d4ebd626-2bf"},"title":"Staff+ Software Engineer, Databases","description":"<p>We&#39;re looking for experienced engineers to build and scale the database infrastructure that powers both Claude&#39;s product offerings and Anthropic&#39;s research initiatives.</p>\n<p>As a Software Engineer on the Databases team, you will architect and operate database systems that both enable millions of users to interact with Claude and support cutting-edge AI research.</p>\n<p>This is a unique opportunity to tackle database challenges at unprecedented scale. You&#39;ll develop the database strategy for Anthropic, design systems that handle billions of API requests, create storage solutions that work seamlessly across GCP, AWS, and diverse deployment models, and build the reliable data layer that accelerates research experimentation.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Drive the technical direction for database solutions used across Product and Research</li>\n</ul>\n<ul>\n<li>Design and implement database solutions that scale to support millions of users across Claude&#39;s product ecosystem</li>\n</ul>\n<ul>\n<li>Build and scale database systems through 100x+ growth while maintaining reliability and performance</li>\n</ul>\n<ul>\n<li>Architect data storage solutions that work seamlessly across GCP, AWS, first-party deployments, third-party deployments, and other environments</li>\n</ul>\n<ul>\n<li>Develop database infrastructure that serves both product and research workloads with different performance characteristics</li>\n</ul>\n<ul>\n<li>Partner with product and research teams to understand data requirements and build infrastructure that accelerates innovation</li>\n</ul>\n<ul>\n<li>Optimize database performance, reliability, and cost efficiency at massive scale</li>\n</ul>\n<ul>\n<li>Make critical build vs. buy decisions for database technologies</li>\n</ul>\n<p>You might be a good fit if you:</p>\n<ul>\n<li>Have 10+ years of experience in a Software Engineer role, building and scaling database systems</li>\n</ul>\n<ul>\n<li>Have 3+ years of experience leading large scale, complex projects or teams as an engineer or tech lead</li>\n</ul>\n<ul>\n<li>Possess deep expertise in distributed database architectures and OLTP systems at scale</li>\n</ul>\n<ul>\n<li>Have successfully scaled databases through massive growth at high-growth companies</li>\n</ul>\n<ul>\n<li>Can balance the speed of a startup environment with the reliability needs of production systems</li>\n</ul>\n<ul>\n<li>Excel at technical leadership and cross-functional collaboration</li>\n</ul>\n<ul>\n<li>Are passionate about building the data layer that enables next-generation AI capabilities</li>\n</ul>\n<p>Strong candidates may also have:</p>\n<ul>\n<li>Deep expertise scaling PostgreSQL, MySQL, DynamoDB, or similar database systems</li>\n</ul>\n<ul>\n<li>Experience with Redis, Temporal, vector databases, or async job processing frameworks</li>\n</ul>\n<ul>\n<li>Experience building multi-cloud or hybrid cloud database solutions</li>\n</ul>\n<ul>\n<li>Knowledge of database orchestration and automation at scale</li>\n</ul>\n<ul>\n<li>Background at companies known for database excellence</li>\n</ul>\n<p>Note: Prior AI/ML infrastructure experience is not required. We value deep infrastructure/databases expertise from any domain.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_d4ebd626-2bf","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com/","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5151069008","x-work-arrangement":"hybrid","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"$320,000-$485,000 USD","x-skills-required":["database architecture","OLTP systems","distributed database systems","database scaling","database performance optimization"],"x-skills-preferred":["PostgreSQL","MySQL","DynamoDB","Redis","Temporal","vector databases","async job processing frameworks"],"datePosted":"2026-04-18T15:53:43.551Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA | New York City, NY | Seattle, WA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"database architecture, OLTP systems, distributed database systems, database scaling, database performance optimization, PostgreSQL, MySQL, DynamoDB, Redis, Temporal, vector databases, async job processing frameworks","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":320000,"maxValue":485000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_a966b1bf-e76"},"title":"Staff Software Engineer, Compute Infrastructure","description":"<p>As a Staff Software Engineer, you will shape the backbone of our GPU-driven data centers,powering some of the most advanced workloads in AI and large-scale computing. This isn&#39;t just about keeping the lights on; it&#39;s about architecting the next generation of reliable, secure, and massively scalable infrastructure.</p>\n<p>The METALDEV team builds and operates a suite of Go-based services that power large-scale datacenter deployments. These platforms automate complex workflows while providing deep observability and monitoring for tens of thousands of GPU servers and diverse infrastructure components,including CDUs, PDUs, and NVLink switches. Our tooling is designed for next-generation rack systems like NVIDIA GB200 and GB300, as well as a broad range of GPU server platforms.</p>\n<p>Your responsibilities will include:</p>\n<ul>\n<li>Providing technical leadership in designing, architecting, and operating large-scale infrastructure services for GPU servers, with a focus on security, reliability, and scalability.</li>\n<li>Building and enhancing infrastructure services and automation, including inventory management systems and lifecycle management solutions using open source technologies.</li>\n<li>Driving strategic direction for infrastructure automation, lifecycle management, and service orchestration, making MetalDev core services more scalable and resilient.</li>\n<li>Defining best practices for API development (REST/gRPC), distributed databases, and Kubernetes orchestration,while mentoring engineers to follow your lead.</li>\n<li>Partnering with hardware, software, and operations teams to align infrastructure with business impact.</li>\n<li>Contributing to open source communities (e.g., Go, Redfish) through collaboration and technical thought leadership.</li>\n<li>Leading and improving CI/CD pipelines for hardware compliance, firmware management, and data systems.</li>\n<li>Championing reliability and operational excellence by driving observability (Prometheus/Grafana), production incident response, and continuous service improvement.</li>\n</ul>\n<p>We&#39;re looking for someone with a strong background in software engineering, particularly in infrastructure, cloud engineering, and distributed databases. You should have experience with Go and a proven track record of building REST/gRPC APIs for mission-critical platforms. Additionally, you should be familiar with architecting and scaling cloud-native Kubernetes infrastructure and distributed services.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_a966b1bf-e76","directApply":true,"hiringOrganization":{"@type":"Organization","name":"CoreWeave","sameAs":"https://www.coreweave.com","logo":"https://logos.yubhub.co/coreweave.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/coreweave/jobs/4603505006","x-work-arrangement":"hybrid","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"$188,000 to $275,000","x-skills-required":["Go","REST/gRPC","Distributed databases","Kubernetes orchestration","API development","Infrastructure services","Automation","Inventory management","Lifecycle management","CI/CD pipelines","Hardware compliance","Firmware management","Data systems","Observability","Production incident response","Continuous service improvement"],"x-skills-preferred":["Kafka","ClickHouse","CRDB","DMTF","RedFish APIs","GPU servers"],"datePosted":"2026-04-18T15:53:06.173Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Manhattan, NY / Sunnyvale, CA / Bellevue, WA / Livingston, NJ"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Go, REST/gRPC, Distributed databases, Kubernetes orchestration, API development, Infrastructure services, Automation, Inventory management, Lifecycle management, CI/CD pipelines, Hardware compliance, Firmware management, Data systems, Observability, Production incident response, Continuous service improvement, Kafka, ClickHouse, CRDB, DMTF, RedFish APIs, GPU servers","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":188000,"maxValue":275000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_d9c05c37-885"},"title":"Staff Data Analyst","description":"<p>We&#39;re looking for a Staff Data Analyst to join our Data Science team at Stripe. As a Staff Data Analyst, you will play a key role in strengthening Stripe&#39;s analytics foundation across the company.</p>\n<p>Your primary responsibility will be to lead architecture reviews and set analytical standards. You will own the strategy, technical architecture, and governance model for platforms that make key business metrics consistent, trustworthy, and easy to query at scale. You will also be responsible for owning the hardest cross-cutting analytical problems, driving org-wide data quality and consistency, and shaping long-term technical vision.</p>\n<p>To succeed in this role, you will need to have 10+ years of experience in Data Analysis, Analytics Engineering, Business Intelligence Engineering, or Data Science roles, with deep expertise in SQL and familiarity with AI-assisted development tools. You will also need to have a proven track record of setting long-term technical vision for analytics platforms and experience leading cross-team or org-wide data initiatives.</p>\n<p>As a Staff Data Analyst at Stripe, you will have the opportunity to work with a talented team of data analysts, data scientists, and engineers to build and maintain a robust analytics infrastructure that supports business growth and decision-making.</p>\n<p>If you are a motivated and experienced data professional looking for a challenging role with a high-growth company, we encourage you to apply.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_d9c05c37-885","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Stripe","sameAs":"https://stripe.com","logo":"https://logos.yubhub.co/stripe.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/stripe/jobs/7801457","x-work-arrangement":"remote","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["SQL","AI-assisted development tools","Data analysis","Analytics engineering","Business intelligence engineering","Data science"],"x-skills-preferred":["Warehouse design","Metrics infrastructure","Performance optimization","Distributed data frameworks","Semantic layer or metrics layer"],"datePosted":"2026-04-18T15:51:08.035Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"US"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"SQL, AI-assisted development tools, Data analysis, Analytics engineering, Business intelligence engineering, Data science, Warehouse design, Metrics infrastructure, Performance optimization, Distributed data frameworks, Semantic layer or metrics layer"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_0c6077e7-8e1"},"title":"Staff Applied AI Engineer","description":"<p>We are seeking a Staff Applied AI Engineer to join our team at Komodo Health. As a Staff Applied AI Engineer, you will be a cross-functional AI leader and strategic thought partner. This role exists to define Komodo&#39;s long-term AI capabilities, set company-wide technical standards, architect foundational AI systems, and guide teams toward scalable, safe, and innovative AI development.</p>\n<p>You will influence data strategy, drive build-vs-buy evaluations, and meaningfully shift Komodo&#39;s AI-native infrastructure and culture. Your responsibilities will include:</p>\n<ul>\n<li>Helping design company-wide AI vision, standards, and reference architectures.</li>\n<li>Defining and building foundational AI platforms (e.g., internal agent frameworks, orchestration systems).</li>\n<li>Acting as a multiplier by mentoring teams, running workshops, and driving organizational knowledge sharing.</li>\n<li>Making high-level technical decisions, including evaluating major build-vs-buy choices for platforms and tooling.</li>\n<li>Shaping Komodo&#39;s data strategy from an AI perspective,requirements, quality, orientation, and long-term structure.</li>\n<li>Leading complex applied research initiatives that push Komodo into new AI capability frontiers.</li>\n<li>Ensuring Komodo&#39;s AI systems meet high bars for reliability, accountability, ethics, and transparency.</li>\n</ul>\n<p>The ideal candidate will be a recognized expert in applied AI with demonstrated impact across multiple teams or organizational domains. They will have extensive experience architecting end-to-end AI systems, multi-agent architectures, and large-scale orchestration frameworks. They will also have strong fluency in Python, GenAI frameworks (vLLM, Strands, Crew AI), and full-stack system integration.</p>\n<p>We offer a competitive salary range of $274,000-$322,000 USD per year, depending on location. This role may be eligible for performance-based bonuses and equity awards. We also offer comprehensive health, dental, and vision insurance, flexible time off and holidays, 401(k) with company match, disability insurance and life insurance, and leaves of absence in accordance with applicable state and local laws and regulations and company policy.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_0c6077e7-8e1","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Komodo Health","sameAs":"https://www.komodohealth.com/","logo":"https://logos.yubhub.co/komodohealth.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/komodohealth/jobs/8512187002","x-work-arrangement":"hybrid","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"$274,000-$322,000 USD","x-skills-required":["applied AI","data strategy","build-vs-buy evaluations","AI-native infrastructure","data orchestration","Python","GenAI frameworks","full-stack system integration"],"x-skills-preferred":["deep healthcare data","healthcare system expertise","large-scale distributed data and compute systems"],"datePosted":"2026-04-18T15:50:57.542Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"New York, NY"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Healthcare","skills":"applied AI, data strategy, build-vs-buy evaluations, AI-native infrastructure, data orchestration, Python, GenAI frameworks, full-stack system integration, deep healthcare data, healthcare system expertise, large-scale distributed data and compute systems","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":274000,"maxValue":322000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_b054d891-685"},"title":"Staff+ Software Engineer, Databases","description":"<p>We&#39;re looking for experienced engineers to build and scale the database infrastructure that powers both Claude&#39;s product offerings and Anthropic&#39;s research initiatives.</p>\n<p>As a Software Engineer on the Databases team, you will architect and operate database systems that both enable millions of users to interact with Claude and support cutting-edge AI research.</p>\n<p>This is a unique opportunity to tackle database challenges at unprecedented scale. You&#39;ll develop the database strategy for Anthropic, design systems that handle billions of API requests, create storage solutions that work seamlessly across GCP, AWS, and diverse deployment models, and build the reliable data layer that accelerates research experimentation.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Drive the technical direction for database solutions used across Product and Research</li>\n<li>Design and implement database solutions that scale to support millions of users across Claude&#39;s product ecosystem</li>\n<li>Build and scale database systems through 100x+ growth while maintaining reliability and performance</li>\n<li>Architect data storage solutions that work seamlessly across GCP, AWS, first-party deployments, third-party deployments, and other environments</li>\n<li>Develop database infrastructure that serves both product and research workloads with different performance characteristics</li>\n<li>Partner with product and research teams to understand data requirements and build infrastructure that accelerates innovation</li>\n<li>Optimize database performance, reliability, and cost efficiency at massive scale</li>\n<li>Make critical build vs. buy decisions for database technologies</li>\n</ul>\n<p>You might be a good fit if you:</p>\n<ul>\n<li>Have 10+ years of experience in a Software Engineer role, building and scaling database systems</li>\n<li>Have 3+ years of experience leading large scale, complex projects or teams as an engineer or tech lead</li>\n<li>Possess deep expertise in distributed database architectures and OLTP systems at scale</li>\n<li>Have successfully scaled databases through massive growth at high-growth companies</li>\n<li>Can balance the speed of a startup environment with the reliability needs of production systems</li>\n<li>Excel at technical leadership and cross-functional collaboration</li>\n<li>Are passionate about building the data layer that enables next-generation AI capabilities</li>\n</ul>\n<p>Strong candidates may also have:</p>\n<ul>\n<li>Deep expertise scaling PostgreSQL, MySQL, DynamoDB, or similar database systems</li>\n<li>Experience with Redis, Temporal, vector databases, or async job processing frameworks</li>\n<li>Experience building multi-cloud or hybrid cloud database solutions</li>\n<li>Knowledge of database orchestration and automation at scale</li>\n<li>Background at companies known for database excellence</li>\n</ul>\n<p>Note: Prior AI/ML infrastructure experience is not required. We value deep infrastructure/databases expertise from any domain.</p>\n<p>We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_b054d891-685","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com/","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5151069008","x-work-arrangement":"hybrid","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"$320,000-$485,000 USD","x-skills-required":["Database architecture","Distributed database systems","OLTP systems","Database scaling","Database performance optimization","Database reliability","Database cost efficiency"],"x-skills-preferred":["PostgreSQL","MySQL","DynamoDB","Redis","Temporal","Vector databases","Async job processing frameworks","Multi-cloud database solutions","Hybrid cloud database solutions","Database orchestration","Database automation"],"datePosted":"2026-04-18T15:50:18.715Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA | New York City, NY | Seattle, WA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Database architecture, Distributed database systems, OLTP systems, Database scaling, Database performance optimization, Database reliability, Database cost efficiency, PostgreSQL, MySQL, DynamoDB, Redis, Temporal, Vector databases, Async job processing frameworks, Multi-cloud database solutions, Hybrid cloud database solutions, Database orchestration, Database automation","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":320000,"maxValue":485000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_e56cfd84-be6"},"title":"Delivery Solutions Architect - Financial Services","description":"<p>At Databricks, we are on a mission to empower our customers to solve the world&#39;s toughest data problems by utilizing the Databricks Data Intelligence Platform. We are seeking a Delivery Solutions Architect to play an important role in this journey.</p>\n<p>As a Delivery Solutions Architect, you will collaborate with our sales and field engineering teams to accelerate the adoption and growth of the Databricks platform in your customers. You will also help ensure customer success by increasing focus and technical accountability to our most complex customers who need guidance to accelerate usage on Databricks workloads that they have already selected, helping them maximise the value they get of our platform and the return on investment.</p>\n<p>This is a hybrid technical and commercial role. It is commercial in the sense that you will drive growth in your assigned customers and use cases through leading your customers&#39; stakeholders, building executive relationships, orchestration of other focused/specialized teams within Databricks, and creating and driving plans and strategies for Databricks colleagues to build upon. This is in parallel to being technical, with expectations being that you become the post-sale technical lead across all Databricks products.</p>\n<p>The impact you will have:</p>\n<ul>\n<li>Engage with Solutions Architects to understand the full use case demand plan for prioritised customers</li>\n<li>Lead the post-technical win technical account strategy and execution plan for the majority of Databricks use cases within our most strategic accounts</li>\n<li>Be the accountable technical leader assigned to specific use cases and customer(s) across multiple selling teams and internal stakeholders, creating certainty from uncertainty and driving onboarding, enablement, success, go-live and healthy consumption of the workloads where the customer has made the decision to consume Databricks</li>\n<li>Be the first contact for any technical issues or questions related to production/go live status of agreed upon use cases within an account, oftentimes services multiple use cases within the largest and most complex organizations</li>\n<li>Leverage both Shared Services, User Education, Onboarding/Technical Services and Support resources, along with escalating to expert level technical experts to build the right tasks that are beyond your scope of activities or expertise</li>\n<li>Create, own and execute a point-of-view as to how key use cases can be accelerated into production, coordinating with Professional Services (PS) resources on the delivery of PS Engagement proposals</li>\n<li>Navigate Databricks Product and Engineering teams for new product Innovations, private previews and upgrade needs</li>\n<li>Develop an execution plan that covers all activities of all customer-facing technical roles and teams to cover the below work streams:</li>\n</ul>\n<ul>\n<li>Main use cases moving from ‘win’ to production</li>\n<li>Enablement / user growth plan</li>\n<li>Product adoption (strategy and activities to increase adoption of Databricks’ Lakehouse vision)</li>\n<li>Organic needs for current investment (e.g. cloud cost control, tuning &amp; optimization)</li>\n<li>Executive and operational governance</li>\n<li>Provide internal and external updates</li>\n<li>KPI reporting on the status of usage and customer health, covering investment status, important risks, product adoption and use case progression</li>\n</ul>\n<p>What we look for:</p>\n<ul>\n<li>7+ years of experience where you have been accountable for technical project / program delivery within the domain of Data and AI and where you can contribute to technical debate and design choices with customers</li>\n<li>Programming experience in Python, SQL or Scala</li>\n<li>Experience in a customer-facing pre-sales, technical architecture, customer success, or consulting role</li>\n<li>Understanding of solution architecture related distributed data systems</li>\n<li>Understanding of how to attribute business value and outcomes to specific project deliverables</li>\n<li>Technical program, or project management including account, stakeholder and resource management accountability</li>\n<li>Experience resolving complex and important escalation with senior customer executives</li>\n<li>Experience conducting open-ended discovery workshops, creating strategic roadmaps, conducting business analysis and managing delivery of complex programmes/projects</li>\n<li>Track record of overachievement against quota, Goals or similar objective targets</li>\n<li>Bachelor&#39;s degree in Computer Science, Information Systems, Engineering, or equivalent experience through work experience</li>\n<li>Can travel up to 30% when needed</li>\n</ul>\n<p>Pay Range Transparency</p>\n<p>Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role is listed below and represents the expected salary range for non-commissionable roles or on-target earnings for commissionable roles. Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location. Based on the factors above, Databricks anticipates utilizing the full width of the range. The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above.</p>\n<p>For more information regarding which range your location is in visit our page here.</p>\n<p>Local Pay Range $180,000-$247,500 USD</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_e56cfd84-be6","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8442976002","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$180,000-$247,500 USD","x-skills-required":["Python","SQL","Scala","Solution architecture","Distributed data systems","Business value and outcomes","Technical program management","Project management","Account management","Stakeholder management","Resource management"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:48:21.212Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Georgia; Illinois; Massachusetts; New York; North Carolina; Washington, D.C."}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, SQL, Scala, Solution architecture, Distributed data systems, Business value and outcomes, Technical program management, Project management, Account management, Stakeholder management, Resource management","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":180000,"maxValue":247500,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_46628f21-1ce"},"title":"Delivery Solutions Architect","description":"<p>At Databricks, we are on a mission to empower our customers to solve the world&#39;s toughest data problems by utilizing the Databricks Data Intelligence Platform. As a Delivery Solutions Architect (DSA), you will play an important role during this journey.</p>\n<p>You will collaborate with our sales and field engineering teams to accelerate the adoption and growth of the Databricks platform in your customers. You will also help ensure customer success by increasing focus and technical accountability to our most complex customers who need guidance to accelerate usage on Databricks workloads that they have already selected, helping them maximise the value they get of our platform and the return on investment.</p>\n<p>This is a hybrid technical and commercial role. It is commercial in the sense that you will drive growth in your assigned customers and use cases through leading your customers&#39; stakeholders, building executive relationships, orchestration of other focused/specialized teams within Databricks, and creating and driving plans and strategies for Databricks colleagues to build upon. This is in parallel to being technical, with expectations being that you become the post-sale technical lead across all Databricks products.</p>\n<p>The impact you will have:</p>\n<ul>\n<li>Engage with Solutions Architects to understand the full use case demand plan for prioritised customers</li>\n<li>Lead the post-technical win technical account strategy and execution plan for the majority of Databricks use cases within our most strategic accounts</li>\n<li>Be the accountable technical leader assigned to specific use cases and customer(s) across multiple selling teams and internal stakeholders, creating certainty from uncertainty and driving onboarding, enablement, success, go-live and healthy consumption of the workloads where the customer has made the decision to consume Databricks</li>\n<li>Be the first contact for any technical issues or questions related to production/go live status of agreed upon use cases within an account, oftentimes services multiple use cases within the largest and most complex organizations</li>\n<li>Leverage both Shared Services, User Education, Onboarding/Technical Services and Support resources, along with escalating to expert level technical experts to build the right tasks that are beyond your scope of activities or expertise</li>\n<li>Create, own and execute a point-of-view as to how key use cases can be accelerated into production, coordinating with Professional Services (PS) resources on the delivery of PS Engagement proposals</li>\n<li>Navigate Databricks Product and Engineering teams for new product Innovations, private previews and upgrade needs</li>\n<li>Develop an execution plan that covers all activities of all customer-facing technical roles and teams to cover the below work streams:</li>\n</ul>\n<ul>\n<li>Main use cases moving from ‘win’ to production</li>\n<li>Enablement / user growth plan</li>\n<li>Product adoption (strategy and activities to increase adoption of Databricks’ Lakehouse vision)</li>\n<li>Organic needs for current investment (e.g. cloud cost control, tuning &amp; optimization)</li>\n<li>Executive and operational governance</li>\n<li>Provide internal and external updates</li>\n<li>KPI reporting on the status of usage and customer health, covering investment status, important risks, product adoption and use case progression</li>\n</ul>\n<p>What we look for:</p>\n<ul>\n<li>5+ years of experience where you have been accountable for technical project / program delivery within the domain of Data and AI and where you can contribute to technical debate and design choices with customers</li>\n<li>Programming experience in Python, SQL or Scala</li>\n<li>Experience in a customer-facing pre-sales, technical architecture, customer success, or consulting role</li>\n<li>Understanding of solution architecture related distributed data systems</li>\n<li>Understanding of how to attribute business value and outcomes to specific project deliverables</li>\n<li>Technical program, or project management including account, stakeholder and resource management accountability</li>\n<li>Experience resolving complex and important escalation with senior customer executives</li>\n<li>Experience conducting open-ended discovery workshops, creating strategic roadmaps, conducting business analysis and managing delivery of complex programmes/projects</li>\n<li>Track record of overachievement against quota, Goals or similar objective targets</li>\n<li>Bachelor&#39;s degree in Computer Science, Information Systems, Engineering, or equivalent experience through work experience</li>\n<li>Can travel up to 30% when needed</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_46628f21-1ce","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8476496002","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Python","SQL","Scala","Distributed data systems","Solution architecture","Technical project management","Customer success","Pre-sales","Technical architecture","Consulting"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:47:56.910Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Brisbane, Australia; Melbourne, Australia"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, SQL, Scala, Distributed data systems, Solution architecture, Technical project management, Customer success, Pre-sales, Technical architecture, Consulting"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_b7b8d06f-881"},"title":"Backend Engineer, Knowledge Graph (Rust)","description":"<p>As an Intermediate Backend Engineer on the GitLab Knowledge Graph team, you&#39;ll help build and operate a graph data service that supports GitLab Duo agents, analytics, and architecture-level features across GitLab.com, Dedicated, and Self-Managed deployments.</p>\n<p>You&#39;ll join a small, Rust-first team that values clear ownership, thoughtful system design, and rigorous thinking about data and reliability. The Knowledge Graph service is a Rust backend that builds a property graph from GitLab’s software development lifecycle (SDLC) and code data. It uses ClickHouse, NATS JetStream, and the Data Insights Platform. It exposes secure graph queries and MCP tools used by AI agents and product features.</p>\n<p>In this role, you’ll deliver features and improvements in well-scoped areas, learn the broader architecture, and contribute to reliability, observability, and operational readiness. In your first year, you’ll take clear ownership of specific components or features (for example, parts of the SDLC indexing pipeline or query paths). You’ll help reduce single points of failure with better tests and runbooks, and you’ll help the team ship analytical services that are easier to maintain and evolve over time.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Implement and iterate on backend features in the Rust-based Knowledge Graph service, including changes to the query engine, SDLC and code indexing flows, and API endpoints (including MCP endpoints) under guidance from senior and staff engineers.</li>\n</ul>\n<ul>\n<li>Help maintain integrations between Knowledge Graph and the rest of the GitLab platform, working in areas that touch GitLab Rails, the Data Insights Platform (Siphon, NATS, ClickHouse), and GitLab Duo Agent Platform.</li>\n</ul>\n<ul>\n<li>Contribute to system design discussions by proposing options, raising questions, and documenting decisions, with a focus on reliability, scalability, and maintainability for analytical graph workloads.</li>\n</ul>\n<ul>\n<li>Improve the operational maturity of the service by adding or enhancing metrics, logging, runbooks, alerts, and small readiness tasks, and by participating in on-call rotation as appropriate for your level and experience.</li>\n</ul>\n<ul>\n<li>Collaborate asynchronously with product, data, infrastructure, security, and AI counterparts to clarify requirements, align on scope, and ship features safely for customers and sustainably for the team.</li>\n</ul>\n<ul>\n<li>Use AI-assisted development workflows responsibly (for example, using Knowledge Graph-backed agents and internal Duo tooling), and share what works with the team while keeping a strong focus on code quality and correctness.</li>\n</ul>\n<ul>\n<li>Participate in code reviews, knowledge-sharing sessions, and pairing to both learn from others and help maintain consistent standards across the codebase.</li>\n</ul>\n<ul>\n<li>Contribute across the stack when needed, including occasional Ruby work for Rails integration and authorization paths, or small frontend changes related to Knowledge Graph features (for example, Software Architecture Map UI plumbing).</li>\n</ul>\n<p>Requirements:</p>\n<ul>\n<li>Professional experience building and maintaining backend systems in production, with an understanding of reliability, maintainability, and how to support services over time (incident responses, and follow-ups, etc).</li>\n</ul>\n<ul>\n<li>Proficiency in at least one modern backend language and strong interest in Rust, with either prior Rust experience or clear evidence you can ramp quickly and deliver in a Rust-first, performance-sensitive codebase.</li>\n</ul>\n<ul>\n<li>Some exposure to distributed data or analytics systems (for example, OLAP databases, Kafka- or NATS-style messaging, or change data capture (CDC) pipelines), or strong motivation to develop those skills in this role.</li>\n</ul>\n<ul>\n<li>Interest in graph data modeling and query patterns (property graphs, multi-step (n-hop) traversals, aggregations), and willingness to learn the tools and concepts used in Knowledge Graph over time.</li>\n</ul>\n<ul>\n<li>Practical experience (or strong interest) using AI tools in day-to-day development, along with a thoughtful approach to validating outputs and integrating AI into your workflow.</li>\n</ul>\n<ul>\n<li>A language-agnostic mindset and evidence that you can pick up new languages and frameworks as needed (for example, Ruby, Go, or TypeScript/Vue where the work touches adjacent systems).</li>\n</ul>\n<ul>\n<li>Solid fundamentals in system design for your level, including the ability to reason about trade-offs, ask good questions, and align your implementation work with documented architectural decisions.</li>\n</ul>\n<ul>\n<li>Comfort working in a low-process, high-ownership environment where you take responsibility for your work, communicate progress clearly, and help refine problem statements with your teammates.</li>\n</ul>\n<ul>\n<li>Strong written communication and comfort collaborating asynchronously across time zones in an all-remote team.</li>\n</ul>\n<p>About the team:</p>\n<p>We sit within the Data Engineering organization. We&#39;re a small group of senior engineers and we work closely with partners across AI (Duo Agent Platform), analytics, infrastructure and delivery, and security because our work spans many parts of the platform. We collaborate asynchronously and optimize for strong ownership rather than a feature factory model. We each build a meaningful understanding of the system and help evolve it over time. A key challenge for us right now is scaling sustainably. That includes hardening multi-tenant behavior, maturing observability and readiness, and keeping the system healthy and maintainable as usage grows and team members take time off. At the same time, we&#39;re bringing Knowledge Graph to general availability (GA).</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_b7b8d06f-881","directApply":true,"hiringOrganization":{"@type":"Organization","name":"GitLab","sameAs":"https://about.gitlab.com/","logo":"https://logos.yubhub.co/about.gitlab.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/gitlab/jobs/8437754002","x-work-arrangement":"remote","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":"$98,000-$210,000 USD","x-skills-required":["Rust","backend systems","reliability","maintainability","distributed data","analytics systems","graph data modeling","query patterns","AI tools","system design","low-process","high-ownership"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:47:47.362Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Remote, Canada; Remote, US"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Rust, backend systems, reliability, maintainability, distributed data, analytics systems, graph data modeling, query patterns, AI tools, system design, low-process, high-ownership","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":98000,"maxValue":210000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_688649c6-0a0"},"title":"Delivery Solutions Architect - Digital Native Business","description":"<p>At Databricks, we are on a mission to empower our customers to solve the world&#39;s toughest data problems. As a Delivery Solutions Architect (DSA), you will play an important role in this journey. You will collaborate with our sales and field engineering teams to accelerate the adoption and growth of the Databricks platform in your customers. You will also help ensure customer success by increasing focus and technical accountability to our most complex customers who need guidance to accelerate usage on Databricks workloads that they have already selected, helping them maximise the value they get of our platform and the return on investment.</p>\n<p>This is a hybrid technical and commercial role. It is commercial in the sense that you will drive growth in your assigned customers and use cases through leading your customers&#39; stakeholders, building executive relationships, orchestration of other focused/specialized teams within Databricks, and creating and driving plans and strategies for Databricks colleagues to build upon. This is in parallel to being technical, with expectations being that you become the post-sale technical lead across all Databricks products. This requires you to use your skills and technical credibility to engage and communicate at all levels with an organisation.</p>\n<p>The impact you will have:</p>\n<ul>\n<li>Engage with Solutions Architects to understand the full use case demand plan for prioritised customers</li>\n<li>Lead the post-technical win technical account strategy and execution plan for the majority of Databricks use cases within our most strategic accounts</li>\n<li>Be the accountable technical leader assigned to specific use cases and customer(s) across multiple selling teams and internal stakeholders, creating certainty from uncertainty and driving onboarding, enablement, success, go-live and healthy consumption of the workloads where the customer has made the decision to consume Databricks</li>\n<li>Be the first contact for any technical issues or questions related to production/go live status of agreed upon use cases within an account, oftentimes services multiple use cases within the largest and most complex organisations</li>\n<li>Leverage both Shared Services, User Education, Onboarding/Technical Services and Support resources, along with escalating to expert level technical experts to build the right tasks that are beyond your scope of activities or expertise</li>\n<li>Create, own and execute a point-of-view as to how key use cases can be accelerated into production, coordinating with Professional Services (PS) resources on the delivery of PS Engagement proposals</li>\n<li>Navigate Databricks Product and Engineering teams for new product Innovations, private previews and upgrade needs</li>\n<li>Develop an execution plan that covers all activities of all customer-facing technical roles and teams to cover the below work streams:</li>\n</ul>\n<ul>\n<li>Main use cases moving from &#39;win&#39; to production</li>\n<li>Enablement / user growth plan</li>\n<li>Product adoption (strategy and activities to increase adoption of Databricks&#39; Lakehouse vision)</li>\n<li>Organic needs for current investment (e.g. cloud cost control, tuning &amp; optimisation)</li>\n<li>Executive and operational governance</li>\n<li>Provide internal and external updates</li>\n<li>KPI reporting on the status of usage and customer health, covering investment status, important risks, product adoption and use case progression to your Technical GM</li>\n</ul>\n<p>What we look for:</p>\n<ul>\n<li>5+ years of experience where you have been accountable for technical project / program delivery within the domain of Data and AI and where you can contribute to technical debate and design choices with customers</li>\n<li>Programming experience in Python, SQL or Scala</li>\n<li>Experience in a customer-facing pre-sales, technical architecture, customer success, or consulting role</li>\n<li>Understanding of solution architecture related distributed data systems</li>\n<li>Understanding of how to attribute business value and outcomes to specific project deliverables</li>\n<li>Technical program, or project management including account, stakeholder and resource management accountability</li>\n<li>Experience resolving complex and important escalation with senior customer executives</li>\n<li>Experience conducting open-ended discovery workshops, creating strategic roadmaps, conducting business analysis and managing delivery of complex programmes/projects</li>\n<li>Track record of overachievement against quota, Goals or similar objective targets</li>\n<li>Bachelor&#39;s degree in Computer Science, Information Systems, Engineering, or equivalent experience through work experience</li>\n<li>Can travel up to 30% when needed</li>\n</ul>\n<p>Pay Range Transparency Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role is listed below and represents the expected salary range for non-commissionable roles or on-target earnings for commissionable roles. Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location. Based on the factors above, Databricks anticipates utilizing the full width of the range. The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above. For more information regarding which range your location is in visit our page here. Local Pay Range $180,000-$247,500 USD</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_688649c6-0a0","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8385234002","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$180,000-$247,500 USD","x-skills-required":["Python","SQL","Scala","Solution architecture","Distributed data systems","Project management","Account management","Stakeholder management","Resource management"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:47:13.581Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"United States"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, SQL, Scala, Solution architecture, Distributed data systems, Project management, Account management, Stakeholder management, Resource management","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":180000,"maxValue":247500,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_1e2803b1-820"},"title":"Delivery Solutions Architect - Communications, Media, Entertainment and Games","description":"<p>We are seeking a Delivery Solutions Architect to join our team. As a Delivery Solutions Architect, you will play a key role in accelerating the adoption and growth of the Databricks platform in our customers. You will collaborate with our sales and field engineering teams to ensure customer success by increasing focus and technical accountability to our most complex customers. You will be responsible for leading the post-technical win technical account strategy and execution plan for the majority of Databricks use cases within our most strategic accounts. You will be the accountable technical leader assigned to specific use cases and customer(s) across multiple selling teams and internal stakeholders. You will create, own, and execute a point-of-view as to how key use cases can be accelerated into production, coordinating with Professional Services resources on the delivery of PS Engagement proposals. You will navigate Databricks Product and Engineering teams for new product innovations, private previews, and upgrade needs. You will develop an execution plan that covers all activities of all customer-facing technical roles and teams to cover main use cases moving from &#39;win&#39; to production, enablement/user growth plan, product adoption, organic needs for current investment, executive and operational governance, and KPI reporting on the status of usage and customer health.</p>\n<p>To be successful in this role, you will need to have 5+ years of experience where you have been accountable for technical project/program delivery within the domain of Data and AI. You will need to have programming experience in Python, SQL, or Scala. You will need to have experience in a customer-facing pre-sales, technical architecture, customer success, or consulting role. You will need to have understanding of solution architecture related distributed data systems. You will need to have understanding of how to attribute business value and outcomes to specific project deliverables. You will need to have technical program or project management including account, stakeholder, and resource management accountability. You will need to have experience resolving complex and important escalation with senior customer executives. You will need to have experience conducting open-ended discovery workshops, creating strategic roadmaps, conducting business analysis, and managing delivery of complex programs/projects. You will need to have a track record of overachievement against quota, goals, or similar objective targets. You will need to have a Bachelor&#39;s degree in Computer Science, Information Systems, Engineering, or equivalent experience through work experience.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_1e2803b1-820","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com/","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8457249002","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$219,100-$301,300 USD","x-skills-required":["Python","SQL","Scala","Solution architecture","Distributed data systems","Technical program management","Project management"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:46:51.533Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Remote - California"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, SQL, Scala, Solution architecture, Distributed data systems, Technical program management, Project management","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":219100,"maxValue":301300,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_30648b64-012"},"title":"Delivery Solutions Architect","description":"<p>At Databricks, we are on a mission to empower our customers to solve the world&#39;s toughest data problems by utilising the Intelligence platform. As a Delivery Solutions Architect (DSA), you will play an important role during this journey.</p>\n<p>You will collaborate with our sales and field engineering teams to accelerate the adoption and growth of the Databricks platform in your customers. As a DSA, you will help ensure customer success by increasing focus and technical accountability to our most complex customers who need guidance to accelerate usage on Databricks workloads that they have already selected, helping them maximise the value they get of our platform and the return on investment.</p>\n<p>This is a hybrid technical and commercial role. It is commercial in the sense that you will drive growth in your assigned customers and use cases through leading your customers&#39; stakeholders, building executive relationships, orchestration of other focused/specialized teams within Databricks, and creating and driving plans and strategies for Databricks colleagues to build upon. This is in parallel to being technical, with expectations being that you become the post-sale technical lead across all Databricks products.</p>\n<p>This requires you to use your skills and technical credibility to engage and communicate at all levels with an organisation. You will report directly to a Field Engineering Director in Japan.</p>\n<p>The impact you will have:</p>\n<ul>\n<li>Engage with the Solutions Architect to understand the full Use Case Demand Plan for prioritised customers.</li>\n<li>Lead the Post-Technical Win technical account strategy and execution plan for the majority of Databricks Use Cases within our most strategic accounts.</li>\n<li>Be the accountable technical leader assigned to specific Use Cases and customer(s) across multiple selling teams and internal stakeholders, creating certainty from uncertainty and driving onboarding, enablement, success, go-live and healthy consumption of the workloads where the customer has made the decision to consume Databricks.</li>\n<li>You will be the first contact for any technical issues or questions related to production/go live status of agreed upon Use Cases within an account, oftentimes services multiple use cases within the largest and most complex organisations.</li>\n<li>Leverage both Shared Services, User Education, Onboarding/Technical Services and Support resources, along with escalating to expert level technical experts to build the right tasks that are beyond your scope of activities or expertise.</li>\n<li>Create, own and execute a point-of-view as to how key use cases can be accelerated into production, coordinating with Professional Services resources on the delivery of PS Engagement proposals.</li>\n<li>Navigate Databricks Product and Engineering teams for New Product Innovations, Private Previews and Upgrade needs.</li>\n<li>Develop an execution plan that covers all activities of all customer-facing technical roles and teams to cover the below work streams:</li>\n</ul>\n<ul>\n<li>Main use cases moving from ‘win’ to production</li>\n<li>Enablement / user growth plan</li>\n<li>Product adoption (strategy and activities to increase adoption of Databricks’ Lakehouse vision)</li>\n<li>Organic needs for current investment Eg. Cloud Cost control, Tuning &amp; Optimisation</li>\n<li>Executive and operational governance</li>\n<li>Provide internal and external updates</li>\n<li>KPI reporting on the status of usage and customer health, covering investment status, important risks, product adoption and use case progression - to your Technical GM.</li>\n</ul>\n<p>What we look for:</p>\n<ul>\n<li>8+ years of experience where you have been accountable for technical project / program delivery within the domain of Data and AI and where you can contribute to technical debate and design choices with our customers.</li>\n<li>Programming experience in Python, SQL or Scala</li>\n<li>Experience in a customer-facing pre-sales, technical architecture, customer success, or consulting role</li>\n<li>Understanding of solution architecture related distributed data systems</li>\n<li>A understanding of how to attribute business value and outcomes to specific project deliverables</li>\n<li>Technical program, or project management including account, stakeholder and resource management accountability</li>\n<li>Experience resolving complex and important escalation with senior customer executives</li>\n<li>Experience conducting open-ended discovery workshops, creating strategic roadmaps, conducting business analysis and managing delivery of complex programmes/projects</li>\n<li>Track record of overachievement against quota, Goals or similar objective targets</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_30648b64-012","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8450551002","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Python","SQL","Scala","Solution architecture","Distributed data systems","Project management","Technical program management","Customer success","Pre-sales","Technical architecture","Consulting"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:46:48.653Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Tokyo, Japan"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, SQL, Scala, Solution architecture, Distributed data systems, Project management, Technical program management, Customer success, Pre-sales, Technical architecture, Consulting"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_0c456364-565"},"title":"Delivery Solutions Architect","description":"<p>As a Delivery Solutions Architect at Databricks, you will be a trusted technical advisor embedded within the customer organisation. You will work closely with sales and field engineering to accelerate adoption and growth of the Databricks platformقت You will ensure customer success by providing technical accountability for our most complex customers,helping them maximise the value of Databricks workloads they have already selected and improving their return on investment.</p>\n<p>This role blends deep technical leadership with strategic customer engagement. You will own the post-sales technical strategy for the customer’s highest-value use cases and serve as their primary advisor across the Databricks platform.</p>\n<p>Key responsibilities include:</p>\n<ul>\n<li>Being the accountable Databricks Architect for your assigned customers, working with technical teams to guide priority use cases from design through go-live,removing blockers, providing best practices, and ensuring stable, scalable adoption.</li>\n<li>Leading the post-technical-win strategy and execution plan for major Databricks use cases, aligning with Solutions Architects to understand full demand plans and drive clarity across multiple selling teams and stakeholders.</li>\n<li>Owning the technical leadership of assigned use cases, creating certainty from ambiguity and coordinating onboarding, enablement, success, go-live, and healthy consumption of workloads selected for Databricks.</li>\n<li>Serving as the first point of contact for production/go-live status, often across multiple complex use cases within large enterprise organisations.</li>\n<li>Orchestrating the broader Databricks ecosystem,Shared Services, User Education, Onboarding/Technical Services, Support, and specialist technical teams,to ensure high-quality delivery and escalate advanced issues when needed.</li>\n<li>Creating and executing a point of view for accelerating use cases into production, collaborating with Professional Services on proposals as needed.</li>\n<li>Partnering with Product and Engineering to introduce new capabilities, private previews, and upgrade paths that support customer roadmaps.</li>\n</ul>\n<p>Requirements include:</p>\n<ul>\n<li>Programming experience in Python, SQL, or Scala, and a solid understanding of distributed data systems.</li>\n<li>5+ years of experience delivering Data, Analytics, or AI projects, with the ability to contribute to architectural discussions with customers.</li>\n<li>Experience in customer-facing technical roles such as technical architecture, pre-sales, consulting, or customer success.</li>\n<li>Ability to guide architectural decisions in domains such as data engineering, data architecture, data warehousing, or data science.</li>\n<li>Demonstrated ability to drive delivery outcomes without hands-on keyboard responsibilities.</li>\n<li>Experience resolving complex escalations with senior customer stakeholders.</li>\n<li>Understanding of how to connect technical deliverables to business value.</li>\n<li>Track record of achieving or exceeding goals or objectives.</li>\n<li>Bachelor’s degree in Computer Science, Information Systems, Engineering, or equivalent experience.</li>\n<li>Fluency in English is required; French or German language skills are a plus.</li>\n<li>Ability to travel up to 30%.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_0c456364-565","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8309177002","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Python","SQL","Scala","Distributed data systems","Data engineering","Data architecture","Data warehousing","Data science"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:46:12.698Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Zürich, Switzerland"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, SQL, Scala, Distributed data systems, Data engineering, Data architecture, Data warehousing, Data science"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_c94fae85-14c"},"title":"Delivery Solutions Architect - Digital Native Business","description":"<p>At Databricks, we are on a mission to empower our customers to solve the world&#39;s toughest data problems by utilizing the Databricks Data Intelligence Platform.</p>\n<p>As a Delivery Solutions Architect (DSA), you will play an important role during this journey. You will collaborate with our sales and field engineering teams to accelerate the adoption and growth of the Databricks platform in Digital Native customers.</p>\n<p>You will also help ensure customer success by increasing focus and technical accountability to our most complex customers who need guidance to accelerate usage on Databricks workloads that they have already selected, helping them maximise the value they get of our platform and the return on investment.</p>\n<p>This is a hybrid technical and commercial role. It is commercial in the sense that you will drive growth in your assigned customers and use cases through leading your customers&#39; stakeholders, building executive relationships, orchestration of other focused/specialized teams within Databricks, and creating and driving plans and strategies for Databricks colleagues to build upon.</p>\n<p>This is in parallel to being technical, with expectations being that you become the post-sale technical lead across all Databricks products. This requires you to use your skills and technical credibility to engage and communicate at all levels with an organisation.</p>\n<p>You will report directly to a DSA Manager within the Field Engineering organization.</p>\n<p>The impact you will have:</p>\n<ul>\n<li>Engage with Solutions Architects to understand the full use case demand plan for prioritized customers</li>\n</ul>\n<ul>\n<li>Lead the post-technical win technical account strategy and execution plan for the majority of Databricks use cases within our most strategic accounts</li>\n</ul>\n<ul>\n<li>Be the accountable technical leader assigned to specific use cases and customer(s) across multiple selling teams and internal stakeholders, creating certainty from uncertainty and driving onboarding, enablement, success, go-live and healthy consumption of the workloads where the customer has made the decision to consume Databricks</li>\n</ul>\n<ul>\n<li>Be the first contact for any technical issues or questions related to production/go live status of agreed upon use cases within an account, oftentimes services multiple use cases within the largest and most complex organisations</li>\n</ul>\n<ul>\n<li>Leverage both Shared Services, User Education, Onboarding/Technical Services and Support resources, along with escalating to expert level technical experts to build the right tasks that are beyond your scope of activities or expertise</li>\n</ul>\n<ul>\n<li>Create, own and execute a point-of-view as to how key use cases can be accelerated into production, coordinating with Professional Services (PS) resources on the delivery of PS Engagement proposals</li>\n</ul>\n<ul>\n<li>Navigate Databricks Product and Engineering teams for new product Innovations, private previews and upgrade needs</li>\n</ul>\n<ul>\n<li>Develop an execution plan that covers all activities of all customer-facing technical roles and teams to cover the below work streams:</li>\n</ul>\n<ul>\n<li>Main use cases moving from &#39;win&#39; to production</li>\n</ul>\n<ul>\n<li>Enablement / user growth plan</li>\n</ul>\n<ul>\n<li>Product adoption (strategy and activities to increase adoption of Databricks&#39; Lakehouse vision)</li>\n</ul>\n<ul>\n<li>Organic needs for current investment (e.g. cloud cost control, tuning &amp; optimisation)</li>\n</ul>\n<ul>\n<li>Executive and operational governance</li>\n</ul>\n<ul>\n<li>Provide internal and external updates</li>\n</ul>\n<ul>\n<li>KPI reporting on the status of usage and customer health, covering investment status, important risks, product adoption and use case progression to your Technical GM</li>\n</ul>\n<p>What we look for:</p>\n<ul>\n<li>5+ years of experience where you have been accountable for technical project / program delivery within the domain of Data and AI and where you can contribute to technical debate and design choices with customers</li>\n</ul>\n<ul>\n<li>Programming experience in Python, SQL or Scala</li>\n</ul>\n<ul>\n<li>Experience in a customer-facing pre-sales, technical architecture, customer success, or consulting role</li>\n</ul>\n<ul>\n<li>Understanding of solution architecture related distributed data systems</li>\n</ul>\n<ul>\n<li>Understanding of how to attribute business value and outcomes to specific project deliverables</li>\n</ul>\n<ul>\n<li>Technical program, or project management including account, stakeholder and resource management accountability</li>\n</ul>\n<ul>\n<li>Experience resolving complex and important escalation with senior customer executives</li>\n</ul>\n<ul>\n<li>Experience conducting open-ended discovery workshops, creating strategic roadmaps, conducting business analysis and managing delivery of complex programmes/projects</li>\n</ul>\n<ul>\n<li>Track record of overachievement against quota, Goals or similar objective targets</li>\n</ul>\n<ul>\n<li>Bachelor&#39;s degree in Computer Science, Information Systems, Engineering, or equivalent experience through work experience</li>\n</ul>\n<p>Can travel up to 30% when needed</p>\n<p>Pay Range Transparency Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role is listed below and represents the expected salary range for non-commissionable roles or on-target earnings for commissionable roles. Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location. Based on the factors above, Databricks anticipates utilizing the full width of the range. The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above. For more information regarding which range your location is in visit our page here. Local Pay Range $180,000-$247,500 USD</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_c94fae85-14c","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8385230002","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$180,000-$247,500 USD","x-skills-required":["Python","SQL","Scala","Solution architecture","Distributed data systems","Technical project management","Customer success","Pre-sales","Technical architecture","Consulting"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:46:02.287Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Remote - California; Remote - Colorado; Remote - Oregon; Remote - Washington"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, SQL, Scala, Solution architecture, Distributed data systems, Technical project management, Customer success, Pre-sales, Technical architecture, Consulting","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":180000,"maxValue":247500,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_dcccc99d-20f"},"title":"Delivery Solutions Architect - Public Sector","description":"<p>As a Delivery Solutions Architect, you will play a key role in accelerating the adoption and growth of the Databricks Platform in public sector customers. You will collaborate with sales and field engineering teams to drive growth in assigned customers and use cases. This is a hybrid technical and commercial role that requires you to utilize your skills and technical credibility to engage and communicate effectively with all levels of an organization.</p>\n<p>Key responsibilities include:</p>\n<ul>\n<li>Engaging with solutions architects to understand full use case demand plans for prioritized customers</li>\n<li>Leading post-technical win technical account strategy and execution plans for Databricks use cases within strategic accounts</li>\n<li>Being the accountable technical leader assigned to specific use cases and customer(s) across multiple selling teams and internal stakeholders</li>\n<li>Creating, owning, and executing a point-of-view on how key use cases can be accelerated into production</li>\n</ul>\n<p>Requirements include:</p>\n<ul>\n<li>U.S. citizenship</li>\n<li>7+ years of experience in technical project/program delivery within the domain of data and AI</li>\n<li>Programming experience in Python, SQL, or Scala</li>\n<li>Experience in a customer-facing pre-sales, technical architecture, customer success, or consulting role</li>\n<li>Understanding of solution architecture related to distributed data systems</li>\n</ul>\n<p>Pay range transparency: $180,000-$247,500 USD</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_dcccc99d-20f","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8289852002","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$180,000-$247,500 USD","x-skills-required":["Python","SQL","Scala","Data and AI","Solution architecture","Distributed data systems"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:45:50.959Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"New Jersey; Remote - New York; Remote - Pennsylvania; Remote - Washington D.C."}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, SQL, Scala, Data and AI, Solution architecture, Distributed data systems","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":180000,"maxValue":247500,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_0abf66ee-ccb"},"title":"Delivery Solutions Architect - Healthcare & Life Sciences","description":"<p>At Databricks, we are on a mission to empower our customers to solve the world&#39;s toughest data problems by utilizing the Databricks Data Intelligence Platform. As a Delivery Solutions Architect (DSA), you will play an important role during this journey. You will collaborate with our sales and field engineering teams to accelerate the adoption and growth of the Databricks platform in your customers. You will also help ensure customer success by increasing focus and technical accountability to our most complex customers who need guidance to accelerate usage on Databricks workloads that they have already selected, helping them maximise the value they get of our platform and the return on investment.</p>\n<p>This is a hybrid technical and commercial role. It is commercial in the sense that you will drive growth in your assigned customers and use cases through leading your customers&#39; stakeholders, building executive relationships, orchestration of other focused/specialized teams within Databricks, and creating and driving plans and strategies for Databricks colleagues to build upon. This is in parallel to being technical, with expectations being that you become the post-sale technical lead across all Databricks products. This requires you to use your skills and technical credibility to engage and communicate at all levels with an organisation.</p>\n<p>The impact you will have:</p>\n<ul>\n<li>Engage with Solutions Architects to understand the full use case demand plan for prioritised customers</li>\n<li>Lead the post-technical win technical account strategy and execution plan for the majority of Databricks use cases within our most strategic accounts</li>\n<li>Be the accountable technical leader assigned to specific use cases and customer(s) across multiple selling teams and internal stakeholders, creating certainty from uncertainty and driving onboarding, enablement, success, go-live and healthy consumption of the workloads where the customer has made the decision to consume Databricks</li>\n<li>Be the first contact for any technical issues or questions related to production/go live status of agreed upon use cases within an account, oftentimes services multiple use cases within the largest and most complex organisations</li>\n<li>Leverage both Shared Services, User Education, Onboarding/Technical Services and Support resources, along with escalating to expert level technical experts to build the right tasks that are beyond your scope of activities or expertise</li>\n<li>Create, own and execute a point-of-view as to how key use cases can be accelerated into production, coordinating with Professional Services (PS) resources on the delivery of PS Engagement proposals</li>\n<li>Navigate Databricks Product and Engineering teams for new product Innovations, private previews and upgrade needs</li>\n<li>Develop an execution plan that covers all activities of all customer-facing technical roles and teams to cover the below work streams:</li>\n</ul>\n<ul>\n<li>Main use cases moving from &#39;win&#39; to production</li>\n<li>Enablement / user growth plan</li>\n<li>Product adoption (strategy and activities to increase adoption of Databricks&#39; Lakehouse vision)</li>\n<li>Organic needs for current investment (e.g. cloud cost control, tuning &amp; optimisation)</li>\n<li>Executive and operational governance</li>\n<li>Provide internal and external updates</li>\n<li>KPI reporting on the status of usage and customer health, covering investment status, important risks, product adoption and use case progression to your Technical GM</li>\n</ul>\n<p>What we look for:</p>\n<ul>\n<li>5+ years of experience where you have been accountable for technical project / program delivery within the domain of Data and AI and where you can contribute to technical debate and design choices with customers</li>\n<li>Programming experience in Python, SQL or Scala</li>\n<li>Experience in a customer-facing pre-sales, technical architecture, customer success, or consulting role</li>\n<li>Understanding of solution architecture related distributed data systems</li>\n<li>Understanding of how to attribute business value and outcomes to specific project deliverables</li>\n<li>Technical program, or project management including account, stakeholder and resource management accountability</li>\n<li>Experience resolving complex and important escalation with senior customer executives</li>\n<li>Experience conducting open-ended discovery workshops, creating strategic roadmaps, conducting business analysis and managing delivery of complex programmes/projects</li>\n<li>Track record of overachievement against quota, Goals or similar objective targets</li>\n<li>Bachelor&#39;s degree in Computer Science, Information Systems, Engineering, or equivalent experience through work experience</li>\n<li>Can travel up to 30% when needed</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_0abf66ee-ccb","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8233904002","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$180,000-$247,500 USD","x-skills-required":["Python","SQL","Scala","Data and AI","Solution architecture","Distributed data systems","Business value attribution","Project management","Customer success","Technical architecture"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:45:36.356Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Northeast - United States"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, SQL, Scala, Data and AI, Solution architecture, Distributed data systems, Business value attribution, Project management, Customer success, Technical architecture","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":180000,"maxValue":247500,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_d91bec16-126"},"title":"Delivery Solutions Architect","description":"<p>At Databricks, we are on a mission to empower our customers to solve the world&#39;s toughest data problems by utilizing the Databricks Data Intelligence Platform. As a Delivery Solutions Architect (DSA), you will play an important role during this journey.</p>\n<p>You will collaborate with our sales and field engineering teams to accelerate the adoption and growth of the Databricks platform in your customers. You will also help ensure customer success by increasing focus and technical accountability to our most complex customers who need guidance to accelerate usage on Databricks workloads that they have already selected, helping them maximise the value they get of our platform and the return on investment.</p>\n<p>This is a hybrid technical and commercial role. It is commercial in the sense that you will drive growth in your assigned customers and use cases through leading your customers&#39; stakeholders, building executive relationships, orchestration of other focused/specialized teams within Databricks, and creating and driving plans and strategies for Databricks colleagues to build upon. This is in parallel to being technical, with expectations being that you become the post-sale technical lead across all Databricks products.</p>\n<p>The impact you will have:</p>\n<ul>\n<li>Engage with Solutions Architects to understand the full use case demand plan for prioritised customers</li>\n<li>Lead the post-technical win technical account strategy and execution plan for the majority of Databricks use cases within our most strategic accounts</li>\n<li>Be the accountable technical leader assigned to specific use cases and customer(s) across multiple selling teams and internal stakeholders, creating certainty from uncertainty and driving onboarding, enablement, success, go-live and healthy consumption of the workloads where the customer has made the decision to consume Databricks</li>\n<li>Be the first contact for any technical issues or questions related to production/go live status of agreed upon use cases within an account, oftentimes services multiple use cases within the largest and most complex organizations</li>\n<li>Leverage both Shared Services, User Education, Onboarding/Technical Services and Support resources, along with escalating to expert level technical experts to build the right tasks that are beyond your scope of activities or expertise</li>\n<li>Create, own and execute a point-of-view as to how key use cases can be accelerated into production, coordinating with Professional Services (PS) resources on the delivery of PS Engagement proposals</li>\n<li>Navigate Databricks Product and Engineering teams for new product Innovations, private previews and upgrade needs</li>\n<li>Develop an execution plan that covers all activities of all customer-facing technical roles and teams to cover the below work streams:</li>\n</ul>\n<p>Main use cases moving from ‘win’ to production Enablement / user growth plan Product adoption (strategy and activities to increase adoption of Databricks’ Lakehouse vision) Organic needs for current investment (e.g. cloud cost control, tuning &amp; optimization) Executive and operational governance Provide internal and external updates KPI reporting on the status of usage and customer health, covering investment status, important risks, product adoption and use case progression to your Technical GM</p>\n<p>What we look for:</p>\n<ul>\n<li>5+ years of experience where you have been accountable for technical project / program delivery within the domain of Data and AI and where you can contribute to technical debate and design choices with customers</li>\n<li>Programming experience in Python, SQL or Scala</li>\n<li>Experience in a customer-facing pre-sales, technical architecture, customer success, or consulting role</li>\n<li>Understanding of solution architecture related distributed data systems</li>\n<li>Understanding of how to attribute business value and outcomes to specific project deliverables</li>\n<li>Technical program, or project management including account, stakeholder and resource management accountability</li>\n<li>Experience resolving complex and important escalation with senior customer executives</li>\n<li>Experience conducting open-ended discovery workshops, creating strategic roadmaps, conducting business analysis and managing delivery of complex programmes/projects</li>\n<li>Track record of overachievement against quota, Goals or similar objective targets</li>\n<li>Bachelor&#39;s degree in Computer Science, Information Systems, Engineering, or equivalent experience through work experience</li>\n<li>Can travel up to 30% when needed</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_d91bec16-126","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8285292002","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Python","SQL","Scala","Data and AI","Solution architecture","Distributed data systems","Technical project management","Customer success","Consulting"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:45:29.615Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Auckland, New Zealand"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, SQL, Scala, Data and AI, Solution architecture, Distributed data systems, Technical project management, Customer success, Consulting"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_4eeea81a-54e"},"title":"Delivery Solutions Architect","description":"<p>As a Delivery Solutions Architect at Databricks, you will play a crucial role in empowering customers to solve the world&#39;s toughest data problems. You will collaborate with sales and field engineering teams to accelerate the adoption and growth of the Databricks platform in your customers. Your primary responsibility will be to ensure customer success by increasing focus and technical accountability to our most complex customers who need guidance to accelerate usage on Databricks workloads that they have already selected.</p>\n<p>This is a hybrid technical and commercial role that requires you to drive growth in your assigned customers and use cases through leading your customers&#39; stakeholders, building executive relationships, orchestrating other focused/specialized teams within Databricks, and creating and driving plans and strategies for Databricks colleagues to build upon. You will also be responsible for becoming the post-sale technical lead across all Databricks products and using your skills and technical credibility to engage and communicate at all levels with an organization.</p>\n<p>Key responsibilities include:</p>\n<ul>\n<li>Engaging with Solutions Architects to understand the full use case demand plan for prioritized customers</li>\n<li>Leading the post-technical win technical account strategy and execution plan for the majority of Databricks use cases within our most strategic accounts</li>\n<li>Being the accountable technical leader assigned to specific use cases and customer(s) across multiple selling teams and internal stakeholders, creating certainty from uncertainty and driving onboarding, enablement, success, go-live, and healthy consumption of the workloads where the customer has made the decision to consume Databricks</li>\n<li>Being the first contact for any technical issues or questions related to production/go live status of agreed-upon use cases within an account, oftentimes servicing multiple use cases within the largest and most complex organizations</li>\n<li>Leveraging both Shared Services, User Education, Onboarding/Technical Services, and Support resources, along with escalating to expert-level technical experts to build the right tasks that are beyond your scope of activities or expertise</li>\n<li>Creating, owning, and executing a point-of-view as to how key use cases can be accelerated into production, coordinating with Professional Services (PS) resources on the delivery of PS Engagement proposals</li>\n<li>Navigating Databricks Product and Engineering teams for new product innovations, private previews, and upgrade needs</li>\n<li>Developing an execution plan that covers all activities of all customer-facing technical roles and teams to cover the below work streams:</li>\n</ul>\n<ul>\n<li>Main use cases moving from &#39;win&#39; to production</li>\n<li>Enablement/user growth plan</li>\n<li>Product adoption (strategy and activities to increase adoption of Databricks&#39; Lakehouse vision)</li>\n<li>Organic needs for current investment (e.g., cloud cost control, tuning &amp; optimization)</li>\n<li>Executive and operational governance</li>\n<li>Providing internal and external updates</li>\n<li>KPI reporting on the status of usage and customer health, covering investment status, important risks, product adoption, and use case progression to your Technical GM</li>\n</ul>\n<p>Key qualifications include:</p>\n<ul>\n<li>5+ years of experience where you have been accountable for technical project/program delivery within the domain of Data and AI and where you can contribute to technical debate and design choices with customers</li>\n<li>Programming experience in Python, SQL, or Scala</li>\n<li>Experience in a customer-facing pre-sales, technical architecture, customer success, or consulting role</li>\n<li>Understanding of solution architecture related distributed data systems</li>\n<li>Understanding of how to attribute business value and outcomes to specific project deliverables</li>\n<li>Technical program or project management, including account, stakeholder, and resource management accountability</li>\n<li>Experience resolving complex and important escalations with senior customer executives</li>\n<li>Experience conducting open-ended discovery workshops, creating strategic roadmaps, conducting business analysis, and managing delivery of complex programs/projects</li>\n<li>Track record of overachievement against quota, goals, or similar objective targets</li>\n<li>Bachelor&#39;s degree in Computer Science, Information Systems, Engineering, or equivalent experience through work experience</li>\n<li>Ability to travel up to 30% when needed</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_4eeea81a-54e","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8137000002","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Python","SQL","Scala","Data and AI","Solution architecture","Distributed data systems","Business value attribution","Technical program management","Customer success","Technical architecture"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:45:15.062Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Bengaluru, India"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, SQL, Scala, Data and AI, Solution architecture, Distributed data systems, Business value attribution, Technical program management, Customer success, Technical architecture"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_fe828503-8d1"},"title":"Senior Delivery Solutions Architect","description":"<p>We are seeking a Senior Delivery Solutions Architect to join our Field Engineering team in Paris. As a Senior Delivery Solutions Architect, you will be a trusted technical advisor to key customers, providing expert guidance that translates data, analytics, and AI challenges into high-impact business value.</p>\n<p>You will help design, implement, and scale data and AI solutions, focusing on architecture, operational excellence, and customer enablement. Internally, you will collaborate with our sales and field engineering teams to accelerate the adoption and growth of the Databricks Platform in your customers.</p>\n<p>Key responsibilities include:</p>\n<ul>\n<li>Designing secure, scalable architecture</li>\n<li>Aligning people, processes, and technology</li>\n<li>Establishing trusted advisor relationships</li>\n<li>Leveraging the broader ecosystem of Databricks experts</li>\n</ul>\n<p>This is a hybrid technical and commercial role. Technically, the expectations are that you become the post-sales technical lead and trusted advisor across all Databricks products for the customer&#39;s top priority use cases. This requires you to use your technical skills and credibility to engage and communicate with technical/technical leadership stakeholders in our customer organizations, do architecture reviews, help with performance and cost optimizations, demonstrate new capabilities, remove blockers, etc.</p>\n<p>In parallel, it is commercial in the sense that you will drive growth in your assigned customers and use cases through leading your customers&#39; stakeholders, building executive relationships, orchestrating other focused/specialized teams within Databricks, and creating and driving onboarding plans.</p>\n<p>While not a hands-on-keyboard role, this is a highly technical position where architectural skills in fields such as Data Architecture, Data Engineering, Data Warehousing, or Data Science are essential.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_fe828503-8d1","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8298587002","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Programming experience in PySpark, SQL, or Scala","Understanding and hands-on experience of solution architecture-related distributed data and analytics systems","10+ years of experience where you have been accountable for delivery of projects in Data, Analytics, or AI","Experience in a customer-facing pre-sales, technical architecture, customer success, or consulting roles","Understanding of how to attribute business value and outcomes to specific project deliverables"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:45:11.017Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Paris, France"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Programming experience in PySpark, SQL, or Scala, Understanding and hands-on experience of solution architecture-related distributed data and analytics systems, 10+ years of experience where you have been accountable for delivery of projects in Data, Analytics, or AI, Experience in a customer-facing pre-sales, technical architecture, customer success, or consulting roles, Understanding of how to attribute business value and outcomes to specific project deliverables"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_726e518c-28f"},"title":"デリバリーソリューションアーキテクト","description":"<p>Job Title: Delivery Solution Architect</p>\n<p>We are seeking a highly skilled Delivery Solution Architect to join our team. As a Delivery Solution Architect, you will be responsible for delivering technical solutions to customers and collaborating with sales and field engineering teams to accelerate customer adoption of our platform.</p>\n<p>Key Responsibilities:</p>\n<ul>\n<li>Collaborate with sales and field engineering teams to deliver technical solutions to customers.</li>\n<li>Provide technical guidance and support to customers to ensure they get the maximum value and ROI from our platform.</li>\n<li>Work closely with customers to understand their business requirements and develop tailored solutions to meet their needs.</li>\n<li>Develop and maintain relationships with key stakeholders, including customers, partners, and internal teams.</li>\n<li>Collaborate with cross-functional teams to identify and prioritize customer needs and develop solutions to address them.</li>\n</ul>\n<p>Requirements:</p>\n<ul>\n<li>8+ years of experience in delivering technical projects or programs in the data and AI space.</li>\n<li>Strong understanding of distributed data systems and solution architecture.</li>\n<li>Experience working with customers to deliver technical solutions and providing technical guidance and support.</li>\n<li>Strong communication and interpersonal skills, with the ability to work effectively with customers, partners, and internal teams.</li>\n<li>Experience working in a fast-paced environment and prioritizing multiple tasks and deadlines.</li>\n</ul>\n<p>Benefits:</p>\n<ul>\n<li>Comprehensive benefits package, including health insurance, retirement plan, and paid time off.</li>\n<li>Opportunity to work with a leading-edge technology company and contribute to the development of innovative solutions.</li>\n<li>Collaborative and dynamic work environment with a team of experienced professionals.</li>\n<li>Professional development opportunities, including training and education programs.</li>\n</ul>\n<p>At Databricks, we are committed to fostering a diverse and inclusive culture where everyone can excel. We take great care to ensure that our hiring practices are inclusive and meet equal employment opportunity standards.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_726e518c-28f","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com/","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8428882002","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Python","SQL","Scala","Distributed data systems","Solution architecture","Customer-facing technical solutions","Technical guidance and support"],"x-skills-preferred":["Cloud computing","Data engineering","Machine learning","Data science"],"datePosted":"2026-04-18T15:45:05.863Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Tokyo, Japan"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, SQL, Scala, Distributed data systems, Solution architecture, Customer-facing technical solutions, Technical guidance and support, Cloud computing, Data engineering, Machine learning, Data science"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_c9da749d-250"},"title":"Delivery Solutions Architect","description":"<p>As a Delivery Solutions Architect at Databricks, you will play a crucial role in empowering customers to solve the world&#39;s toughest data problems using the Databricks Data Intelligence Platform. You will collaborate with sales and field engineering teams to accelerate the adoption and growth of the Databricks platform in your customers. Your primary responsibility will be to ensure customer success by increasing focus and technical accountability to our most complex customers who need guidance to accelerate usage on Databricks workloads that they have already selected.</p>\n<p>This is a hybrid technical and commercial role. You will drive growth in your assigned customers and use cases through leading your customers&#39; stakeholders, building executive relationships, orchestrating other focused/specialized teams within Databricks, and creating and driving plans and strategies for Databricks colleagues to build upon. You will also be the post-sale technical lead across all Databricks products, requiring you to use your skills and technical credibility to engage and communicate at all levels with an organization.</p>\n<p>Key responsibilities include:</p>\n<ul>\n<li>Engaging with Solutions Architects to understand the full use case demand plan for prioritized customers</li>\n<li>Leading the post-technical win technical account strategy and execution plan for the majority of Databricks use cases within our most strategic accounts</li>\n<li>Being the accountable technical leader assigned to specific use cases and customer(s) across multiple selling teams and internal stakeholders, creating certainty from uncertainty and driving onboarding, enablement, success, go-live, and healthy consumption of the workloads where the customer has made the decision to consume Databricks</li>\n<li>Being the first contact for any technical issues or questions related to production/go live status of agreed-upon use cases within an account, often services multiple use cases within the largest and most complex organizations</li>\n<li>Leveraging both Shared Services, User Education, Onboarding/Technical Services, and Support resources, along with escalating to expert-level technical experts to build the right tasks that are beyond your scope of activities or expertise</li>\n<li>Creating, owning, and executing a point-of-view as to how key use cases can be accelerated into production, coordinating with Professional Services (PS) resources on the delivery of PS Engagement proposals</li>\n<li>Navigating Databricks Product and Engineering teams for new product innovations, private previews, and upgrade needs</li>\n<li>Developing an execution plan that covers all activities of all customer-facing technical roles and teams to cover the below work streams:</li>\n</ul>\n<p>Main use cases moving from &#39;win&#39; to production \tEnablement/user growth plan \tProduct adoption (strategy and activities to increase adoption of Databricks&#39; Lakehouse vision) \tOrganic needs for current investment (e.g., cloud cost control, tuning &amp; optimization) \tExecutive and operational governance \tProvide internal and external updates \tKPI reporting on the status of usage and customer health, covering investment status, important risks, product adoption, and use case progression to your Technical GM</p>\n<p>Requirements include:</p>\n<ul>\n<li>5+ years of experience where you have been accountable for technical project/program delivery within the domain of Data and AI and where you can contribute to technical debate and design choices with customers</li>\n<li>Programming experience in Python, SQL, or Scala</li>\n<li>Experience in a customer-facing pre-sales, technical architecture, customer success, or consulting role</li>\n<li>Understanding of solution architecture related distributed data systems</li>\n<li>Understanding of how to attribute business value and outcomes to specific project deliverables</li>\n<li>Technical program or project management, including account, stakeholder, and resource management accountability</li>\n<li>Experience resolving complex and important escalations with senior customer executives</li>\n<li>Experience conducting open-ended discovery workshops, creating strategic roadmaps, conducting business analysis, and managing delivery of complex programs/projects</li>\n<li>Track record of overachievement against quota, goals, or similar objective targets</li>\n<li>Bachelor&#39;s degree in Computer Science, Information Systems, Engineering, or equivalent experience through work experience</li>\n<li>Ability to travel up to 30% when needed</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_c9da749d-250","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8465963002","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Python","SQL","Scala","Data and AI","Solution architecture","Distributed data systems","Business value attribution","Technical program management","Customer success","Pre-sales","Technical architecture","Consulting"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:44:27.901Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Singapore"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, SQL, Scala, Data and AI, Solution architecture, Distributed data systems, Business value attribution, Technical program management, Customer success, Pre-sales, Technical architecture, Consulting"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_fd64db3e-49f"},"title":"Staff Software Engineer – Customer Experience Intelligence (CXI)","description":"<p>At Databricks, we&#39;re shaping the future of how customers experience support at scale. As the Staff Technical Lead for Customer Experience Intelligence, you&#39;ll design intelligent, AI-powered systems that make support faster, smarter, and more effortless.</p>\n<p>In this role, you&#39;ll have end-to-end ownership of the architecture and technical strategy behind automation and agentic workflows that reduce mean time to mitigate (MTTM), boost quality, and enable our Support organization to scale impact without scaling headcount. You&#39;ll work hands-on with teams across Support, Product, and Platform Engineering to build seamless systems that anticipate customer needs before they arise.</p>\n<p>You&#39;ll lead the technical foundation that transforms how customers experience support , where issues are auto-diagnosed, solutions are delivered instantly, and engineers focus their time on the toughest challenges. Your success will mean customers moving faster, trusting Databricks deeper, and feeling the impact of your systems every day.</p>\n<p>Key responsibilities include:</p>\n<ul>\n<li>Owning the technical vision and architecture for Databricks&#39; Support Automation and Tooling ecosystem</li>\n<li>Leading hands-on development of automation to improve customer experience and Support scalability</li>\n<li>Driving rapid, iterative development while upholding quality, safety, and reliability standards</li>\n<li>Designing agentic workflows that evolve from human-in-the-loop to fully automated systems</li>\n<li>Implementing observability, transparency, and rollback mechanisms for AI-driven decisions</li>\n<li>Acting as the primary technical interface between Support, Product, and Platform Engineering to align technical roadmaps and unblock dependencies</li>\n<li>Setting a high engineering bar for quality, reliability, and maintainability in line with Databricks standards</li>\n<li>Mentoring engineers and SMEs across Software and Support Engineering functions</li>\n</ul>\n<p>We&#39;re looking for someone with:</p>\n<ul>\n<li>A BS or higher degree in Computer Science or a related field</li>\n<li>Technical leadership experience in large projects similar to those described, including automation tooling, distributed systems, and APIs</li>\n<li>Extensive full-stack development experience</li>\n<li>Proven success designing and deploying production-grade automation in complex technical environments</li>\n<li>Hands-on experience with ML-assisted systems, decision support, or agentic automation</li>\n<li>Deep familiarity with distributed data platforms, developer tooling, and large-scale infrastructure systems</li>\n<li>Understanding of multi-cloud environments (AWS, Azure, GCP), compliance, and security constraints</li>\n</ul>\n<p>Pay Range Transparency</p>\n<p>Databricks is committed to fair and equitable compensation practices. The pay range for this role is $190,000-$261,250 USD.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_fd64db3e-49f","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8416959002","x-work-arrangement":"onsite","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"$190,000-$261,250 USD","x-skills-required":["Automation tooling","Distributed systems","APIs","Full-stack development","ML-assisted systems","Decision support","Agentic automation","Distributed data platforms","Developer tooling","Large-scale infrastructure systems","Multi-cloud environments","Compliance","Security constraints"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:44:19.005Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Mountain View, California; San Francisco, California"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Automation tooling, Distributed systems, APIs, Full-stack development, ML-assisted systems, Decision support, Agentic automation, Distributed data platforms, Developer tooling, Large-scale infrastructure systems, Multi-cloud environments, Compliance, Security constraints","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":190000,"maxValue":261250,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_2fe8215c-605"},"title":"Senior Software Engineer, Storage Infrastructure","description":"<p>About Us</p>\n<p>At Cloudflare, we&#39;re on a mission to help build a better Internet. Today the company runs one of the world&#39;s largest networks that powers millions of websites and other Internet properties for customers ranging from individual bloggers to SMBs to Fortune 500 companies.</p>\n<p>Cloudflare protects and accelerates any Internet application online without adding hardware, installing software, or changing a line of code. Internet properties powered by Cloudflare all have web traffic routed through its intelligent global network, which gets smarter with every request. As a result, they see significant improvement in performance and a decrease in spam and other attacks.</p>\n<p>Emerging Technologies &amp; Incubation (ETI)</p>\n<p>ETI is where new and bold products are built and released within Cloudflare. Rather than being constrained by the structures which make Cloudflare a massively successful business, we are able to leverage them to deliver entirely new tools and products to our customers. Cloudflare&#39;s edge and network make it possible to solve problems at massive scale and efficiency which would be impossible for almost any other organization.</p>\n<p>About the Team</p>\n<p>ETI&#39;s Storage Infrastructure team is responsible for the core storage layer that underpins many of ETI&#39;s stateful services. Our scope ranges from managing the physical hardware to operating the distributed databases and storage systems built upon it. We run this infrastructure globally across Cloudflare&#39;s network, which presents unique and complex engineering puzzles. We navigate efficiently expanding storage capacity, optimizing rebuild operations, and coordinating operations across failure domains to uphold durability.</p>\n<p>While other service teams focus on product development, our mission is to ensure the underlying storage is reliable, performant, and scalable. You&#39;ll be joining a highly motivated team that is building the next generation of distributed storage services.</p>\n<p>Responsibilities</p>\n<p>In this role, you will help build and operate the next generation of globally distributed storage systems. You will own your code from inception to release, delivering solutions at all layers of the stack. On any given day, you might write a design document for a new provisioning system, model failure domain dependencies across edge locations, benchmark new storage hardware, build standardized observability and runbooks for distributed database clusters, or automate operational toil through purpose-built tooling and intelligent automation.</p>\n<p>You can expect to interact with a variety of languages and technologies including Rust, Go, Saltstack, and Terraform.</p>\n<p>Examples of desirable skills, knowledge, and experience</p>\n<ul>\n<li>Strong programming skills with languages like Rust, Go, or Python</li>\n<li>A solid understanding of distributed systems concepts such as consistency, consensus, data replication, fault tolerance, and partition tolerance</li>\n<li>Experience with distributed databases and storage systems</li>\n<li>Experience with infrastructure configuration tooling and infrastructure as code</li>\n<li>Familiarity with storage fundamentals: block devices, filesystems, SSD characteristics</li>\n<li>Experience building and maintaining high-throughput, low-latency systems</li>\n<li>Understanding of network fundamentals as they relate to distributed storage -- bandwidth constraints, latency tradeoffs, cross-datacenter replication</li>\n<li>Strong written and verbal communication skills and ability to explain technical decisions clearly</li>\n<li>Comfortable operating in fast-paced environments with tight deadlines and evolving priorities</li>\n</ul>\n<p>Benefits</p>\n<p>Cloudflare offers a complete package of benefits and programs to support you and your family. Our benefits programs can help you pay health care expenses, support caregiving, build capital for the future and make life a little easier and fun!</p>\n<p>The below is a description of our benefits for employees in the United States, and benefits may vary for employees based outside the U.S.</p>\n<p>Health &amp; Welfare Benefits</p>\n<ul>\n<li>Medical/Rx Insurance</li>\n<li>Dental Insurance</li>\n<li>Vision Insurance</li>\n<li>Flexible Spending Accounts</li>\n<li>Commuter Spending Accounts</li>\n<li>Fertility &amp; Family Forming Benefits</li>\n<li>On-demand mental health support and Employee Assistance Program</li>\n<li>Global Travel Medical Insurance</li>\n</ul>\n<p>Financial Benefits</p>\n<ul>\n<li>Short and Long Term Disability Insurance</li>\n<li>Life &amp; Accident Insurance</li>\n<li>401(k) Retirement Savings Plan</li>\n<li>Employee Stock Participation Plan</li>\n</ul>\n<p>Time Off</p>\n<ul>\n<li>Flexible paid time off covering vacation and sick leave</li>\n<li>Leave programs, including parental, pregnancy health, medical, and bereavement leave</li>\n</ul>\n<p>What Makes Cloudflare Special?</p>\n<p>We&#39;re not just a highly ambitious, large-scale technology company. We&#39;re a highly ambitious, large-scale technology company with a soul. Fundamental to our mission to help build a better Internet is protecting the free and open Internet.</p>\n<p>Project Galileo</p>\n<p>Since 2014, we&#39;ve equipped more than 2,400 journalism and civil society organizations in 111 countries with powerful tools to defend themselves against attacks that would otherwise censor their work, technology already used by Cloudflare&#39;s enterprise customers--at no cost.</p>\n<p>Athenian Project</p>\n<p>In 2017, we created the Athenian Project to ensure that state and local governments have the highest level of protection and reliability for free, so that their constituents have access to election information and voter registration. Since the project, we&#39;ve provided services to more than 425 local government election websites in 33 states.</p>\n<p>1.1.1.1</p>\n<p>We released 1.1.1.1 to help fix the foundation of the Internet by building a faster, more secure and privacy-centric public DNS resolver. This is available publicly for everyone to use - it is the first consumer-focused service Cloudflare has ever released. Here&#39;s the deal - we don&#39;t store client IP addresses never, ever. We will continue to abide by our privacy commitment and ensure that no user data is sold to advertisers or used to target consumers.</p>\n<p>Sound like something you&#39;d like to be a part of? We&#39;d love to hear from you!</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_2fe8215c-605","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Cloudflare","sameAs":"https://www.cloudflare.com/","logo":"https://logos.yubhub.co/cloudflare.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/cloudflare/jobs/7629805","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Rust","Go","Python","Distributed systems","Consistency","Consensus","Data replication","Fault tolerance","Partition tolerance","Distributed databases","Storage systems","Infrastructure configuration tooling","Infrastructure as code","Storage fundamentals","Block devices","Filesystems","SSD characteristics","High-throughput systems","Low-latency systems","Network fundamentals","Bandwidth constraints","Latency tradeoffs","Cross-datacenter replication"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:42:33.190Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Hybrid"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Rust, Go, Python, Distributed systems, Consistency, Consensus, Data replication, Fault tolerance, Partition tolerance, Distributed databases, Storage systems, Infrastructure configuration tooling, Infrastructure as code, Storage fundamentals, Block devices, Filesystems, SSD characteristics, High-throughput systems, Low-latency systems, Network fundamentals, Bandwidth constraints, Latency tradeoffs, Cross-datacenter replication"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_e7491b84-e4f"},"title":"Backend Engineer, Knowledge Graph (Rust)","description":"<p>As an Intermediate Backend Engineer on the GitLab Knowledge Graph team, you&#39;ll help build and operate a graph data service that supports GitLab Duo agents, analytics, and architecture-level features across GitLab.com, Dedicated, and Self-Managed deployments.</p>\n<p>You&#39;ll join a small, Rust-first team that values clear ownership, thoughtful system design, and rigorous thinking about data and reliability. The Knowledge Graph service is a Rust backend that builds a property graph from GitLab&#39;s software development lifecycle (SDLC) and code data. It uses ClickHouse, NATS JetStream, and the Data Insights Platform. It exposes secure graph queries and MCP tools used by AI agents and product features.</p>\n<p>In this role, you&#39;ll deliver features and improvements in well-scoped areas, learn the broader architecture, and contribute to reliability, observability, and operational readiness. In your first year, you&#39;ll take clear ownership of specific components or features (for example, parts of the SDLC indexing pipeline or query paths). You&#39;ll help reduce single points of failure with better tests and runbooks, and you&#39;ll help the team ship analytical services that are easier to maintain and evolve over time.</p>\n<p>Key responsibilities include:</p>\n<p>Implementing and iterating on backend features in the Rust-based Knowledge Graph service, including changes to the query engine, SDLC and code indexing flows, and API endpoints (including MCP endpoints) under guidance from senior and staff engineers.</p>\n<p>Helping maintain integrations between Knowledge Graph and the rest of the GitLab platform, working in areas that touch GitLab Rails, the Data Insights Platform (Siphon, NATS, ClickHouse), and GitLab Duo Agent Platform.</p>\n<p>Contributing to system design discussions by proposing options, raising questions, and documenting decisions, with a focus on reliability, scalability, and maintainability for analytical graph workloads.</p>\n<p>Improving the operational maturity of the service by adding or enhancing metrics, logging, runbooks, alerts, and small readiness tasks, and by participating in on-call rotation as appropriate for your level and experience.</p>\n<p>Collaborating asynchronously with product, data, infrastructure, security, and AI counterparts to clarify requirements, align on scope, and ship features safely for customers and sustainably for the team.</p>\n<p>Using AI-assisted development workflows responsibly (for example, using Knowledge Graph-backed agents and internal Duo tooling), and sharing what works with the team while keeping a strong focus on code quality and correctness.</p>\n<p>Participating in code reviews, knowledge-sharing sessions, and pairing to both learn from others and help maintain consistent standards across the codebase.</p>\n<p>Contribute across the stack when needed, including occasional Ruby work for Rails integration and authorization paths, or small frontend changes related to Knowledge Graph features (for example, Software Architecture Map UI plumbing).</p>\n<p>What you&#39;ll bring:</p>\n<p>Professional experience building and maintaining backend systems in production, with an understanding of reliability, maintainability, and how to support services over time (incident responses, and follow-ups, etc).</p>\n<p>Proficiency in at least one modern backend language and strong interest in Rust, with either prior Rust experience or clear evidence you can ramp quickly and deliver in a Rust-first, performance-sensitive codebase.</p>\n<p>Some exposure to distributed data or analytics systems (for example, OLAP databases, Kafka- or NATS-style messaging, or change data capture (CDC) pipelines), or strong motivation to develop those skills in this role.</p>\n<p>Interest in graph data modeling and query patterns (property graphs, multi-step (n-hop) traversals, aggregations), and willingness to learn the tools and concepts used in Knowledge Graph over time.</p>\n<p>Practical experience (or strong interest) using AI tools in day-to-day development, along with a thoughtful approach to validating outputs and integrating AI into your workflow.</p>\n<p>A language-agnostic mindset and evidence that you can pick up new languages and frameworks as needed (for example, Ruby, Go, or TypeScript/Vue where the work touches adjacent systems).</p>\n<p>Solid fundamentals in system design for your level, including the ability to reason about trade-offs, ask good questions, and align your implementation work with documented architectural decisions.</p>\n<p>Comfort working in a low-process, high-ownership environment where you take responsibility for your work, communicate progress clearly, and help refine problem statements with your teammates.</p>\n<p>Strong written communication and comfort collaborating asynchronously across time zones in an all-remote team.</p>\n<p>About the team:</p>\n<p>We sit within the Data Engineering organization. We&#39;re a small group of senior engineers and we work closely with partners across AI (Duo Agent Platform), analytics, infrastructure and delivery, and security because our work spans many parts of the platform. We collaborate asynchronously and optimize for strong ownership rather than a feature factory model. We each build a meaningful understanding of the system and help evolve it over time. A key challenge for us right now is scaling sustainably. That includes hardening multi-tenant behavior, maturing observability and readiness, and keeping the system healthy and maintainable as usage grows and team members take time off. At the same time, we&#39;re bringing Knowledge Graph to general availability (GA).</p>\n<p>How GitLab Supports Full-Time Employees:</p>\n<p>Benefits to support your health, finances, and well-being Flexible Paid Time Off Team Member Resource Groups Equity Compensation &amp; Employee Stock Purchase Plan Growth and Development Fund Parental leave Home office support</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_e7491b84-e4f","directApply":true,"hiringOrganization":{"@type":"Organization","name":"GitLab","sameAs":"https://about.gitlab.com/","logo":"https://logos.yubhub.co/about.gitlab.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/gitlab/jobs/8481958002","x-work-arrangement":"remote","x-experience-level":"intermediate","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Rust","backend systems","distributed data","analytics systems","graph data modeling","query patterns","AI tools","system design","low-process","high-ownership environment"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:41:48.392Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Remote, India"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Rust, backend systems, distributed data, analytics systems, graph data modeling, query patterns, AI tools, system design, low-process, high-ownership environment"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_faa865dc-a1d"},"title":"Senior Data Engineer, BizTech","description":"<p>We&#39;re seeking a hands-on expert to provide technical leadership in addressing BizTech&#39;s diverse data engineering needs and driving long-term strategies and best practices.</p>\n<p>As a Senior Data Engineer, you&#39;ll lead the design, implementation, and testing of data systems, from architecture to production. You&#39;ll build batch and real-time data systems that support business needs and critical products, ensuring data systems&#39; quality, performance, and stability through rigorous monitoring and quality assurance practices.</p>\n<p>You&#39;ll collaborate with cross-functional teams, including product managers, data scientists, and engineers, to develop scalable systems and drive data-driven decisions. You&#39;ll maintain strong partnerships with backend, data science, and machine learning teams to ensure seamless integration of data systems.</p>\n<p>Key responsibilities include:</p>\n<ul>\n<li>Leading the design, implementation, and testing of data systems, from architecture to production</li>\n<li>Building batch and real-time data systems that support business needs and critical products</li>\n<li>Ensuring data systems&#39; quality, performance, and stability through rigorous monitoring and quality assurance practices</li>\n<li>Collaborating with cross-functional teams to develop scalable systems and drive data-driven decisions</li>\n<li>Maintaining strong partnerships with backend, data science, and machine learning teams to ensure seamless integration of data systems</li>\n</ul>\n<p>We&#39;re looking for someone with 9+ years of relevant experience, a Bachelor&#39;s/Master&#39;s degree in CS/EE, and extensive experience in designing, building, and operating distributed data platforms. You should be proficient in Java, Scala, or Python, with strong skills in data processing and SQL querying. Proven track record of designing and optimizing batch and real-time data pipelines is a must.</p>\n<p>In addition to technical expertise, we&#39;re looking for someone with excellent written and verbal communication skills, with the ability to influence stakeholders and convey complex technical concepts. You should be a strong leader and mentor, with experience guiding teams on best practices and technical strategies.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_faa865dc-a1d","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Airbnb","sameAs":"https://www.airbnb.com/","logo":"https://logos.yubhub.co/airbnb.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/airbnb/jobs/7640881","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Java","Scala","Python","data processing","SQL querying","distributed data platforms","batch and real-time data pipelines"],"x-skills-preferred":["machine learning","data science","backend development"],"datePosted":"2026-04-18T15:40:41.162Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Bangalore, India"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Java, Scala, Python, data processing, SQL querying, distributed data platforms, batch and real-time data pipelines, machine learning, data science, backend development"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_58945b73-e53"},"title":"Director, Customer Engineering","description":"<p>We are on the hunt for a Director of Customer Engineering to steer the direction for our customer architect team across multiple territories.</p>\n<p>At Elastic, our solutions are tailored to provide Observability, Cybersecurity, and Search AI, and we need a leader who is technically adept and can drive our teams to ensure our customers derive the most value from our offerings.</p>\n<p>In this leadership position, the ideal candidate is a unique blend of a people-focused, business-driven, and technically-savvy leader. You will directly lead all aspects of a high performing team of Customer Architects.</p>\n<p>Having an intimate understanding of our technical solutions is crucial to empathize with your team&#39;s challenges and derive strategic solutions.</p>\n<p>The Director of Customer Engineering plays a pivotal role in representing Elastic at the forefront.</p>\n<p>Given the intrinsic technical nature of our offerings, it&#39;s crucial for the Director to be proficient in the intricacies of our solutions, ensuring they can effectively communicate their value and drive consumption.</p>\n<p>This role requires frequent face-to-face engagements, making it essential for the candidate to excel at in-person presentations, discussions, and building strong relationships rooted in technical trust.</p>\n<p>With the need to partner with a wide array of customers, travel becomes inherent to this role. Candidates should be primed for 25% travel, geared towards promoting and amplifying the consumption of our Elastic solutions across the Americas.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Leadership: Lead and mentor Customer Architects, encouraging a culture of collaboration, innovation, and customer-centricity.</li>\n</ul>\n<ul>\n<li>Technical Acumen: Serve as the technical pillar of leadership, ensuring that the region&#39;s strategies align with the technical nuances of our solutions. A background in sales engineering leadership is beneficial.</li>\n</ul>\n<ul>\n<li>Strategic Planning: Employ both urgent and strategic approaches to address critical situations, instilling processes that address challenges at scale. Ensure that teams in the region are geared towards helping customers realize value and driving consumption.</li>\n</ul>\n<ul>\n<li>Market Knowledge: Leverage your experience with distributed datastores, and areas such as observability, cybersecurity, or search to bring depth to the team&#39;s understanding of the market. Experience in Vectors, Machine Learning, and GenAI will be an added advantage.</li>\n</ul>\n<ul>\n<li>Collaboration: Work in tandem with regional sales leaders, account teams, and other stakeholders to ensure seamless communication and alignment of objectives.</li>\n</ul>\n<p><strong>Requirements</strong></p>\n<ul>\n<li>Bachelor&#39;s degree in Computer Science, Engineering, Information Systems, or a related field highly preferred.</li>\n</ul>\n<ul>\n<li>Proven experience leading teams centered on consumption.</li>\n</ul>\n<ul>\n<li>Technical expertise in distributed datastores, and areas like observability, cybersecurity, or search. Familiarity with Elastic solutions will be a significant advantage.</li>\n</ul>\n<ul>\n<li>Familiarity or experience in the realms of Vectors, Machine Learning, and GenAI.</li>\n</ul>\n<ul>\n<li>Strong interpersonal and communication skills, with the ability to articulate complex technical concepts to a varied audience.</li>\n</ul>\n<ul>\n<li>Ability to balance urgency with strategic planning, ensuring teams are always aligned with company and customer objectives.</li>\n</ul>\n<p><strong>Compensation</strong></p>\n<p>Compensation for this role is in the form of base salary plus a variable component, that together comprise the On-Target Earnings (OTE).</p>\n<p>On-Target Earnings (OTE) are based on a 70/30 pay mix (base salary / target variable).</p>\n<p>The typical starting OTE range for new hires in this role is listed below. This range represents the lowest to highest OTE we reasonably and in good faith believe we would pay for this role at the time of this posting.</p>\n<p>We may ultimately pay more or less than the posted range, and the range may be modified in the future.</p>\n<p>An employee&#39;s position within the OTE range will be based on several factors including, but not limited to, relevant education, qualifications, certifications, experience, skills, geographic location, performance, and business or organizational needs.</p>\n<p>Elastic believes that employees should have the opportunity to share in the value that we create together for our shareholders.</p>\n<p>Therefore, in addition to cash compensation, this role is currently eligible to participate in Elastic&#39;s stock program.</p>\n<p>Our total rewards package also includes a company-matched 401k with dollar-for-dollar matching up to 6% of eligible earnings, along with a range of other benefits offered with a holistic emphasis on employee well-being.</p>\n<p>The typical starting salary range for this role is: $161,300-$255,100 USD</p>\n<p>The typical starting Target Variable range for this role is: $69,000-$109,200 USD</p>\n<p>The typical starting On-Target Earnings (OTE) range for this role is: $230,300-$364,300 USD</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_58945b73-e53","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Elastic","sameAs":"https://www.elastic.co/","logo":"https://logos.yubhub.co/elastic.co.png"},"x-apply-url":"https://job-boards.greenhouse.io/elastic/jobs/7769905","x-work-arrangement":"remote","x-experience-level":"executive","x-job-type":"full-time","x-salary-range":"$161,300-$255,100 USD","x-skills-required":["Technical Acumen","Leadership","Strategic Planning","Market Knowledge","Collaboration"],"x-skills-preferred":["Distributed Datastores","Observability","Cybersecurity","Search","Vectors","Machine Learning","GenAI"],"datePosted":"2026-04-18T15:39:29.722Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"United States"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Technical Acumen, Leadership, Strategic Planning, Market Knowledge, Collaboration, Distributed Datastores, Observability, Cybersecurity, Search, Vectors, Machine Learning, GenAI","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":161300,"maxValue":255100,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_47f1040a-8a3"},"title":"Member of Technical Staff - Pre-Training","description":"<p>We&#39;re seeking a Member of Technical Staff - Pre-Training to join our small, highly motivated team at xAI. As a key member of our organisation, you will design and implement petabyte-scale, high-throughput data processing systems involving both CPU- and GPU-based processing. You will also design and implement tools for orchestrating complex data pipelines, improving data discoverability and data quality at scale, and building innovative data pipelines for creating high-quality training data.</p>\n<p>Our ideal candidate has strong systems skills in configuring and troubleshooting complex distributed data processing systems for maximum performance. They should be able to build bespoke data processing systems from scratch, prepare pre-training and post-training data for state-of-the-art large language models and generative models, and organise and meticulously bookkeep data across multiple clouds, modalities, and sources.</p>\n<p>In this role, you will work closely with our team to contribute directly to our mission and deliver excellence. You will be expected to have strong communication skills, be able to concisely and accurately share knowledge with your teammates, and demonstrate initiative and leadership.</p>\n<p>The base salary for this position is $180,000 - $440,000 USD, and we offer a comprehensive total rewards package including equity, medical, vision, and dental coverage, access to a 401(k) retirement plan, short &amp; long-term disability insurance, life insurance, and various other discounts and perks.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_47f1040a-8a3","directApply":true,"hiringOrganization":{"@type":"Organization","name":"xAI","sameAs":"https://www.xai.com/","logo":"https://logos.yubhub.co/xai.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/xai/jobs/4378344007","x-work-arrangement":"onsite","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"$180,000 - $440,000 USD","x-skills-required":["configuring and troubleshooting complex distributed data processing systems","building bespoke data processing systems from scratch","preparing pre-training and post-training data for state-of-the-art large language models and generative models","organising and meticulously bookkeeping data across multiple clouds, modalities, and sources"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:23:12.460Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Palo Alto, CA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"configuring and troubleshooting complex distributed data processing systems, building bespoke data processing systems from scratch, preparing pre-training and post-training data for state-of-the-art large language models and generative models, organising and meticulously bookkeeping data across multiple clouds, modalities, and sources","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":180000,"maxValue":440000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_5f31c15e-6f9"},"title":"Data Analyst","description":"<p>Job Title: Data Analyst</p>\n<p>Role Overview:</p>\n<p>As a Data Analyst at Stripe, you will partner with teams across the company to ensure that our users, products, and business have the models, data products, and insights needed to make decisions and grow responsibly.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Work closely with partners to extract insights from Stripe&#39;s rich and complex data</li>\n<li>Translate business needs into data problems</li>\n<li>Build metrics, scalable data pipelines, dashboards, and reports to inform and run the business</li>\n<li>Deliver actionable business recommendations through analyses and data storytelling</li>\n</ul>\n<p>Requirements:</p>\n<ul>\n<li>MS/MA + 2 years or BS/BA + 3 years of full-time experience in Business Intelligence Engineering, Data Analyst, Business Analyst roles</li>\n<li>Proficiency in SQL</li>\n<li>Proven ability to manage and deliver on multiple projects with great attention to detail</li>\n<li>Ability to clearly communicate results and drive impact</li>\n<li>Experience collaborating with cross-functional teams to deliver strategic insights, benchmarks, and analyses that provide recommendations</li>\n</ul>\n<p>Benefits:</p>\n<ul>\n<li>Opportunity to work with a vibrant community of data analysts and data scientists</li>\n<li>Variety of Data Analytics roles and teams across Stripe</li>\n<li>Alignment with the most relevant team based on background</li>\n</ul>\n<p>Note: The preferred qualifications are a bonus, not a requirement.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_5f31c15e-6f9","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Stripe","sameAs":"https://stripe.com/","logo":"https://logos.yubhub.co/stripe.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/stripe/jobs/5416444","x-work-arrangement":"remote","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["SQL","data pipelines","dashboards","reports","data storytelling"],"x-skills-preferred":["distributed data frameworks like Spark","Python","statistical knowledge","development processes and best practices"],"datePosted":"2026-03-31T18:07:44.053Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Canada"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"SQL, data pipelines, dashboards, reports, data storytelling, distributed data frameworks like Spark, Python, statistical knowledge, development processes and best practices"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_8da705c0-ccb"},"title":"Principal Software Engineer","description":"<p>Are you passionate about building infrastructure that powers billions of ad impressions daily? Join us to shape the backbone of a rapidly growing ad platform—where scale, reliability, and data-driven innovation are at the heart of everything we do.</p>\n<p>As a Principal Software Engineer on the Bing Ads team, you will be responsible for designing and developing near real-time services, preparing data stores, and integrating them with other ad-serving components. Collaboration between and across teams is essential part of this role, as you will engage with partners to meet mutual objectives.</p>\n<p>This role will enable you to gain insights into the Bing ad serving platform, collaborate closely with data scientists, and develop expertise in working with individuals responsible for different components of the ad infrastructure. You will have the opportunity to grow your skills, learn from industry experts, and continuously expand your knowledge in a dynamic and innovative environment.</p>\n<p>This role allows flexible working hours with partial work from home.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Independently implement high-performance solutions across teams while maintaining a quality checklist.</li>\n<li>Create and monitor telemetry data and influence analytics to better identify patterns that reveal errors and unexpected problems.</li>\n<li>Lead by example and mentor others to produce extensible and maintainable code used across products.</li>\n<li>Spearhead efforts to optimize, debug, refactor, and reuse code to improve performance, maintainability, effectiveness, and return on investment (ROI).</li>\n<li>Oversee the design and development of products, identifying other teams and technologies that will be leveraged, how they will interact, and when your system may provide support to others.</li>\n<li>Lead efforts to determine back-end dependencies associated with the product, ensuring appropriate security and performance, driving reliability in the solutions, and optimizing dependency chains for the solution.</li>\n<li>Respond to incidents and complex issues by identifying and troubleshooting the issue, deploying the appropriate fixes, and implementing automations to prevent recurring issues.</li>\n<li>Follow prescriptive guidance for security, privacy, and compliance standards.</li>\n<li>Collaborate within and across teams by proactively and systematically sharing information.</li>\n<li>Resolve conflicts across teams and engage with partners to meet mutual objectives.</li>\n</ul>\n<p>Qualifications:</p>\n<ul>\n<li>Bachelor’s Degree in Computer Science or related technical field AND 6+ years technical engineering experience with coding in languages including, but not limited to, C#, Java, C, C++, Python or JavaScript OR equivalent experience.</li>\n<li>4+ years technical experience working with large-scale cloud or distributed data systems.</li>\n</ul>\n<p>Preferred Qualifications:</p>\n<ul>\n<li>Master’s Degree in Computer Science or related technical field AND 8+ years technical engineering experience with coding in languages including, but not limited to, C#, Java, C, C++, Python or JavaScript OR Bachelor’s Degree in Computer Science or related technical field AND 10+ years technical engineering experience with coding in languages including, but not limited to, C#, Java, C, C++, Python or JavaScript OR equivalent experience.</li>\n<li>8+ years technical experience in software development, service engineering, or systems engineering.</li>\n<li>3+ years experience in data science, data modeling, or data engineering.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_8da705c0-ccb","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Microsoft","sameAs":"https://microsoft.ai","logo":"https://logos.yubhub.co/microsoft.ai.png"},"x-apply-url":"https://microsoft.ai/job/principal-software-engineer-10/","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$139,900 – $274,800 per year","x-skills-required":["C#","Java","C","C++","Python","JavaScript","large-scale cloud or distributed data systems"],"x-skills-preferred":["data science","data modeling","data engineering"],"datePosted":"2026-03-08T22:12:21.013Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Multiple Locations, United States"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"C#, Java, C, C++, Python, JavaScript, large-scale cloud or distributed data systems, data science, data modeling, data engineering","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":139900,"maxValue":274800,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_1d808aa6-75a"},"title":"Full Stack Engineer, Fleet Scheduling","description":"<p><strong>Full Stack Engineer, Fleet Scheduling</strong></p>\n<p><strong>Location</strong></p>\n<p>San Francisco</p>\n<p><strong>Employment Type</strong></p>\n<p>Full time</p>\n<p><strong>Department</strong></p>\n<p>Scaling</p>\n<p><strong>Compensation</strong></p>\n<ul>\n<li>$230K – $490K • Offers Equity</li>\n</ul>\n<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>\n<ul>\n<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>\n</ul>\n<ul>\n<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>\n</ul>\n<ul>\n<li>401(k) retirement plan with employer match</li>\n</ul>\n<ul>\n<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>\n</ul>\n<ul>\n<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>\n</ul>\n<ul>\n<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>\n</ul>\n<ul>\n<li>Mental health and wellness support</li>\n</ul>\n<ul>\n<li>Employer-paid basic life and disability coverage</li>\n</ul>\n<ul>\n<li>Annual learning and development stipend to fuel your professional growth</li>\n</ul>\n<ul>\n<li>Daily meals in our offices, and meal delivery credits as eligible</li>\n</ul>\n<ul>\n<li>Relocation support for eligible employees</li>\n</ul>\n<ul>\n<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>\n</ul>\n<p>More details about our benefits are available to candidates during the hiring process.</p>\n<p>This role is at-will and OpenAI reserves the right to modify base pay and other compensation components at any time based on individual performance, team or company results, or market conditions.</p>\n<p><strong>About the Team</strong> Full Stack engineers within the Fleet Scheduling team are dedicated to building intuitive and scalable interfaces that empower researchers to efficiently manage AI workloads across some of the largest supercomputers in the world. Our focus is on developing robust, high-performance systems that provide real-time insights, resource tracking, and seamless interaction with complex infrastructure. We aim to optimize resource allocation, minimize operational overhead, and create user-friendly tools that enhance researcher productivity and system transparency.</p>\n<p><strong>About the Role</strong> You will design, develop, and operate web-based systems that provide a powerful and intuitive interface to OpenAI’s supercomputing clusters. You will collaborate closely with researcher, product and infrastructure teams to deliver scalable solutions that enable seamless monitoring, job scheduling, and resource management. This is an opportunity to work at the cutting edge of AI infrastructure, designing tools that scale to exascale workloads while maintaining usability and performance.</p>\n<p>This role is based in <strong>San Francisco, CA.</strong> We use a hybrid work model of <strong>3 days in the office per week</strong> and offer relocation assistance to new employees.</p>\n<p><strong>In this role, you will:</strong></p>\n<ul>\n<li>Design and develop full-stack web applications to track, monitor, and manage large-scale AI workloads in real time.</li>\n</ul>\n<ul>\n<li>Collaborate with researchers and infrastructure teams to translate complex operational needs into intuitive UIs and scalable backends.</li>\n</ul>\n<ul>\n<li>Build data visualization tools (e.g., Gantt charts, dashboards) to provide insights into job scheduling and resource allocation.</li>\n</ul>\n<ul>\n<li>Optimize backend services to handle massive data throughput while ensuring low-latency performance and high availability.</li>\n</ul>\n<ul>\n<li>Implement frontend components that provide seamless interactions with scheduling, storage, and compute systems.</li>\n</ul>\n<ul>\n<li>Ensure system security, reliability, and scalability across globally distributed supercomputing infrastructure.</li>\n</ul>\n<p><strong>You might thrive in this role if you:</strong></p>\n<ul>\n<li>Significant experience in full-stack development, with expertise in modern frontend frameworks (React, Vue, or Angular) and backend technologies (Python, Go, or Node.js).</li>\n</ul>\n<ul>\n<li>Experienced in building scalable, high-performance web applications for complex distributed systems.</li>\n</ul>\n<ul>\n<li>Strong understanding of RESTful and GraphQL APIs, distributed databases, and cloud infrastructure (especially Azure).</li>\n</ul>\n<ul>\n<li>Execution-focused with a keen eye for usability, performance, and scalability in enterprise-scale systems.</li>\n</ul>\n<ul>\n<li>Comfortable working in fast-paced, highly collaborative environments with tight timelines and evolving priorities.</li>\n</ul>\n<p><strong>Bonus points if you:</strong></p>\n<ul>\n<li>Have experience working with Kubernetes, Docker, and cloud-native application deployment.</li>\n</ul>\n<ul>\n<li>Understand AI/ML workload scheduling and orchestration challenges.</li>\n</ul>\n<ul>\n<li>Have experience with real-time data processing, visualization libraries, and observability tooling.</li>\n</ul>\n<p><strong>About OpenAI</strong> OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_1d808aa6-75a","directApply":true,"hiringOrganization":{"@type":"Organization","name":"OpenAI","sameAs":"https://jobs.ashbyhq.com","logo":"https://logos.yubhub.co/openai.com.png"},"x-apply-url":"https://jobs.ashbyhq.com/openai/9d11e1d8-af1d-413b-873f-d8fac2bdee99","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$230K – $490K • Offers Equity","x-skills-required":["full-stack development","modern frontend frameworks","backend technologies","RESTful and GraphQL APIs","distributed databases","cloud infrastructure","Kubernetes","Docker","cloud-native application deployment","AI/ML workload scheduling","orchestration challenges","real-time data processing","visualization libraries","observability tooling"],"x-skills-preferred":["React","Vue","Angular","Python","Go","Node.js","Azure","Gantt charts","dashboards"],"datePosted":"2026-03-06T18:40:33.689Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"full-stack development, modern frontend frameworks, backend technologies, RESTful and GraphQL APIs, distributed databases, cloud infrastructure, Kubernetes, Docker, cloud-native application deployment, AI/ML workload scheduling, orchestration challenges, real-time data processing, visualization libraries, observability tooling, React, Vue, Angular, Python, Go, Node.js, Azure, Gantt charts, dashboards","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":230000,"maxValue":490000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_32b83135-974"},"title":"Software Engineer, Data Infrastructure - Research","description":"<p><strong>Software Engineer, Data Infrastructure - Research</strong></p>\n<p><strong>Location</strong></p>\n<p>San Francisco</p>\n<p><strong>Employment Type</strong></p>\n<p>Full time</p>\n<p><strong>Department</strong></p>\n<p>Scaling</p>\n<p><strong>Compensation</strong></p>\n<ul>\n<li>$250K – $380K • Offers Equity</li>\n</ul>\n<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>\n<ul>\n<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>\n</ul>\n<ul>\n<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>\n</ul>\n<ul>\n<li>401(k) retirement plan with employer match</li>\n</ul>\n<ul>\n<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>\n</ul>\n<ul>\n<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>\n</ul>\n<ul>\n<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>\n</ul>\n<ul>\n<li>Mental health and wellness support</li>\n</ul>\n<ul>\n<li>Employer-paid basic life and disability coverage</li>\n</ul>\n<ul>\n<li>Annual learning and development stipend to fuel your professional growth</li>\n</ul>\n<ul>\n<li>Daily meals in our offices, and meal delivery credits as eligible</li>\n</ul>\n<ul>\n<li>Relocation support for eligible employees</li>\n</ul>\n<ul>\n<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>\n</ul>\n<p>More details about our benefits are available to candidates during the hiring process.</p>\n<p>This role is at-will and OpenAI reserves the right to modify base pay and other compensation components at any time based on individual performance, team or company results, or market conditions.</p>\n<p><strong><strong>About the Team</strong></strong></p>\n<p>The Workload team is responsible for designing and running OpenAI’s LLM training and inference infrastructure that powers frontier models at massive scale. Our systems unify how researchers train and serve models, abstracting away the complexity of performance, parallelism, and execution across vast GPU/accelerator fleets. By providing this foundation, the Workload team ensures that researchers can focus on advancing model capabilities while we handle the scale, efficiency, and reliability required to bring those models to life.</p>\n<p><strong><strong>About the Role</strong></strong></p>\n<p>We are looking for an engineer to design and implement the dataset infrastructure that powers OpenAI’s next-generation training stack. You will be responsible for building standardized dataset interfaces, scaling pipelines across thousands of GPUs, and proactively testing performance bottlenecks. In this role, you will collaborate closely with the multimodal researchers, and other infra groups to ensure datasets are unified, efficient, and easy to consume.</p>\n<p><strong><strong>In this role, you will:</strong></strong></p>\n<ul>\n<li>Design and maintain standardized dataset APIs, including for multimodal (MM) data that cannot fit in memory.</li>\n</ul>\n<ul>\n<li>Build proactive testing and scale validation pipelines for dataset loading at GPU scale.</li>\n</ul>\n<ul>\n<li>Collaborate with teammates to integrate datasets seamlessly into training and inference pipelines, ensuring smooth adoption and a great user experience.</li>\n</ul>\n<ul>\n<li>Document and maintain dataset interfaces so they are discoverable, consistent, and easy for other teams to adopt.</li>\n</ul>\n<ul>\n<li>Establish safeguards and validation systems to ensure datasets remain reproducible and unchanged once standardized.</li>\n</ul>\n<ul>\n<li>Debug and resolve performance bottlenecks in distributed dataset loading (e.g., straggler systems slowing global training).</li>\n</ul>\n<ul>\n<li>Provide visualization and inspection tools to surface errors, bugs, or bottlenecks in datasets.</li>\n</ul>\n<p><strong><strong>You might thrive in this role if you:</strong></strong></p>\n<ul>\n<li>Have strong engineering fundamentals with experience in distributed systems, data pipelines, or infrastructure.</li>\n</ul>\n<ul>\n<li>Have experience building APIs, modular code, and scalable abstractions, while recognizing that abstractions ultimately serve the users and UX is an important part of the abstractions design.</li>\n</ul>\n<ul>\n<li>Are comfortable debugging bottlenecks across large fleets of machines.</li>\n</ul>\n<ul>\n<li>Take pride in building infrastructure that “just works,” and find joy in being the guardian of reliability and scale.</li>\n</ul>\n<ul>\n<li>Are collaborative, humble, and excited to own a foundational (if not glamorous) part of the ML stack.</li>\n</ul>\n<p><strong>Bonus points if you:</strong></p>\n<ul>\n<li>Have background knowledge in data math, probability, or distributed data theory.</li>\n</ul>\n<ul>\n<li>Have worked with GPU-scale distributed systems or dataset scaling for real-time data</li>\n</ul>\n<p><strong><strong>About OpenAI</strong></strong></p>\n<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_32b83135-974","directApply":true,"hiringOrganization":{"@type":"Organization","name":"OpenAI","sameAs":"https://jobs.ashbyhq.com","logo":"https://logos.yubhub.co/openai.com.png"},"x-apply-url":"https://jobs.ashbyhq.com/openai/b7a2e30f-c5f6-4710-b53e-64d64bcce189","x-work-arrangement":"onsite","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":"$250K – $380K • Offers Equity","x-skills-required":["distributed systems","data pipelines","infrastructure","APIs","modular code","scalable abstractions","data math","probability","distributed data theory","GPU-scale distributed systems","dataset scaling"],"x-skills-preferred":["collaborative","humble","excited to own a foundational part of the ML stack"],"datePosted":"2026-03-06T18:30:12.517Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"distributed systems, data pipelines, infrastructure, APIs, modular code, scalable abstractions, data math, probability, distributed data theory, GPU-scale distributed systems, dataset scaling, collaborative, humble, excited to own a foundational part of the ML stack","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":250000,"maxValue":380000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_448a56f3-ab5"},"title":"Director of Data Engineering and Agentic AI Automation, Finance","description":"<p><strong>Director of Data Engineering and Agentic AI Automation, Finance</strong></p>\n<p><strong>Location</strong></p>\n<p>San Francisco</p>\n<p><strong>Employment Type</strong></p>\n<p>Full time</p>\n<p><strong>Department</strong></p>\n<p>Finance</p>\n<p><strong>Compensation</strong></p>\n<ul>\n<li>$347K – $490K • Offers Equity</li>\n</ul>\n<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>\n<p><strong>Benefits</strong></p>\n<ul>\n<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>\n</ul>\n<ul>\n<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>\n</ul>\n<ul>\n<li>401(k) retirement plan with employer match</li>\n</ul>\n<ul>\n<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>\n</ul>\n<ul>\n<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>\n</ul>\n<ul>\n<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>\n</ul>\n<ul>\n<li>Mental health and wellness support</li>\n</ul>\n<ul>\n<li>Employer-paid basic life and disability coverage</li>\n</ul>\n<ul>\n<li>Annual learning and development stipend to fuel your professional growth</li>\n</ul>\n<ul>\n<li>Daily meals in our offices, and meal delivery credits as eligible</li>\n</ul>\n<ul>\n<li>Relocation support for eligible employees</li>\n</ul>\n<ul>\n<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>\n</ul>\n<p><strong>About the Team</strong></p>\n<p>We are looking for a Director of Data Engineering and Agentic AI Automation to lead the next generation of our finance data infrastructure. As OpenAI expands its Finance operations, we need scalable and trustworthy data systems to match the pace and complexity of our growth. This includes well-modeled, auditable data for revenue recognition, financial reporting, and planning, supported by reliable pipelines that connect ERP, planning, and operational systems. You will lead a group of analytics engineers, data engineers, and AI engineers to build the data pipelines that connect our internal engineering systems with enterprise platforms such as Oracle Fusion ERP. This role will also define the roadmap for agentic AI automation, enabling intelligent workflows, process automation, and AI-driven decision-making across Finance.</p>\n<p><strong>In this role, you will:</strong></p>\n<ul>\n<li>Build and maintain scalable, auditable data infrastructure that powers accurate financial information, with a focus on revenue recognition, compute attribution, and close automation.</li>\n</ul>\n<ul>\n<li>Lead and grow teams of analytics engineers, data engineers, and AI engineers to deliver high-impact, intelligent data systems.</li>\n</ul>\n<ul>\n<li>Guide work across financial close and allocations automation, B2C revenue automation from engineering systems to ERP (including reconciliation with cash and source systems), and other mission-critical financial processes.</li>\n</ul>\n<ul>\n<li>Design and implement data pipelines connecting ERP, planning, and operational systems, including Oracle Fusion, Anaplan, and Workday.</li>\n</ul>\n<ul>\n<li>Build and support scalable, audit-proof architecture that enables reliable financial reporting and compliance.</li>\n</ul>\n<ul>\n<li>Develop data and AI-powered workflows that enhance forecasting accuracy, compliance automation, and operational efficiency.</li>\n</ul>\n<ul>\n<li>Create and maintain data marts and products that support stakeholders across Revenue, FP&amp;A, Tax, Procurement, Hardware Accounting, and Controller teams.</li>\n</ul>\n<ul>\n<li>Define and enforce best practices for data modeling, lineage, observability, and reconciliation across finance data domains.</li>\n</ul>\n<ul>\n<li>Set the technical direction and manage team structure, mentoring engineers and overseeing contractors or system integrators to ensure delivery of high-quality outcomes.</li>\n</ul>\n<ul>\n<li>Partner with senior leaders across Finance, Engineering, and Infrastructure to align on priorities and integrate new automation capabilities.</li>\n</ul>\n<ul>\n<li>Ensure data systems are AI-ready and capable of supporting predictive analytics, autonomous agent workflows, and large-scale automation.</li>\n</ul>\n<ul>\n<li>Own and maintain Tier-1 data pipelines with strict SLA, data quality, and compliance standards.</li>\n</ul>\n<ul>\n<li>Drive the long-term roadmap for agentic AI enablement to build the foundation for “Finance on OpenAI.”</li>\n</ul>\n<p><strong>You might thrive in this role if you have:</strong></p>\n<ul>\n<li>12+ years in data engineering, with proven experience building and managing enterprise-scale, auditable ETL pipelines and complex datasets</li>\n</ul>\n<ul>\n<li>Proficiency in SQL and Python, with demonstrated experience in schema design, data modeling, and orchestration frameworks</li>\n</ul>\n<ul>\n<li>Expertise in distributed data processing technologies such as Apache Spark, Kafka, and cloud-native storage (e.g., S3, ADLS)</li>\n</ul>\n<ul>\n<li>Deep knowledge of enterprise data architecture, especially within Finance and Supply Chain</li>\n</ul>\n<ul>\n<li>Familiarity with financial processes (close, allocations, revenue recognition) and supply chain data models (Supply and demand planning, procurement, vendor master), along with experience in ingesting data from internal engineering systems with large volumes of B2C</li>\n</ul>\n<ul>\n<li>Experience integrating with contract manufacturers and external logistics providers is a strong plus</li>\n</ul>\n<ul>\n<li>Strong track record of partnering with senior business stakeholders</li>\n</ul>\n<p><strong>Work Environment</strong></p>\n<p>This role is based in San Francisco, CA. We use a hybrid work model of 3 days in the office per week and offer relocation assistance to new employees.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_448a56f3-ab5","directApply":true,"hiringOrganization":{"@type":"Organization","name":"OpenAI","sameAs":"https://jobs.ashbyhq.com","logo":"https://logos.yubhub.co/openai.com.png"},"x-apply-url":"https://jobs.ashbyhq.com/openai/e84e7b7e-a82e-411e-929a-615dc3080280","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$347K – $490K • Offers Equity","x-skills-required":["SQL","Python","Apache Spark","Kafka","cloud-native storage","data modeling","orchestration frameworks","distributed data processing technologies","enterprise data architecture","financial processes","supply chain data models"],"x-skills-preferred":["ETL pipelines","complex datasets","schema design","data engineering","data infrastructure","auditable data","revenue recognition","financial reporting","planning","ERP","planning","operational systems","Oracle Fusion","Anaplan","Workday","data marts","products","stakeholders","Revenue","FP&A","Tax","Procurement","Hardware Accounting","Controller","data modeling","lineage","observability","reconciliation","finance data domains","team structure","engineers","contractors","system integrators","predictive analytics","autonomous agent workflows","large-scale automation"],"datePosted":"2026-03-06T18:27:50.931Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"SQL, Python, Apache Spark, Kafka, cloud-native storage, data modeling, orchestration frameworks, distributed data processing technologies, enterprise data architecture, financial processes, supply chain data models, ETL pipelines, complex datasets, schema design, data engineering, data infrastructure, auditable data, revenue recognition, financial reporting, planning, ERP, planning, operational systems, Oracle Fusion, Anaplan, Workday, data marts, products, stakeholders, Revenue, FP&A, Tax, Procurement, Hardware Accounting, Controller, data modeling, lineage, observability, reconciliation, finance data domains, team structure, engineers, contractors, system integrators, predictive analytics, autonomous agent workflows, large-scale automation","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":347000,"maxValue":490000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_61433df5-3e7"},"title":"Member of Technical Staff, Multimodal Infrastructure","description":"<p><strong>Summary</strong></p>\n<p>Microsoft AI are looking for a talented Member of Technical Staff, Multimodal Infrastructure to help build the next wave of capabilities of our personalized AI assistant, Copilot. We&#39;re looking for someone who will bring an abundance of positive energy, empathy, and kindness to the team every day, in addition to being highly effective.</p>\n<p><strong>About the Role</strong></p>\n<p>We are seeking a highly skilled and experienced engineer to join our team as a Member of Technical Staff, Multimodal Infrastructure. The successful candidate will be responsible for designing, developing, and maintaining large-scale multimodal data processing pipelines, model pretraining and post-training frameworks, and model inference and serving frameworks. They will work closely with research scientists and product engineers to solve infra-related problems and find a path to get things done despite roadblocks to get their work into the hands of users quickly and iteratively.</p>\n<p><strong>Accountabilities</strong></p>\n<ul>\n<li>Design, develop, and maintain large-scale multimodal data processing pipelines.</li>\n<li>Design, develop, and maintain large-scale multimodal model pretraining and post-training frameworks.</li>\n<li>Design, develop, and maintain large-scale multimodal model inference and serving frameworks.</li>\n</ul>\n<p><strong>The Candidate we&#39;re looking for</strong></p>\n<p><strong>Experience:</strong></p>\n<ul>\n<li>Bachelor&#39;s Degree in Computer Science, or related technical discipline AND 6+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python.</li>\n</ul>\n<p><strong>Technical skills:</strong></p>\n<ul>\n<li>Strong proficiency in distributed data processing infra (resource utilization management, fault tolerance, ray &amp; spark) and CPU/GPU batch processing optimizations.</li>\n<li>Experience with state-of-art model inference and serving frameworks.</li>\n<li>Experience with image/video/audio data processing.</li>\n<li>Experience with common data formats for efficient I/O.</li>\n</ul>\n<p><strong>Personal attributes:</strong></p>\n<ul>\n<li>Enjoy working in a fast-paced, design-driven, product development cycle.</li>\n<li>Embody our Culture and Values.</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<ul>\n<li>Competitive salary and benefits package.</li>\n<li>Opportunities for professional growth and development.</li>\n<li>Collaborative and dynamic work environment.</li>\n<li>Access to cutting-edge technology and tools.</li>\n<li>Flexible work arrangements.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_61433df5-3e7","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Microsoft AI","sameAs":"https://microsoft.ai","logo":"https://logos.yubhub.co/microsoft.ai.png"},"x-apply-url":"https://microsoft.ai/job/member-of-technical-staff-multimodal-infrastructure-mai-superintelligence-team-3/","x-work-arrangement":"hybrid","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"Competitive salary and benefits package","x-skills-required":["C","C++","C#","Java","JavaScript","Python","Distributed data processing infra","CPU/GPU batch processing optimizations","State-of-art model inference and serving frameworks","Image/video/audio data processing","Common data formats for efficient I/O"],"x-skills-preferred":["Ray & spark","TensorRT-LLM","SGLang","xDiT","Cache-DiT"],"datePosted":"2026-03-06T07:31:26.327Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"New York"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"C, C++, C#, Java, JavaScript, Python, Distributed data processing infra, CPU/GPU batch processing optimizations, State-of-art model inference and serving frameworks, Image/video/audio data processing, Common data formats for efficient I/O, Ray & spark, TensorRT-LLM, SGLang, xDiT, Cache-DiT"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_cfee4a87-9c7"},"title":"Member of Technical Staff, Multimodal Infrastructure","description":"<p><strong>Summary</strong></p>\n<p>Microsoft AI are looking for a talented Member of Technical Staff, Multimodal Infrastructure to help build the next wave of capabilities of our personalized AI assistant, Copilot. We&#39;re looking for someone who will bring an abundance of positive energy, empathy, and kindness to the team every day, in addition to being highly effective.</p>\n<p><strong>About the Role</strong></p>\n<p>We&#39;re looking for someone who will design, develop and maintain large-scale multimodal data processing pipelines, model pretraining and post-training frameworks, and model inference and serving frameworks. You will work closely with research scientists and product engineers on multimodal data processing, model training, inference and serving tasks. As a contributing member of the core group of engineers, you would also bring to the table best practices driving architectural changes and influence roadmap of relevant software and hardware components.</p>\n<p><strong>Accountabilities</strong></p>\n<ul>\n<li>Design, develop and maintain large-scale multimodal data processing pipelines.</li>\n<li>Design, develop and maintain large-scale multimodal model pretraining and post-training frameworks.</li>\n<li>Design, develop and maintain large-scale multimodal model inference and serving frameworks.</li>\n</ul>\n<p><strong>The Candidate we&#39;re looking for</strong></p>\n<p><strong>Experience:</strong></p>\n<ul>\n<li>Bachelor&#39;s Degree in Computer Science, or related technical discipline AND 6+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python.</li>\n</ul>\n<p><strong>Technical skills:</strong></p>\n<ul>\n<li>Strong proficiency in distributed data processing infra (resource utilization management, fault tolerance, ray &amp; spark) and CPU/GPU batch processing optimizations.</li>\n<li>Experience with state-of-art model inference and serving frameworks.</li>\n<li>Experience with image/video/audio data processing.</li>\n<li>Experience with common data formats for efficient I/O.</li>\n</ul>\n<p><strong>Personal attributes:</strong></p>\n<ul>\n<li>Enjoy working in a fast-paced, design-driven, product development cycle.</li>\n<li>Embody our Culture and Values.</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<ul>\n<li>Starting January 26, 2026, MAI employees are expected to work from a designated Microsoft office at least four days a week if they live within 50 miles (U.S.) or 25 miles (non-U.S., country-specific) of that location.</li>\n<li>Comprehensive health and wellbeing benefits.</li>\n<li>Professional development opportunities.</li>\n<li>Financial benefits (bonus, equity, pension, etc.).</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_cfee4a87-9c7","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Microsoft AI","sameAs":"https://microsoft.ai","logo":"https://logos.yubhub.co/microsoft.ai.png"},"x-apply-url":"https://microsoft.ai/job/member-of-technical-staff-multimodal-infrastructure-mai-superintelligence-team-2/","x-work-arrangement":"hybrid","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["C","C++","C#","Java","JavaScript","Python","Distributed data processing infra","CPU/GPU batch processing optimizations","State-of-art model inference and serving frameworks","Image/video/audio data processing","Common data formats for efficient I/O"],"x-skills-preferred":["Deep learning frameworks","Auto-regressive and diffusion transformer models","Distributed training techniques","Image/video generation and editing","Efficient architectures","Efficient model design","Reinforcement learning training methods"],"datePosted":"2026-03-06T07:31:05.608Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Redmond"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"C, C++, C#, Java, JavaScript, Python, Distributed data processing infra, CPU/GPU batch processing optimizations, State-of-art model inference and serving frameworks, Image/video/audio data processing, Common data formats for efficient I/O, Deep learning frameworks, Auto-regressive and diffusion transformer models, Distributed training techniques, Image/video generation and editing, Efficient architectures, Efficient model design, Reinforcement learning training methods"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_a82f064b-623"},"title":"Member of Technical Staff, Multimodal Infrastructure","description":"<p><strong>Summary</strong></p>\n<p>Microsoft AI are looking for a talented Member of Technical Staff, Multimodal Infrastructure to help build the next wave of capabilities of our personalized AI assistant, Copilot. We’re looking for someone who will bring an abundance of positive energy, empathy, and kindness to the team every day, in addition to being highly effective.</p>\n<p><strong>About the Role</strong></p>\n<p>As a Member of Technical Staff, Multimodal Infrastructure, you will be responsible for designing, developing, and maintaining large-scale multimodal data processing pipelines, model pretraining and post-training frameworks, and model inference and serving frameworks. You will work closely with research scientists and product engineers to solve infra-related problems and find a path to get things done despite roadblocks to get your work into the hands of users quickly and iteratively.</p>\n<p><strong>Accountabilities</strong></p>\n<ul>\n<li>Design, develop, and maintain large-scale multimodal data processing pipelines.</li>\n<li>Design, develop, and maintain large-scale multimodal model pretraining and post-training frameworks.</li>\n<li>Design, develop, and maintain large-scale multimodal model inference and serving frameworks.</li>\n</ul>\n<p><strong>The Candidate we&#39;re looking for</strong></p>\n<p><strong>Experience:</strong></p>\n<ul>\n<li>Bachelor’s Degree in Computer Science, or related technical discipline AND 6+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python.</li>\n</ul>\n<p><strong>Technical skills:</strong></p>\n<ul>\n<li>Strong proficiency in distributed data processing infra (resource utilization management, fault tolerance, ray &amp; spark) and CPU/GPU batch processing optimizations.</li>\n<li>Experience with state-of-art model inference and serving frameworks.</li>\n<li>Experience with image/video/audio data processing.</li>\n<li>Experience with common data formats for efficient I/O.</li>\n</ul>\n<p><strong>Personal attributes:</strong></p>\n<ul>\n<li>Enjoy working in a fast-paced, design-driven, product development cycle.</li>\n<li>Embody our Culture and Values.</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<ul>\n<li>Starting January 26, 2026, MAI employees are expected to work from a designated Microsoft office at least four days a week if they live within 50 miles (U.S.) or 25 miles (non-U.S., country-specific) of that location.</li>\n<li>Comprehensive health and wellbeing benefits.</li>\n<li>Professional development opportunities.</li>\n<li>Financial benefits (bonus, equity, pension, etc.).</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_a82f064b-623","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Microsoft AI","sameAs":"https://microsoft.ai","logo":"https://logos.yubhub.co/microsoft.ai.png"},"x-apply-url":"https://microsoft.ai/job/member-of-technical-staff-multimodal-infrastructure-mai-superintelligence-team/","x-work-arrangement":"hybrid","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["C","C++","C#","Java","JavaScript","Python","Distributed data processing infra","CPU/GPU batch processing optimizations","State-of-art model inference and serving frameworks","Image/video/audio data processing","Common data formats for efficient I/O"],"x-skills-preferred":["Auto-regressive and diffusion transformer models","Distributed training techniques","Image/video generation and editing","Efficient architectures","Efficient model design","Reinforcement learning training methods"],"datePosted":"2026-03-06T07:30:19.312Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Mountain View"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"C, C++, C#, Java, JavaScript, Python, Distributed data processing infra, CPU/GPU batch processing optimizations, State-of-art model inference and serving frameworks, Image/video/audio data processing, Common data formats for efficient I/O, Auto-regressive and diffusion transformer models, Distributed training techniques, Image/video generation and editing, Efficient architectures, Efficient model design, Reinforcement learning training methods"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_2902359a-64d"},"title":"Member of Technical Staff, Infrastructure Data & Analytics","description":"<p><strong>Summary</strong></p>\n<p>Microsoft AI are looking for a talented Member of Technical Staff, Infrastructure Data &amp; Analytics to join their MAI SuperIntelligence Team. This role sits at the heart of strategic decision-making, turning raw telemetry into trusted, decision-quality insights on utilization, capacity, readiness, and efficiency. You&#39;ll work directly with leadership to shape the company&#39;s direction in the Superintelligence space.</p>\n<p><strong>About the Role</strong></p>\n<p>As a Member of Technical Staff, Infrastructure Data &amp; Analytics, you will act as the technical lead and owner for infrastructure analytics across compute, storage, and networking. You will design and build durable, scalable data pipelines that ingest telemetry from clusters, schedulers, health systems, and capacity trackers into Data Warehouse. You will define and standardize core metrics and semantics (e.g., utilization, occupancy, MFU, goodput, capacity readiness, delivery-to-production). You will architect and maintain self-service dashboards and APIs for fleet, cluster, and squad-level visibility. You will partner closely with stakeholders across Supercomputing Infra, Researchers, Strategy and Executives to ensure metrics reflect operational and business reality.</p>\n<p><strong>Accountabilities</strong></p>\n<ul>\n<li>Act as the technical lead and owner for infrastructure analytics across compute, storage, and networking.</li>\n<li>Design and build durable, scalable data pipelines that ingest telemetry from clusters, schedulers, health systems, and capacity trackers into Data Warehouse.</li>\n</ul>\n<p><strong>The Candidate we&#39;re looking for</strong></p>\n<p><strong>Experience:</strong></p>\n<ul>\n<li>8+ years technical engineering experience with data engineering, analytics, or data science, with increasing technical ownership in startup environment.</li>\n</ul>\n<p><strong>Technical skills:</strong></p>\n<ul>\n<li>Distributed data processing frameworks and large-scale data systems.</li>\n</ul>\n<p><strong>Personal attributes:</strong></p>\n<ul>\n<li>Strong communication skills; can explain complex systems clearly to senior leader.</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<ul>\n<li>Software Engineering IC5 – The typical base pay range for this role across the U.S. is USD $139,900 – $274,800 per year.</li>\n<li>Certain roles may be eligible for benefits and other compensation.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_2902359a-64d","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Microsoft AI","sameAs":"https://microsoft.ai","logo":"https://logos.yubhub.co/microsoft.ai.png"},"x-apply-url":"https://microsoft.ai/job/member-of-technical-staff-infrastructure-data-analytics-mai-superintelligence-team/","x-work-arrangement":"onsite","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"USD $139,900 – $274,800 per year","x-skills-required":["data engineering","analytics","data science","distributed data processing frameworks","large-scale data systems"],"x-skills-preferred":["ETL orchestration frameworks","Airflow","Dagster"],"datePosted":"2026-03-06T07:29:22.881Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Multiple Locations, United States"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"data engineering, analytics, data science, distributed data processing frameworks, large-scale data systems, ETL orchestration frameworks, Airflow, Dagster","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":139900,"maxValue":274800,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_91ae81f0-b2b"},"title":"Data Engineer II","description":"<p>As a Data Engineer, you will be involved in the entire development lifecycle, from brainstorming ideas to implementing scalable solutions that unlock data insights. You will collaborate with stakeholders to gather requirements, design data models, and build pipelines that support reporting, analytics, and exploratory analysis.</p>\n<p><strong>What you&#39;ll do</strong></p>\n<ul>\n<li>Design, build, and sustain efficient, scalable and performant Data Engineering Pipelines to ingest, sanitize, transform (ETL/ELT), and deliver high-volume, high-velocity data from diverse sources.</li>\n<li>Ensure reliable and consistent processing of versatile workloads of granularity such as Real Time, Near Real Time, Mini-batch, Batch and On-demand.</li>\n</ul>\n<p><strong>What you need</strong></p>\n<ul>\n<li>Strong Proficiency in writing and analyzing complex SQL, Python or any 4GL.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_91ae81f0-b2b","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Electronic Arts","sameAs":"https://jobs.ea.com","logo":"https://logos.yubhub.co/jobs.ea.com.png"},"x-apply-url":"https://jobs.ea.com/en_US/careers/JobDetail/Data-Engineer-II/212291","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["SQL","Python","Data Engineering"],"x-skills-preferred":["Cloud Data Warehouses","Distributed data processing frameworks","Real-time/streaming data technologies"],"datePosted":"2026-02-04T13:04:31.703Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Hyderabad"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"SQL, Python, Data Engineering, Cloud Data Warehouses, Distributed data processing frameworks, Real-time/streaming data technologies"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_779ffd11-5cf"},"title":"Data Engineer II","description":"<p>As a Data Engineer, you will be involved in the entire development lifecycle, from brainstorming ideas to implementing scalable solutions that unlock data insights. You will collaborate with stakeholders to gather requirements, design data models, and build pipelines that support reporting, analytics, and exploratory analysis.</p>\n<p><strong>What you&#39;ll do</strong></p>\n<ul>\n<li>Design, build, and sustain efficient, scalable and performant Data Engineering Pipelines to ingest, sanitize, transform (ETL/ELT), and deliver high-volume, high-velocity data from diverse sources.</li>\n<li>Ensure reliable and consistent processing of versatile workloads of granularity such as Real Time, Near Real Time, Mini-batch, Batch and On-demand.</li>\n</ul>\n<p><strong>What you need</strong></p>\n<ul>\n<li>Strong Proficiency in writing and analyzing complex SQL, Python or any 4GL.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_779ffd11-5cf","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Electronic Arts","sameAs":"https://jobs.ea.com","logo":"https://logos.yubhub.co/jobs.ea.com.png"},"x-apply-url":"https://jobs.ea.com/en_US/careers/JobDetail/Data-Engineer-II/212287","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["SQL","Python","Data Engineering"],"x-skills-preferred":["Cloud Data Warehouses","Distributed data processing frameworks","Real-time/streaming data technologies"],"datePosted":"2026-02-04T13:04:25.943Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Hyderabad"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"SQL, Python, Data Engineering, Cloud Data Warehouses, Distributed data processing frameworks, Real-time/streaming data technologies"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_d91f2ddd-f1b"},"title":"Data Engineer II","description":"<p>As a Data Engineer, you will be involved in the entire development lifecycle, from brainstorming ideas to implementing scalable solutions that unlock data insights. You will collaborate with stakeholders to gather requirements, design data models, and build pipelines that support reporting, analytics, and exploratory analysis.</p>\n<p><strong>What you&#39;ll do</strong></p>\n<ul>\n<li>Design, build, and sustain efficient, scalable and performant Data Engineering Pipelines to ingest, sanitize, transform (ETL/ELT), and deliver high-volume, high-velocity data from diverse sources.</li>\n<li>Ensure reliable and consistent processing of versatile workloads of granularity such as Real Time, Near Real Time, Mini-batch, Batch and On-demand.</li>\n</ul>\n<p><strong>What you need</strong></p>\n<ul>\n<li>Strong Proficiency in writing and analyzing complex SQL, Python or any 4GL.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_d91f2ddd-f1b","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Electronic Arts","sameAs":"https://jobs.ea.com","logo":"https://logos.yubhub.co/jobs.ea.com.png"},"x-apply-url":"https://jobs.ea.com/en_US/careers/JobDetail/Data-Engineer-II/212288","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["SQL","Python","Data Engineering"],"x-skills-preferred":["Cloud Data Warehouses","Distributed data processing frameworks","Real-time/streaming data technologies"],"datePosted":"2026-02-04T13:04:00.305Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Hyderabad"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"SQL, Python, Data Engineering, Cloud Data Warehouses, Distributed data processing frameworks, Real-time/streaming data technologies"}]}