{"version":"0.1","company":{"name":"YubHub","url":"https://yubhub.co","jobsUrl":"https://yubhub.co/jobs/skill/obs"},"x-facet":{"type":"skill","slug":"obs","display":"Obs","count":100},"x-feed-size-limit":100,"x-feed-sort":"enriched_at desc","x-feed-notice":"This feed contains at most 100 jobs (the most recently enriched). For the full corpus, use the paginated /stats/by-facet endpoint or /search.","x-generator":"yubhub-xml-generator","x-rights":"Free to redistribute with attribution: \"Data by YubHub (https://yubhub.co)\"","x-schema":"Each entry in `jobs` follows https://schema.org/JobPosting. YubHub-native raw fields carry `x-` prefix.","jobs":[{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_cd9adbf8-d96"},"title":"Werkstudent (m/w/d) Modellbasierte Analyse Fahrzeugbremsen","description":"<p>As a Werkstudent (part-time student assistant) at Porsche, you will support the analysis and evaluation of brake pad friction behavior. Your tasks will include working on model-based approaches to determine friction values, supporting the investigation of suitable physical models, filter, and observer concepts, and assisting in the simulation, evaluation, and preparation of technical results. You will also contribute to documentation and conceptual development.</p>\n<p>To be successful in this role, you should have a strong background in vehicle technology, mechanical engineering, mechatronics, electrical engineering, or a related field. You should also have basic knowledge of system modeling, control engineering, or signal processing. Analytical, structured, and independent work habits are essential, as well as an interest in technical and scientific questions. Additionally, you should have experience with model building and system analysis, as well as knowledge of regulation and observer theory (foundations).</p>\n<p>As a part-time student assistant, you will work approximately 20 hours per week during the semester. You will be integrated into our team and contribute to the development of innovative solutions. We offer a dynamic and supportive work environment, with opportunities for professional growth and development.</p>\n<p>If you are a motivated and detail-oriented individual with a passion for technical work, we encourage you to apply for this exciting opportunity.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_cd9adbf8-d96","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Dr. Ing. h.c. F. Porsche AG","sameAs":"https://jobs.porsche.com","logo":"https://logos.yubhub.co/jobs.porsche.com.png"},"x-apply-url":"https://jobs.porsche.com/index.php?ac=jobad&id=20463","x-work-arrangement":"onsite","x-experience-level":"entry","x-job-type":"part-time","x-salary-range":null,"x-skills-required":["System modeling","Control engineering","Signal processing","Model building","System analysis","Regulation theory","Observer theory"],"x-skills-preferred":[],"datePosted":"2026-04-22T17:31:13.496Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Weissach"}},"employmentType":"PART_TIME","occupationalCategory":"Engineering","industry":"Automotive","skills":"System modeling, Control engineering, Signal processing, Model building, System analysis, Regulation theory, Observer theory"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_bee517db-e9c"},"title":"DevOps Engineer (all genders)","description":"<p>Join our DevOps team at Holidu, a central team across the entire tech organisation, responsible for creating and maintaining the infrastructure that powers all of our products and services.</p>\n<p>In this role, you will contribute to the continuous improvement of our DevOps processes, collaborate with cross-functional teams, and apply best practices for scalable, reliable, and secure systems.</p>\n<p>Our ideal candidate has a solid technical foundation, a strong hands-on approach, and the ability to deliver results with minimal supervision.</p>\n<p><strong>Our Tech Stack</strong></p>\n<ul>\n<li>Cloud: AWS (EC2, S3, RDS, EKS, Elasticache, Lambda)</li>\n<li>Container Orchestration: Kubernetes with Helm</li>\n<li>Infrastructure as Code: Terraform + Terragrunt, Pulumi/ CDK</li>\n<li>Monitoring &amp; Observability: Prometheus, Grafana, Elastic Stack, OpenTelemetry</li>\n<li>CI/CD: Jenkins, GitHub Actions, ArgoCD, ArgoRollouts</li>\n<li>Scripting: Python, Go, Bash</li>\n<li>Version Control: GitHub</li>\n<li>Collaboration: Jira (Agile)</li>\n<li>Automation: N8N, AI-assisted tooling (Agentic ADK)</li>\n</ul>\n<p><strong>Your role in this journey</strong></p>\n<p>As a DevOps Engineer, you will be responsible for:</p>\n<ul>\n<li>Implementing and maintaining infrastructure definitions using Terraform, Pulumi, or similar tools</li>\n<li>Ensuring IaC standards are followed and contributing improvements to existing modules and patterns</li>\n<li>Managing and monitoring AWS services, ensuring system performance, availability, and adherence to best practices</li>\n<li>Troubleshooting production issues and participating in capacity planning</li>\n<li>Maintaining and troubleshooting Kubernetes clusters , deploying workloads, managing configurations, scaling services, and resolving incidents to support high-availability applications</li>\n<li>Maintaining and improving CI/CD pipelines to ensure smooth, automated software delivery</li>\n<li>Identifying bottlenecks and implementing enhancements across Jenkins, GitHub Actions, ArgoRollouts and ArgoCD</li>\n<li>Maintaining and extending our monitoring stack (Prometheus, Grafana)</li>\n<li>Building dashboards, configuring alerts, and improving observability to ensure comprehensive visibility into system health and performance</li>\n</ul>\n<p><strong>Your backpack is filled with</strong></p>\n<ul>\n<li>4+ years of experience in a DevOps, SRE, or cloud engineering role with hands-on production experience</li>\n<li>Solid working experience with AWS services (EC2, EKS, S3, RDS, Lambda) and cloud infrastructure management</li>\n<li>Hands-on experience with Docker and Kubernetes in production environments , deploying, scaling, and troubleshooting containerized workloads</li>\n<li>Practical experience with at least one Infrastructure as Code tool (Terraform, Pulumi, or AWS CDK)</li>\n<li>Experience maintaining and improving CI/CD pipelines using tools like Jenkins, GitHub Actions, or ArgoCD</li>\n<li>Proficiency in scripting with Python, Bash, or Go for operational automation</li>\n<li>Working knowledge of monitoring and observability tools such as Prometheus, Grafana, or similar platforms</li>\n<li>Familiarity with logging and log aggregation systems (Elastic Stack, Open Telemetry, or similar)</li>\n<li>Solid understanding of Linux administration, networking fundamentals, and system security basics</li>\n<li>Strong communication skills with the ability to collaborate across teams and explain technical decisions clearly</li>\n</ul>\n<p><strong>Nice to Have</strong></p>\n<ul>\n<li>Experience with Helm charts and Kubernetes package management</li>\n<li>Familiarity with GitOps workflows (e.g., Github Actions, ArgoCD, Flux)</li>\n<li>Experience with designing AWS services-based architectures is a plus</li>\n<li>Experience with AI automation or low-code/no-code platforms such as N8N is a plus</li>\n<li>Familiarity with prompt engineering and using AI tools to augment DevOps workflows</li>\n<li>Exposure to cost optimization strategies for cloud infrastructure</li>\n<li>Experience with incident response, on-call rotations, or SRE practices (SLOs, error budgets)</li>\n<li>Experience with DevSecOps practices , integrating security scanning and compliance into CI/CD pipelines</li>\n</ul>\n<p><strong>Our adventure includes</strong></p>\n<ul>\n<li>Impact: Shape the future of travel with products used by millions of guests and thousands of hosts</li>\n<li>Learning: Grow professionally in a culture that thrives on curiosity and feedback</li>\n<li>Great People: Join a team of smart, motivated, and international colleagues who challenge and support each other</li>\n<li>Technology: Work in a modern tech environment</li>\n<li>Flexibility: Work a hybrid setup with 50% in-office time for collaboration, and spend up to 8 weeks a year from other inspiring locations</li>\n<li>Perks on Top: Of course, we also offer travel benefits, gym discounts, and other perks to keep you energized</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_bee517db-e9c","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Holidu Hosts GmbH","sameAs":"https://holidu.jobs.personio.com","logo":"https://logos.yubhub.co/holidu.jobs.personio.com.png"},"x-apply-url":"https://holidu.jobs.personio.com/job/2595036","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"Full-time","x-salary-range":null,"x-skills-required":["Cloud","Container Orchestration","Infrastructure as Code","Monitoring & Observability","CI/CD","Scripting","Version Control","Collaboration","Automation"],"x-skills-preferred":["Helm","GitOps","AI automation","Low-code/no-code platforms","Prompt engineering","Cost optimization strategies","Incident response","SRE practices","DevSecOps practices"],"datePosted":"2026-04-18T22:14:30.429Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Munich, Germany"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Cloud, Container Orchestration, Infrastructure as Code, Monitoring & Observability, CI/CD, Scripting, Version Control, Collaboration, Automation, Helm, GitOps, AI automation, Low-code/no-code platforms, Prompt engineering, Cost optimization strategies, Incident response, SRE practices, DevSecOps practices"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_34fa7d64-89a"},"title":"Technical Product Manager - Linux Developer Experience","description":"<p>We&#39;re seeking a Technical Product Manager to join our team responsible for shaping and evolving the developer experience on our firm&#39;s developer platform.</p>\n<p>In this pivotal role, you&#39;ll serve as the primary liaison between the platform engineering team and our developer community , including quantitative analysts, researchers, and front-office trading teams , ensuring the platform meets their complex development needs and continuously improves.</p>\n<p>The Developer Platform team architects, engineers, and enhances the firm&#39;s developer’s toolchain and workflow. We collaborate closely with developers, quants, researchers, and front-office trading teams to ensure our platform provides a best-in-class development experience with the feel of native Mac/UNIX-like development.</p>\n<p>This role sits at the intersection of product management and technical enablement, acting as the voice of the developer within the platform team.</p>\n<p>Key Responsibilities:</p>\n<ul>\n<li>Build and maintain relationships with technologists and developers across the firm to deeply understand their workflows, pain points, and emerging needs</li>\n</ul>\n<ul>\n<li>Discover novel use cases and translate them into actionable product requirements for the platform engineering team</li>\n</ul>\n<ul>\n<li>Serve as the first point of contact for developer questions about the platform&#39;s environment, tooling, and capabilities</li>\n</ul>\n<ul>\n<li>Triage and reproduce issues reported by developers, driving initial diagnosis , including leveraging AI-assisted sessions for problem analysis , and escalating to the deeper technical engineering team when necessary</li>\n</ul>\n<ul>\n<li>Drive the roadmap and prioritization of platform enhancements in collaboration with engineering leadership</li>\n</ul>\n<ul>\n<li>Promote and evangelize the Linux developer platform , driving adoption and ensuring developers are aware of available features and best practices</li>\n</ul>\n<ul>\n<li>Manage project timelines, stakeholder communication, and delivery milestones for platform initiatives</li>\n</ul>\n<p>Qualifications / Skills Required:</p>\n<ul>\n<li>Demonstrated experience in Technical Product Management, Technical Project Management, or Developer Relations/Developer Experience roles</li>\n</ul>\n<ul>\n<li>Strong communication and stakeholder management skills , ability to engage credibly with both highly technical developers and senior leadership</li>\n</ul>\n<ul>\n<li>Working familiarity with Linux desktop environments , comfortable navigating the platform, understanding developer workflows, and answering environment/tooling questions</li>\n</ul>\n<ul>\n<li>Conceptual understanding of containerization and orchestration (Docker, Podman, Kubernetes) and how developers leverage these tools in their workflows</li>\n</ul>\n<ul>\n<li>Familiarity with CI/CD concepts and tools (e.g., Jenkins, Git) , enough to understand developer pipelines and identify friction points</li>\n</ul>\n<ul>\n<li>Problem reproduction and triage skills , ability to recreate reported issues in the environment and clearly document/escalate to engineering with relevant context</li>\n</ul>\n<ul>\n<li>Experience leveraging AI tools (e.g., LLM-based assistants, copilots) to assist in problem diagnosis, research, and knowledge synthesis</li>\n</ul>\n<ul>\n<li>Basic scripting literacy (Bash, Python) , enough to read, understand, and run existing scripts; not necessarily write complex automation from scratch</li>\n</ul>\n<p>Qualifications / Skills Desired:</p>\n<ul>\n<li>Familiarity with serverless compute concepts and cloud-native development paradigms</li>\n</ul>\n<ul>\n<li>Exposure to configuration management tools (e.g., Ansible) and image lifecycle management (e.g., Hashicorp Packer) , understanding what they do and how they fit into the platform, rather than hands-on administration</li>\n</ul>\n<ul>\n<li>Awareness of monitoring and observability tools (Prometheus, Grafana, ELK stack) from a user/consumer perspective</li>\n</ul>\n<ul>\n<li>Understanding of authentication and identity management concepts (e.g., Active Directory integration) as they relate to developer access and workflows</li>\n</ul>\n<ul>\n<li>Experience with agile project management methodologies and tools (Jira, Confluence, or similar)</li>\n</ul>\n<ul>\n<li>Strong communication skills working with engineering leadership, developer community, and stakeholders</li>\n</ul>\n<ul>\n<li>Bachelor’s degree in Computer Science or a related field</li>\n</ul>\n<p>The estimated base salary range for this position is $175,000 to $250,000, which is specific to New York and may change in the future. Millennium pays a total compensation package which includes a base salary, discretionary performance bonus, and a comprehensive benefits package.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_34fa7d64-89a","directApply":true,"hiringOrganization":{"@type":"Organization","name":"IT Infrastructure","sameAs":"https://mlp.eightfold.ai","logo":"https://logos.yubhub.co/mlp.eightfold.ai.png"},"x-apply-url":"https://mlp.eightfold.ai/careers/job/755953932410","x-work-arrangement":null,"x-experience-level":null,"x-job-type":"full-time","x-salary-range":"$175,000 to $250,000","x-skills-required":["Technical Product Management","Technical Project Management","Developer Relations/Developer Experience","Linux desktop environments","Containerization and orchestration","CI/CD concepts and tools","Problem reproduction and triage skills","AI tools","Basic scripting literacy"],"x-skills-preferred":["Serverless compute concepts and cloud-native development paradigms","Configuration management tools","Image lifecycle management","Monitoring and observability tools","Authentication and identity management concepts","Agile project management methodologies and tools"],"datePosted":"2026-04-18T22:13:03.074Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"New York, New York, United States of America"}},"employmentType":"FULL_TIME","occupationalCategory":"IT","industry":"Technology","skills":"Technical Product Management, Technical Project Management, Developer Relations/Developer Experience, Linux desktop environments, Containerization and orchestration, CI/CD concepts and tools, Problem reproduction and triage skills, AI tools, Basic scripting literacy, Serverless compute concepts and cloud-native development paradigms, Configuration management tools, Image lifecycle management, Monitoring and observability tools, Authentication and identity management concepts, Agile project management methodologies and tools","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":175000,"maxValue":250000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_4d1760ee-59e"},"title":"Medical Science Liaison  (Respiratory) - Kansas City","description":"<p>At AstraZeneca, we put patients first and strive to meet their unmet needs worldwide. As a Medical Science Liaison (Respiratory) in Kansas City, you will be part of a field-based team with scientific, clinical, and therapeutic expertise.</p>\n<p>Your primary responsibility will be to provide medical and scientific support for AstraZeneca&#39;s marketed products, focused on Chronic Obstructive Pulmonary Disease (COPD), with new indications and compounds in development. This involves developing relationships and engaging in scientific exchange with medical and scientific partners, including Healthcare Professionals, Quality Assurance Managers, and Population Health Experts to improve medical care for patients with COPD.</p>\n<p>You will partner with key internal stakeholders, including Medical, Sales, and Marketing, to identify pre-clinical, clinical, and post-marketing study investigators in alignment with Medical Affairs objectives. Your goal will be to provide impactful information that enhances the value and proper use of AstraZeneca&#39;s products to internal partners.</p>\n<p>In addition, you will respond to customer inquiries to provide focused and balanced clinical and scientific information that supports the appropriate use of or clinically differentiates AstraZeneca&#39;s products and services.</p>\n<p>This position is based in Kansas City, covering Kansas and Central/Western Missouri. You must reside within 30 miles of the territory core.</p>\n<p>Essential requirements include:</p>\n<ul>\n<li>0-1 years&#39; experience as a Medical Science Liaison in the pharmaceutical industry</li>\n<li>Advanced Clinical/Science Degree required (e.g., MD, PharmD, PhD, MSN, NP, PA, etc.)</li>\n<li>Knowledge of Health Systems, customer segments, and market dynamics</li>\n<li>Demonstrated expertise in discussing scientific content and context to multiple audiences</li>\n<li>Experience initiating Transforming Care within Health Systems</li>\n<li>Excellent project management ability</li>\n<li>Excellent oral and written communication and interpersonal skills</li>\n<li>Working knowledge of Microsoft Office Suite</li>\n<li>Thorough knowledge of regulatory environment</li>\n<li>Experience in the pharmaceutical industry</li>\n<li>Strong leadership capabilities</li>\n<li>Must reside in territory</li>\n<li>Ability to travel (50-70%)</li>\n</ul>\n<p>Desirable requirements include:</p>\n<ul>\n<li>2+ years&#39; experience as a Medical Science Liaison in the pharmaceutical industry</li>\n<li>Respiratory experience in pharmaceutical industry or clinical practice</li>\n<li>Established track record of basic or clinical research</li>\n<li>Proven expertise in the drug discovery and drug development process</li>\n<li>Previous experience working within Health Systems or Integrated Delivery Networks</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_4d1760ee-59e","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Regional Liaison Director","sameAs":"https://astrazeneca.eightfold.ai","logo":"https://logos.yubhub.co/astrazeneca.eightfold.ai.png"},"x-apply-url":"https://astrazeneca.eightfold.ai/careers/job/563877689883893","x-work-arrangement":"onsite","x-experience-level":"entry","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Medical Science Liaison","Chronic Obstructive Pulmonary Disease (COPD)","Healthcare Professionals","Quality Assurance Managers","Population Health Experts","Microsoft Office Suite","Regulatory Environment"],"x-skills-preferred":[],"datePosted":"2026-04-18T22:12:38.215Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Kansas City, Kansas, United States of America"}},"employmentType":"FULL_TIME","occupationalCategory":"Medical","industry":"Healthcare","skills":"Medical Science Liaison, Chronic Obstructive Pulmonary Disease (COPD), Healthcare Professionals, Quality Assurance Managers, Population Health Experts, Microsoft Office Suite, Regulatory Environment"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_83bd1570-20c"},"title":"Executive Medical Director, Specialty and Pipeline","description":"<p>At Bayer, we&#39;re seeking an Executive Medical Director, Specialty and Pipeline to oversee the development and execution of high-quality medical strategy for our pipeline portfolio.</p>\n<p>As a critical member of the US Medical Affairs team, you will be responsible for providing scientific leadership, external engagement, and cross-functional influence to advance evidence generation, medical education, and development of medical affairs strategy to support commercialization strategic plans of the portfolio.</p>\n<p>Key responsibilities include:</p>\n<ul>\n<li>Overseeing US medical affairs strategy and contributions into the Bayer pipeline portfolio in collaboration with medical directors and in alignment with the VP of Specialty and Pipeline TA.</li>\n<li>Providing input, appropriate to the Phase of development of a product to ensure US perspective and needs are incorporated into development strategy.</li>\n<li>Active participation and effective collaboration with global teams to assure the efficient and expedient conduct of clinical development programs, aligning them with strategic priorities that support appropriate US direction of the Life Cycle Management strategy.</li>\n<li>Working collaboratively with US New Product Commercialization, Market Access and US regulatory teams to provide expert medical input into strategic plans.</li>\n<li>Serving as a member of Product Team and/or Clinical Strategy Team leadership of multiple assets in different therapeutic areas.</li>\n<li>Supporting development and execution of the US medical strategy, offering critical inputs during design and throughout the end-to-end execution of programs, in alignment with senior medical leadership.</li>\n<li>Collaborating cross-functionally with Global Evidence Generation, Clinical Development, Regulatory, Commercial and other medical affairs partners to develop and implement the Integrated Evidence Plans to improve the value proposition for the portfolio.</li>\n<li>Contributing to publication planning, data interpretation, and scientific dissemination in the US.</li>\n<li>Providing medical scientific input for brand and program documents, including integrated disease area plans, medical information documents, drug safety reporting documents, etc, while ensuring design and execution of all medical activities are according to internal and external compliance guidelines.</li>\n<li>Monitoring and understanding implications of evolving competitor landscape to inform medical strategy.</li>\n<li>Supporting completion of annual New Drug Application (NDA) reports for respective brands through evaluation of clinical data and literature and provide US Medical Affairs input in the preparation of key medical documents for INDs and NDAs.</li>\n</ul>\n<p>Additionally, you will:</p>\n<ul>\n<li>Develop and guide local Thought Leader (TL) engagement strategy, together with cross-functional partners.</li>\n<li>Serve as the US medical expert engaging thought leaders, academic institutions, medical societies, and patient advocacy groups to advance scientific leadership and collaboration.</li>\n<li>Lead and support advisory boards, including agenda development, faculty engagement and synthesis of insights to help inform medical strategy.</li>\n<li>Represent US Medical Affairs at major congresses, symposia and scientific forums.</li>\n</ul>\n<p>To be successful in this role, you will need:</p>\n<ul>\n<li>An M.D. or D.O. degree.</li>\n<li>Agility and ability to flex into different therapeutic areas.</li>\n<li>Clinically relevant work experience or independent research experience or equivalent or experience in a pharmaceutical related industry.</li>\n<li>Experience working in or deep understanding of in-hospital consideration in US healthcare delivery.</li>\n<li>Deep understanding of clinical trial design, analysis and interpretation as well as the principles of observational studies and health economics/ outcomes research.</li>\n<li>Robust understanding of regulatory and market access considerations and triangulating those to implications on clinical trial design and clinical care delivery.</li>\n<li>Proven ability for strategic planning along with operations skill and experience related to clinical research involving both single and multiple centers.</li>\n<li>Strong ability to quickly build meaningful and trusting relationships, both internally and externally to the organization.</li>\n<li>Understanding of the drug development process over different stages.</li>\n<li>Strong ability to connect and collaborate across different functions and background, both internally and externally to the organization.</li>\n<li>Innate ability to lead others without formal authority, with demonstrated experience guiding teams from design to implementation of strategic initiatives.</li>\n<li>Excellent communication skills, both verbal and in written.</li>\n<li>Willingness and ability to travel as business dictates, both for internal and external functions.</li>\n</ul>\n<p>Preferred qualifications include:</p>\n<ul>\n<li>Board certification or board eligibility in cardiovascular, neurology, critical care medicine.</li>\n<li>7 years work experience in the pharmaceutical sector in Medical Affairs, Clinical Development or related positions.</li>\n<li>Experience in the field of medical support of a product portfolio across multiple therapeutic areas.</li>\n<li>Experience in leading and participating in teams across cultures and geographies, prior experience in global medical launch planning activities.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_83bd1570-20c","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Bayer","sameAs":"https://talent.bayer.com","logo":"https://logos.yubhub.co/talent.bayer.com.png"},"x-apply-url":"https://talent.bayer.com/careers/job/562949976864247","x-work-arrangement":"remote","x-experience-level":"executive","x-job-type":"full-time","x-salary-range":"$248,000 to $372,000","x-skills-required":["M.D. or D.O. degree","Agility and ability to flex into different therapeutic areas","Clinically relevant work experience or independent research experience or equivalent or experience in a pharmaceutical related industry","Experience working in or deep understanding of in-hospital consideration in US healthcare delivery","Deep understanding of clinical trial design, analysis and interpretation as well as the principles of observational studies and health economics/ outcomes research","Robust understanding of regulatory and market access considerations and triangulating those to implications on clinical trial design and clinical care delivery","Proven ability for strategic planning along with operations skill and experience related to clinical research involving both single and multiple centers","Strong ability to quickly build meaningful and trusting relationships, both internally and externally to the organization","Understanding of the drug development process over different stages","Strong ability to connect and collaborate across different functions and background, both internally and externally to the organization","Innate ability to lead others without formal authority, with demonstrated experience guiding teams from design to implementation of strategic initiatives","Excellent communication skills, both verbal and in written","Willingness and ability to travel as business dictates, both for internal and external functions"],"x-skills-preferred":["Board certification or board eligibility in cardiovascular, neurology, critical care medicine","7 years work experience in the pharmaceutical sector in Medical Affairs, Clinical Development or related positions","Experience in the field of medical support of a product portfolio across multiple therapeutic areas","Experience in leading and participating in teams across cultures and geographies, prior experience in global medical launch planning activities"],"datePosted":"2026-04-18T22:12:29.977Z","jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Medical","industry":"Pharmaceuticals","skills":"M.D. or D.O. degree, Agility and ability to flex into different therapeutic areas, Clinically relevant work experience or independent research experience or equivalent or experience in a pharmaceutical related industry, Experience working in or deep understanding of in-hospital consideration in US healthcare delivery, Deep understanding of clinical trial design, analysis and interpretation as well as the principles of observational studies and health economics/ outcomes research, Robust understanding of regulatory and market access considerations and triangulating those to implications on clinical trial design and clinical care delivery, Proven ability for strategic planning along with operations skill and experience related to clinical research involving both single and multiple centers, Strong ability to quickly build meaningful and trusting relationships, both internally and externally to the organization, Understanding of the drug development process over different stages, Strong ability to connect and collaborate across different functions and background, both internally and externally to the organization, Innate ability to lead others without formal authority, with demonstrated experience guiding teams from design to implementation of strategic initiatives, Excellent communication skills, both verbal and in written, Willingness and ability to travel as business dictates, both for internal and external functions, Board certification or board eligibility in cardiovascular, neurology, critical care medicine, 7 years work experience in the pharmaceutical sector in Medical Affairs, Clinical Development or related positions, Experience in the field of medical support of a product portfolio across multiple therapeutic areas, Experience in leading and participating in teams across cultures and geographies, prior experience in global medical launch planning activities","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":248000,"maxValue":372000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_dd72627b-b6d"},"title":"Senior Medical Director, Stroke and Thrombosis","description":"<p>At Bayer, we&#39;re seeking a Senior Medical Director, Stroke and Thrombosis to join our US Medical Affairs team. As a critical member of the team, you will be responsible for developing and executing high-quality medical strategy for the Stroke and Thrombosis portfolio through scientific leadership, external engagement, and cross-functional influence.</p>\n<p>Your core responsibilities will include:</p>\n<p>External Scientific Leadership and Engagement: Developing and guiding local Thought Leader (TL) management strategy, serving as the US medical expert for asundexian engaging thought leaders, stroke centers, academic institutions, medical societies, and patient advocacy groups to advance scientific leadership and collaboration.</p>\n<p>Medical Strategy, Evidence and Internal Leadership: Supporting development and execution of the US medical strategy, collaborating cross-functionally with Global Evidence Generation, Clinical Development, Regulatory, Commercial and other medical affairs partners to develop and implement the Integrated Evidence Plans to improve the value proposition for the portfolio.</p>\n<p>Additional responsibilities include:</p>\n<p>Providing medical leadership for scientific communications and publications, serving as a representative on cross-functional strategy teams.</p>\n<p>Contribute to publication planning, data interpretation, and scientific dissemination in the US.</p>\n<p>Active participation and effective collaboration with global teams to assure the efficient and expedient conduct of clinical development programs, aligning them with strategic priorities that support appropriate US direction of the Life Cycle Management strategy.</p>\n<p>Support IIR, research collaborations, Phase 4, post-marketing, post-hoc analyses, real-world evidence activities (including scientific review, study and analyses design, feasibility assessment, data interpretation, and ongoing oversight).</p>\n<p>Advance implementation science initiatives.</p>\n<p>Provide medical scientific input for brand and program documents, including integrated disease area plans, medical information documents, drug safety reporting documents, etc, while ensuring design and execution of all medical activities are according to internal and external compliance guidelines.</p>\n<p>Monitor and understand implications of evolving competitor landscape to inform medical strategy.</p>\n<p>Support completion of annual New Drug Application (NDA) reports for respective brands through evaluation of clinical data and literature and provide US Medical Affairs input in the preparation of key medical documents for INDs and NDAs.</p>\n<p>To be successful in this role, you will need to possess the following qualifications:</p>\n<p>M.D. or D.O. required.</p>\n<p>Disease and therapeutic area knowledge in both existing drugs and new fields of exploration and clinically relevant work experience or independent research experience or equivalent or experience in a pharmaceutical related industry.</p>\n<p>Deep understanding of clinical study design, analysis and interpretation as well as the principles of observational studies and health economics/ outcomes research.</p>\n<p>Proven ability for strategic planning along with operations skill and experience related to clinical research involving both single and multiple centers.</p>\n<p>Strong ability to quickly build meaningful and trusting relationships, both internally and externally to the organization.</p>\n<p>Understanding of the drug development process over different stages.</p>\n<p>Strong ability to connect and collaborate across different functions and background, both internally and externally to the organization.</p>\n<p>Innate ability to lead others without formal authority, with demonstrated experience guiding teams from design to implementation of strategic initiatives.</p>\n<p>Excellent communication skills, both verbal and in written.</p>\n<p>Willingness and ability to travel as business dictates, both for internal and external functions.</p>\n<p>Preferred qualifications include:</p>\n<p>High preference to be board certified or board eligible in Vascular Neurology or Neurology or relevant specialty.</p>\n<p>7 years work experience in the pharmaceutical sector in Medical Affairs, Clinical Development or related positions.</p>\n<p>Experience in the field of medical support of a product portfolio across multiple therapeutic areas.</p>\n<p>Experience in leading and participating in teams across cultures and geographies, prior experience in global medical launch planning activities.</p>\n<p>Employees can expect to be paid a salary of between $248,000 to $372,000. Additional compensation may include a bonus or commission (if relevant). Additional benefits include health care, vision, dental, retirement, PTO, sick leave, etc..</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_dd72627b-b6d","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Bayer","sameAs":"https://talent.bayer.com","logo":"https://logos.yubhub.co/talent.bayer.com.png"},"x-apply-url":"https://talent.bayer.com/careers/job/562949976918156","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$248,000 - $372,000","x-skills-required":["Clinical study design","Analysis and interpretation","Observational studies","Health economics/outcomes research","Strategic planning","Clinical research","Team leadership","Communication","Travel"],"x-skills-preferred":["Vascular Neurology","Neurology","Medical Affairs","Clinical Development","Global medical launch planning"],"datePosted":"2026-04-18T22:11:57.818Z","jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Medical","industry":"Pharmaceuticals","skills":"Clinical study design, Analysis and interpretation, Observational studies, Health economics/outcomes research, Strategic planning, Clinical research, Team leadership, Communication, Travel, Vascular Neurology, Neurology, Medical Affairs, Clinical Development, Global medical launch planning","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":248000,"maxValue":372000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_37eeb95c-12b"},"title":"Cash Management Specialist","description":"<p>In compliance with applicable laws, HSBC is committed to employing only those who are authorised to work in the US. As a Cash Management Specialist, you will lead Cash Services, focusing on nostro monitoring, reconciliations, adjustments, and funding support. You will monitor foreign currency nostro positions via RTCM (Real Time Cash Management) to ensure accounts are adequately funded, including intraday reconciliation for afternoon currencies (AED, INR, CAD, MXN, TRY). You will review the accuracy of nostro projected balance reconciliations completed by Cash Management Kuala Lumpur, including performing T+1 (trade date plus one day) balance substantiation and sending Start of Day (SOD) communication to the FX Desk.</p>\n<p>You will perform intraday reconciliation for APAC (Asia-Pacific) currencies (AUD, CNY, HKD, JPY, SGD, THB, ZAR) and send End of Day (EOD) communication to the FX Desk. You will review and approve nostro adjustments raised by Cash Management Kuala Lumpur, and report resulting fluctuations to the FX Desk for FX funding. You will ensure accurate USD projections are provided to Markets Treasury for Broker/Dealer funding, including Fixed Income funding requirements to the BSM desk (Balance Sheet Management desk).</p>\n<p>As Team Lead for Cash Services in the Manager&#39;s absence, you will provide guidance on manual payments processing, and serve as 2nd releaser for manual payments. You will review and update end-of-day checklists. You will operate within the regulatory framework for supported Treasury markets and within controls defined by Management, FIM (Functional Instruction Manual), and relevant Operations Instruction manuals. You will make decisions within assigned accountabilities, propose and implement process improvements, and escalate decisions impacting other areas to the immediate manager and relevant department managers.</p>\n<p>Each employee must be aware of the Operational Risk scenario associated with the role and act in a manner that takes account of operational risk considerations. Each employee must ensure compliance, operational risk controls in accordance with HSBC or regulatory standards and policies; and optimise relations with regulators by addressing any issues. Each employee must promote an environment that supports diversity and reflects the HSBC brand.</p>\n<p>Observation of Internal Controls: Each employee must maintain HSBC internal control standards, including timely implementation of internal and external audit points together with any issues raised by external regulators. Any failures to comply with the above should be reflected in year-end performance assessments. Each employee must understand, follow and demonstrate compliance with all relevant internal and external rules, regulations and procedures that apply to the conduct of the business in which the jobholder is involved, specifically Internal Controls and any Compliance policy including, inter alia, the Group Compliance policy.</p>\n<p>As an HSBC employee, you will have access to tailored professional development opportunities to ensure you have the right skills for today and tomorrow. We offer a competitive pay and benefits package including a robust Wellness Hub, all in a welcoming and inclusive work environment. You will be empowered to drive HSBC&#39;s engagement with the communities we serve through an industry-leading volunteerism policy, a generous matching gift program, and a comprehensive program of immersive Sustainability and Climate Change Initiatives. You&#39;ll want to join our Employee Resource Groups as they play a central part in life at HSBC, including the development of our employees and networking inside and outside of HSBC.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_37eeb95c-12b","directApply":true,"hiringOrganization":{"@type":"Organization","name":"HSBC","sameAs":"https://portal.careers.hsbc.com","logo":"https://logos.yubhub.co/portal.careers.hsbc.com.png"},"x-apply-url":"https://portal.careers.hsbc.com/careers/job/563774610387263","x-work-arrangement":"onsite","x-experience-level":null,"x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Experience in a Treasury Operations environment","Strong attention to detail and understanding of industry best practice, regulatory requirements, and Group standards","Strong communication and organisational skills; able to work under pressure with sound judgement in a fast-paced environment; agile and adaptable","Management of Risk","Observation of Internal Controls"],"x-skills-preferred":[],"datePosted":"2026-04-18T22:09:45.248Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"New York"}},"employmentType":"FULL_TIME","occupationalCategory":"Finance","industry":"Finance","skills":"Experience in a Treasury Operations environment, Strong attention to detail and understanding of industry best practice, regulatory requirements, and Group standards, Strong communication and organisational skills; able to work under pressure with sound judgement in a fast-paced environment; agile and adaptable, Management of Risk, Observation of Internal Controls"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_c6831d5f-7e9"},"title":"Principal AI Ops Architect, GPS","description":"<p><strong>Role Overview</strong></p>\n<p>Scale&#39;s rapidly growing Global Public Sector team is focused on using AI to address critical challenges facing the public sector around the world.</p>\n<p>Our core work consists of creating custom AI applications that will impact millions of citizens, generating high-quality training data for national LLMs, and upskilling and advisory services to spread the impact of AI.</p>\n<p>As a Principal AI Ops Architect, you will design and develop the production lifecycle of full-stack AI applications, while supporting end-to-end system reliability, real-time inference observability, sovereign data orchestration, high-security software integration, and the resilient cloud infrastructure required for our international government partners.</p>\n<p>At Scale, we&#39;re not just building AI solutions,we&#39;re enabling the public sector to transform their operations and better serve citizens through cutting-edge technology.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Own the production outcome: Take full accountability for the long-term performance and reliability of AI use cases deployed across international government agencies.</li>\n<li>Ensure Full-Stack integrity: Oversee the end-to-end health of the platform, ensuring seamless integration between the AI core and all full-stack components, from APIs to UI, to maintain a responsive and production-ready environment.</li>\n<li>Scale the feedback loop: Build automated systems to monitor model performance and data drift across geographically dispersed environments, ensuring the right levels of reliability.</li>\n<li>Navigate global compliance: Manage the technical lifecycle within diverse regulatory frameworks.</li>\n<li>Incident command: Lead the response for production issues in mission-critical environments, ensuring rapid resolution and building the guardrails to prevent them from happening again.</li>\n<li>Bridge the gap: Translate deep technical performance metrics into clear insights for senior international government officials.</li>\n<li>Drive product evolution: Partner with our Engineering and ML teams to ensure the lessons learned in the field directly influence the technical architecture and decisions of future use cases.</li>\n</ul>\n<p><strong>Ideal Candidate</strong></p>\n<ul>\n<li>Experience: 6+ years in a high-impact technical role (SRE, FDE or MLOps) with experience in the public sector.</li>\n<li>Global perspective: Familiarity with international government security standards and the complexities of deploying sovereign AI.</li>\n<li>System architecture proficiency: Proven experience maintaining production-grade applications with a deep understanding of the full request lifecycle-connecting frontend/API layers to the backend and AI core.</li>\n<li>Modern AI Stack expertise: Proficiency in coding and the modern AI infrastructure, including Kubernetes, vector databases, agentic development, and LLM observability tools.</li>\n<li>Ownership: You treat every production deployment as your own. You race toward solving hard problems before the customer even sees them.</li>\n<li>Reliability: You understand that in the public sector, a model failure may be a risk to public safety or privacy.</li>\n<li>Customer communication: The ability to explain to a high-ranking official why the performance of the system has degraded and how we are fixing it.</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<ul>\n<li>Competitive salary and benefits package</li>\n<li>Opportunity to work with a leading AI company</li>\n<li>Collaborative and dynamic work environment</li>\n</ul>\n<p><strong>About Us</strong></p>\n<p>At Scale, our mission is to develop reliable AI systems for the world&#39;s most important decisions. Our products provide the high-quality data and full-stack technologies that power the world&#39;s leading models, and help enterprises and governments build, deploy, and oversee AI applications that deliver real impact.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_c6831d5f-7e9","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Scale","sameAs":"https://scale.com/","logo":"https://logos.yubhub.co/scale.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/scaleai/jobs/4671740005","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["AI","Machine Learning","Cloud Computing","Kubernetes","Vector Databases","Agentic Development","LLM Observability Tools","System Architecture","Global Government Security Standards"],"x-skills-preferred":[],"datePosted":"2026-04-18T16:02:05.605Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Doha, Qatar; London, UK"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"AI, Machine Learning, Cloud Computing, Kubernetes, Vector Databases, Agentic Development, LLM Observability Tools, System Architecture, Global Government Security Standards"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_9a42f26c-511"},"title":"Evals Engineer, Applied AI","description":"<p>We are seeking a technically rigorous and driven AI Research Engineer to join our Enterprise Evaluations team. This high-impact role is critical to our mission of delivering the industry&#39;s leading GenAI Evaluation Suite.</p>\n<p>As a hands-on contributor to the core systems that ensure the safety, reliability, and continuous improvement of LLM-powered workflows and agents for the enterprise, you will partner with Scale&#39;s Operations team and enterprise customers to translate ambiguity into structured evaluation data. This involves guiding the creation and maintenance of gold-standard human-rated datasets and expert rubrics that anchor AI evaluation systems.</p>\n<p>Your responsibilities will also include analysing feedback and collected data to identify patterns, refine evaluation frameworks, and establish iterative improvement loops that enhance the quality and relevance of human-curated assessments. You will design, research, and develop LLM-as-a-Judge autorater frameworks and AI-assisted evaluation systems, including creating models that critique, grade, and explain agent outputs.</p>\n<p>To succeed in this role, you will need a strong foundational knowledge of large language models, a passion for tackling complex evaluation challenges, and the ability to thrive in a dynamic, fast-paced research environment. You should be able to think outside the box, stay current with the latest literature in AI evaluation, and be passionate about integrating novel research ideas into our workflows to build best-in-class evaluation systems.</p>\n<p>In addition to your technical expertise, you will need excellent communication and collaboration skills, as you will work closely with cross-functional teams to drive project success.</p>\n<p>If you are a motivated and detail-oriented individual with a passion for AI research and evaluation, we encourage you to apply for this exciting opportunity.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_9a42f26c-511","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Scale AI","sameAs":"https://scale.com/","logo":"https://logos.yubhub.co/scale.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/scaleai/jobs/4629589005","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":"$216,000-$270,000 USD","x-skills-required":["Python","PyTorch","TensorFlow","Large Language Models","Generative AI","Machine Learning","Applied Research","Evaluation Infrastructure"],"x-skills-preferred":["Advanced degree in Computer Science, Machine Learning, or a related quantitative field","Published research in leading ML or AI conferences","Experience designing, building, or deploying LLM-as-a-Judge frameworks or other automated evaluation systems","Experience collaborating with operations or external teams to define high-quality human annotator guidelines","Expertise in ML research engineering, stochastic systems, observability, or LLM-powered applications for model evaluation and analysis"],"datePosted":"2026-04-18T16:01:26.736Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA; New York, NY"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, PyTorch, TensorFlow, Large Language Models, Generative AI, Machine Learning, Applied Research, Evaluation Infrastructure, Advanced degree in Computer Science, Machine Learning, or a related quantitative field, Published research in leading ML or AI conferences, Experience designing, building, or deploying LLM-as-a-Judge frameworks or other automated evaluation systems, Experience collaborating with operations or external teams to define high-quality human annotator guidelines, Expertise in ML research engineering, stochastic systems, observability, or LLM-powered applications for model evaluation and analysis","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":216000,"maxValue":270000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_af227030-57a"},"title":"Community Manager - French speaker (contract)","description":"<p>We are searching for a strategic community management and operations expert to join the International Community team and help grow outside of the US. As an integral part of the Product organization, the International Community team plays a key role in helping the team realize its mission: bringing community and belonging to everyone in the world.</p>\n<p>As the Strategic Community Specialist, you will be at the forefront of jumpstarting and fostering local communities. You will be an expert of the French market, proficient in English, and focused on proactive initiatives to build bridges with local volunteer moderators and innovate, iterate, refine, and scale our international consumer playbook.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Gain a deep understanding of the market and specialized knowledge to identify user and moderator pain points, competitors, and market opportunities.</li>\n<li>Cultivate, recruit, educate, and grow a local community of moderators (i.e., the volunteers who are developing, maintaining, and crafting the culture of local communities) and implement a conversion funnel to onboard external creators and local users to turn them into active moderators.</li>\n<li>Be a public face with our local volunteer moderators to build trust and drive engagement in local communities, mediating conflicts and helping drive solutions.</li>\n<li>Iterate, innovate, and scale the consumer growth playbook by recognizing new opportunities, suggesting bold ideas and programs, and persuading internal teams to align on new plans.</li>\n<li>Communicate quantitative and qualitative insights to optimize and improve the international consumer growth playbook.</li>\n<li>Be obsessive about achieving aggressive goals and outcomes.</li>\n<li>Run reports, build dashboards, analyze data, set KPIs, and execute impactful community initiatives: leverage data to uncover community and moderator insights that will influence the direction of your market pod.</li>\n</ul>\n<p>What We Can Expect From You:</p>\n<ul>\n<li>5+ years of community development, project management, growth, operations, or product consulting experience with a track record of community-led projects that have driven business impact.</li>\n<li>Deep knowledge and understanding French culture, including current events, politics, customs, traditions, language, topics of interest, taboos, etc.</li>\n<li>Nuanced comprehension of the digital landscape and the cultural intricacies inherent within it.</li>\n<li>Succinct and persuasive communicator: collaborate effectively across internal teams, local users, moderators, partners, content creators, influencers, etc.</li>\n<li>Results-obsessed: outstanding execution and high attention to detail</li>\n<li>Deep interest in developing and driving projects, framing issues and seeking solutions, testing hypotheses, and iterating on process improvements.</li>\n<li>Proficiency in English and based in the UK or Ireland</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_af227030-57a","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Reddit","sameAs":"https://www.redditinc.com","logo":"https://logos.yubhub.co/redditinc.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/reddit/jobs/7669372","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"contract","x-salary-range":null,"x-skills-required":["community development","project management","growth","operations","product consulting","French culture","digital landscape","communication","results-obsessed","deep interest in developing and driving projects"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:58:40.216Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Remote - United Kingdom"}},"jobLocationType":"TELECOMMUTE","employmentType":"CONTRACTOR","occupationalCategory":"Engineering","industry":"Technology","skills":"community development, project management, growth, operations, product consulting, French culture, digital landscape, communication, results-obsessed, deep interest in developing and driving projects"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_ba66dcb1-8d9"},"title":"Research Scientist, AI Controls and Monitoring","description":"<p>We&#39;re seeking a Research Scientist to join our team focused on AI Controls and Monitoring. As a key member of our team, you will design methods, systems, and experiments to ensure that advanced AI models and agents remain aligned with intended goals, even in high-stakes or adversarial environments.</p>\n<p>Your responsibilities will include developing monitoring techniques and observability methods, researching mechanisms for layered control, and designing red-team simulations to probe weaknesses in oversight and control mechanisms.</p>\n<p>To succeed in this role, you&#39;ll need a strong background in machine learning, particularly in generative AI, and at least three years of experience addressing sophisticated ML problems. You should be comfortable designing control and monitoring experiments for AI systems, building prototype systems, and quickly turning new ideas from the research literature into working prototypes.</p>\n<p>In addition to your technical expertise, you&#39;ll need strong written and verbal communication skills to operate in a cross-functional team.</p>\n<p>This role offers a competitive salary range of $216,000-$270,000 USD, depending on location and experience, as well as equity-based compensation and benefits, including comprehensive health, dental, and vision coverage, retirement benefits, a learning and development stipend, and generous PTO.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_ba66dcb1-8d9","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Scale","sameAs":"https://scale.com/","logo":"https://logos.yubhub.co/scale.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/scaleai/jobs/4675694005","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$216,000-$270,000 USD","x-skills-required":["Machine Learning","Generative AI","AI Control Protocols","AI Risk Evaluations","Runtime Monitoring","Anomaly Detection","Observability"],"x-skills-preferred":["Post-Training and RL Techniques","Scalable Oversight","Interpretability","Debate"],"datePosted":"2026-04-18T15:58:38.219Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA; New York, NY"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Machine Learning, Generative AI, AI Control Protocols, AI Risk Evaluations, Runtime Monitoring, Anomaly Detection, Observability, Post-Training and RL Techniques, Scalable Oversight, Interpretability, Debate","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":216000,"maxValue":270000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_3ef13d5b-161"},"title":"Senior Product Manager - Platform","description":"<p>About Mixpanel</p>\n<p>Mixpanel turns data clarity into innovation. Trusted by more than 29,000 companies, including Workday, Pinterest, LG, and Rakuten Viber, Mixpanel’s AI-first digital analytics help teams accelerate adoption, improve retention, and ship with confidence.</p>\n<p>As Mixpanel scales into the enterprise, the platform that underpins how customers manage, secure, and control their Mixpanel environment becomes increasingly strategic. This isn’t a role about a single feature or workflow , it’s about the foundational systems that determine whether a 500-person company trusts us with their data and their organisational complexity.</p>\n<p>Responsibilities</p>\n<p>Own the Enterprise Platform Roadmap</p>\n<p>Define and drive the roadmap for Mixpanel’s enterprise management surface , RBAC, SSO/passkeys, auth services, audit logs, admin controls, bulk user management, and settings UX</p>\n<p>Develop a principled view of what the platform must guarantee (security, compliance, reliability) vs. what can be iterative , and sequence investments accordingly</p>\n<p>Anticipate requirements from enterprise customers, internal security reviews, and compliance obligations; design ahead of the ask rather than reacting to it</p>\n<p>Build business cases for platform investments, making the indirect ROI of security and compliance work legible to leadership</p>\n<p>Establish operating cadence and governance across the two engineering tracks: align dependencies, create clear owners, and ensure predictable delivery</p>\n<p>Lead with Curiosity and Disciplined Discovery</p>\n<p>Approach every platform problem with structured curiosity: frame clear hypotheses, test assumptions with lightweight experiments, and evaluate second-order effects before committing</p>\n<p>Stay directly connected to enterprise customers and security-facing stakeholders (Sales, CS, Legal) to understand how platform limitations show up in real deals and renewals</p>\n<p>Synthesise signal from security reviews, compliance inquiries, support tickets, and customer interviews into clear, prioritised product proposals</p>\n<p>Drive Cross-Team Platform Integrity</p>\n<p>Own the consistency and reliability of shared authentication, permissions, and admin patterns across Mixpanel’s product portfolio , ensure every product team builds on a stable, well-governed foundation</p>\n<p>Partner with engineering leads to define clear APIs, auth contracts, and permission interfaces that prevent hidden dependencies and enable safe evolution</p>\n<p>Define and maintain measurement standards for platform health: uptime, auth failure rates, admin task completion, enterprise onboarding velocity</p>\n<p>Work closely with application security to ensure platform features meet enterprise security standards and pass external audits</p>\n<p>We’re Looking For</p>\n<p>Experience</p>\n<p>5+ years of PM experience owning meaningful problems end-to-end; direct experience with identity, access management, enterprise administration, or security-adjacent product work strongly preferred</p>\n<p>Track record of driving alignment and delivering outcomes across engineering, security, and GTM functions without formal authority</p>\n<p>Experience building business cases for security, compliance, or infrastructure investments where ROI is indirect or realised over long time horizons</p>\n<p>Familiarity with enterprise sales and the role platform features (SSO, RBAC, audit logs) play in closing and renewing enterprise accounts</p>\n<p>Skills</p>\n<p>Systems thinker: you understand how platform-layer decisions ripple across products, user journeys, and engineering teams, and you reason about second-order effects before committing to a direction</p>\n<p>Technical fluency: credible with engineers on identity, auth, and security infrastructure tradeoffs; can partner on architecture-level product decisions without needing to be an engineer</p>\n<p>Curious by default: you ask better questions before proposing solutions, challenge assumptions respectfully, and design lightweight experiments to validate direction</p>\n<p>Customer-obsessed even on infrastructure: you stay connected to how platform quality shows up for end users and enterprise admins, not just internal teams</p>\n<p>Clear, direct communicator: you can write a crisp one-pager, align senior stakeholders on a difficult tradeoff, and make security/compliance topics accessible to non-technical audiences</p>\n<p>AI-native: you use AI tools actively in your day-to-day product work , research, synthesis, discovery , and can point to specific ways they’ve changed how you operate</p>\n<p>Bonus Points</p>\n<p>Deep familiarity with SSO protocols (SAML, OIDC), passkeys/WebAuthn, or RBAC design patterns</p>\n<p>Experience navigating SOC 2, GDPR, or enterprise security review processes as a PM</p>\n<p>Hands-on Mixpanel user who understands the product from the inside</p>\n<p>Experience at a company navigating single-product to multiproduct platform complexity</p>\n<p>Experience writing production-grade software or working directly with auth/identity APIs</p>\n<p>Compensation</p>\n<p>The amount listed below is the total target cash compensation (TTCC) and includes base compensation and variable compensation in the form of either a company bonus or commissions. Variable compensation type is determined by your role and level. In addition to the cash compensation provided, this position is also eligible for equity consideration and other benefits including medical, vision, and dental insurance coverage.</p>\n<p>Our salary ranges are determined by role and level and are benchmarked to the SF Bay Area Technology data cut released by Radford, a global compensation database. The range displayed represents the minimum and maximum TTCC for new hire salaries for the position across all of our US locations. To stay on top of market conditions, we refresh our salary ranges twice a year so these ranges may change in the future. Within the range, individual pay is determined by experience, job-related skills, qualifications, and other factors. If you have questions about the specific range, your recruiter can share this information.</p>\n<p>Mixpanel Compensation Range $218,500-$250,000 USD</p>\n<p>Benefits and Perks</p>\n<p>Comprehensive Medical, Vision, and Dental Care</p>\n<p>Mental Wellness Benefit</p>\n<p>Generous Vacation Policy</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_3ef13d5b-161","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Mixpanel","sameAs":"https://mixpanel.com","logo":"https://logos.yubhub.co/mixpanel.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/mixpanel/jobs/7779504","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$218,500-$250,000 USD","x-skills-required":["Systems thinker","Technical fluency","Curious by default","Customer-obsessed","Clear, direct communicator","AI-native"],"x-skills-preferred":["Deep familiarity with SSO protocols (SAML, OIDC), passkeys/WebAuthn, or RBAC design patterns","Experience navigating SOC 2, GDPR, or enterprise security review processes as a PM","Hands-on Mixpanel user who understands the product from the inside","Experience at a company navigating single-product to multiproduct platform complexity","Experience writing production-grade software or working directly with auth/identity APIs"],"datePosted":"2026-04-18T15:58:26.769Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, US (Hybrid)"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Systems thinker, Technical fluency, Curious by default, Customer-obsessed, Clear, direct communicator, AI-native, Deep familiarity with SSO protocols (SAML, OIDC), passkeys/WebAuthn, or RBAC design patterns, Experience navigating SOC 2, GDPR, or enterprise security review processes as a PM, Hands-on Mixpanel user who understands the product from the inside, Experience at a company navigating single-product to multiproduct platform complexity, Experience writing production-grade software or working directly with auth/identity APIs","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":218500,"maxValue":250000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_262aa1cb-01c"},"title":"Head of Corporate Engineering","description":"<p>As Head of Corporate Engineering, you will be responsible for Enterprise engineering and operations globally. You will be responsible for building and managing a highly technical enterprise engineering team, developing first principled-based strategies, and enabling strong enterprise security.</p>\n<p>Key responsibilities include engineering, securing and optimizing cloud infrastructure, Identity and Access Management, Endpoints, Collaboration tools, and ensuring compliance with SOX, PCI DSS, and FedRAMP compliance. The Head of Corporate Engineering will work closely with R&amp;D on managing engineering tools like Jira, Confluence, and GitHub, driving efficient adoption and integration.</p>\n<p>Strong technical and influencing leadership principles coupled with the ability to manage a complex, scaling, and fast-moving enterprise environment are essential. This role reports directly to the Vice President, Infrastructure and Operations</p>\n<p>Responsibilities:</p>\n<p>In this influential role, you will be responsible for:</p>\n<p>Securing the Enterprise: Working closely with Enterprise Security organization to harden and secure our cloud environments, secret management, collaboration tools, endpoints, SaaS environments, IAM tools, and more. Success measured in continuous improvement of our enterprise security hardening standards</p>\n<p>Building and Scaling our Cloud Infrastructure: Your team will be responsible for establishing and implementing enterprise cloud infrastructure including establishing Infrastructure Provisioning, SRE services, 24/7 on-call support, Infra as Code, observability, and more. In addition, you will be responsible for managing cloud budgets, vendor management, and establishing cost optimization initiatives. Success is measured in increased developer velocity while securing &amp; scaling the cloud infrastructure</p>\n<p>Engineering Tooling: Partner closely with R&amp;D teams to establish policies, configurations, run-books, SLAs, hardening, scalability and availability of engineering tools like Github, Jira, Atlassian, and more</p>\n<p>Endpoint Engineering: Enable extreme automation for endpoint management with zero-touch deployment, observability (synthetic and real-time), provisioning/de-provisioning, and establishing standards / SLAs. Enforce security policies, configure &amp; manage security settings and ensure compliance across all endpoints and mobile devices. Success is measured in terms of end-user satisfaction and % of manual touch</p>\n<p>Collaboration Management: Ensure we provide world class tools to our employees to be extremely productive and collaborative. This would include but not be limited to managing and scaling internal workplace products like Gmail, Slack, Atlassian, Moveworks, Glean, and more. Success is measured by user satisfaction</p>\n<p>Identity &amp; Access Management: Manage the IAM team from IAM implementation, access standards enforcement, SLA management, and compliance to various standards like FedRAMP, IL5, PCI, and more. Included are both internal and external identity providers to be managed. Success is measured by compliance, Identity governance, and availability</p>\n<p>Desired Success Outcomes</p>\n<p>A high-performing enterprise engineering team capable of handling complex technical projects with agility and high quality</p>\n<p>Well defined cloud strategy ensuring the stability, scalability, and security of cloud infrastructure. Overhaul of current processes and workflows to address inefficiencies and increase team velocity</p>\n<p>Robust endpoint security with Implementation of comprehensive security measures for all endpoints, including Mac, Windows, and mobile devices</p>\n<p>Deliver high-quality employee experience with productivity tools (Gmail, Slack, Atlassian tools, Moveworks, GitHub) with a robust forward-looking roadmap</p>\n<p>Efficient operational support for Tier 3 IT services with minimized production incidents. Implementation of robust incident and change management processes with mature operational practice</p>\n<p>Efficient and mature processes for system integrations related to Mergers and Acquisitions (M&amp;As), ensuring timely smooth transitions during M&amp;A integrations</p>\n<p>Development and implementation of automation tools and frameworks, Identification of automation opportunities to reduce manual toil and improve accuracy</p>\n<p>Qualifications:</p>\n<p>10 years of experience managing Cloud infrastructure at large enterprises. Extensive experience managing public cloud implementations in AWS. Experience with GCP and Azure will be a plus</p>\n<p>In-depth understanding of Cloud native technologies to lead and guide the team. Must have hands-on experience in troubleshooting and debugging issues in production environments</p>\n<p>Working experience in managing DevOps/SRE practices OKRs (Objective and Key Results), Agile development, Infra-as-code, SRE (Site Reliability Engineering), DevOps measurement such as DORA KPIs,</p>\n<p>In-depth understanding of each collaboration tool&#39;s features, functionalities, and configurations (e.g., Gmail for email, Slack for messaging). Ability to identify and integrate and optimize the use of various tools for seamless collaboration (e.g., connecting Jira with GitHub for Dev metrics)</p>\n<p>Experience leading a team of senior professionals working asynchronously in a remote, distributed team. Strong communication skills, with clear verbal communication and written communication skills</p>\n<p>Collaborative style: partners well with cross-functional teams to solve hard problems and to complete complex deliverables with quality and business outcomes</p>\n<p>Provide mentorship and guidance to team members to ensure that their skills and knowledge are kept up-to-date</p>\n<p>Pay Range Transparency Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role is listed below and represents the expected salary range for non-commissionable roles or on-target earnings for commissionable roles. Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location. Based on the factors above, Databricks anticipates utilizing the full width of the range. The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above. For more information regarding which range your location is in visit our page here.</p>\n<p>Zone 1 Pay Range $265,000-$364,300 USD</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_262aa1cb-01c","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/7293607002","x-work-arrangement":"remote","x-experience-level":"executive","x-job-type":"full-time","x-salary-range":"$265,000-$364,300 USD","x-skills-required":["Cloud infrastructure","Identity and Access Management","Endpoint security","Collaboration tools","DevOps","Site Reliability Engineering","Agile development","Infrastructure as Code","Observability","Automation","Scripting languages","Cloud native technologies","Public cloud implementations","AWS","GCP","Azure"],"x-skills-preferred":["Jira","Confluence","GitHub","Atlassian","Moveworks","Glean","Slack","Gmail","Microsoft Office"],"datePosted":"2026-04-18T15:58:26.589Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, California"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Cloud infrastructure, Identity and Access Management, Endpoint security, Collaboration tools, DevOps, Site Reliability Engineering, Agile development, Infrastructure as Code, Observability, Automation, Scripting languages, Cloud native technologies, Public cloud implementations, AWS, GCP, Azure, Jira, Confluence, GitHub, Atlassian, Moveworks, Glean, Slack, Gmail, Microsoft Office","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":265000,"maxValue":364300,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_3a031f82-087"},"title":"Senior Engineering Manager - Team Billing","description":"<p>Join Intercom as a Senior Engineering Manager to lead Team Billing , the group that owns the core systems powering how customers buy Intercom, how we meter usage, bill accurately, recognise revenue, and keep money flowing.</p>\n<p>This is a high-impact, high-visibility role at the heart of Growth Engineering, partnering closely with Pricing &amp; Packaging, Order, Self-Serve, Sales, Finance, Enterprise Systems, Analytics, and Billing Operations.</p>\n<p>You&#39;ll drive clarity, focus, and execution across a mission-critical platform while fostering a culture of ownership, accountability, and incredibly high standards.</p>\n<p>Key responsibilities include:</p>\n<ul>\n<li>Leading and scaling a team building and enhancing Intercom&#39;s billing, subscription, invoicing, and metering capabilities , the backbone of revenue and monetisation experiences.</li>\n</ul>\n<ul>\n<li>Owning and improving the reliability and operational excellence of Billing systems: on-call quality, incident response, observability, and product health standards.</li>\n</ul>\n<ul>\n<li>Partnering with Senior and Staff Engineers to shape and execute the technical strategy across subscription management, metering, invoicing, and integrations (e.g., Stripe Billing) , balancing near-term needs with long-term platform evolution.</li>\n</ul>\n<ul>\n<li>Collaborating deeply with Product, Design, Sales, Finance, Enterprise Systems, Analytics, and Billing Ops to prioritise the roadmap, close operational gaps, and deliver measurable business outcomes each quarter.</li>\n</ul>\n<ul>\n<li>Cultivating and managing trusted relationships with partner leaders in Finance, Enterprise Systems, and Analytics to ensure alignment on data flows, compliance, reconciliation, and business reporting needs.</li>\n</ul>\n<ul>\n<li>Driving migrations and modernisation work where needed (e.g., Stripe-first capabilities, systems parity and improvements), ensuring safe change management and robust downstream data flows.</li>\n</ul>\n<ul>\n<li>Bringing clarity and alignment to priorities, tradeoffs, and timelines; setting a high bar for planning, testing, and end-to-end quality, especially for revenue-impacting launches.</li>\n</ul>\n<ul>\n<li>Developing and retaining top talent through coaching, clear expectations, and effective delegation; scaling yourself via strong tech-lead partnerships and “coach-and-delegate” leadership.</li>\n</ul>\n<ul>\n<li>Promoting a culture of ownership, accountability, and incredibly high standards , moving quickly, communicating crisply, and celebrating wins.</li>\n</ul>\n<p>Requirements include:</p>\n<ul>\n<li>5+ years managing software engineering teams building and shipping customer-facing or revenue-impacting systems (e.g., billing, payments, pricing, subscriptions).</li>\n</ul>\n<ul>\n<li>Strong technical leadership: comfortable diving into architecture, debugging complex systems, reviewing designs, and making pragmatic tradeoffs to ship safely and fast.</li>\n</ul>\n<ul>\n<li>Proven ability to lead a cross-functional, full-stack team through planning and delivery with clear ownership of business outcomes and product health metrics.</li>\n</ul>\n<ul>\n<li>Excellent product sense and customer empathy , you translate ambiguous requirements into clear scopes, milestones, and measurable impact.</li>\n</ul>\n<ul>\n<li>Skilled communicator who drives alignment across partner teams (Sales, Finance, Enterprise Systems, Analytics, Product, Design) and keeps stakeholders informed and unblocked.</li>\n</ul>\n<ul>\n<li>Track record of cultivating and managing senior relationships across Finance, Enterprise Systems, and Analytics to land durable solutions and accurate downstream reporting.</li>\n</ul>\n<ul>\n<li>Relentless about outcomes , you identify the highest-leverage problems, remove roadblocks, and hold the bar on quality without slipping schedules.</li>\n</ul>\n<ul>\n<li>AI-first mindset with a high bar for excellence: fluent in using AI tools to accelerate planning, execution, quality, and communication , and to inspire adoption across the team.</li>\n</ul>\n<p>Bonus skills and attributes include:</p>\n<ul>\n<li>Experience with Stripe Billing or similar billing/subscription platforms; familiarity with usage metering and invoicing pipelines.</li>\n</ul>\n<ul>\n<li>Background in customer-facing SaaS and/or scale-up environments operating at high velocity.</li>\n</ul>\n<ul>\n<li>Experience leading operational excellence initiatives (on-call, incidents, observability) for revenue-critical systems.</li>\n</ul>\n<p>Success looks like:</p>\n<ul>\n<li>30 days: You understand the domain, architecture, on-call posture, partner landscape, and active initiatives; you&#39;ve built trust with engineers and partner leads and stabilised any urgent operational issues.</li>\n</ul>\n<ul>\n<li>60 days: Clear, realistic roadmap and execution plan across platform, product health, and partner asks; visible improvements in on-call rigor and update cadence; risks surfaced with mitigation plans co-owned with Finance, Enterprise Systems, and Analytics.</li>\n</ul>\n<ul>\n<li>90 days: Shipped at least one meaningful capability or reliability improvement; measurable progress on top business outcomes; staffing/skills gaps identified with a concrete plan to address them.</li>\n</ul>\n<p>Team and domain:</p>\n<p>Mission: Bill customers reliably and accurately, and provide core monetisation and operational capabilities for Intercom.</p>\n<p>Partners: Pricing &amp; Packaging, Order, Self-Serve, Sales, Finance, Enterprise Systems, Analytics, Billing Ops, Data/Analytics.</p>\n<p>Key areas: Subscription management, usage metering (e.g., Fin features), invoicing, revenue recognition data flows, admin tools, and ops workflows.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_3a031f82-087","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Intercom","sameAs":"https://www.intercom.com/","logo":"https://logos.yubhub.co/intercom.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/intercom/jobs/7610471","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["5+ years managing software engineering teams building and shipping customer-facing or revenue-impacting systems","Strong technical leadership","Proven ability to lead a cross-functional, full-stack team through planning and delivery","Excellent product sense and customer empathy","Skilled communicator"],"x-skills-preferred":["Experience with Stripe Billing or similar billing/subscription platforms","Background in customer-facing SaaS and/or scale-up environments operating at high velocity","Experience leading operational excellence initiatives (on-call, incidents, observability)"],"datePosted":"2026-04-18T15:58:23.746Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Dublin, Ireland"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"5+ years managing software engineering teams building and shipping customer-facing or revenue-impacting systems, Strong technical leadership, Proven ability to lead a cross-functional, full-stack team through planning and delivery, Excellent product sense and customer empathy, Skilled communicator, Experience with Stripe Billing or similar billing/subscription platforms, Background in customer-facing SaaS and/or scale-up environments operating at high velocity, Experience leading operational excellence initiatives (on-call, incidents, observability)"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_ded9d7ff-8aa"},"title":"Senior Engineering Manager, Data Streaming Services (Auth0)","description":"<p>Secure Every Identity, from AI to Human\\n\\nIdentity is the key to unlocking the potential of AI. As a Senior Engineering Manager, Data Streaming Services at Auth0, you will lead the evolution of our streaming data backbone across a multi-cloud footprint. You will oversee multiple engineering teams dedicated to making data streaming seamless, reliable, and high-performance.\\n\\nThis is a &quot;manager of managers&quot; role requiring a blend of strategic foresight, execution rigor, and technical grit. You will set the vision for our streaming services, mentor high-performing teams, and take accountability for our service uptime guarantees.\\n\\n<strong>Key Responsibilities:</strong>\\n\\n<em> Lead a world-class team of teams. Oversee data streaming infrastructure and services that power our global platform across AWS and Azure.\\n</em> Own roadmap and execution. Partner with product and stakeholder teams to define the team&#39;s strategy and prioritized roadmap.\\n<em> Drive engineering excellence. Set high standards of quality, reliability, and operational robustness, championing best practices in software development, from code reviews to observability and incident management.\\n</em> Lead an automation-first culture. Reduce operational friction and ensure infrastructure is self-healing and code-defined. Draw efficiency from AI-assisted development.\\n<em> Act as a technical leader. Lead response on incidents for services under ownership and help teams navigate complex distributed systems failures.\\n\\n<strong>Requirements:</strong>\\n\\n</em> Proven engineering leadership, building and leading teams of teams. Experience coaching Staff+ engineers and engineering managers.\\n<em> Strong technical and architectural acumen. Background in building scalable, distributed systems. Comfortable participating in and guiding technical discussions.\\n</em> Strong project management skills. Expertise in creating technical roadmaps, prioritizing effectively in an agile environment, and managing complex project dependencies.\\n<em> Collaborative leadership style, adapted to remote ways of working. Excellent written and verbal communication skills to build strong relationships with stakeholders and inspire others.\\n\\n<strong>Bonus Points:</strong>\\n\\n</em> Experience developing data-intensive applications in a modern programming language such as go, node.js, or Java.\\n<em> Experience with databases such as PostgreSQL and MongoDB.\\n</em> Experience with distributed streaming platforms like Kafka.\\n<em> Familiarity with concepts in the IAM (Identity and Access Management) domain.\\n</em> Experience with cloud providers (AWS, Azure), container technologies such as Kubernetes and Docker, and observability tools such as Datadog.\\n* Experience building reliable, high-availability platforms for enterprise SaaS applications.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_ded9d7ff-8aa","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Auth0","sameAs":"https://auth0.com/","logo":"https://logos.yubhub.co/auth0.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/okta/jobs/7719329","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$207,000-$284,000 USD","x-skills-required":["engineering leadership","technical and architectural acumen","project management skills","collaborative leadership style","data-intensive applications","databases","distributed streaming platforms","IAM domain","cloud providers","container technologies","observability tools"],"x-skills-preferred":["go","node.js","Java","PostgreSQL","MongoDB","Kafka","AWS","Azure","Kubernetes","Docker","Datadog"],"datePosted":"2026-04-18T15:58:08.018Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Chicago, Illinois; New York, New York; Washington, DC"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"engineering leadership, technical and architectural acumen, project management skills, collaborative leadership style, data-intensive applications, databases, distributed streaming platforms, IAM domain, cloud providers, container technologies, observability tools, go, node.js, Java, PostgreSQL, MongoDB, Kafka, AWS, Azure, Kubernetes, Docker, Datadog","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":207000,"maxValue":284000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_5717691a-508"},"title":"Staff Infrastructure Software Engineer, Enterprise AI","description":"<p>We are looking for a Staff Infrastructure Software Engineer to act as a primary technical lead, engineering the &#39;paved road&#39; for our knowledge retrieval and inference engines. You will define the deployment standards for Agentic workflows at scale, bridging the gap between complex AI orchestration and world-class infrastructure.</p>\n<p>The ideal candidate thrives in a fast-paced environment, has a passion for both deep technical work and mentoring, and is capable of setting a long-term technical strategy for a critical domain while maintaining a strong, hands-on delivery focus.</p>\n<p>You will architect and implement solutions across multiple cloud providers (GCP, Azure, AWS) for customers in diverse, highly-regulated industries like healthcare, telecom, finance, and retail.</p>\n<p>Key responsibilities include:</p>\n<ul>\n<li>Architecting multi-cloud systems and abstractions to allow the SGP platform to run on top of existing Cloud providers.</li>\n<li>Using our own data and AI platform to analyse build and test logs and metrics to identify areas for improvement.</li>\n<li>Defining the architectural patterns for our multi-cloud infrastructure to support secure, reliable, and scalable Agentic workflows for enterprise customers.</li>\n<li>Enhancing engineering and infrastructure efficiency, reliability, accuracy, and response times, including CI/CD processes, test frameworks, data quality assurance, end-to-end reconciliation, and anomaly detection.</li>\n<li>Collaborating with platform and product teams to develop and implement innovative infrastructure that scales to meet evolving needs.</li>\n<li>Designing and championing highly scalable, reliable, and low-latency infrastructure and frameworks for building, orchestrating, and evaluating multi-agent systems at enterprise scale.</li>\n<li>Leading the infrastructure roadmap with a strong focus on compliance, privacy, and security standards, including designing change management and data isolation strategies.</li>\n<li>Owning the development and maintenance of our best-in-class Agentic observability platform (logging, metrics, tracing, and analytics) to proactively ensure system health and enable rapid incident response.</li>\n<li>Driving developer efficiency by building automated tooling and championing Infrastructure-as-Code (IaC) paradigms throughout the engineering organization to improve workflows and operational efficiency.</li>\n</ul>\n<p>The ideal candidate has proven experience in a senior role, with 5+ years of full-time software engineering experience, and a deep understanding of modern infrastructure practices, including CI/CD, IaC (e.g., Terraform, Helm Charts), container orchestration (e.g., Kubernetes) and observability platforms (e.g., Datadog, Prometheus, Grafana).</p>\n<p>Extensive experience with at least one major cloud provider (AWS, Azure, or GCP) and strong knowledge of security and compliance in enterprise environments, with a focus on access management, data isolation, and customer-specific VPC setups is required.</p>\n<p>Proficiency in Python or JavaScript/TypeScript, and SQL is also necessary.</p>\n<p>Bonus points for hands-on experience and a passion for working with Agents, LLMs, vector databases, and other emerging AI technologies.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_5717691a-508","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Scale","sameAs":"https://scale.com/","logo":"https://logos.yubhub.co/scale.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/scaleai/jobs/4599700005","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$216,200-$310,500 USD","x-skills-required":["Cloud computing","Infrastructure as Code","Container orchestration","Observability platforms","Security and compliance","Access management","Data isolation","Customer-specific VPC setups","Python","JavaScript/TypeScript","SQL"],"x-skills-preferred":["Agents","LLMs","Vector databases","Emerging AI technologies"],"datePosted":"2026-04-18T15:58:05.354Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"New York, NY; San Francisco, CA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Cloud computing, Infrastructure as Code, Container orchestration, Observability platforms, Security and compliance, Access management, Data isolation, Customer-specific VPC setups, Python, JavaScript/TypeScript, SQL, Agents, LLMs, Vector databases, Emerging AI technologies","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":216200,"maxValue":310500,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_e58d4e9e-165"},"title":"Account Executive","description":"<p>We are looking for a high-energy Enterprise Account Executive to drive net-new revenue and expansion within strategic Enterprise accounts. You will be the owner of a defined territory where you will build your own pipeline, tell the Elastic Search AI story, and close complex, multi-stakeholder deals in a consumption-based model.</p>\n<p>As an Enterprise Account Executive, you will be responsible for developing and executing a proactive outbound cadence that generates ≥50% of your booked opportunities. You will uncover pain, business impact, budget, and decision criteria using frameworks like MEDDPICC so you chase only the highest-confidence deals. You will craft and deliver tailored narratives and live demos that map Elastic&#39;s Search, Observability, and Security capabilities to measurable business outcomes.</p>\n<p>You will collaborate with customers to build formal close plans and keep your CRM up-to-date, maintaining ≥90% forecast accuracy within ±10%. You will lead high-stakes contract and pricing discussions,defend your value, structure give/get trades, and land multi-year consumption commitments. You will position Elastic as the Search AI platform of choice by speaking fluently about cloud economics, usage-based pricing, and modern data architectures.</p>\n<p>You will work hand-in-glove with Solutions Architects, Customer Success, Marketing, and RevOps to accelerate deals and drive exceptional customer outcomes.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_e58d4e9e-165","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Elastic","sameAs":"https://www.elastic.co/","logo":"https://logos.yubhub.co/elastic.co.png"},"x-apply-url":"https://job-boards.greenhouse.io/elastic/jobs/7505982","x-work-arrangement":"remote","x-experience-level":"executive","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["SaaS quota-carrying success","Expert discovery and qualification skills","Compelling value storytelling","Strong negotiation chops","Technical and cloud fluency"],"x-skills-preferred":["Prior experience at an open-source or developer-centric infrastructure company","Familiarity with observability (logs, metrics, traces) or security analytics (SIEM/XDR) use cases"],"datePosted":"2026-04-18T15:58:00.452Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"United Kingdom"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Sales","industry":"Technology","skills":"SaaS quota-carrying success, Expert discovery and qualification skills, Compelling value storytelling, Strong negotiation chops, Technical and cloud fluency, Prior experience at an open-source or developer-centric infrastructure company, Familiarity with observability (logs, metrics, traces) or security analytics (SIEM/XDR) use cases"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_9c235fca-4e3"},"title":"Senior/Staff Machine Learning Engineer, General Agents, Enterprise GenAI","description":"<p>As a Senior/Staff Machine Learning Engineer on the General Agents team, you&#39;ll play a critical role in designing, building, and deploying production-ready AI agents that solve high-impact enterprise problems.</p>\n<p>You will work across the full agent lifecycle,from model and system design to evaluation, deployment, and iteration,bridging cutting-edge agentic techniques with the constraints and requirements of real customer environments.</p>\n<p>Key responsibilities include:</p>\n<ul>\n<li>Design and implement end-to-end agent systems that combine LLM reasoning, tool use, memory, and control logic to solve recurring enterprise use cases.</li>\n<li>Build scalable, reliable agent architectures that can be deployed across many customers with varying data, tools, and constraints.</li>\n<li>Develop evaluation frameworks, datasets, environments, and metrics to measure agent performance, reliability, and business impact in production settings.</li>\n<li>Collaborate closely with product managers, customers, data annotators, and other engineering teams to translate enterprise requirements into robust agent designs.</li>\n<li>Productionize frontier agent techniques (e.g., planning, multi-step reasoning and tool-use, multi-agent patterns) into maintainable, observable systems.</li>\n<li>Own deployment, monitoring, and iteration of agent systems, including failure analysis and continuous improvement based on real-world usage.</li>\n<li>Contribute to technical direction and architectural decisions for general agent development best practices and methods, with increasing scope and leadership at the Staff level.</li>\n</ul>\n<p>Ideal candidates will have:</p>\n<ul>\n<li>5+ years of experience building and deploying machine learning or AI systems for real-world, production use cases.</li>\n<li>Strong engineering fundamentals, supported by a Bachelor’s and/or Master’s degree in Computer Science, Machine Learning, AI, or equivalent practical experience.</li>\n<li>Deep understanding of modern LLMs, prompt-, context-, and system-level optimization, and agentic system design.</li>\n<li>Proven proficiency in Python, including writing production-quality, testable, and maintainable code.</li>\n<li>Experience building systems that integrate models with external tools, APIs, databases, and services.</li>\n<li>Ability to operate in ambiguous problem spaces, balancing research-driven approaches with pragmatic product constraints.</li>\n<li>Strong communication skills and comfort working in customer-facing or cross-functional environments.</li>\n</ul>\n<p>Nice-to-haves include hands-on experience building AI agents using modern generative AI stacks, experience with agent frameworks, orchestration layers, or workflow systems, familiarity with evaluation, monitoring, and observability for LLM-powered systems in production, and experience deploying ML systems in cloud environments and operating them at scale.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_9c235fca-4e3","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Scale AI","sameAs":"https://scale.com/","logo":"https://logos.yubhub.co/scale.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/scaleai/jobs/4658162005","x-work-arrangement":"hybrid","x-experience-level":"senior|staff","x-job-type":"full-time","x-salary-range":"$264,800-$331,000 USD","x-skills-required":["Machine Learning","Artificial Intelligence","Python","LLMs","Agentic System Design"],"x-skills-preferred":["Generative AI Stacks","Agent Frameworks","Orchestration Layers","Workflow Systems","Evaluation, Monitoring, and Observability"],"datePosted":"2026-04-18T15:57:55.592Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA; New York, NY"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Machine Learning, Artificial Intelligence, Python, LLMs, Agentic System Design, Generative AI Stacks, Agent Frameworks, Orchestration Layers, Workflow Systems, Evaluation, Monitoring, and Observability","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":264800,"maxValue":331000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_70e2591f-d7d"},"title":"Technical Program Manager, Infrastructure","description":"<p>As a Technical Program Manager for Infrastructure, you&#39;ll work across multiple infrastructure domains to coordinate complex programs that have broad organisational impact. You&#39;ll be solving novel scaling challenges at the frontier of what&#39;s possible, all while maintaining the security and reliability our mission demands.</p>\n<p>Developer Productivity &amp; Tooling</p>\n<ul>\n<li>Drive cross-functional programs to improve developer environments, CI/CD infrastructure, and release processes that enable rapid innovation while maintaining high security standards</li>\n</ul>\n<ul>\n<li>Coordinate large-scale migrations and platform modernization efforts across engineering teams</li>\n</ul>\n<ul>\n<li>Partner with teams to measure and improve developer productivity metrics, identifying bottlenecks and driving systematic improvements</li>\n</ul>\n<ul>\n<li>Lead initiatives to integrate AI tools into development workflows, helping Anthropic be at the forefront of AI-assisted research and engineering</li>\n</ul>\n<p>Infrastructure Reliability &amp; Operations</p>\n<ul>\n<li>Drive programs to establish and achieve reliability targets across training infrastructure and production services</li>\n</ul>\n<ul>\n<li>Coordinate incident response improvements, post-mortem processes, and on-call rotations that help teams operate effectively</li>\n</ul>\n<ul>\n<li>Establish metrics and dashboards to track infrastructure health, capacity utilisation, and operational excellence</li>\n</ul>\n<p>Cross-functional Coordination</p>\n<ul>\n<li>Serve as the critical bridge between infrastructure teams, research, and product, translating technical complexities into clear updates for a variety of audiences</li>\n</ul>\n<ul>\n<li>Consult with stakeholders to deeply understand infrastructure, data, and compute needs, identifying solutions to support frontier research and product development</li>\n</ul>\n<ul>\n<li>Drive alignment on priorities and timelines across teams with competing constraints</li>\n</ul>\n<p>You&#39;ll be a good fit if you have 5+ years of technical program management experience, with a track record of successfully delivering complex infrastructure programs in ML/AI systems or large-scale distributed systems. You&#39;ll also need a deep technical understanding of infrastructure systems, strong stakeholder management skills, and the ability to navigate competing priorities-confirming data-driven technical decisions.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_70e2591f-d7d","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com/","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5111783008","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$290,000-$365,000 USD","x-skills-required":["Kubernetes","Cloud platforms (AWS, GCP, Azure)","ML infrastructure (GPU/TPU/Trainium clusters)","Developer productivity initiatives","CI/CD systems","Infrastructure scaling"],"x-skills-preferred":["Observability tooling and practices","AI tools to improve engineering productivity","Research teams and translating their needs into concrete technical requirements"],"datePosted":"2026-04-18T15:57:52.097Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA | New York City, NY | Seattle, WA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Kubernetes, Cloud platforms (AWS, GCP, Azure), ML infrastructure (GPU/TPU/Trainium clusters), Developer productivity initiatives, CI/CD systems, Infrastructure scaling, Observability tooling and practices, AI tools to improve engineering productivity, Research teams and translating their needs into concrete technical requirements","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":290000,"maxValue":365000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_f1950023-ef7"},"title":"Senior Engineering Manager, Activation","description":"<p>Why join us</p>\n<p>Brex is the intelligent finance platform that enables companies to spend smarter and move faster in more than 200 markets. By combining global corporate cards and banking with intuitive spend management, bill pay, and travel software, Brex enables founders and finance teams to accelerate operations, gain real-time visibility, and control spend effortlessly.</p>\n<p>Brex’s AI-native automation and world-class service eliminate manual expense and accounting tasks for customers so they can focus on what matters most. Tens of thousands of the world&#39;s best companies run on Brex, including DoorDash, Coinbase, Robinhood, Zoom, Plaid, Reddit, and SeatGeek.</p>\n<p>Working at Brex allows you to push your limits, challenge the status quo, and collaborate with some of the brightest minds in the industry. We’re committed to building a diverse team and inclusive culture and believe your potential should only be limited by how big you can dream. We make this a reality by empowering you with the tools, resources, and support you need to grow your career.</p>\n<p>Engineering</p>\n<p>Engineering at Brex is about building systems that scale with speed and intention. Our teams span Software, Data, Security, and IT, and operate with high autonomy and deep collaboration. We tackle hard technical problems, own our outcomes, and push for excellence at every level , from architecture to deployment. It’s an environment where engineering is a craft, and builders become leaders.</p>\n<p>What you’ll do</p>\n<p>You will lead an engineering group focused on building the systems and product experiences that power customer activation at Brex, including onboarding, account setup, verifications, integrations, and implementation workflows that help customers realize value quickly. This role requires strategic thinking, operational excellence, technical leadership, and a deep passion for delivering frictionless, AI-enhanced customer journeys.</p>\n<p>The ideal candidate is a seasoned engineering leader with experience scaling user-facing onboarding systems, delivering high-quality product experiences, and partnering deeply across Product, Design, Operations, and GTM teams.</p>\n<p>Where you’ll work</p>\n<p>This role will be based in our New York office. We are a hybrid environment that combines the energy and connections of being in the office with the benefits and flexibility of working from home. We currently require a minimum of two coordinated days in the office per week, Wednesday and Thursday. Starting February 2, 2026, we will require three days per week in office - Monday, Wednesday and Thursday. As a perk, we also have up to four weeks per year of fully remote work!</p>\n<p>Responsibilities</p>\n<ul>\n<li>Take an active role in driving business and product strategies, championing a seamless, intuitive, and efficient onboarding and implementation experience.</li>\n</ul>\n<ul>\n<li>Collaborate with cross-functional partners across Product, Design, Operations, and Sales to define priorities and deliver delightful customer activation experiences.</li>\n</ul>\n<ul>\n<li>Leverage AI to reimagine and automate onboarding and implementation workflows, improving speed, personalization, and operational leverage.</li>\n</ul>\n<ul>\n<li>Drive execution of the Activation roadmap, ensuring timely, high-quality delivery of systems and features that help customers activate and realize value.</li>\n</ul>\n<ul>\n<li>Lead and manage multiple teams of engineers, including hiring, mentoring, performance management, and establishing strong technical direction.</li>\n</ul>\n<ul>\n<li>Build systems that integrate identity verification, KYC and compliance workflows, customer data ingestion, and implementation tooling in a scalable and reliable manner.</li>\n</ul>\n<ul>\n<li>Drive continuous improvement in engineering processes, technical architecture, and product quality.</li>\n</ul>\n<ul>\n<li>Foster a culture of innovation, collaboration, accountability, and customer obsession across the team.</li>\n</ul>\n<p>Requirements</p>\n<ul>\n<li>Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.</li>\n</ul>\n<ul>\n<li>Strong technical background and understanding of software development principles.</li>\n</ul>\n<ul>\n<li>Expertise leading full-stack engineering teams delivering end-to-end product experiences.</li>\n</ul>\n<ul>\n<li>Demonstrated track record of shipping customer-facing features across multiple release cycles.</li>\n</ul>\n<ul>\n<li>3+ years of experience managing or leading multiple technical teams in a high-growth environment.</li>\n</ul>\n<ul>\n<li>Regularly works with cross-functional partners (e.g. Product, Design, Operations, Sales) and excels in driving alignment across stakeholders.</li>\n</ul>\n<ul>\n<li>Experience building systems related to onboarding, implementation, identity, workflow automation, customer lifecycle products, or other customer facing experiences.</li>\n</ul>\n<ul>\n<li>Data-driven mindset with the ability to evaluate impact, measure funnel performance, and optimize activation metrics.</li>\n</ul>\n<ul>\n<li>Track record building AI-powered product experiences, including LLM-driven automation and personalization.</li>\n</ul>\n<p>Bonus points</p>\n<ul>\n<li>Experience with data platforms such as Snowflake, Hex, or similar.</li>\n</ul>\n<ul>\n<li>You have started your own technology venture or were an early technical founder/employee. We value entrepreneurial spirit &amp; scrappiness!</li>\n</ul>\n<ul>\n<li>You are a champion for the customer and constantly put yourself in their shoes to create intuitive, frictionless experiences.</li>\n</ul>\n<p>Compensation</p>\n<p>The expected salary range for this role is $300,000 - $375,000. However, the starting base pay will depend on a number of factors including the candidate’s location, skills, experience, market demands, and internal pay parity. Depending on the position offered, equity and other forms of compensation may be provided as part of a total compensation package.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_f1950023-ef7","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Brex","sameAs":"https://brex.com/","logo":"https://logos.yubhub.co/brex.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/brex/jobs/8330492002","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$300,000 - $375,000","x-skills-required":["Technical leadership","Software development principles","Full-stack engineering","Customer-facing features","Data-driven mindset","AI-powered product experiences","LLM-driven automation","Personalization"],"x-skills-preferred":["Data platforms","Snowflake","Hex","Entrepreneurial spirit","Scrappiness","Customer obsession"],"datePosted":"2026-04-18T15:57:39.757Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"New York, New York, United States"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Technical leadership, Software development principles, Full-stack engineering, Customer-facing features, Data-driven mindset, AI-powered product experiences, LLM-driven automation, Personalization, Data platforms, Snowflake, Hex, Entrepreneurial spirit, Scrappiness, Customer obsession","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":300000,"maxValue":375000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_95c49f85-a98"},"title":"Staff+ Software Engineer, Observability","description":"<p><strong>About the Role</strong></p>\n<p>Anthropic is seeking talented and experienced Software Engineers to join our Observability team within the Infrastructure organization. The Observability team owns the monitoring and telemetry infrastructure that every engineer and researcher at Anthropic depends on,from metrics and logging pipelines to distributed tracing, error analytics, alerting, and the dashboards and query interfaces that make it all actionable.</p>\n<p>As Anthropic scales its infrastructure across massive GPU, TPU, and Trainium clusters, the volume and complexity of operational data is growing by orders of magnitude. We’re building next-generation observability systems,high-throughput ingest pipelines, cost-efficient columnar storage, unified query layers across signals, and agentic diagnostic tools,to ensure that engineers can detect, diagnose, and resolve issues in minutes rather than hours, even as the systems they operate become exponentially more complex.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Design and build scalable telemetry ingest and storage pipelines for metrics, logs, traces, and error data across Anthropic’s multi-cluster infrastructure</li>\n</ul>\n<ul>\n<li>Own and evolve core observability platforms, driving migrations and architectural improvements that improve reliability, reduce cost, and scale with organisational growth</li>\n</ul>\n<ul>\n<li>Build instrumentation libraries, SDKs, and integrations that make it easy for engineering teams to emit high-quality telemetry from their services</li>\n</ul>\n<ul>\n<li>Drive alerting and SLO infrastructure that enables teams to define, monitor, and respond to reliability targets with minimal noise</li>\n</ul>\n<ul>\n<li>Reduce mean time to detection and resolution by building cross-signal correlation, unified query interfaces, and AI-assisted diagnostic tooling</li>\n</ul>\n<ul>\n<li>Partner with Research, Inference, Product, and Infrastructure teams to ensure observability solutions meet the unique needs of each organisation</li>\n</ul>\n<p><strong>You May Be a Good Fit If You</strong></p>\n<ul>\n<li>Have 10+ years of relevant industry experience building and operating large-scale observability or monitoring infrastructure</li>\n</ul>\n<ul>\n<li>Have deep experience with at least one observability signal area (metrics, logging, tracing, or error analytics) and familiarity with the others</li>\n</ul>\n<ul>\n<li>Understand high-throughput data pipelines, columnar storage engines, and the tradeoffs involved in ingesting and querying telemetry data at scale</li>\n</ul>\n<ul>\n<li>Have experience operating or building on top of observability platforms such as Prometheus, Grafana, ClickHouse, OpenTelemetry, or similar systems</li>\n</ul>\n<ul>\n<li>Have strong proficiency in at least one of Python, Rust, or Go</li>\n</ul>\n<ul>\n<li>Have excellent communication skills and enjoy partnering with internal teams to improve their operational visibility and incident response capabilities</li>\n</ul>\n<ul>\n<li>Are excited about building foundational infrastructure and are comfortable working independently on ambiguous, high-impact technical challenges</li>\n</ul>\n<p><strong>Strong Candidates May Also Have</strong></p>\n<ul>\n<li>Experience operating metrics systems at very high cardinality (hundreds of millions of active time series or more)</li>\n</ul>\n<ul>\n<li>Experience with log storage migrations or operating columnar databases (ClickHouse, BigQuery, or similar) for analytics workloads</li>\n</ul>\n<ul>\n<li>Experience with OpenTelemetry instrumentation, collector pipelines, and tail-based sampling strategies</li>\n</ul>\n<ul>\n<li>Experience building or operating alerting platforms, on-call tooling, or SLO frameworks at scale</li>\n</ul>\n<ul>\n<li>Experience with Kubernetes-native monitoring, eBPF-based observability, or continuous profiling</li>\n</ul>\n<ul>\n<li>Interest in applying AI/LLMs to operational workflows such as automated root cause analysis, anomaly detection, or intelligent alerting</li>\n</ul>\n<p><strong>Logistics</strong></p>\n<ul>\n<li>Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience</li>\n</ul>\n<ul>\n<li>Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience</li>\n</ul>\n<ul>\n<li>Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position</li>\n</ul>\n<ul>\n<li>Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</li>\n</ul>\n<ul>\n<li>Visa sponsorship: We do sponsor visas! However, we aren’t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</li>\n</ul>\n<p><strong>How we&#39;re different</strong></p>\n<p>We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact , advancing our long-term goals of steerable, trustworthy AI , rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We’re an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.</p>\n<p><strong>Come work with us!</strong></p>\n<p>Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_95c49f85-a98","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com/","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5102440008","x-work-arrangement":"hybrid","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"£325,000-£390,000 GBP","x-skills-required":["observability","telemetry","metrics","logging","tracing","error analytics","alerting","SLO infrastructure","cross-signal correlation","unified query interfaces","AI-assisted diagnostic tooling","Python","Rust","Go","Prometheus","Grafana","ClickHouse","OpenTelemetry"],"x-skills-preferred":["high-throughput data pipelines","columnar storage engines","Kubernetes-native monitoring","eBPF-based observability","continuous profiling","AI/LLMs","automated root cause analysis","anomaly detection","intelligent alerting"],"datePosted":"2026-04-18T15:57:27.177Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"London, UK"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"observability, telemetry, metrics, logging, tracing, error analytics, alerting, SLO infrastructure, cross-signal correlation, unified query interfaces, AI-assisted diagnostic tooling, Python, Rust, Go, Prometheus, Grafana, ClickHouse, OpenTelemetry, high-throughput data pipelines, columnar storage engines, Kubernetes-native monitoring, eBPF-based observability, continuous profiling, AI/LLMs, automated root cause analysis, anomaly detection, intelligent alerting","baseSalary":{"@type":"MonetaryAmount","currency":"GBP","value":{"@type":"QuantitativeValue","minValue":325000,"maxValue":390000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_f4cd384f-6ed"},"title":"Senior Software Engineer, Release Engineering","description":"<p>We are seeking a Senior Software Engineer to join our Release Engineering team, focused on building and improving the systems that enable automated, reliable, and scalable software delivery across Temporal&#39;s platform.</p>\n<p>In this role, you will participate in the full software lifecycle , from design and implementation to deployment and long-term operation , and will collaborate with engineering teams to evolve release automation, improve tooling, and reduce manual steps in how we build and ship Temporal.</p>\n<p>Key responsibilities include designing, building, and maintaining tools and systems that support release automation and deployment workflows, writing clean, reliable, and concurrent code that supports distributed systems, collaborating with cross-functional teams to understand and improve release quality and developer productivity, documenting technical designs, deployment practices, and operational procedures, and participating in small-team design reviews and contributing practical engineering solutions.</p>\n<p>As a Senior Software Engineer, you will have the opportunity to explore new ways to use Temporal to power the release and deployment lifecycle, deepen your understanding of Temporal&#39;s architecture and service interactions, and experiment with new automation patterns, testing strategies, and workflow designs that increase release confidence.</p>\n<p>To be successful in this role, you will need strong coding ability, especially in languages used at Temporal (e.g., Go, Java, or similar), a solid understanding of concurrency, distributed systems, and multi-threaded programming, experience contributing to backend systems, tooling, infrastructure, or developer workflows, a track record of solving moderately complex problems with reliable, maintainable solutions, and the ability to collaborate effectively in a remote, fast-paced environment.</p>\n<p>Additionally, you will have familiarity with release automation concepts, CI/CD pipelines, build tools, or deployment orchestration, experience with cloud environments (AWS, GCP) and container tooling, and exposure to distributed systems orchestration, observability tooling, or platform engineering.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_f4cd384f-6ed","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Temporal","sameAs":"https://temporal.io/","logo":"https://logos.yubhub.co/temporal.io.png"},"x-apply-url":"https://job-boards.greenhouse.io/temporaltechnologies/jobs/5090613007","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$176,000 - $237,600","x-skills-required":["Go","Java","Concurrency","Distributed Systems","Multi-threaded Programming","Backend Systems","Tooling","Infrastructure","Developer Workflows","Release Automation","CI/CD Pipelines","Build Tools","Deployment Orchestration","Cloud Environments","Container Tooling","Distributed Systems Orchestration","Observability Tooling","Platform Engineering"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:57:07.513Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"United States - Remote Opportunity"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Go, Java, Concurrency, Distributed Systems, Multi-threaded Programming, Backend Systems, Tooling, Infrastructure, Developer Workflows, Release Automation, CI/CD Pipelines, Build Tools, Deployment Orchestration, Cloud Environments, Container Tooling, Distributed Systems Orchestration, Observability Tooling, Platform Engineering","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":176000,"maxValue":237600,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_0a2ea62c-943"},"title":"Research Engineer, Infrastructure, RL Systems","description":"<p>We&#39;re looking for an infrastructure research engineer to design and build the core systems that enable scalable, efficient training of large models through reinforcement learning.</p>\n<p>This role sits at the intersection of research and large-scale systems engineering: a builder who understands both the algorithms behind RL and the realities of distributed training and inference at scale. You&#39;ll wear many hats, from optimising rollout and reward pipelines to enhancing reliability, observability, and orchestration, collaborating closely with researchers and infra teams to make reinforcement learning stable, fast, and production-ready.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Design, build, and optimise the infrastructure that powers large-scale reinforcement learning and post-training workloads.</li>\n</ul>\n<ul>\n<li>Improve the reliability and scalability of RL training pipeline, distributed RL workloads, and training throughput.</li>\n</ul>\n<ul>\n<li>Develop shared monitoring and observability tools to ensure high uptime, debuggability, and reproducibility for RL systems.</li>\n</ul>\n<ul>\n<li>Collaborate with researchers to translate algorithmic ideas into production-grade training pipelines.</li>\n</ul>\n<ul>\n<li>Build evaluation and benchmarking infrastructure that measures model progress on helpfulness, safety, and factuality.</li>\n</ul>\n<ul>\n<li>Publish and share learnings through internal documentation, open-source libraries, or technical reports that advance the field of scalable AI infrastructure.</li>\n</ul>\n<p>We&#39;re looking for someone with strong engineering skills, ability to contribute performant, maintainable code and debug in complex codebases. You should have a good understanding of deep learning frameworks (e.g., PyTorch, JAX) and their underlying system architectures.</p>\n<p>Experience training or supporting large-scale language models with tens of billions of parameters or more is a plus. Familiarity with monitoring and observability tools (Prometheus, Grafana, OpenTelemetry) is also a plus.</p>\n<p>Logistics:</p>\n<ul>\n<li>Location: This role is based in San Francisco, California.</li>\n</ul>\n<ul>\n<li>Compensation: Depending on background, skills and experience, the expected annual salary range for this position is $350,000 - $475,000 USD.</li>\n</ul>\n<ul>\n<li>Visa sponsorship: We sponsor visas. While we can&#39;t guarantee success for every candidate or role, if you&#39;re the right fit, we&#39;re committed to working through the visa process together.</li>\n</ul>\n<ul>\n<li>Benefits: Thinking Machines offers generous health, dental, and vision benefits, unlimited PTO, paid parental leave, and relocation support as needed.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_0a2ea62c-943","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Thinking Machines Lab","sameAs":"https://thinkingmachineslab.com/","logo":"https://logos.yubhub.co/thinkingmachineslab.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/thinkingmachines/jobs/5013930008","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$350,000 - $475,000 USD","x-skills-required":["deep learning frameworks","PyTorch","JAX","complex codebases","scalable AI infrastructure","large-scale language models","monitoring and observability tools"],"x-skills-preferred":["experience training or supporting large-scale language models","familiarity with monitoring and observability tools"],"datePosted":"2026-04-18T15:56:59.642Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"deep learning frameworks, PyTorch, JAX, complex codebases, scalable AI infrastructure, large-scale language models, monitoring and observability tools, experience training or supporting large-scale language models, familiarity with monitoring and observability tools","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":350000,"maxValue":475000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_eea9dc46-6f0"},"title":"Sr. Product Designer, Enterprise Platform","description":"<p>At Databricks, we are enabling data teams to solve the world&#39;s toughest problems by building and running the world&#39;s best data and AI infrastructure platform. The Enterprise platform UX team is responsible for driving the user experience across critical enterprise concerns, specifically focusing on governance, security, and platform observability.</p>\n<p>This includes ensuring that platform admins have intuitive controls for managing access and compliance, maintaining a secure and trustworthy environment for their data and AI operations, and providing clear, actionable insights into platform health and usage.</p>\n<p>The impact you will have:</p>\n<ul>\n<li>Help design best-in-class monitoring and governance experiences at scale.</li>\n<li>Design unique interactions that will help administrators understand vast amounts of telemetry data.</li>\n<li>Explore new ways in which Generative AI can enhance the platform experience and developer workflows.</li>\n<li>Develop a deep understanding of Databricks business objectives, the cloud and big data space, its users, and competition.</li>\n<li>Conduct user research to identify customer needs and pain points related to data science and platform usage.</li>\n</ul>\n<p>We are looking for a Senior Product Designer with 5+ years of experience in product design work, a bachelor&#39;s degree or equivalent, and a strong portfolio showcasing the end-to-end design process.</p>\n<p>Key responsibilities include leading large and complex design projects, balancing the needs of diverse stakeholders, and executing beautiful visual and interaction work that&#39;s rooted in a data-driven and well-researched UX process.</p>\n<p>Nice-to-have skills include experience in data visualization, analytics dashboards, or designing interfaces for complex data monitoring, as well as coding skills in React, SQL, CSS, and/or Python.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_eea9dc46-6f0","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8476061002","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$144,800-$199,100 USD","x-skills-required":["Product design","User experience","Generative AI","Cloud and big data space","Data science","Platform observability","Governance and security","Telemetry data","User research"],"x-skills-preferred":["Data visualization","Analytics dashboards","React","SQL","CSS","Python"],"datePosted":"2026-04-18T15:56:51.370Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Mountain View, California; San Francisco, California"}},"employmentType":"FULL_TIME","occupationalCategory":"Design","industry":"Technology","skills":"Product design, User experience, Generative AI, Cloud and big data space, Data science, Platform observability, Governance and security, Telemetry data, User research, Data visualization, Analytics dashboards, React, SQL, CSS, Python","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":144800,"maxValue":199100,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_8f706224-663"},"title":"Specialist Solutions Architect - Cloud Infrastructure & Security","description":"<p>As a Specialist Solutions Architect (SSA) - Cloud Infrastructure &amp; Security, you will guide customers in the administration and security of their Databricks deployments.</p>\n<p>You will be in a customer-facing role, working with and supporting Solution Architects, which requires hands-on production experience with public cloud - AWS, Azure, and GCP.</p>\n<p>SSAs help customers with the design and successful implementation of essential workloads while aligning their technical roadmap to expand the use of the Databricks Platform.</p>\n<p>As a deep go-to-expert reporting to the Specialist Field Engineering Manager, you will continue to strengthen your technical skills through mentorship, learning, and internal training programs and establish yourself in an area of specialty - whether that be cloud deployments, security, networking, or more.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Provide technical leadership to guide strategic customers to the successful administration of Databricks, ranging from design to deployment</li>\n</ul>\n<ul>\n<li>Architect production-level deployments, including meeting necessary security and networking requirements</li>\n</ul>\n<ul>\n<li>Become a technical expert in an area such as cloud platforms, automation, security, networking, or identity management</li>\n</ul>\n<ul>\n<li>Assist Solution Architects with more advanced aspects of the technical sale including custom proof of concept content and custom architectures</li>\n</ul>\n<ul>\n<li>Provide tutorials and training to improve community adoption (including hackathons and conference presentations)</li>\n</ul>\n<ul>\n<li>Contribute to the Databricks Community</li>\n</ul>\n<p>Requirements:</p>\n<ul>\n<li>5+ years of experience in a technical role with expertise in at least one of the following:</li>\n</ul>\n<ul>\n<li>Cloud Platforms &amp; Architecture: Cloud Native Architecture in CSPs such as AWS, Azure, and GCP, Serverless Architecture</li>\n</ul>\n<ul>\n<li>Security: Platform security, Network security, Data Security, Gen AI &amp; Model Security, Encryption, Vulnerability Management, Compliance</li>\n</ul>\n<ul>\n<li>Networking: Architecture design, implementation, and performance</li>\n</ul>\n<ul>\n<li>Identify management: Provisioning, SCIM, OAuth, SAML, Federation</li>\n</ul>\n<ul>\n<li>Platform Administration: High availability and disaster recovery, cluster management, observability, logging, monitoring, audit, cost management</li>\n</ul>\n<ul>\n<li>Infrastructure Automation and InfraOps with IaC tools like Terraform</li>\n</ul>\n<ul>\n<li>Maintain and extend the Databricks environment to adapt to evolving complex needs.</li>\n</ul>\n<ul>\n<li>Deep Specialty Expertise in at least one of the following areas:</li>\n</ul>\n<ul>\n<li>Security - understanding how to secure data platforms and manage identities</li>\n</ul>\n<ul>\n<li>Complex deployments</li>\n</ul>\n<ul>\n<li>Public Cloud experience - experience designing data platforms on cloud infrastructure and services, such as AWS, Azure, or GCP, using best practices in cloud security and networking.</li>\n</ul>\n<ul>\n<li>Bachelor&#39;s degree in Computer Science, Information Systems, Engineering, or equivalent experience through work experience.</li>\n</ul>\n<ul>\n<li>Hands-on experience with Python, Java, or Scala, and proficiency in SQL, and Terraform experience are desirable.</li>\n</ul>\n<ul>\n<li>2 years of professional experience with Big Data technologies (Ex: Spark, Hadoop, Kafka) and architectures</li>\n</ul>\n<ul>\n<li>2 years of customer-facing experience in a pre-sales or post-sales role</li>\n</ul>\n<ul>\n<li>Can meet expectations for technical training and role-specific outcomes within 6 months of hire</li>\n</ul>\n<ul>\n<li>This role can be remote, but we prefer that you be located in the job listing area and can travel up to 30% when needed.</li>\n</ul>\n<p>Pay Range Transparency:</p>\n<p>Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role is listed below and represents the expected salary range for non-commissionable roles or on-target earnings for commissionable roles. Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location. Based on the factors above, Databricks anticipates utilizing the full width of the range. The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above.</p>\n<p>Zone 2 Pay Range $264,000-$363,000 USD</p>\n<p>Zone 3 Pay Range $264,000-$363,000 USD</p>\n<p>Zone 4 Pay Range $264,000-$363,000 USD</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_8f706224-663","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8477197002","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$264,000-$363,000 USD","x-skills-required":["Cloud Platforms & Architecture","Security","Networking","Platform Administration","Infrastructure Automation and InfraOps","Big Data technologies","Cloud Native Architecture","Serverless Architecture","Gen AI & Model Security","Encryption","Vulnerability Management","Compliance","SCIM","OAuth","SAML","Federation","High availability and disaster recovery","Cluster management","Observability","Logging","Monitoring","Audit","Cost management","Terraform"],"x-skills-preferred":["Python","Java","Scala","SQL","Terraform experience"],"datePosted":"2026-04-18T15:56:46.870Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Central - United States"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Cloud Platforms & Architecture, Security, Networking, Platform Administration, Infrastructure Automation and InfraOps, Big Data technologies, Cloud Native Architecture, Serverless Architecture, Gen AI & Model Security, Encryption, Vulnerability Management, Compliance, SCIM, OAuth, SAML, Federation, High availability and disaster recovery, Cluster management, Observability, Logging, Monitoring, Audit, Cost management, Terraform, Python, Java, Scala, SQL, Terraform experience","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":264000,"maxValue":363000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_53ee0ef3-c62"},"title":"Staff Data Engineer, Analytics Data Engineering","description":"<p>We are looking for a Staff Data Engineer to join our Analytics Data Engineering (ADE) team within Data Science &amp; AI Platform. As a Staff Data Engineer, you will be responsible for solving cross-cutting data challenges that span multiple lines of business while driving standardization in how we build, deploy, and govern analytics pipelines across Dropbox.</p>\n<p>This is not a maintenance role. We are modernizing our analytics platform, upgrading orchestration infrastructure, building shared and reusable data models with conformed dimensions, establishing a certified metrics framework, and laying the foundation for AI-native data development. You will partner closely with Data Science, Data Infrastructure, Product Engineering, and Business Intelligence teams to make this happen.</p>\n<p>You will play a crucial role in establishing analytics engineering standards, designing scalable data models, and driving cross-functional alignment on data governance. You will get substantial exposure to senior leadership, shape the technical direction of analytics infrastructure at Dropbox, and directly influence how data powers product and business decisions.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Lead the design and implementation of shared, reusable data models, defining shared fact tables, conformed dimensions, and a semantic/metrics layer that serves as the single source of truth across analytics functions</li>\n</ul>\n<ul>\n<li>Drive standardization of data engineering practices across ADE and functional analytics teams, including pipeline patterns, CI/CD workflows, naming conventions, and data modeling standards</li>\n</ul>\n<ul>\n<li>Partner with Data Infrastructure to modernize orchestration, improve pipeline decomposition, and establish secure dev/test environments with production data access</li>\n</ul>\n<ul>\n<li>Architect and implement a shift-left data governance strategy, working with upstream data producers to establish data contracts, SLOs, and code-enforced quality gates that catch issues before production</li>\n</ul>\n<ul>\n<li>Collaborate with Data Science leads and Product Management to translate metric definitions into reliable, certified data pipelines that power executive dashboards, WBR reporting, and growth measurement</li>\n</ul>\n<ul>\n<li>Reduce operational burden by improving pipeline granularity, observability, and failure recovery, establishing runbooks and alerting standards that make on-call sustainable</li>\n</ul>\n<ul>\n<li>Evaluate and integrate AI-native tooling into the data development lifecycle, enabling conversational data exploration with guardrails and AI-assisted pipeline development</li>\n</ul>\n<p>Requirements:</p>\n<ul>\n<li>BS degree in Computer Science or related technical field, or equivalent technical experience</li>\n</ul>\n<ul>\n<li>12+ years of experience in data engineering or analytics engineering with increasing scope and technical leadership</li>\n</ul>\n<ul>\n<li>12+ years of SQL experience, including complex analytical queries, window functions, and performance optimization at scale (Spark SQL)</li>\n</ul>\n<ul>\n<li>8+ years of Python development experience, including building and maintaining production data pipelines</li>\n</ul>\n<ul>\n<li>Deep expertise in dimensional data modeling, schema design, and scalable data architecture, with hands-on experience building shared data models across multiple business domains</li>\n</ul>\n<ul>\n<li>Strong experience with orchestration tools (Airflow strongly preferred) and dbt, including pipeline design, scheduling strategies, and failure recovery patterns</li>\n</ul>\n<p>Preferred Qualifications:</p>\n<ul>\n<li>Experience with Databricks (Unity Catalog, Delta Lake) and modern lakehouse architectures</li>\n</ul>\n<ul>\n<li>Experience leading orchestration or platform modernization efforts at scale</li>\n</ul>\n<ul>\n<li>Familiarity with data governance and observability tools such as Atlan, Monte Carlo, Great Expectations, or similar</li>\n</ul>\n<ul>\n<li>Experience building or contributing to a metrics/semantic layer (dbt MetricFlow, Databricks Metric Views, or equivalent)</li>\n</ul>\n<ul>\n<li>Track record of establishing data engineering standards and best practices in a federated analytics organization</li>\n</ul>\n<p>Compensation:</p>\n<p>US Zone 2 $198,900-$269,100 USD</p>\n<p>US Zone 3 $176,800-$239,200 USD</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_53ee0ef3-c62","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Dropbox","sameAs":"https://www.dropbox.com/","logo":"https://logos.yubhub.co/dropbox.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/dropbox/jobs/7595183","x-work-arrangement":"remote","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"$198,900-$269,100 USD","x-skills-required":["SQL","Python","Dimensional data modeling","Schema design","Scalable data architecture","Orchestration tools","dbt"],"x-skills-preferred":["Databricks","Modern lakehouse architectures","Data governance and observability tools","Metrics/semantic layer"],"datePosted":"2026-04-18T15:56:35.190Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Remote - US: Select locations"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"SQL, Python, Dimensional data modeling, Schema design, Scalable data architecture, Orchestration tools, dbt, Databricks, Modern lakehouse architectures, Data governance and observability tools, Metrics/semantic layer","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":198900,"maxValue":269100,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_0ed46937-df6"},"title":"Staff Developer Success Engineer - West","description":"<p>We&#39;re looking for a Staff Developer Success Engineer to join our team. As a frontline technical expert for our developer community, you will help users deploy and scale Temporal in cloud-native environments. You will also troubleshoot complex infrastructure issues, optimize performance, and develop automation solutions.</p>\n<p>At Temporal, you&#39;ll work with cloud-native, highly scalable infrastructure spanning AWS, GCP, Kubernetes, and microservices. You&#39;ll gain deep expertise in container orchestration, networking, and observability while learning from complex, real-world customer use cases.</p>\n<p>As a Staff Developer Success Engineer, you&#39;ll work directly with developers to debug complex infrastructure issues, optimize cloud performance, and enhance reliability for Temporal users. You&#39;ll develop observability solutions (Grafana, Prometheus), improve networking (load balancing, DNS, ingress/egress), and automate infrastructure operations (Terraform, IaC) to help customers run Temporal efficiently at scale.</p>\n<p>Once ramped up, we expect you to independently drive technical solutions, whether debugging complex production issues or designing infrastructure best practices. Don&#39;t worry, we have seasoned engineers and mentors to support you along the way!</p>\n<p>As a Staff Developer Success Engineer you will engage directly with developers, engineering teams, and product teams to understand infrastructure challenges and provide solutions that enhance scalability, performance, and reliability.</p>\n<p>Your insights will influence platform improvements, from enhancing observability tooling to developing self-service infrastructure solutions that simplify troubleshooting (e.g., building diagnostic tools similar to Twilio’s Network Test).</p>\n<p>You’ll serve as a bridge between developers and infrastructure, ensuring that reliability, performance, and developer experience remain top priorities as Temporal scales.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_0ed46937-df6","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Temporal","sameAs":"https://temporal.io/","logo":"https://logos.yubhub.co/temporal.io.png"},"x-apply-url":"https://job-boards.greenhouse.io/temporaltechnologies/jobs/5076742007","x-work-arrangement":"remote","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"$170,000 - $215,000","x-skills-required":["cloud-native infrastructure","container orchestration","networking","observability","infrastructure automation","Terraform","IaC","Kubernetes","AWS","GCP","Python","Java","Go","Grafana","Prometheus"],"x-skills-preferred":["security certificate management","security implementation","use case analysis","Temporal design decisions","architecture best practices","EKS","GKE","OpenTracing","Ansible","CDK"],"datePosted":"2026-04-18T15:56:34.606Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"United States - Remote Opportunity"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"cloud-native infrastructure, container orchestration, networking, observability, infrastructure automation, Terraform, IaC, Kubernetes, AWS, GCP, Python, Java, Go, Grafana, Prometheus, security certificate management, security implementation, use case analysis, Temporal design decisions, architecture best practices, EKS, GKE, OpenTracing, Ansible, CDK","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":170000,"maxValue":215000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_ed4bd662-c67"},"title":"Senior Solutions Architect, Commercial - San Francisco","description":"<p>We are looking for a Senior Solutions Architect to support our Commercial Sales team in a consumption-based business where customer success drives revenue growth. You&#39;ll work across the full sales cycle, from initial technical evaluations with new prospects through helping existing customers expand their use of Temporal in production.</p>\n<p>The nature of our business means you&#39;ll spend significant time helping customers who&#39;ve already adopted Temporal unlock more value by expanding into additional use cases, teams, and workloads. This is a high-velocity, technically deep role.</p>\n<p>You&#39;ll partner with developers, architects, and engineering leaders at fast-moving companies to help them understand how Temporal fits into their existing architecture and prove out value through hands-on technical work.</p>\n<p>You&#39;ll be working in a consumption model where usage grows over time, which means building strong technical relationships and staying engaged with accounts as they scale.</p>\n<p>As an early member of a growing team, you should be comfortable with ambiguity, frequent context switching, and creating leverage through reusable assets that help the broader team move faster.</p>\n<p>Must reside in San Francisco, CA</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_ed4bd662-c67","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Temporal","sameAs":"https://temporal.io/","logo":"https://logos.yubhub.co/temporal.io.png"},"x-apply-url":"https://job-boards.greenhouse.io/temporaltechnologies/jobs/5037692007","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$200,000 - $250,000 OTE","x-skills-required":["Strong development background with hands-on coding experience in at least one modern language (Go, Java, TypeScript, or Python)","Deep understanding of distributed systems (reliability, observability, and fault tolerance)","Proven experience in a pre-sales, customer-facing engineering, or solutions architecture role working with technical buyers","Exceptional time management and prioritization skills with the ability to thrive in high-volume environments","Enthusiasm for AI/ML technologies and eagerness to learn about emerging use cases in agentic workflows and LLM orchestration"],"x-skills-preferred":["Experience with workflow engines, event-driven architectures, or orchestration technologies (Temporal, Cadence, or similar)","Background articulating the value of commercial SaaS offerings that compete with open source alternatives (Redis, Kafka, Databricks, etc.)","Contributions to developer tooling, open source projects, or technical content","Strong cross-functional collaboration skills with the ability to serve as a technical bridge between customers and internal teams","Certifications with any of the major cloud providers (AWS, GCP, or Azure) or foundational AI model providers (OpenAI, Anthropic, or Google)"],"datePosted":"2026-04-18T15:56:33.427Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"United States - Remote Opportunity"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Strong development background with hands-on coding experience in at least one modern language (Go, Java, TypeScript, or Python), Deep understanding of distributed systems (reliability, observability, and fault tolerance), Proven experience in a pre-sales, customer-facing engineering, or solutions architecture role working with technical buyers, Exceptional time management and prioritization skills with the ability to thrive in high-volume environments, Enthusiasm for AI/ML technologies and eagerness to learn about emerging use cases in agentic workflows and LLM orchestration, Experience with workflow engines, event-driven architectures, or orchestration technologies (Temporal, Cadence, or similar), Background articulating the value of commercial SaaS offerings that compete with open source alternatives (Redis, Kafka, Databricks, etc.), Contributions to developer tooling, open source projects, or technical content, Strong cross-functional collaboration skills with the ability to serve as a technical bridge between customers and internal teams, Certifications with any of the major cloud providers (AWS, GCP, or Azure) or foundational AI model providers (OpenAI, Anthropic, or Google)","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":200000,"maxValue":250000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_78ae8204-779"},"title":"Senior Staff Software Engineer, Solana Staking Protocol","description":"<p>Ready to be pushed beyond what you think you’re capable of?</p>\n<p>At Coinbase, our mission is to increase economic freedom in the world.</p>\n<p>We&#39;re seeking a Senior Staff Software Engineer to serve as Coinbase&#39;s Solana Staking Protocol CTO , the definitive technical authority on all things Solana staking across the company.</p>\n<p>This is not a typical engineering role. You will combine deep Solana protocol mastery with strategic technical leadership to shape Coinbase&#39;s Solana staking trajectory for years to come.</p>\n<p>You will own the technical strategy across validator operations, staking integrations, and protocol evolution , partnering directly with engineering leadership, product teams, and external ecosystem players including the Solana Foundation.</p>\n<p>You will represent Coinbase on the world stage as a recognized Solana expert, speaking at conferences, engaging with the validator community, and influencing protocol direction.</p>\n<p>Internally, you will be the go-to expert for any Solana staking technical decision, from runtime-level optimizations to cross-product integration strategy.</p>\n<p><strong>Responsibilities</strong></p>\n<p><strong>Define Solana Staking Strategy</strong></p>\n<p>Own and drive Coinbase&#39;s multi-year technical strategy for Solana staking across validator performance, protocol participation, and product integration.</p>\n<p>Connect engineering decisions to business outcomes including yield optimization, cost efficiency, and customer growth.</p>\n<p><strong>Maximize Validator Performance</strong></p>\n<p>Lead the engineering effort to achieve industry-leading APY through validator optimization , including vote accuracy, block production, MEV strategies, commission tuning, and stake distribution.</p>\n<p>Build systems and tooling that give Coinbase a durable performance edge.</p>\n<p><strong>Own Protocol Expertise</strong></p>\n<p>Serve as Coinbase&#39;s foremost authority on the Solana runtime, consensus mechanism, staking economics, and validator client landscape (Agave, Firedancer, etc.).</p>\n<p>Evaluate protocol upgrades (e.g., SIMD proposals), assess risks, and proactively position Coinbase for changes before they land.</p>\n<p><strong>Drive Cross-Product Integration</strong></p>\n<p>Partner with Retail Staking and Institutional Staking product and engineering teams to architect scalable staking integrations across Coinbase&#39;s product surface area.</p>\n<p>Ensure Solana staking is deeply embedded and differentiated in every Coinbase staking product.</p>\n<p><strong>Build External Presence &amp; Influence</strong></p>\n<p>Represent Coinbase in the Solana ecosystem.</p>\n<p>Maintain deep relationships with the Solana Foundation, core development teams, other major validators, and ecosystem partners.</p>\n<p>Speak at major conferences (Breakpoint, etc.) and contribute to protocol governance.</p>\n<p>Be Coinbase&#39;s voice on Solana staking.</p>\n<p><strong>Lead Technical Execution</strong></p>\n<p>Write production code.</p>\n<p>Design and build critical infrastructure for validator operations, monitoring, automation, and reliability.</p>\n<p>Set the technical bar for the team , code reviews, architecture decisions, incident response.</p>\n<p><strong>Expand Beyond Staking</strong></p>\n<p>Serve as a technical advisor on non-staking Solana initiatives where deep protocol knowledge is required (e.g., Solana tax infrastructure, token programs, new Solana-based products).</p>\n<p><strong>Mentor and Scale the Team</strong></p>\n<p>Elevate a team of strong engineers (IC4-IC5) through mentorship, architectural guidance, and raising the bar on Solana-specific domain expertise.</p>\n<p>Define what great Solana engineering looks like at Coinbase.</p>\n<p><strong>Requirements</strong></p>\n<p><strong>Deep Solana Protocol Expertise</strong></p>\n<p>You have extensive, hands-on experience with Solana&#39;s architecture , Eg: the runtime, validator mechanics, staking economics, consensus (Tower BFT), turbine, Gulf Stream, and the validator client ecosystem.</p>\n<p>You understand Solana at the source-code level, not just the API level.</p>\n<p><strong>Technical Authority &amp; Execution</strong></p>\n<p>You are a strong IC7-caliber engineer.</p>\n<p>You design and build complex distributed systems.</p>\n<p>You write production code in Rust and/or Go.</p>\n<p>You have deep experience with infrastructure at scale , bare metal, cloud, networking, observability.</p>\n<p><strong>Strategic Vision</strong></p>\n<p>You can define year-long technical strategies and connect them to business goals.</p>\n<p>You break down ambiguous, large-scope problems into executable plans with measurable milestones.</p>\n<p>You think in terms of competitive advantage, not just engineering correctness.</p>\n<p><strong>Ecosystem Presence &amp; Influence</strong></p>\n<p>You are a known figure in the Solana ecosystem.</p>\n<p>You have existing relationships with the Solana Foundation, core contributor teams, and major validators.</p>\n<p>You have a track record of public speaking, community engagement, or protocol governance participation.</p>\n<p><strong>Cross-Functional Leadership</strong></p>\n<p>You partner effectively with product, business, and executive stakeholders.</p>\n<p>You translate complex protocol dynamics into business-relevant terms for non-technical audiences.</p>\n<p>You drive alignment across multiple teams and functions.</p>\n<p><strong>Passion for Solana</strong></p>\n<p>This isn&#39;t a role for a generalist who happens to know some Solana.</p>\n<p>You are genuinely passionate about the Solana ecosystem, follow protocol developments closely, and have a strong thesis on where Solana staking is headed.</p>\n<p><strong>Ability to Responsibly Use Generative AI Tools</strong></p>\n<p>Demonstrates the ability to responsibly use generative AI tools and copilots (e.g., LibreChat, Gemini, Glean) in daily workflows, continuously learn as tools evolve, and apply human-in-the-loop practices to deliver business-ready outputs and drive measurable improvements in efficiency, cost, and quality.</p>\n<p><strong>Nice to Have</strong></p>\n<p><strong>Core Contributor to Solana Validator Clients</strong></p>\n<p>Core contributor to Solana validator clients (Agave, Firedancer) or significant Solana ecosystem projects.</p>\n<p><strong>Experience Operating in Highly Regulated Industries</strong></p>\n<p>Experience operating in highly regulated industries or security-first cultures.</p>\n<p><strong>Background in Financial Services</strong></p>\n<p>Background in financial services, fintech, or crypto custody.</p>\n<p><strong>Track Record of Publishing Technical Content</strong></p>\n<p>Track record of publishing technical content (blog posts, research, conference talks) on Solana or Blockchain in general.</p>\n<p><strong>Experience with Solana&#39;s Evolving Staking Landscape</strong></p>\n<p>Experience with Solana&#39;s evolving staking landscape , liquid staking, stake pools, restaking protocols.</p>\n<p><strong>Familiarity with Other PoS Protocol Staking Operations</strong></p>\n<p>Familiarity with other PoS protocol staking operations (Ethereum, Cosmos ecosystem) for comparative perspective.</p>\n<p><strong>Pay Transparency Notice</strong></p>\n<p>Depending on your work location, the target annual base salary for this position can range as detailed below.</p>\n<p>Total compensation may also include equity and bonus eligibility and benefits (including medical, dental)</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_78ae8204-779","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Coinbase","sameAs":"https://www.coinbase.com/","logo":"https://logos.yubhub.co/coinbase.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/coinbase/jobs/7684298","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Solana","Rust","Go","Distributed Systems","Cloud Infrastructure","Networking","Observability","Validator Operations","Staking Integrations","Protocol Evolution","Cross-Product Integration","Technical Leadership","Strategic Vision","Competitive Advantage","Business Goals","Executable Plans","Milestones","Alignment","Multiple Teams","Functions","Passion for Solana","Generative AI Tools","Copilots","Human-in-the-Loop Practices","Efficiency","Cost","Quality"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:56:21.451Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Remote - USA"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Solana, Rust, Go, Distributed Systems, Cloud Infrastructure, Networking, Observability, Validator Operations, Staking Integrations, Protocol Evolution, Cross-Product Integration, Technical Leadership, Strategic Vision, Competitive Advantage, Business Goals, Executable Plans, Milestones, Alignment, Multiple Teams, Functions, Passion for Solana, Generative AI Tools, Copilots, Human-in-the-Loop Practices, Efficiency, Cost, Quality"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_d8a17638-e52"},"title":"Account Executive NATO","description":"<p>Elastic, the Search AI company, is looking for an Enterprise Account Executive to drive net-new revenue and expansion within our developing relationship with NATO, Brussels. You&#39;ll be the owner of this unique customer where you&#39;ll build your own pipeline and close engagements - telling the Elastic Search AI story, and close complex, multi-stakeholder deals in a consumption-based model.</p>\n<p>As an Enterprise Account Executive, you will:</p>\n<ul>\n<li>Own your customer &amp; build pipeline and close any ongoing engagements.</li>\n<li>Deep discovery &amp; qualification: Uncover pain, business impact, budget, and decision criteria using frameworks like MEDDPICC so you chase only the highest-confidence deals.</li>\n<li>Value storytelling &amp; demos: Craft and deliver tailored narratives and live demos that map Elastic’s Search, Observability, and Security capabilities to measurable business outcomes.</li>\n<li>Mutual deal strategy &amp; forecast accuracy: Collaborate with your customer to build formal close plans and keep your CRM up-to-date.</li>\n<li>Executive negotiation &amp; closing: Lead high-stakes contract and pricing discussions,defend your value, structure give/get trades, and land multi-year consumption commitments.</li>\n<li>Domain &amp; cloud acumen: Position Elastic as the Search AI platform of choice by speaking fluently about cloud economics, usage-based pricing, and modern data architectures.</li>\n<li>Cross-functional partnership: Work hand-in-glove with Solutions Architects, Customer Success, Marketing, and RevOps to accelerate deals and drive exceptional customer outcomes.</li>\n</ul>\n<p>We&#39;re looking for someone with:</p>\n<ul>\n<li>Proven experience of working with or for NATO with existing relationships.</li>\n<li>Expert discovery &amp; qualification skills: Demonstrated ability to apply MEDDPICC or equivalent frameworks to drive disciplined pipeline and eliminate low-probability deals.</li>\n<li>Compelling value storytellers: Track record of delivering executive-level presentations and demos that tie product capabilities to real dollars saved, revenue gained, or risk mitigated.</li>\n<li>Technical &amp; cloud fluency: Comfortable discussing a broad range of technical topics including observability, security, vector/traditional search, and cloud cost optimization.</li>\n<li>Collaborative mindset &amp; coachability: A learner who partners effectively with internal teams, incorporates feedback, and embodies Elastic’s values of community and openness.</li>\n<li>Open Source enthusiasm: Genuine appreciation for open-source communities and the Elastic model.</li>\n</ul>\n<p>Bonus Points:</p>\n<ul>\n<li>Prior experience of projects with geospatial content.</li>\n<li>Familiarity with observability (logs, metrics, traces) or security analytics (SIEM/XDR) use cases.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_d8a17638-e52","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Elastic","sameAs":"https://www.elastic.co/","logo":"https://logos.yubhub.co/elastic.co.png"},"x-apply-url":"https://job-boards.greenhouse.io/elastic/jobs/7668021","x-work-arrangement":"remote","x-experience-level":"executive","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["MEDDPICC","Search AI","Observability","Security","Cloud economics","Usage-based pricing","Modern data architectures","Cloud cost optimization"],"x-skills-preferred":["Geospatial content","Observability (logs, metrics, traces)","Security analytics (SIEM/XDR)"],"datePosted":"2026-04-18T15:56:07.441Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Belgium"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Sales","industry":"Technology","skills":"MEDDPICC, Search AI, Observability, Security, Cloud economics, Usage-based pricing, Modern data architectures, Cloud cost optimization, Geospatial content, Observability (logs, metrics, traces), Security analytics (SIEM/XDR)"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_fe04c8cc-782"},"title":"Forward Deployed Engineering Manager","description":"<p>Shape the Future of AI</p>\n<p>At Labelbox, we&#39;re building the critical infrastructure that powers breakthrough AI models at leading research labs and enterprises. Since 2018, we&#39;ve been pioneering data-centric approaches that are fundamental to AI development, and our work becomes even more essential as AI capabilities expand exponentially.</p>\n<p>We&#39;re the only company offering three integrated solutions for frontier AI development:</p>\n<p>Enterprise Platform &amp; Tools: Advanced annotation tools, workflow automation, and quality control systems that enable teams to produce high-quality training data at scale</p>\n<p>Frontier Data Labeling Service: Specialized data labeling through Alignerr, leveraging subject matter experts for next-generation AI models</p>\n<p>Expert Marketplace: Connecting AI teams with highly skilled annotators and domain experts for flexible scaling</p>\n<p>Why Join Us</p>\n<p>High-Impact Environment: We operate like an early-stage startup, focusing on impact over process. You&#39;ll take on expanded responsibilities quickly, with career growth directly tied to your contributions.</p>\n<p>Technical Excellence: Work at the cutting edge of AI development, collaborating with industry leaders and shaping the future of artificial intelligence.</p>\n<p>Innovation at Speed: We celebrate those who take ownership, move fast, and deliver impact. Our environment rewards high agency and rapid execution.</p>\n<p>Continuous Growth: Every role requires continuous learning and evolution. You&#39;ll be surrounded by curious minds solving complex problems at the frontier of AI.</p>\n<p>Clear Ownership: You&#39;ll know exactly what you&#39;re responsible for and have the autonomy to execute. We empower people to drive results through clear ownership and metrics.</p>\n<p>The role</p>\n<p>We’re hiring a Forward Deployed Engineering Manager to lead the design, development, and delivery of reinforcement learning environments for agentic AI systems.</p>\n<p>You’ll manage a team responsible for building sandboxed, reproducible environments,terminal-based workflows, browser automation, and computer-use simulations,that power both model training and human-in-the-loop evaluation. This is a hands-on leadership role where you’ll set technical direction, guide execution, and stay close to architecture and critical systems.</p>\n<p>What You’ll Do</p>\n<p>Lead, hire, and develop a high-performing team of Forward Deployed Engineers, setting a high bar for ownership, velocity, and technical quality</p>\n<p>Own the RL environment roadmap, aligning team execution with customer needs and evolving model capabilities</p>\n<p>Oversee development of sandboxed environments (terminal, browser, tool-augmented workspaces) that support deterministic execution and multi-step agent interaction</p>\n<p>Ensure reliability, observability, and data integrity through strong instrumentation (logging, trajectory capture, state snapshotting)</p>\n<p>Drive infrastructure excellence across containerization, sandboxing, CI/CD, automated testing, and monitoring</p>\n<p>Partner cross-functionally with data operations, product, and leading AI labs to define task design, evaluation protocols, and environment requirements</p>\n<p>Enable rapid prototyping and iteration, helping the team move from ambiguous requirements to production-ready systems quickly</p>\n<p>Stay close to the technical details,reviewing architecture, unblocking complex issues, and guiding design decisions</p>\n<p>What We’re Looking For</p>\n<p>5+ years of software engineering experience (Python)</p>\n<p>2+ years of experience managing or leading engineers in fast-paced environments</p>\n<p>Strong experience with containerization and sandboxing (Docker, Firecracker, or similar)</p>\n<p>Solid understanding of reinforcement learning fundamentals (MDPs, reward design, episode structure, observation/action spaces)</p>\n<p>Background in infrastructure, developer tooling, or distributed systems</p>\n<p>Strong debugging skills and systems thinking across layered, containerized environments</p>\n<p>Ability to operate in ambiguity and translate loosely defined problems into clear execution plans</p>\n<p>Excellent communication and stakeholder management skills</p>\n<p>Preferred</p>\n<p>Experience building or working with RL environments (Gym, PettingZoo) or agent benchmarks (SWE-bench, WebArena, OSWorld, TerminalBench)</p>\n<p>Familiarity with cloud infrastructure (GCP or AWS)</p>\n<p>Prior experience in AI/ML platforms, data companies, or research environments</p>\n<p>Contributions to open-source projects in RL, agents, or developer tooling</p>\n<p>Why This Role Matters</p>\n<p>RL environment quality is a critical bottleneck in advancing agentic AI. Poorly designed or unreliable environments introduce noise into training loops and directly impact model performance.</p>\n<p>In this role, you’ll lead the team building the environments that define how models learn,working across a range of cutting-edge projects with leading AI labs. Alignerr offers the speed and ownership of a startup with the scale and resources of Labelbox, giving you the opportunity to have outsized impact on the future of AI.</p>\n<p>About Alignerr</p>\n<p>Alignerr is Labelbox’s human data organization, powering next-generation AI through high-quality training data, reinforcement learning environments, and evaluation systems. We partner directly with leading AI labs to build the data and infrastructure that push model capabilities forward.</p>\n<p>Life at Labelbox</p>\n<p>Location: Join our dedicated tech hubs in San Francisco or Wrocław, Poland</p>\n<p>Work Style: Hybrid model with 2 days per week in office, combining collaboration and flexibility</p>\n<p>Environment: Fast-paced and high-intensity, perfect for ambitious individuals who thrive on ownership and quick decision-making</p>\n<p>Growth: Career advancement opportunities directly tied to your impact</p>\n<p>Vision: Be part of building the foundation for humanity&#39;s most transformative technology</p>\n<p>Our Vision</p>\n<p>We believe data will remain crucial in achieving artificial general intelligence. As AI models become more sophisticated, the need for high-quality, specialized training data will only grow. Join us in developing new products and services that enable the next generation of AI breakthroughs.</p>\n<p>Labelbox is backed by leading investors including SoftBank, Andreessen Horowitz, B Capital, Gradient Ventures, Databricks Ventures, and Kleiner Perkins. Our customers include Fortune 500 enterprises and leading AI labs.</p>\n<p>Any emails from Labelbox team members will originate from a @labelbox.com email address. If you encounter anything that raises suspicions during your interactions, we encourage you to exercise caution and suspend or discontinue communications.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_fe04c8cc-782","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Labelbox","sameAs":"https://www.labelbox.com/","logo":"https://logos.yubhub.co/labelbox.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/labelbox/jobs/5101195007","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$180,000-$220,000 USD","x-skills-required":["Software engineering experience (Python)","Containerization and sandboxing (Docker, Firecracker, or similar)","Reinforcement learning fundamentals (MDPs, reward design, episode structure, observation/action spaces)","Infrastructure, developer tooling, or distributed systems","Debugging skills and systems thinking"],"x-skills-preferred":["Experience building or working with RL environments (Gym, PettingZoo) or agent benchmarks (SWE-bench, WebArena, OSWorld, TerminalBench)","Familiarity with cloud infrastructure (GCP or AWS)","Prior experience in AI/ML platforms, data companies, or research environments","Contributions to open-source projects in RL, agents, or developer tooling"],"datePosted":"2026-04-18T15:56:05.491Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco Bay Area"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Software engineering experience (Python), Containerization and sandboxing (Docker, Firecracker, or similar), Reinforcement learning fundamentals (MDPs, reward design, episode structure, observation/action spaces), Infrastructure, developer tooling, or distributed systems, Debugging skills and systems thinking, Experience building or working with RL environments (Gym, PettingZoo) or agent benchmarks (SWE-bench, WebArena, OSWorld, TerminalBench), Familiarity with cloud infrastructure (GCP or AWS), Prior experience in AI/ML platforms, data companies, or research environments, Contributions to open-source projects in RL, agents, or developer tooling","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":180000,"maxValue":220000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_baad2598-8bc"},"title":"Staff / Senior Software Engineer, Compute Capacity","description":"<p><strong>About the Role</strong></p>\n<p>Anthropic&#39;s Accelerator Capacity Engineering (ACE) team manages one of the largest and fastest-growing accelerator fleets in the industry. As an engineer on ACE, you will build the production systems that power this work: data pipelines that ingest and normalize telemetry from heterogeneous cloud environments, observability tooling that gives the org real-time visibility into fleet health, and performance instrumentation that measures how efficiently every major workload uses the hardware it’s running on.</p>\n<p><strong>What This Team Owns</strong></p>\n<p>The team’s work spans three functional areas: data infrastructure, fleet observability, and compute efficiency. Depending on your background and interests, you’ll focus primarily in one, but the boundaries are fluid and the problems overlap:</p>\n<p><strong>Data Infrastructure</strong></p>\n<p>Collecting, normalizing, and serving the fleet-wide data that powers everything else. This means building pipelines that ingest occupancy and utilization telemetry from Kubernetes clusters, normalizing billing and usage data across cloud providers, and maintaining the BigQuery layer that the rest of the org queries against.</p>\n<p><strong>Fleet Observability</strong></p>\n<p>Making the state of the accelerator fleet legible and actionable in real time. This means building cluster health tooling, capacity planning platforms, alerting on occupancy drops and allocation problems, and driving systemic improvements to scheduling and fragmentation.</p>\n<p><strong>Compute Efficiency</strong></p>\n<p>Measuring and improving how effectively every major workload uses the hardware it’s running on. This means instrumenting utilization metrics across training, inference, and eval systems, building benchmarking infrastructure, establishing per-config baselines, and collaborating directly with system-owning teams to close efficiency gaps.</p>\n<p><strong>What You’ll Do</strong></p>\n<ul>\n<li>Build and operate data pipelines that ingest accelerator occupancy, utilization, and cost data from multiple cloud providers into BigQuery.</li>\n<li>Develop and maintain observability infrastructure , Prometheus recording rules, Grafana dashboards, and alerting systems , that surface actionable signals about fleet health, occupancy, and efficiency.</li>\n<li>Instrument and analyze compute efficiency metrics across training, inference, and eval workloads.</li>\n<li>Build internal tooling and platforms that enable capacity planning, workload attribution, and cluster debugging.</li>\n<li>Operate Kubernetes-native systems at scale , deploying data collection agents, managing workload labeling infrastructure, and understanding how taints, reservations, and scheduling affect capacity.</li>\n<li>Normalize and reconcile data across heterogeneous sources , including AWS, GCP, and Azure billing exports, vendor-specific telemetry formats, and internal systems with different schemas and billing arrangements.</li>\n</ul>\n<p><strong>You May Be a Good Fit If You Have</strong></p>\n<ul>\n<li>5+ years of software engineering experience with a strong track record building and operating production systems.</li>\n<li>Kubernetes fluency at operational depth , you’ve operated production K8s at meaningful scale, not just written manifests.</li>\n<li>Data pipeline engineering experience , designing, building, and owning the full lifecycle of production data pipelines.</li>\n<li>Observability tooling experience , Prometheus, PromQL, and Grafana are in the critical path for this team.</li>\n<li>Python and SQL at production quality.</li>\n<li>Familiarity with at least one major cloud provider (AWS, GCP, or Azure) at the infrastructure level , compute, billing, usage APIs, cost management tooling.</li>\n</ul>\n<p><strong>Strong Candidates May Also Have</strong></p>\n<ul>\n<li>Multi-cloud data ingestion experience , especially working with AWS and GCP APIs, billing exports, or vendor-specific telemetry formats.</li>\n<li>Accelerator infrastructure familiarity , GPU metrics (DCGM), TPU utilization, Trainium power and utilization metrics, or experience working with ML training/inference systems at the hardware level.</li>\n<li>Performance engineering and benchmarking experience , building benchmark harnesses, establishing baselines, reasoning about compute efficiency (FLOPs utilization, memory bandwidth, interconnect throughput), and working with system teams to diagnose and improve performance.</li>\n<li>Data-as-product thinking , experience building internal data products with self-service access, schema contracts, API serving, documentation,</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_baad2598-8bc","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.co/","logo":"https://logos.yubhub.co/anthropic.co.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5126702008","x-work-arrangement":"onsite","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Kubernetes","Python","SQL","Prometheus","Grafana","BigQuery","Cloud computing","Data pipeline engineering","Observability tooling"],"x-skills-preferred":["Multi-cloud data ingestion","Accelerator infrastructure","Performance engineering","Data-as-product thinking"],"datePosted":"2026-04-18T15:56:02.706Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA | New York City, NY"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Kubernetes, Python, SQL, Prometheus, Grafana, BigQuery, Cloud computing, Data pipeline engineering, Observability tooling, Multi-cloud data ingestion, Accelerator infrastructure, Performance engineering, Data-as-product thinking"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_7bc4518a-7e3"},"title":"AI Applications Ops Lead, GPS","description":"<p><strong>Role Overview</strong></p>\n<p>Scale&#39;s rapidly growing Global Public Sector team is focused on using AI to address critical challenges facing the public sector around the world.</p>\n<p>Our core work consists of creating custom AI applications that will impact millions of citizens, generating high-quality training data for national LLMs, and upskilling and advisory services to spread the impact of AI.</p>\n<p>As a Production AI Ops Lead, you will design and develop the production lifecycle of full-stack AI applications, while supporting end-to-end system reliability, real-time inference observability, sovereign data orchestration, high-security software integration, and the resilient cloud infrastructure required for our international government partners.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Own the production outcome: Take full accountability for the long-term performance and reliability of AI use cases deployed across international government agencies.</li>\n</ul>\n<ul>\n<li>Ensure Full-Stack integrity: Oversee the end-to-end health of the platform, ensuring seamless integration between the AI core and all full-stack components, from APIs to UI, to maintain a responsive and production-ready environment.</li>\n</ul>\n<ul>\n<li>Scale the feedback loop: Build automated systems to monitor model performance and data drift across geographically dispersed environments, ensuring the right levels of reliability.</li>\n</ul>\n<ul>\n<li>Navigate global compliance: Manage the technical lifecycle within diverse regulatory frameworks.</li>\n</ul>\n<ul>\n<li>Incident command: Lead the response for production issues in mission-critical environments, ensuring rapid resolution and building the guardrails to prevent them from happening again.</li>\n</ul>\n<ul>\n<li>Bridge the gap: Translate deep technical performance metrics into clear insights for senior international government officials.</li>\n</ul>\n<ul>\n<li>Drive product evolution: Partner with our Engineering and ML teams to ensure the lessons learned in the field directly influence the technical architecture and decisions of future use cases.</li>\n</ul>\n<p><strong>Ideal Candidate</strong></p>\n<ul>\n<li>Experience: 6+ years in a high-impact technical role (SRE, FDE or MLOps) with experience in the public sector.</li>\n</ul>\n<ul>\n<li>Global perspective: Familiarity with international government security standards and the complexities of deploying sovereign AI.</li>\n</ul>\n<ul>\n<li>System architecture proficiency: Proven experience maintaining production-grade applications with a deep understanding of the full request lifecycle-connecting frontend/API layers to the backend and AI core.</li>\n</ul>\n<ul>\n<li>Modern AI Stack expertise: Proficiency in coding and the modern AI infrastructure, including Kubernetes, vector databases, agentic development, and LLM observability tools.</li>\n</ul>\n<ul>\n<li>Ownership: You treat every production deployment as your own. You race toward solving hard problems before the customer even sees them.</li>\n</ul>\n<ul>\n<li>Reliability: You understand that in the public sector, a model failure may be a risk to public safety or privacy.</li>\n</ul>\n<ul>\n<li>Customer communication: The ability to explain to a high-ranking official why the performance of the system has degraded and how we are fixing it.</li>\n</ul>\n<p><strong>About Us</strong></p>\n<p>At Scale, our mission is to develop reliable AI systems for the world&#39;s most important decisions. Our products provide the high-quality data and full-stack technologies that power the world&#39;s leading models, and help enterprises and governments build, deploy, and oversee AI applications that deliver real impact.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_7bc4518a-7e3","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Scale","sameAs":"https://scale.com/","logo":"https://logos.yubhub.co/scale.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/scaleai/jobs/4654510005","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Kubernetes","Vector databases","Agentic development","LLM observability tools","SRE","FDE","MLOps"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:56:02.011Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Doha, Qatar; London, UK"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Kubernetes, Vector databases, Agentic development, LLM observability tools, SRE, FDE, MLOps"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_7d11da63-16c"},"title":"Public Sector Account Executive (Central Government)","description":"<p>We&#39;re seeking a Public Sector Account Executive to join our team in the UK. As a Public Sector Account Executive, you will be responsible for generating and developing pipeline through a disciplined multi-channel, multi-touch prospecting approach. You will act as a hunter, identifying new opportunities across departments and agencies and building relationships with both senior leaders and technical practitioners. You will lead structured discovery conversations to understand mission needs, data challenges, and operational priorities within government organisations. You will position Elastic&#39;s capabilities across Search AI, Observability, and Security to help departments improve digital services, strengthen security posture, and unlock the value of their data. You will work closely with solutions architects, partners, and customer success teams to develop strategies that address complex public sector challenges. You will expand Elastic&#39;s footprint within accounts through strategic land-and-expand motions, identifying new use cases and opportunities. You will maintain accurate pipeline management and forecasting within Salesforce. You will collaborate across Elastic teams to ensure we deliver meaningful outcomes for customers and grow our presence across government.</p>\n<p>We&#39;re looking for someone with 3 years+ experience selling into the UK Public Sector, ideally with exposure to central government departments such as Department for Transport, Defra, or devolved governments. You should have a hunter mentality with strong energy, resilience, and drive to build pipeline and create new opportunities. You should have curiosity and creativity in tackling complex government challenges involving data, security, and digital transformation. You should have strong business and technical curiosity, with the ability to engage both senior stakeholders and technical practitioners. You should have a collaborative mindset with the ability to work effectively across distributed teams. You should have a structured and disciplined approach to sales, combined with the ability to think creatively and challenge conventional approaches. You should be motivated to succeed in a fast-moving, ambitious environment.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_7d11da63-16c","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Elastic","sameAs":"https://www.elastic.co/","logo":"https://logos.yubhub.co/elastic.co.png"},"x-apply-url":"https://job-boards.greenhouse.io/elastic/jobs/7728182","x-work-arrangement":"remote","x-experience-level":"executive","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["prospecting","pipeline development","sales strategy","customer success","public sector sales","government sales","data security","digital transformation"],"x-skills-preferred":["search AI","observability","security","solution architecture","partnerships","customer engagement"],"datePosted":"2026-04-18T15:55:52.026Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"United Kingdom"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Sales","industry":"Technology","skills":"prospecting, pipeline development, sales strategy, customer success, public sector sales, government sales, data security, digital transformation, search AI, observability, security, solution architecture, partnerships, customer engagement"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_eff95313-cdc"},"title":"Senior Site Reliability Engineer","description":"<p>The Senior Site Reliability Engineer will play a key role in developing scalable, reliable, and efficient infrastructure that powers the entire company. This includes building and scaling internal platform offerings, designing and implementing monitoring, alerting, and incident response systems, and collaborating with application software engineers to guide their design and ensure it scales for what Carta needs in the long run.</p>\n<p>The ideal candidate will have extensive experience with cloud services such as AWS, Google Cloud Platform, or Azure, including services like EC2, S3, RDS, and Lambda. They will also be proficient in using tools such as Terraform, Ansible, or CloudFormation for managing and provisioning cloud infrastructure.</p>\n<p>The team is responsible for providing secure, reliable, scalable, and performant infrastructure to Carta&#39;s customers and developers. The successful candidate will be a strong communicator who enjoys collaborating to solve complex problems and has familiarity with infrastructure best practices on performance, reliability, and security and their associated tools.</p>\n<p>Our stack is Python, Java, Terraform, gRPC, Docker, Kubernetes, Postgres, running on AWS. Come join us!</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_eff95313-cdc","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Carta","sameAs":"https://carta.com/","logo":"https://logos.yubhub.co/carta.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/carta/jobs/7688689003","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$181,688 - $225,000","x-skills-required":["Cloud Platforms","Infrastructure as Code (IaC)","Networking","Monitoring and Observability","Software Development","API Services","AI Fluency"],"x-skills-preferred":["Experience operating CI/CD and its associated best practices"],"datePosted":"2026-04-18T15:55:48.770Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, California; Santa Clara, California; Seattle, WA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Cloud Platforms, Infrastructure as Code (IaC), Networking, Monitoring and Observability, Software Development, API Services, AI Fluency, Experience operating CI/CD and its associated best practices","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":181688,"maxValue":225000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_067a9092-157"},"title":"Manager, Software Engineering - Observability","description":"<p>We are seeking a Manager, Software Engineering - Observability to lead our team of engineers responsible for the reliability, scalability, and evolution of Figma&#39;s observability and cost engineering platforms.</p>\n<p>As a key member of our engineering team, you will own and operate Figma&#39;s core observability stack, including vendor platforms such as Datadog, ensuring high availability, strong data quality, and effective signal-to-noise across metrics, logs, and traces.</p>\n<p>You will define and drive the technical strategy for instrumentation standards, observability libraries, agents, and operators used to monitor internal and external facing services. You will also explore and implement innovative, AI-driven approaches to anomaly detection, root cause analysis, signal correlation, and operational automation.</p>\n<p>In addition, you will establish clear frameworks for cost attribution, budgeting, forecasting, and alerting across infrastructure and observability spend, enabling teams to make informed tradeoffs.</p>\n<p>You will partner with infrastructure, product engineering, finance, and security teams to improve visibility into system health and cost efficiency at scale.</p>\n<p>You will lead initiatives to optimize observability footprint and spend, balancing depth of insight with performance and cost considerations.</p>\n<p>You will coach and mentor engineers through career development, performance feedback, and technical leadership, fostering a culture of ownership, collaboration, and high-quality execution.</p>\n<p>We are looking for someone with 4+ years of experience leading infrastructure, observability, or platform engineering teams, with a track record of delivering highly reliable production systems.</p>\n<p>You should have deep hands-on experience with modern observability platforms (e.g., Datadog, OpenTelemetry) across metrics, logs, and distributed tracing.</p>\n<p>You should have a strong understanding of distributed systems, instrumentation best practices, SLO design, and incident response workflows.</p>\n<p>Experience driving cost transparency and accountability initiatives, including cost attribution, budgeting, forecasting, and alerting in cloud environments is also required.</p>\n<p>Preferred skills include experience designing or evolving company-wide observability standards, shared libraries, and agent/operator-based integrations, background in cost optimization for infrastructure or observability tooling, including vendor negotiations and usage modeling, and experience applying AI or machine learning techniques to anomaly detection, root cause analysis, or operational automation.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_067a9092-157","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Figma","sameAs":"https://www.figma.com/","logo":"https://logos.yubhub.co/figma.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/figma/jobs/5807963004","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$258,000-$376,000 USD","x-skills-required":["observability","datadog","opentelemetry","distributed systems","instrumentation best practices","slo design","incident response workflows","cost transparency","accountability initiatives","cost attribution","budgeting","forecasting","alerting"],"x-skills-preferred":["designing or evolving company-wide observability standards","shared libraries","agent/operator-based integrations","cost optimization for infrastructure or observability tooling","vendor negotiations","usage modeling","applying ai or machine learning techniques to anomaly detection","root cause analysis","operational automation"],"datePosted":"2026-04-18T15:55:20.408Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA • New York, NY • United States"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"observability, datadog, opentelemetry, distributed systems, instrumentation best practices, slo design, incident response workflows, cost transparency, accountability initiatives, cost attribution, budgeting, forecasting, alerting, designing or evolving company-wide observability standards, shared libraries, agent/operator-based integrations, cost optimization for infrastructure or observability tooling, vendor negotiations, usage modeling, applying ai or machine learning techniques to anomaly detection, root cause analysis, operational automation","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":258000,"maxValue":376000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_5330dcfc-ef9"},"title":"Regional Ocean Manager","description":"<p>As Regional Ocean Manager, you will be responsible for managing the Ocean P&amp;L for France, Spain, and Italy. This includes scaling volume and increasing yield, providing market-specific intelligence, and designing Ocean FCL products. You will also be responsible for generating customized go-to-market strategies for customers, analyzing P&amp;L, and creating reports for your regions. Additionally, you will help with tenders and RFQs, interface with procurement and pricing teams, and identify and drive improvement in go-to-market, operational, and financial processes.</p>\n<p>Key responsibilities include:</p>\n<ul>\n<li>Managing the Ocean P&amp;L for your region through scaling volume and increasing yield</li>\n<li>Providing market-specific intelligence and directly interfacing with customers</li>\n<li>Designing Ocean FCL products and generating customized go-to-market strategies for customers</li>\n<li>Analyzing the P&amp;L and creating reports for your regions</li>\n<li>Helping with tenders and RFQs for your district to secure regional and global customers</li>\n<li>Interfacing with procurement and pricing teams to drive profitable business, identify growth opportunities, and capture market-specific knowledge</li>\n<li>Identifying and driving improvement in go-to-market, operational, and financial processes</li>\n<li>Driving volume forecasting and allocation strategies for your region</li>\n<li>Participating in P&amp;L reviews and book of business growth rhythms</li>\n<li>Partnering with commercial teams to understand and translate customers&#39; growth potential and levers</li>\n<li>Driving financial health levers for your district including quoting, invoicing, and dispute escalation</li>\n<li>Building and maintaining local relationships with our core ocean carriers</li>\n</ul>\n<p>Prerequisites include 5+ years of prior Ocean Freight experience at a top freight forwarder or carrier or 5+ years of supply chain experience and a BA/BS Degree. Mandatory experience in logistics pricing and procurement role, proficient English, French + Spanish or Italian mandatory, ability to dig out and interpret data, past P&amp;L management experience preferred, bias to Action, Process, Structure, excellent communication, interpersonal, and organizational skills, an obsession with client happiness, courage to challenge the status quo when logic and reason require it, be positive, have fun, take your mission seriously but yourself not too seriously, and a driving license.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_5330dcfc-ef9","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Flexport","sameAs":"https://www.flexport.com/","logo":"https://logos.yubhub.co/flexport.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/flexport/jobs/7785045","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Ocean Freight experience","logistics pricing and procurement role","English","French","Spanish or Italian","data interpretation","P&L management"],"x-skills-preferred":["bias to Action, Process, Structure","excellent communication, interpersonal, and organizational skills","obsession with client happiness","courage to challenge the status quo when logic and reason require it"],"datePosted":"2026-04-18T15:55:14.451Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Paris, France"}},"employmentType":"FULL_TIME","occupationalCategory":"Operations","industry":"Logistics","skills":"Ocean Freight experience, logistics pricing and procurement role, English, French, Spanish or Italian, data interpretation, P&L management, bias to Action, Process, Structure, excellent communication, interpersonal, and organizational skills, obsession with client happiness, courage to challenge the status quo when logic and reason require it"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_ccb5daf2-354"},"title":"Sr. ML Ops Engineer, tvScientific","description":"<p>We&#39;re looking for a Senior MLOps Engineer to join our distributed engineering team on our Connected TV ad-buying platform. As a Senior MLOps Engineer, you will be responsible for scaling the decision-making process for tools for the tvScientific AI team, improving the developer experience for the data science team, upgrading our observability tooling, serving as a technical lead and mentor to the team, and making every deployment smooth as our infrastructure evolves.</p>\n<p>Key responsibilities include:</p>\n<ul>\n<li>Scaling the decision-making process for tools for the tvScientific AI team, from our workflows to our training infrastructure to our Kubernetes deployments</li>\n<li>Improving the developer experience for the data science team</li>\n<li>Upgrading our observability tooling</li>\n<li>Serving as a technical lead and mentor to the team</li>\n<li>Making every deployment smooth as our infrastructure evolves</li>\n</ul>\n<p>Requirements include:</p>\n<ul>\n<li>Deep understanding of Linux</li>\n<li>Excellent writing skills</li>\n<li>A systems-oriented mindset</li>\n<li>Experience in high-performance software (RTB, HFT, etc.)</li>\n<li>Software engineering experience + reliability (e.g. CI/CD) expertise</li>\n<li>Strong observability instincts</li>\n<li>Demonstrated ability to use AI to improve speed and quality in your day-to-day workflow for relevant outputs</li>\n<li>Strong track record of critical evaluation and verification of AI-assisted work (e.g., testing, source-checking, data validation, peer review)</li>\n<li>High integrity and ownership: you protect sensitive data, avoid over-reliance on AI, and remain accountable for final decisions and deliverables</li>\n</ul>\n<p>Nice-to-haves include:</p>\n<ul>\n<li>Reverse-engineering experience</li>\n<li>Terraform, EKS, or MLOps experience</li>\n<li>Python, Scala, or Zig experience</li>\n<li>NixOS experience</li>\n<li>Adtech or CTV experience</li>\n<li>Experience deploying a distributed system across multiple clouds</li>\n<li>Experience in hard real-time low-latency</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_ccb5daf2-354","directApply":true,"hiringOrganization":{"@type":"Organization","name":"tvScientific","sameAs":"https://www.tvscientific.com/","logo":"https://logos.yubhub.co/tvscientific.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/pinterest/jobs/7642249","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$155,584-$320,320 USD","x-skills-required":["Linux","writing skills","systems-oriented mindset","high-performance software","software engineering","reliability","observability","AI","critical evaluation","verification","data protection","data validation","peer review"],"x-skills-preferred":["reverse-engineering","Terraform","EKS","MLOps","Python","Scala","Zig","NixOS","adtech","CTV","distributed system","hard real-time low-latency"],"datePosted":"2026-04-18T15:55:03.102Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA, US; Remote, US"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Linux, writing skills, systems-oriented mindset, high-performance software, software engineering, reliability, observability, AI, critical evaluation, verification, data protection, data validation, peer review, reverse-engineering, Terraform, EKS, MLOps, Python, Scala, Zig, NixOS, adtech, CTV, distributed system, hard real-time low-latency","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":155584,"maxValue":320320,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_3354c9a5-d45"},"title":"Senior Value Engineer","description":"<p>Are you looking to make a real impact and play a meaningful role in the growth of our company? Elastic is seeking a Value Engineer to support our US territories, located on the east coast (preferred) or Central region.</p>\n<p>Now is an exciting time to join Elastic; you&#39;ll be joining at a pivotal stage where your expertise will be instrumental in building the foundations for our future success with our strategic clients. Your role is crucial in ensuring our clients unlock the full business impacts of Elastic, driving value and fostering long-term partnerships.</p>\n<p>We’re seeking individuals who thrive in dynamic environments, have a passion for customer excellence, delivering exceptional outcomes, and are ready to collaborate with a team of high-performing sales professionals. If you have experience working as a Value Engineer, Management Consultant, or a related customer-facing role, we would love to hear from you!</p>\n<p><strong>Responsibilities:</strong> Collaborate closely with account teams to identify and execute value delivery strategies for key accounts. Engage with stakeholders to identify, understand, and quantify their unique business challenges. Translate Elastic’s value proposition and technical capabilities into clear economic benefits aligned with their corporate goals. Lead business value reviews / business cases: Effectively communicate the realized value and potential economic benefits of new opportunities. Conduct thorough analysis of customer workflows and processes, identifying opportunities for optimization and increased efficiency. Create and deliver boardroom-quality, executive documents. Work collaboratively with product and engineering teams to tailor solutions to meet the specific needs of Elastic’s customers. Provide guidance on best practices and innovative approaches to maximize the value of Elastic solutions. Act as a subject matter expert on our solutions with our customers, remaining informed about industry trends and advancements. Conduct training and workshops with Elastic teammates to increase the quality of their discovery and their understanding of business value &amp; customer excellence. Help build the future of the value engineering practice at Elastic: contribute to the development of best practices and tools for an enhanced value-based selling adoption.</p>\n<p><strong>Requirements:</strong> 3+ years of experience in value engineering, management consulting, solution architecture, or a related customer-facing role within the software or consulting industries. Expert knowledge and experience in change process management, search, observability, and security technologies preferred. Advanced knowledge and experience in value-based-selling methodologies. Demonstrated experience in building and briefing business cases to executives. Expertise in identifying and prioritizing use cases, implementing improvement measures and becoming a change agent for Elastic customers by establishing a value delivery model. Ability to project manage across multiple workstreams, including defining scope, expectations, timelines and delivery. Strong analytical and problem-solving skills; ability to act decisively with ambiguous guidance and circumstances. Ability to travel to meet clients is required (approximately 1x/month).</p>\n<p><strong>Bonus Points:</strong> New York City or Arlington, VA area and open to occasional in-office collaboration (preferred).</p>\n<p><strong>Compensation:</strong> The typical starting salary range for this role is $122,800-$194,400 USD. The typical starting Target Variable range for this role is $30,700-$48,500 USD. The typical starting On-Target Earnings (OTE) range for this role is $153,500-$242,900 USD.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_3354c9a5-d45","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Elastic","sameAs":"https://www.elastic.co/","logo":"https://logos.yubhub.co/elastic.co.png"},"x-apply-url":"https://job-boards.greenhouse.io/elastic/jobs/7599935","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$122,800-$194,400 USD","x-skills-required":["value engineering","management consulting","solution architecture","change process management","search","observability","security technologies","value-based-selling methodologies","business case development","project management"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:55:01.444Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"United States"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"value engineering, management consulting, solution architecture, change process management, search, observability, security technologies, value-based-selling methodologies, business case development, project management","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":122800,"maxValue":194400,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_491db8e9-776"},"title":"Staff Site Reliability Engineer- Splunk Expert","description":"<p>We are seeking a highly technical Staff Site Reliability Engineer with deep expertise in Splunk and Grafana to own and evolve our observability ecosystem.</p>\n<p>As a Staff Site Reliability Engineer, you will move beyond simple monitoring to architect a comprehensive, scalable telemetry platform. You will be our subject-matter expert in Splunk optimisation, ensuring our logging architecture is performant, cost-effective, and deeply integrated with our automated workflows.</p>\n<p>Key responsibilities include:</p>\n<ul>\n<li>Splunk Architecture &amp; Optimisation: Lead the design and tuning of Splunk environments. Optimise indexer performance, search efficiency, and data models to ensure rapid troubleshooting and cost-efficiency.</li>\n</ul>\n<ul>\n<li>Advanced Visualisation: Architect and maintain sophisticated Grafana dashboards that correlate disparate data sources into a single pane of glass for real-time system health.</li>\n</ul>\n<ul>\n<li>Automated Infrastructure: Design, build, and maintain scalable observability infrastructure using tools like Terraform.</li>\n</ul>\n<ul>\n<li>Pipeline Engineering: Optimise the collection, processing, and storage of telemetry data (Metrics, Logs, Traces) to ensure high reliability and low latency.</li>\n</ul>\n<ul>\n<li>Workflow Automation: Develop custom Splunk workflows and integrations that trigger automated responses to system events, reducing Mean Time to Resolution (MTTR).</li>\n</ul>\n<ul>\n<li>Incident Response: Participate in on-call rotations and lead post-incident reviews to drive systemic improvements through &#39;observability-driven development.&#39;</li>\n</ul>\n<p>Required skills and experience include:</p>\n<ul>\n<li>Splunk Mastery: Deep, hands-on experience with Splunk administration, search optimisation (SPL), and architecting complex data pipelines.</li>\n</ul>\n<ul>\n<li>Grafana Expertise: Proven ability to build actionable, intuitive dashboards in Grafana that go beyond simple charts to provide deep operational insights.</li>\n</ul>\n<ul>\n<li>SRE Mindset: Minimum 8+ years of experience in an SRE, DevOps, or Systems Engineering role with a focus on high-availability systems.</li>\n</ul>\n<ul>\n<li>Programming Proficiency: Strong coding skills in Go, Python, or Ruby for building internal tools and automating observability workflows.</li>\n</ul>\n<ul>\n<li>Telemetry Standards: Hands-on experience with OpenTelemetry (OTel), Prometheus, or similar frameworks for instrumenting applications.</li>\n</ul>\n<ul>\n<li>Distributed Systems: Deep understanding of Linux internals, networking (TCP/IP, DNS, Load Balancing), and container orchestration (Kubernetes/EKS).</li>\n</ul>\n<p>Bonus skills include:</p>\n<ul>\n<li>Tracing: Implementation of distributed tracing (Jaeger, Tempo, or Honeycomb) to visualise request flow across microservices.</li>\n</ul>\n<ul>\n<li>Security Observability: Experience using Splunk for security orchestration (SOAR) or SIEM-related workflows.</li>\n</ul>\n<ul>\n<li>Cloud Platforms: Experience managing observability native tools within AWS, Azure, or GCP.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_491db8e9-776","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Okta","sameAs":"https://www.okta.com/","logo":"https://logos.yubhub.co/okta.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/okta/jobs/6874616","x-work-arrangement":"hybrid","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Splunk","Grafana","SRE","Go","Python","Ruby","OpenTelemetry","Prometheus","Linux","Networking","Container Orchestration"],"x-skills-preferred":["Tracing","Security Observability","Cloud Platforms"],"datePosted":"2026-04-18T15:54:34.221Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Bengaluru, India"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Splunk, Grafana, SRE, Go, Python, Ruby, OpenTelemetry, Prometheus, Linux, Networking, Container Orchestration, Tracing, Security Observability, Cloud Platforms"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_b2637f59-e14"},"title":"Full-Stack Software Engineer, Reinforcement Learning","description":"<p>As a Full-Stack Software Engineer in RL, you&#39;ll build the platforms, tools, and interfaces that power environment creation, data collection, and training observability. The quality of Claude&#39;s next generation depends on the quality of the data we train it on , and the systems you build are what make that data possible. You&#39;ll own product surfaces end-to-end , from backend services and APIs to the web UIs that researchers, external vendors, and thousands of data labelers use every day.\\n\\nYou don&#39;t need a background in ML research. What matters is that you can take an ambiguous, high-stakes problem and ship a polished, reliable product against it, fast. This team moves very quickly. Claude writes a lot of the code we commit, which means the bottleneck isn&#39;t typing , it&#39;s judgment, taste, and the ability to react to what researchers need next.\\n\\nYou&#39;ll iterate on data collection strategies to distill the knowledge of thousands of human experts around the world into our models, and you&#39;ll do it in a loop that closes in hours and days, not quarters or months.\\n\\nAnthropic&#39;s Reinforcement Learning organization leads the research and development that trains Claude to be capable, reliable, and safe. We&#39;ve contributed to every Claude model, with significant impact on the autonomy and coding capabilities of our most advanced models.\\n\\nOur work spans teaching models to use computers effectively, advancing code generation through RL, pioneering fundamental RL research for large language models, and building the scalable training methodologies behind our frontier production models.\\n\\nThe RL org is organized around four goals: solving the science of long-horizon tasks and continual learning, scaling RL data and environments to be comprehensive and diverse, automating software engineering end-to-end, and training the frontier production model.\\n\\nOur engineering teams build the environments, evaluation systems, data pipelines, and tooling that make all of this possible , from realistic agentic training environments and scalable code data generation to human data collection platforms and production training operations.\\n\\n### Responsibilities\\n\\n<em>   Build and extend web platforms for RL environment creation, management, and quality review , including environment configuration, versioning, and validation workflows\\n</em>   Develop vendor-facing interfaces and tooling that let external partners create, submit, and iterate on training environments with minimal friction\\n<em>   Design and implement platforms for human data collection at scale, including labeling workflows, quality assurance systems, and feedback mechanisms that surface reward signal integrity issues early\\n</em>   Build evaluation dashboards and observability UIs that give researchers real-time insight into environment quality, training run health, and reward hacking\\n<em>   Create backend services and APIs that connect environment authoring tools, data collection systems, and RL training infrastructure\\n</em>   Build and expand scalable code data generation pipelines, producing diverse programming tasks with robust reward signals across languages and difficulty levels\\n<em>   Develop onboarding automation and documentation tooling so new vendors and internal users ramp up in hours, not weeks\\n</em>   Partner closely with RL researchers, data operations, and vendor management to translate ambiguous requirements into well-scoped, well-designed products\\n\\n### Requirements\\n\\n<em>   Strong software engineering fundamentals and real full-stack range , you&#39;re comfortable owning a surface from database schema to frontend\\n</em>   Proficient in Python and a modern web stack (React, TypeScript, or similar)\\n<em>   Track record of shipping systems that solved a hard problem, not just shipped on time , e.g. you built the thing that made your team 10x faster, or the internal tool nobody thought was possible\\n</em>   Operate with high agency: you identify what needs to be done and drive it forward without waiting for a ticket\\n<em>   Found yourself wondering &quot;why isn&#39;t this moving faster?&quot; in previous roles , and then have done something about it\\n</em>   Care about UX and can build interfaces that are intuitive for both technical researchers and non-technical labelers\\n<em>   Communicate clearly with researchers, operations teams, and engineers, and can turn vague asks into well-scoped work\\n</em>   Thrive in a fast-moving environment where priorities shift, Claude is your pair programmer, and the next problem is often one nobody has solved before\\n<em>   Care about Anthropic&#39;s mission to build safe, beneficial AI and want your work to contribute directly to it\\n\\n### Nice to Have\\n\\n</em>   Built data collection, labeling, or annotation platforms , ideally ones that had to scale across many vendors or many task types\\n<em>   Background building multi-tenant platforms with role-based access, audit trails, and vendor management workflows\\n</em>   Experience with cloud infrastructure (GCP or AWS), Docker, and CI/CD pipelines\\n<em>   Familiarity with LLM training, fine-tuning, or evaluation workflows\\n</em>   Experience with async Python (Trio, asyncio) or high-throughput API design\\n<em>   Background in dashboards, monitoring, or observability tooling\\n</em>   Experience working directly with external vendors or partners on technical integrations\\n<em>   A background that isn&#39;t a straight line , e.g. math or physics into SWE, competitive programming, research into engineering, or a side project that outgrew its scope\\n\\n### Representative Projects\\n\\n</em>   Building a unified platform for human data collection that integrates labeling workflows, vendor management, and QA for complex agentic tasks\\n<em>   Developing vendor onboarding automation that handles Docker registry access, API token management, and environment validation\\n</em>   Creating evaluation and observability dashboards that catch reward hacks, measure environment difficulty, and give real-time feedback during production training\\n<em>   Building environment quality review workflows that let researchers browse, grade, and provide feedback on training environments\\n</em>   Developing automated environment quality pipelines that validate correctness and difficulty calibration before environments hit production training\\n*   Building internal tools for browsing and analyzing training run results, environment statistics, and data collection progress</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_b2637f59-e14","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com/","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5186067008","x-work-arrangement":"hybrid","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"$300,000-$405,000 USD","x-skills-required":["Python","Modern web stack","React","TypeScript","Strong software engineering fundamentals","Full-stack range","Database schema","Frontend","Cloud infrastructure","Docker","CI/CD pipelines","LLM training","Fine-tuning","Evaluation workflows","Async Python","High-throughput API design","Dashboards","Monitoring","Observability tooling"],"x-skills-preferred":["Data collection","Labeling","Annotation platforms","Multi-tenant platforms","Role-based access","Audit trails","Vendor management workflows"],"datePosted":"2026-04-18T15:54:27.784Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA | New York City, NY"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, Modern web stack, React, TypeScript, Strong software engineering fundamentals, Full-stack range, Database schema, Frontend, Cloud infrastructure, Docker, CI/CD pipelines, LLM training, Fine-tuning, Evaluation workflows, Async Python, High-throughput API design, Dashboards, Monitoring, Observability tooling, Data collection, Labeling, Annotation platforms, Multi-tenant platforms, Role-based access, Audit trails, Vendor management workflows","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":300000,"maxValue":405000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_c0df50e1-9cd"},"title":"Consultant, Developer Platform","description":"<p>About Us</p>\n<p>At Cloudflare, we are on a mission to help build a better Internet. Today the company runs one of the world’s largest networks that powers millions of websites and other Internet properties for customers ranging from individual bloggers to SMBs to Fortune 500 companies.</p>\n<p>As a Cloud Engineer for Developer Platform, you are an individual contributor working in the post-sales landscape, responsible for the technical execution of solutions and guidance to our customers, following a consultative approach, to get the most value possible from their Cloudflare investment.</p>\n<p>Key Responsibilities:</p>\n<ul>\n<li>Plan and deliver timely and organized services for customers, ensure customers see the full value in Cloudflare’s products and advice on product best practices.</li>\n</ul>\n<ul>\n<li>Gather business and technical requirements, use cases and any other information required to build, migrate and deliver a solution on behalf of the customer and transition the Cloudflare working environment to the customer.</li>\n</ul>\n<ul>\n<li>Produce a Solution Design, HLD, LLD, databuilds, procedures, scripts, test plans, drawings, deployment plan, migration plan, as-builts, and any other artifacts necessary to deliver the solution and transition smoothly into the customer’s technical teams.</li>\n</ul>\n<ul>\n<li>Implement changes on behalf of the customer in the Cloudflare environment following the customer’s change management process.</li>\n</ul>\n<ul>\n<li>Troubleshoot implementation issues and collaborate with Customer Support, Engineering and other teams to assist technical escalations.</li>\n</ul>\n<ul>\n<li>Contribute towards the success of the organization through knowledge sharing activities such as contributing to internal and external documentation, answering technical Q&amp;A, and helping to iterate on best practices.</li>\n</ul>\n<p>Support building operational assets like templates, automation scripts, procedures, workflows, etc.</p>\n<p>Requirements:</p>\n<ul>\n<li>3+ years of experience in a customer facing position as a Consultant delivering services.</li>\n</ul>\n<ul>\n<li>Demonstrated experience with:</li>\n</ul>\n<ul>\n<li>Developing serverless code in a CI/CD pipeline using an Agile methodology.</li>\n</ul>\n<ul>\n<li>Layers and protocols of the OSI model, such as TCP/IP, TLS, DNS, HTTP.</li>\n</ul>\n<ul>\n<li>Scripting languages.</li>\n</ul>\n<ul>\n<li>A scripting language (e.g. Python, JavaScript, Bash) and a desire to expand those skills.</li>\n</ul>\n<ul>\n<li>Infrastructure as code tools like Terraform.</li>\n</ul>\n<ul>\n<li>Strong experience with APIs.</li>\n</ul>\n<ul>\n<li>CI/CD pipelines using Azure DevOps or Git.</li>\n</ul>\n<ul>\n<li>Implementation and troubleshooting experience, knowledge of tools to troubleshoot, observability, logs, etc.</li>\n</ul>\n<ul>\n<li>Good understanding and knowledge of:</li>\n</ul>\n<ul>\n<li>Internet and Security technologies such as DDoS, Web Application Firewall, Certificates, DNS, CDN, Analytics and Logs.</li>\n</ul>\n<ul>\n<li>Security aspects of an internet property, such as DNS, WAFs, Bot Management, Rate Limiting, (M)TLS, certificates, OWASP.</li>\n</ul>\n<ul>\n<li>Performance aspects of an internet property, such as Speed, Latency, Caching, HTTP/3, TLSv1.3.</li>\n</ul>\n<p>Preferred Qualifications:</p>\n<ul>\n<li>You have worked with a Cybersecurity company or products and have performed migrations using migration tools.</li>\n</ul>\n<ul>\n<li>You have developed application security and performance capabilities.</li>\n</ul>\n<ul>\n<li>Ability to manage a project, work to deadlines, prioritize between competing demands and manage uncertainty.</li>\n</ul>\n<ul>\n<li>The work will be performed in English. Fluency in a second regional European language is a strong advantage.</li>\n</ul>\n<p>What Makes Cloudflare Special?</p>\n<p>We’re not just a highly ambitious, large-scale technology company. We’re a highly ambitious, large-scale technology company with a soul. Fundamental to our mission to help build a better Internet is protecting the free and open Internet.</p>\n<p>Project Galileo: Since 2014, we&#39;ve equipped more than 2,400 journalism and civil society organizations in 111 countries with powerful tools to defend themselves against attacks that would otherwise censor their work, technology already used by Cloudflare’s enterprise customers--at no cost.</p>\n<p>Athenian Project: In 2017, we created the Athenian Project to ensure that state and local governments have the highest level of protection and reliability for free, so that their constituents have access to election information and voter registration. Since the project, we&#39;ve provided services to more than 425 local government election websites in 33 states.</p>\n<p>1.1.1.1: We released 1.1.1.1 to help fix the foundation of the Internet by building a faster, more secure and privacy-centric public DNS resolver. This is available publicly for everyone to use - it is the first consumer-focused service Cloudflare has ever released.</p>\n<p>Here’s the deal - we don’t store client IP addresses never, ever. We will continue to abide by our privacy commitment and ensure that no user data is sold to advertisers or used to target consumers.</p>\n<p>Sound like something you’d like to be a part of? We’d love to hear from you!</p>\n<p>This position may require access to information protected under U.S. export control laws, including the U.S. Export Administration Regulations. Please note that any offer of employment may be conditioned on your authorization to receive software or technology controlled under these U.S. export laws without sponsorship for an export license.</p>\n<p>Cloudflare is proud to be an equal opportunity employer. We are committed to providing equal employment opportunity for all people and place great value in both diversity and inclusiveness. All qualified applicants will be considered for employment without regard to their, or any other person&#39;s, perceived or actual race, color, religion, sex, gender, gender identity, gender expression, sexual orientation, national origin, ancestry, citizenship, age, physical or mental disability, medical condition, family care status, or any other basis protected by law. We are an AA/Veterans/Disabled Employer. Cloudflare provides reasonable accommodations to qualified individuals</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_c0df50e1-9cd","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Cloudflare","sameAs":"https://www.cloudflare.com/","logo":"https://logos.yubhub.co/cloudflare.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/cloudflare/jobs/7383015","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Developing serverless code in a CI/CD pipeline using an Agile methodology","Layers and protocols of the OSI model, such as TCP/IP, TLS, DNS, HTTP","Scripting languages","Infrastructure as code tools like Terraform","Strong experience with APIs","CI/CD pipelines using Azure DevOps or Git","Implementation and troubleshooting experience, knowledge of tools to troubleshoot, observability, logs, etc","Good understanding and knowledge of Internet and Security technologies such as DDoS, Web Application Firewall, Certificates, DNS, CDN, Analytics and Logs","Security aspects of an internet property, such as DNS, WAFs, Bot Management, Rate Limiting, (M)TLS, certificates, OWASP","Performance aspects of an internet property, such as Speed, Latency, Caching, HTTP/3, TLSv1.3"],"x-skills-preferred":["You have worked with a Cybersecurity company or products and have performed migrations using migration tools","You have developed application security and performance capabilities","Ability to manage a project, work to deadlines, prioritize between competing demands and manage uncertainty","The work will be performed in English. Fluency in a second regional European language is a strong advantage"],"datePosted":"2026-04-18T15:54:26.532Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Hybrid"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Developing serverless code in a CI/CD pipeline using an Agile methodology, Layers and protocols of the OSI model, such as TCP/IP, TLS, DNS, HTTP, Scripting languages, Infrastructure as code tools like Terraform, Strong experience with APIs, CI/CD pipelines using Azure DevOps or Git, Implementation and troubleshooting experience, knowledge of tools to troubleshoot, observability, logs, etc, Good understanding and knowledge of Internet and Security technologies such as DDoS, Web Application Firewall, Certificates, DNS, CDN, Analytics and Logs, Security aspects of an internet property, such as DNS, WAFs, Bot Management, Rate Limiting, (M)TLS, certificates, OWASP, Performance aspects of an internet property, such as Speed, Latency, Caching, HTTP/3, TLSv1.3, You have worked with a Cybersecurity company or products and have performed migrations using migration tools, You have developed application security and performance capabilities, Ability to manage a project, work to deadlines, prioritize between competing demands and manage uncertainty, The work will be performed in English. Fluency in a second regional European language is a strong advantage"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_2ace8872-f7e"},"title":"Manager, Backline (Platform)","description":"<p>At Databricks, we are seeking a Manager, Backline (Platform) to join our team. As a critical bridge between Engineering and Frontline Support, the Backline Engineering Team handles complex technical issues and escalations across the Apache Spark ecosystem and the Databricks Platform stack. With a strong focus on customer success, we are committed to delivering exceptional customer satisfaction by providing deep technical expertise, proactive issue resolution, and continuous improvements to the platform.</p>\n<p>The Manager, Backline (Platform) will be responsible for:</p>\n<ul>\n<li>Hiring and developing top talent to build an outstanding team</li>\n<li>Mentoring engineers, providing clear feedback, and developing future leaders in the team</li>\n<li>Establishing and maintaining high standards in troubleshooting, automation, and tooling to improve efficiency</li>\n<li>Working closely with Engineering to enhance observability, debugging tools, and automation, reducing escalations</li>\n<li>Collaborating with Frontline Support, Engineering, and Product teams to improve customer escalations and support processes</li>\n<li>Defining a long-term roadmap for Backline, focusing on automation, tool development, bug fixing, and proactive issue resolution</li>\n<li>Taking ownership of high-impact customer escalations by leading critical incident response during Databricks runtime outages and major incidents</li>\n<li>Participating in weekday and weekend on-call rotations, ensuring fast and effective resolution of urgent issues</li>\n</ul>\n<p>We look for candidates with 10-12 years of industry experience, at least 3+ years in a managerial role, and strong technical expertise in one of the following domains: Linux/OS and Network troubleshooting, AWS, Azure, or GCP Cloud and related services, SQL-based database systems, or Python and/or Java-based applications.</p>\n<p>If you are a motivated and experienced professional with a passion for delivering exceptional customer satisfaction, we encourage you to apply for this exciting opportunity.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_2ace8872-f7e","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/7879639002","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Linux/OS and Network troubleshooting","AWS, Azure, or GCP Cloud and related services","SQL-based database systems","Python and/or Java-based applications","Troubleshooting","Automation","Tooling","Observability","Debugging","Collaboration","Leadership"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:54:15.620Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Bengaluru, India"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Linux/OS and Network troubleshooting, AWS, Azure, or GCP Cloud and related services, SQL-based database systems, Python and/or Java-based applications, Troubleshooting, Automation, Tooling, Observability, Debugging, Collaboration, Leadership"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_190bd9e9-0d1"},"title":"Staff+ Software Engineer, Observability","description":"<p><strong>About the Role</strong></p>\n<p>Anthropic is seeking talented and experienced Software Engineers to join our Observability team within the Infrastructure organization. The Observability team owns the monitoring and telemetry infrastructure that every engineer and researcher at Anthropic depends on,from metrics and logging pipelines to distributed tracing, error analytics, alerting, and the dashboards and query interfaces that make it all actionable.</p>\n<p>By joining this team, you’ll have a direct impact on the reliability and operational excellence of Anthropic’s research and product systems.</p>\n<p>As Anthropic scales its infrastructure across massive GPU, TPU, and Trainium clusters, the volume and complexity of operational data is growing by orders of magnitude. We’re building next-generation observability systems,high-throughput ingest pipelines, cost-efficient columnar storage, unified query layers across signals, and agentic diagnostic tools,to ensure that engineers can detect, diagnose, and resolve issues in minutes rather than hours, even as the systems they operate become exponentially more complex.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Design and build scalable telemetry ingest and storage pipelines for metrics, logs, traces, and error data across Anthropic’s multi-cluster infrastructure</li>\n</ul>\n<ul>\n<li>Own and evolve core observability platforms, driving migrations and architectural improvements that improve reliability, reduce cost, and scale with organisational growth</li>\n</ul>\n<ul>\n<li>Build instrumentation libraries, SDKs, and integrations that make it easy for engineering teams to emit high-quality telemetry from their services</li>\n</ul>\n<ul>\n<li>Drive alerting and SLO infrastructure that enables teams to define, monitor, and respond to reliability targets with minimal noise</li>\n</ul>\n<ul>\n<li>Reduce mean time to detection and resolution by building cross-signal correlation, unified query interfaces, and AI-assisted diagnostic tooling</li>\n</ul>\n<ul>\n<li>Partner with Research, Inference, Product, and Infrastructure teams to ensure observability solutions meet the unique needs of each organisation</li>\n</ul>\n<p><strong>You May Be a Good Fit If You</strong></p>\n<ul>\n<li>Have 10+ years of relevant industry experience building and operating large-scale observability or monitoring infrastructure</li>\n</ul>\n<ul>\n<li>Have deep experience with at least one observability signal area (metrics, logging, tracing, or error analytics) and familiarity with the others</li>\n</ul>\n<ul>\n<li>Understand high-throughput data pipelines, columnar storage engines, and the tradeoffs involved in ingesting and querying telemetry data at scale</li>\n</ul>\n<ul>\n<li>Have experience operating or building on top of observability platforms such as Prometheus, Grafana, ClickHouse, OpenTelemetry, or similar systems</li>\n</ul>\n<ul>\n<li>Have strong proficiency in at least one of Python, Rust, or Go</li>\n</ul>\n<ul>\n<li>Have excellent communication skills and enjoy partnering with internal teams to improve their operational visibility and incident response capabilities</li>\n</ul>\n<ul>\n<li>Are excited about building foundational infrastructure and are comfortable working independently on ambiguous, high-impact technical challenges</li>\n</ul>\n<p><strong>Strong Candidates May Also Have</strong></p>\n<ul>\n<li>Experience operating metrics systems at very high cardinality (hundreds of millions of active time series or more)</li>\n</ul>\n<ul>\n<li>Experience with log storage migrations or operating columnar databases (ClickHouse, BigQuery, or similar) for analytics workloads</li>\n</ul>\n<ul>\n<li>Experience with OpenTelemetry instrumentation, collector pipelines, and tail-based sampling strategies</li>\n</ul>\n<ul>\n<li>Experience building or operating alerting platforms, on-call tooling, or SLO frameworks at scale</li>\n</ul>\n<ul>\n<li>Experience with Kubernetes-native monitoring, eBPF-based observability, or continuous profiling</li>\n</ul>\n<ul>\n<li>Interest in applying AI/LLMs to operational workflows such as automated root cause analysis, anomaly detection, or intelligent alerting</li>\n</ul>\n<p><strong>Logistics</strong></p>\n<ul>\n<li>Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience</li>\n</ul>\n<ul>\n<li>Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience</li>\n</ul>\n<ul>\n<li>Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position</li>\n</ul>\n<ul>\n<li>Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</li>\n</ul>\n<ul>\n<li>Visa sponsorship: We do sponsor visas! However, we aren’t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</li>\n</ul>\n<p><strong>How we’re different</strong></p>\n<p>We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact , advancing our long-term goals of steerable, trustworthy AI , rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We’re an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.</p>\n<p>The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI &amp; Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.</p>\n<p><strong>Come work with us!</strong></p>\n<p>Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_190bd9e9-0d1","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com/","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5102440008","x-work-arrangement":"hybrid","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"£325,000-£390,000 GBP","x-skills-required":["Python","Rust","Go","Prometheus","Grafana","ClickHouse","OpenTelemetry"],"x-skills-preferred":["Kubernetes-native monitoring","eBPF-based observability","continuous profiling","AI/LLMs","automated root cause analysis","anomaly detection","intelligent alerting"],"datePosted":"2026-04-18T15:54:10.425Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"London, UK"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, Rust, Go, Prometheus, Grafana, ClickHouse, OpenTelemetry, Kubernetes-native monitoring, eBPF-based observability, continuous profiling, AI/LLMs, automated root cause analysis, anomaly detection, intelligent alerting","baseSalary":{"@type":"MonetaryAmount","currency":"GBP","value":{"@type":"QuantitativeValue","minValue":325000,"maxValue":390000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_6b0282a9-9ee"},"title":"Staff Software Engineer, Observability","description":"<p>We are seeking a highly experienced Staff Software Engineer to lead our efforts in building, maintaining, and optimizing highly scalable, reliable, and secure systems. The Observability team is responsible for deploying and maintaining critical infrastructure at CoreWeave including our logging, tracing, and metrics platforms as well as the pipelines that feed them.</p>\n<p>Key Responsibilities:</p>\n<ul>\n<li>Lead and mentor engineers, fostering a culture of collaboration and continuous improvement.</li>\n<li>Scale logging, tracing, and metrics platforms to support a global datacenter footprint.</li>\n<li>Develop and refine monitoring and alerting to enhance system reliability.</li>\n<li>Advise engineers across CoreWeave on optimal usage of Observability systems.</li>\n<li>Automate interactions with CoreWeave&#39;s Compute Infrastructure layer.</li>\n<li>Manage production clusters and ensure development teams follow best practices for deployments.</li>\n</ul>\n<p>Required Qualifications:</p>\n<ul>\n<li>7+ years of experience in Software Engineering, Site Reliability Engineering, DevOps, or a related field.</li>\n<li>Deep expertise across all observability pillars using tools like ClickHouse, Elastic, Loki, Victoria Metrics, Prometheus, Thanos and/or Grafana.</li>\n<li>Expertise in Kubernetes, containerization, and microservices architectures.</li>\n<li>Proven track record of leading incident management and post-mortem analysis.</li>\n<li>Excellent problem-solving, analytical, and communication skills.</li>\n</ul>\n<p>Preferred Qualifications:</p>\n<ul>\n<li>Experience running and scaling observability tools as a cloud provider.</li>\n<li>Experience administering large-scale kubernetes clusters.</li>\n<li>Deep understanding of data-streaming systems.</li>\n</ul>\n<p>The base salary range for this role is $188,000 to $250,000.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_6b0282a9-9ee","directApply":true,"hiringOrganization":{"@type":"Organization","name":"CoreWeave","sameAs":"https://www.coreweave.com","logo":"https://logos.yubhub.co/coreweave.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/coreweave/jobs/4577361006","x-work-arrangement":"hybrid","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"$188,000 to $250,000","x-skills-required":["ClickHouse","Elastic","Loki","Victoria Metrics","Prometheus","Thanos","Grafana","Kubernetes","containerization","microservices architectures"],"x-skills-preferred":["Experience running and scaling observability tools as a cloud provider","Experience administering large-scale kubernetes clusters","Deep understanding of data-streaming systems"],"datePosted":"2026-04-18T15:54:03.521Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Livingston, NJ / New York, NY / Sunnyvale, CA / Bellevue, WA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"ClickHouse, Elastic, Loki, Victoria Metrics, Prometheus, Thanos, Grafana, Kubernetes, containerization, microservices architectures, Experience running and scaling observability tools as a cloud provider, Experience administering large-scale kubernetes clusters, Deep understanding of data-streaming systems","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":188000,"maxValue":250000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_6f3a053e-c43"},"title":"Staff Software Engineer, AI Reliability Engineering","description":"<p>We&#39;re seeking a Staff Software Engineer to join our AI Reliability Engineering team. As a key member of our team, you will develop Service Level Objectives for large language model serving systems, design and implement monitoring and observability systems, and lead incident response for critical AI services.</p>\n<p>You will work closely with teams across Anthropic to improve reliability across our most critical serving paths. You will be responsible for making the systems that deliver Claude more robust and resilient, whether during an incident or collaborating on projects.</p>\n<p>To be successful in this role, you should have strong distributed systems, infrastructure, or reliability backgrounds. You should be curious and brave, comfortable jumping into unfamiliar systems during an incident and helping drive resolution even when you don&#39;t have deep expertise yet.</p>\n<p>You will be working on high-availability serving infrastructure across multiple regions and cloud providers. You will support the reliability of safeguard model serving, which is critical for both site reliability and Anthropic&#39;s safety commitments.</p>\n<p>If you&#39;re committed to creating reliable, interpretable, and steerable AI systems, and you&#39;re passionate about working on complex technical problems, we&#39;d love to hear from you.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_6f3a053e-c43","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com/","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5101169008","x-work-arrangement":"hybrid","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"€235.000-€295.000 EUR","x-skills-required":["distributed systems","infrastructure","reliability","Service Level Objectives","monitoring","observability","incident response","high-availability serving infrastructure","cloud providers"],"x-skills-preferred":["SRE","Production Engineer","chaos engineering","systematic resilience testing","AI-specific observability tools and frameworks","ML hardware accelerators","RDMA","InfiniBand"],"datePosted":"2026-04-18T15:53:59.220Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Dublin, IE"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"distributed systems, infrastructure, reliability, Service Level Objectives, monitoring, observability, incident response, high-availability serving infrastructure, cloud providers, SRE, Production Engineer, chaos engineering, systematic resilience testing, AI-specific observability tools and frameworks, ML hardware accelerators, RDMA, InfiniBand"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_8f6ef3b1-c9b"},"title":"Technical Program Manager, Compute","description":"<p>As a Technical Program Manager on the Compute team, you will help drive the planning, coordination, and execution of programs that keep Anthropic&#39;s compute infrastructure running efficiently at scale.</p>\n<p>Our compute fleet is the foundation on which every model training run, evaluation, and inference workload depends. You&#39;ll join a small, high-impact TPM team and take ownership of critical workstreams across the compute lifecycle, from how supply is procured and brought online, to how capacity is allocated and utilized across teams.</p>\n<p>You&#39;ll partner with Infrastructure, Systems, Research, Finance, and Capacity Engineering to shape the processes, tooling, and coordination mechanisms that allow Anthropic to move fast while managing an increasingly complex compute environment.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Own and drive critical programs across the compute lifecycle, coordinating execution across multiple engineering, research, and operations teams</li>\n<li>Build and maintain operational visibility into the compute fleet, ensuring the organization has a clear picture of supply, demand, utilization, and health</li>\n<li>Lead cross-functional coordination for compute transitions: bringing new capacity online, migrating workloads, and managing decommissions across cloud providers and hardware platforms</li>\n<li>Partner with engineering and research leadership to navigate competing priorities and drive alignment on how compute resources are planned, allocated, and used</li>\n<li>Identify and close operational gaps across the compute pipeline, whether through new tooling, improved processes, or better cross-team communication</li>\n<li>Own trade-off discussions between utilization, cost, latency, and reliability, synthesizing inputs from technical and business stakeholders and communicating decisions to leadership</li>\n<li>Develop and improve the processes and frameworks the team uses to plan, track, and execute compute programs at increasing scale and complexity</li>\n</ul>\n<p>You may be a good fit if you:</p>\n<ul>\n<li>Have 7+ years of technical program management experience in infrastructure, platform engineering, or compute-intensive environments</li>\n<li>Have led complex, cross-functional programs involving multiple engineering teams with competing priorities and ambiguous requirements</li>\n<li>Have experience working with research or ML teams and translating their needs into operational plans and technical requirements</li>\n<li>Are comfortable diving deep into technical details (cloud infrastructure, cluster management, job scheduling, resource orchestration) while maintaining program-level visibility</li>\n<li>Thrive in ambiguous, fast-moving environments where you need to define scope and build processes from the ground up</li>\n<li>Have strong communication skills and can engage credibly with engineers, researchers, finance, and executive leadership</li>\n<li>Have a track record of building trust with engineering teams and driving changes through influence rather than authority</li>\n</ul>\n<p>Strong candidates may also have:</p>\n<ul>\n<li>Experience managing compute capacity across multiple cloud providers (AWS, GCP, Azure) or hybrid cloud/on-premises environments</li>\n<li>Familiarity with job scheduling, resource orchestration, or workload management systems (Kubernetes, Slurm, Borg, YARN, or custom schedulers)</li>\n<li>Experience with GPU or accelerator infrastructure, including the unique challenges of large-scale ML training and inference workloads</li>\n<li>Built or improved observability for infrastructure systems: dashboards, alerting, efficiency metrics, or cost attribution</li>\n<li>Capacity planning experience including demand forecasting, cost modeling, or hardware lifecycle management</li>\n<li>Scaled through hypergrowth in AI/ML, HPC, or large-scale cloud environments</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_8f6ef3b1-c9b","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com/","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5138044008","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$290,000-$365,000 USD","x-skills-required":["Technical Program Management","Cloud Infrastructure","Cluster Management","Job Scheduling","Resource Orchestration","Compute Capacity Management","GPU or Accelerator Infrastructure","Observability for Infrastructure Systems","Capacity Planning"],"x-skills-preferred":["Kubernetes","Slurm","Borg","YARN","Custom Schedulers","Demand Forecasting","Cost Modeling","Hardware Lifecycle Management"],"datePosted":"2026-04-18T15:53:42.458Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA | New York City, NY | Seattle, WA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Technical Program Management, Cloud Infrastructure, Cluster Management, Job Scheduling, Resource Orchestration, Compute Capacity Management, GPU or Accelerator Infrastructure, Observability for Infrastructure Systems, Capacity Planning, Kubernetes, Slurm, Borg, YARN, Custom Schedulers, Demand Forecasting, Cost Modeling, Hardware Lifecycle Management","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":290000,"maxValue":365000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_afa714af-d4b"},"title":"Senior Data Scientist, Causal Inference","description":"<p>You will join a collaborative team of data scientists, analysts, engineers, product managers, and designers who build innovative products for Airbnb guests and hosts. Your core team will consist of data scientists within the Guest &amp; Host Data Science organization, who develop models, measurement frameworks, foundational data products, and intelligence to improve and shape Airbnb marketplace and product.</p>\n<p>As a senior data scientist on the team, you will directly impact the Airbnb user community and marketplace with the data products and insights you deliver. Your work may range from building models to understand the impact of supply and demand on marketplace outcomes, to applying causal inference methods to measure the impact of a new pricing guidance tool, to generating destination recommendations based on our understanding of guest travel preferences.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Building strong relationships with cross-functional partners across Product, Design, Engineering, and Analytics to drive collaboration and innovation.</li>\n<li>Contribute directly to the development and launch of data-driven products and models, leveraging AI tools to enhance efficiency and impact.</li>\n<li>Writing software in Python, SQL, or R to model, simulate, and measure the impact of new features, applying advanced causal inference techniques where necessary.</li>\n<li>Analyzing structured or unstructured data to uncover meaningful insights and craft actionable proposals that help shape strategy.</li>\n<li>Communicating learnings to leaders and stakeholders in a clear, compelling manner that drives informed data-driven decision making.</li>\n</ul>\n<p>Your Expertise:</p>\n<ul>\n<li>5+ years of experience with BS/masters degree, 2+ years of experience with PhD.</li>\n<li>Experience with experimentation, causal observational analysis, and machine learning techniques.</li>\n<li>Experience partnering with product, engineering, and design to enable data-driven model or product development.</li>\n<li>Ability to prototype, build, and scale derived data assets.</li>\n<li>Strong coding skills in SQL and either Python or R.</li>\n<li>Strong oral and written communication skills - an ability to communicate complex technical concepts to a non-technical audience.</li>\n<li>Work authorization (if applicable) - Travel requirements (if applicable)</li>\n</ul>\n<p>How We&#39;ll Take Care of You:</p>\n<p>Our job titles may span more than one career level. The actual base pay is dependent upon many factors, such as: training, transferable skills, work experience, business needs and market demands. The base pay range is subject to change and may be modified in the future. This role may also be eligible for bonus, equity, benefits, and Employee Travel Credits.</p>\n<p>Pay Range:</p>\n<p>$179,000-$210,000 USD</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_afa714af-d4b","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Airbnb","sameAs":"https://www.airbnb.com/","logo":"https://logos.yubhub.co/airbnb.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/airbnb/jobs/7662244","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$179,000-$210,000 USD","x-skills-required":["experimentation","causal observational analysis","machine learning techniques","SQL","Python","R"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:53:31.683Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Remote"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"experimentation, causal observational analysis, machine learning techniques, SQL, Python, R","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":179000,"maxValue":210000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_bec4e006-74f"},"title":"Consultant, Developer Platform","description":"<p>About the role: Cloudflare provides advisory and hands-on-keyboard implementation and migration services for enterprise customers. As a Consultant for Developer Platform, you are an individual contributor working in the post-sales landscape, responsible for the technical execution of solutions and guidance to our customers, following a consultative approach, to get the most value possible from their Cloudflare investment.</p>\n<p>You are an expert in Developer Platform products or equivalent and will focus on building and deploying serverless applications with scale, performance, security and reliability leveraging: Workers, Workers KV, Workers AI, D1, R2, Images, and many other products.</p>\n<p>This position has working hours Monday to Friday 09:00 a.m. to 06:00 p.m. Occasionally, we support our customers during the weekends for specific changes that need to be done outside of their business hours. Travel is expected to be around 40%.</p>\n<p>Experience might include a combination of the skills below:</p>\n<ul>\n<li>Plan and deliver timely and organized services for customers, ensure customers see the full value in Cloudflare’s products and advice on product best practices.</li>\n<li>Gather business and technical requirements, use cases and any other information required to build, migrate and deliver a solution on behalf of the customer and transition the Cloudflare working environment to the customer.</li>\n<li>Produce a Solution Design, HLD, LLD, databuilds, procedures, scripts, test plans, drawings, deployment plan, migration plan, as-builts, and any other artifacts necessary to deliver the solution and transition smoothly into the customer’s technical teams.</li>\n<li>Implement changes on behalf of the customer in the Cloudflare environment following the customer’s change management process.</li>\n<li>Proven experience with Cloudflare or similar with Workers, Javascript/Typescript and Workers APIs.</li>\n<li>Troubleshoot implementation issues and collaborate with Customer Support, Engineering and other teams to assist technical escalations.</li>\n<li>Contribute towards the success of the organization through knowledge sharing activities such as contributing to internal and external documentation, answering technical Q&amp;A, and helping to iterate on best practices.</li>\n</ul>\n<p>Support building operational assets like templates, automation scripts, procedures, workflows, etc.</p>\n<p>Experience might include a combination of the skills below:</p>\n<ul>\n<li>3+ years of experience in a customer facing position as a Consultant delivering services.</li>\n<li>Demonstrated experience with:</li>\n</ul>\n<p>Developing serverless code in a CI/CD pipeline using an Agile methodology. Layers and protocols of the OSI model, such as TCP/IP, TLS, DNS, HTTP Scripting languages A scripting language (e.g. Python, JavaScript, Bash) and a desire to expand those skills. Infrastructure as code tools like Terraform. Strong experience with APIs. CI/CD pipelines using Azure DevOps or Git. Implementation and troubleshooting experience, knowledge of tools to troubleshoot, observability, logs, etc. Good understanding and knowledge of:</p>\n<p>Internet and Security technologies such as DDoS, Web Application Firewall, Certificates, DNS, CDN, Analytics and Logs. Security aspects of an internet property, such as DNS, WAFs, Bot Management, Rate Limiting, (M)TLS, certificates, OWASP. Performance aspects of an internet property, such as Speed, Latency, Caching, HTTP/3, TLSv1.3.</p>\n<p>Strong advantage if:</p>\n<p>You have worked with a Cybersecurity company or products and have performed migrations using migration tools. You have developed application security and performance capabilities. Ability to manage a project, work to deadlines, prioritize between competing demands and manage uncertainty.</p>\n<p>The work will be performed in English. Fluency in a second regional European language is a strong advantage.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_bec4e006-74f","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Cloudflare","sameAs":"https://www.cloudflare.com/","logo":"https://logos.yubhub.co/cloudflare.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/cloudflare/jobs/7383013","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Developing serverless code in a CI/CD pipeline using an Agile methodology","Layers and protocols of the OSI model, such as TCP/IP, TLS, DNS, HTTP","Scripting languages","Infrastructure as code tools like Terraform","Strong experience with APIs","CI/CD pipelines using Azure DevOps or Git","Implementation and troubleshooting experience, knowledge of tools to troubleshoot, observability, logs, etc","Good understanding and knowledge of Internet and Security technologies such as DDoS, Web Application Firewall, Certificates, DNS, CDN, Analytics and Logs","Security aspects of an internet property, such as DNS, WAFs, Bot Management, Rate Limiting, (M)TLS, certificates, OWASP","Performance aspects of an internet property, such as Speed, Latency, Caching, HTTP/3, TLSv1.3"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:53:29.137Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Hybrid"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Developing serverless code in a CI/CD pipeline using an Agile methodology, Layers and protocols of the OSI model, such as TCP/IP, TLS, DNS, HTTP, Scripting languages, Infrastructure as code tools like Terraform, Strong experience with APIs, CI/CD pipelines using Azure DevOps or Git, Implementation and troubleshooting experience, knowledge of tools to troubleshoot, observability, logs, etc, Good understanding and knowledge of Internet and Security technologies such as DDoS, Web Application Firewall, Certificates, DNS, CDN, Analytics and Logs, Security aspects of an internet property, such as DNS, WAFs, Bot Management, Rate Limiting, (M)TLS, certificates, OWASP, Performance aspects of an internet property, such as Speed, Latency, Caching, HTTP/3, TLSv1.3"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_403df3ce-795"},"title":"Principal Product Manager, Gusto Pro","description":"<p>About Gusto</p>\n<p>At Gusto, we&#39;re on a mission to grow the small business economy. We handle the hard stuff , payroll, health insurance, 401(k)s, and HR , so owners can focus on their craft and their customers.</p>\n<p>With teams in Denver, San Francisco, and New York, we support more than 400,000 small businesses nationwide and are building a workplace that reflects the people we serve. All full-time employees receive competitive base pay, benefits, and equity (RSUs) , because everyone who helps build Gusto should share in its success. Offer amounts are determined by role, level, and location. Learn more about our Total Rewards philosophy.</p>\n<p>AI is a fundamental part of how work gets done at Gusto. We expect all team members to actively engage with AI tools relevant to their role and grow their fluency as the technology evolves. AI experience requirements vary by role and will be assessed during the interview process.</p>\n<p>By the Numbers:</p>\n<ul>\n<li>Named #1 best software for small business of 2024 by G2</li>\n<li>2,700+ employees in the United States, Canada, Mexico, and Turkiye and growing</li>\n<li>Over $500M in annual revenue</li>\n</ul>\n<p>What Product Management is like at Gusto:</p>\n<ul>\n<li>Our Product team is lean, which means you’ll have a high degree of impact and ownership. We believe in smaller, empowered teams that move quickly with less overhead. You’ll move fast by pairing sharp product judgment with fluency in AI tools , automating what slows teams down and amplifying what makes them creative and high-performing.</li>\n</ul>\n<ul>\n<li>We’re boundaryless builders. Lines between roles are intentionally blurred, and our PMs do whatever it takes to deliver outcomes. You’ll prototype, automate, design, and ship , using AI as your co-builder to turn problems into durable solutions that deliver customer value with urgency and care.</li>\n</ul>\n<ul>\n<li>We’re here to serve small and medium businesses. Gusto has a strong mission-driven culture, and we care deeply about lifting up these business owners , building technology for an AI-first world that gives them the superpowers to run and grow their businesses with confidence.</li>\n</ul>\n<ul>\n<li>We’re comfortable with change. Our environment moves fast, and PMs here thrive in ambiguity , blending curiosity, experimentation, and AI-native craft to shape how products (and work itself) are built at Gusto.</li>\n</ul>\n<p>About the Team:</p>\n<p>We’re looking for a Principal Product Manager to lead Growth for Gusto Pro, our product specifically built for Accountants. Accountants play a critical role within the Gusto ecosystem. They are one of our largest user groups, but also act as a one-to-many growth engine by referring their small business clients to use Gusto as their payroll provider, generating a substantial portion of our total revenue.</p>\n<p>Here’s what you’ll do day-to-day:</p>\n<ul>\n<li>Ownership: Lead product discovery to uncover/deeply understand customer problems and test risky assumptions</li>\n<li>North Star: Create a long-term vision and strategy that defines the big problems we can solve well with durable competitive advantage</li>\n<li>Collaborate: Partner closely with Engineering, Design, and Data Science on all stages of the product development process, and the Revenue teams to co-create hypotheses and support a sales and product-driven growth engine</li>\n<li>Run A/B tests and lean experiments to quickly prove out hypotheses and test new concepts, always looking for ways to get stronger signal on new ideas</li>\n<li>Rapidly execute and iterate with an emphasis on user delight, impact, and learning. Regularly take risks and make calculated tradeoffs</li>\n<li>Define, measure, and improve key product and business metrics</li>\n</ul>\n<p>Here’s what we&#39;re looking for:</p>\n<ul>\n<li>8+ years of hands-on Product Management experience</li>\n<li>Proven track record of leading growth teams and running clean A/B tests that deliver impactful results</li>\n<li>The intangibles: Natural curiosity, grit, customer obsession, and humility</li>\n<li>Strong product discovery and analytics skills</li>\n<li>The ability to tell a compelling story that drives alignment and inspires action</li>\n<li>Strong collaboration skills; ability to quickly build trust with Revenue and Product teams</li>\n<li>A passion for helping small and medium size businesses</li>\n<li>A strong POV on building products in the age of AI; experience building AI-driven features is a big bonus</li>\n</ul>\n<p>If you don&#39;t think you meet all of the criteria above but still are interested in the job, please apply. Nobody checks every box, and we&#39;re looking for someone excited to join the team.</p>\n<p>Our cash compensation amount for this role is $179,000/yr to $224,000/yr in Denver &amp; most major metro locations, and $210,000/yr to $263,000/yr for San Francisco, Seattle &amp; New York. Final offer amounts are determined by multiple factors including candidate location, experience and expertise and may vary from the amounts listed above.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_403df3ce-795","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Gusto","sameAs":"https://www.gusto.com/","logo":"https://logos.yubhub.co/gusto.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/gusto/jobs/7354943","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$179,000/yr to $224,000/yr in Denver & most major metro locations, and $210,000/yr to $263,000/yr for San Francisco, Seattle & New York","x-skills-required":["Product Management","Growth","Accountants","Customer Problems","Product Discovery","A/B Testing","Lean Experiments","User Delight","Impact","Learning","Natural Curiosity","Grit","Customer Obsession","Humility","Strong Product Discovery","Analytics Skills","Compelling Storytelling","Collaboration","Trust Building","Passion for Small and Medium Size Businesses","Strong POV on Building Products in the Age of AI"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:53:21.797Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA"}},"employmentType":"FULL_TIME","occupationalCategory":"Product Management","industry":"Technology","skills":"Product Management, Growth, Accountants, Customer Problems, Product Discovery, A/B Testing, Lean Experiments, User Delight, Impact, Learning, Natural Curiosity, Grit, Customer Obsession, Humility, Strong Product Discovery, Analytics Skills, Compelling Storytelling, Collaboration, Trust Building, Passion for Small and Medium Size Businesses, Strong POV on Building Products in the Age of AI","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":179000,"maxValue":263000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_a966b1bf-e76"},"title":"Staff Software Engineer, Compute Infrastructure","description":"<p>As a Staff Software Engineer, you will shape the backbone of our GPU-driven data centers,powering some of the most advanced workloads in AI and large-scale computing. This isn&#39;t just about keeping the lights on; it&#39;s about architecting the next generation of reliable, secure, and massively scalable infrastructure.</p>\n<p>The METALDEV team builds and operates a suite of Go-based services that power large-scale datacenter deployments. These platforms automate complex workflows while providing deep observability and monitoring for tens of thousands of GPU servers and diverse infrastructure components,including CDUs, PDUs, and NVLink switches. Our tooling is designed for next-generation rack systems like NVIDIA GB200 and GB300, as well as a broad range of GPU server platforms.</p>\n<p>Your responsibilities will include:</p>\n<ul>\n<li>Providing technical leadership in designing, architecting, and operating large-scale infrastructure services for GPU servers, with a focus on security, reliability, and scalability.</li>\n<li>Building and enhancing infrastructure services and automation, including inventory management systems and lifecycle management solutions using open source technologies.</li>\n<li>Driving strategic direction for infrastructure automation, lifecycle management, and service orchestration, making MetalDev core services more scalable and resilient.</li>\n<li>Defining best practices for API development (REST/gRPC), distributed databases, and Kubernetes orchestration,while mentoring engineers to follow your lead.</li>\n<li>Partnering with hardware, software, and operations teams to align infrastructure with business impact.</li>\n<li>Contributing to open source communities (e.g., Go, Redfish) through collaboration and technical thought leadership.</li>\n<li>Leading and improving CI/CD pipelines for hardware compliance, firmware management, and data systems.</li>\n<li>Championing reliability and operational excellence by driving observability (Prometheus/Grafana), production incident response, and continuous service improvement.</li>\n</ul>\n<p>We&#39;re looking for someone with a strong background in software engineering, particularly in infrastructure, cloud engineering, and distributed databases. You should have experience with Go and a proven track record of building REST/gRPC APIs for mission-critical platforms. Additionally, you should be familiar with architecting and scaling cloud-native Kubernetes infrastructure and distributed services.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_a966b1bf-e76","directApply":true,"hiringOrganization":{"@type":"Organization","name":"CoreWeave","sameAs":"https://www.coreweave.com","logo":"https://logos.yubhub.co/coreweave.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/coreweave/jobs/4603505006","x-work-arrangement":"hybrid","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"$188,000 to $275,000","x-skills-required":["Go","REST/gRPC","Distributed databases","Kubernetes orchestration","API development","Infrastructure services","Automation","Inventory management","Lifecycle management","CI/CD pipelines","Hardware compliance","Firmware management","Data systems","Observability","Production incident response","Continuous service improvement"],"x-skills-preferred":["Kafka","ClickHouse","CRDB","DMTF","RedFish APIs","GPU servers"],"datePosted":"2026-04-18T15:53:06.173Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Manhattan, NY / Sunnyvale, CA / Bellevue, WA / Livingston, NJ"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Go, REST/gRPC, Distributed databases, Kubernetes orchestration, API development, Infrastructure services, Automation, Inventory management, Lifecycle management, CI/CD pipelines, Hardware compliance, Firmware management, Data systems, Observability, Production incident response, Continuous service improvement, Kafka, ClickHouse, CRDB, DMTF, RedFish APIs, GPU servers","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":188000,"maxValue":275000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_d70a8194-b84"},"title":"Software Engineer, Machine Learning","description":"<p>We are seeking a versatile and experienced Machine Learning / AI Engineer to join our growing AI team, working at the intersection of applied machine learning, infrastructure, and product innovation. Your work will drive user productivity, shape new product experiences, and advance the state of AI at Figma.</p>\n<p>As a Machine Learning / AI Engineer, you will design, build, and productionize ML models for Search, Discovery, Ranking, Retrieval-Augmented Generation (RAG), and generative AI features. You will also build and maintain scalable data pipelines to collect high-quality training and evaluation datasets, including annotation systems and human-in-the-loop workflows.</p>\n<p>You will collaborate closely with engineers, researchers, designers, and product managers across multiple teams to deliver high-quality ML-driven features and infrastructure. This is a high-impact, cross-functional role where you will shape both foundational systems and user-facing capabilities.</p>\n<p>Key responsibilities include:</p>\n<ul>\n<li>Design, build, and productionize ML models for Search, Discovery, Ranking, Retrieval-Augmented Generation (RAG), and generative AI features.</li>\n<li>Build and maintain scalable data pipelines to collect high-quality training and evaluation datasets, including annotation systems and human-in-the-loop workflows.</li>\n<li>Collaborate with AI researchers to iterate on datasets, evaluation metrics, and model architectures to improve quality and relevance.</li>\n<li>Work with product engineers to define and deliver impactful AI features across Figma&#39;s platform.</li>\n<li>Partner with infrastructure engineers to develop and optimize systems for training, inference, monitoring, and deployment.</li>\n<li>Explore new ideas at the edge of what&#39;s technically possible and help shape the long-term AI vision at Figma.</li>\n</ul>\n<p>Requirements include:</p>\n<ul>\n<li>5+ years of industry experience in software engineering, with 3+ years focused on applied machine learning or AI.</li>\n<li>Strong experience with end-to-end ML model development, including training, evaluation, deployment, and monitoring.</li>\n<li>Proficiency in Python and familiarity with ML libraries like PyTorch, TensorFlow, Scikit-learn, Spark MLlib, or XGBoost.</li>\n<li>Experience designing and building scalable data and annotation pipelines, as well as evaluation systems for AI model quality.</li>\n<li>Experience mentoring or leading others and contributing to a culture of technical excellence and innovation.</li>\n</ul>\n<p>Preferred qualifications include:</p>\n<ul>\n<li>Familiarity with search relevance, ranking, NLP, or RAG systems.</li>\n<li>Experience with AI infrastructure and MLOps, including observability, CI/CD, and automation for ML workflows.</li>\n<li>Experience working on creative or design-focused ML applications.</li>\n<li>Knowledge of additional languages such as C++ or Go is a plus, but not required.</li>\n<li>A product mindset with the ability to tie technical work to user outcomes and business impact.</li>\n<li>Strong collaboration and communication skills, especially when working across functions (engineering, product, research).</li>\n</ul>\n<p>At Figma, one of our values is Grow as you go. We believe in hiring smart, curious people who are excited to learn and develop their skills. If you&#39;re excited about this role but your past experience doesn&#39;t align perfectly with the points outlined in the job description, we encourage you to apply anyways. You may be just the right candidate for this or other roles.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_d70a8194-b84","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Figma","sameAs":"https://www.figma.com/","logo":"https://logos.yubhub.co/figma.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/figma/jobs/5551532004","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$153,000-$376,000 USD","x-skills-required":["Machine Learning","AI","Python","PyTorch","TensorFlow","Scikit-learn","Spark MLlib","XGBoost","Data Pipelines","Annotation Systems","Human-in-the-loop Workflows"],"x-skills-preferred":["Search Relevance","Ranking","NLP","RAG Systems","AI Infrastructure","MLOps","Observability","CI/CD","Automation","Creative or Design-Focused ML Applications"],"datePosted":"2026-04-18T15:53:04.257Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA • New York, NY • United States"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Machine Learning, AI, Python, PyTorch, TensorFlow, Scikit-learn, Spark MLlib, XGBoost, Data Pipelines, Annotation Systems, Human-in-the-loop Workflows, Search Relevance, Ranking, NLP, RAG Systems, AI Infrastructure, MLOps, Observability, CI/CD, Automation, Creative or Design-Focused ML Applications","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":153000,"maxValue":376000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_d2b84e18-355"},"title":"Enterprise Account Executive - Expand - Southeast","description":"<p>We&#39;re looking for a high-energy Enterprise Account Executive to drive net-new revenue and expansion within strategic Enterprise accounts. You&#39;ll be the owner of a defined territory where you&#39;ll build your own pipeline, tell the Elastic Search AI story, and close complex, multi-stakeholder deals in a consumption-based model.</p>\n<p>This role sits at the intersection of sales execution, technical fluency, and cross-functional collaboration,and is critical to our growth in the Enterprise segment.</p>\n<p><strong>Key Responsibilities:</strong></p>\n<ul>\n<li>Own your territory &amp; build pipeline: Develop and execute a proactive outbound cadence (email, call, social) that generates ≥50 % of your booked opportunities.</li>\n<li>Deep discovery &amp; qualification: Uncover pain, business impact, budget, and decision criteria using frameworks like MEDDPICC so you chase only the highest-confidence deals.</li>\n<li>Value storytelling &amp; demos: Craft and deliver tailored narratives and live demos that map Elastic’s Search, Observability, and Security capabilities to measurable business outcomes.</li>\n<li>Mutual deal strategy &amp; forecast accuracy: Collaborate with customers to build formal close plans and keep your CRM up-to-date, maintaining ≥90 % forecast accuracy within ±10 %.</li>\n<li>Executive negotiation &amp; closing: Lead high-stakes contract and pricing discussions,defend your value, structure give/get trades, and land multi-year consumption commitments.</li>\n<li>Domain &amp; cloud acumen: Position Elastic as the Search AI platform of choice by speaking fluently about cloud economics, usage-based pricing, and modern data architectures.</li>\n<li>Cross-functional partnership: Work hand-in-glove with Solutions Architects, Customer Success, Marketing, and RevOps to accelerate deals and drive exceptional customer outcomes.</li>\n</ul>\n<p><strong>What You Bring:</strong></p>\n<ul>\n<li>Proven SaaS quota-carrying success: 5+ years closing complex Enterprise deals, consistently overachieving targets in a consumption-based or usage-model environment.</li>\n<li>Expert discovery &amp; qualification skills: Demonstrated ability to apply MEDDPICC or equivalent frameworks to drive disciplined pipeline and eliminate low-probability deals.</li>\n<li>Compelling value storytellers: Track record of delivering executive-level presentations and demos that tie product capabilities to real dollars saved, revenue gained, or risk mitigated.</li>\n<li>Strong negotiation chops: History of landing multi-year, high-ACV contracts while protecting margin and securing executive stakeholder buy-in.</li>\n<li>Technical &amp; cloud fluency: Comfortable discussing a broad range of technical topics including observability, security, vector/traditional search, and cloud cost optimization.</li>\n<li>Collaborative mindset &amp; coachability: A learner who partners effectively with internal teams, incorporates feedback, and embodies Elastic’s values of community and openness.</li>\n<li>Open Source enthusiasm: Genuine appreciation for open-source communities and the Elastic model,bonus if you’ve sold or advocated in an OSS context.</li>\n</ul>\n<p><strong>Bonus Points:</strong></p>\n<ul>\n<li>Prior experience at an open-source or developer-centric infrastructure company.</li>\n<li>Familiarity with observability (logs, metrics, traces) or security analytics (SIEM/XDR) use cases.</li>\n</ul>\n<p>If you’re driven to build your own pipeline, master complex deal cycles, and help customers unlock the power of Search AI, we’d love to talk. Apply today!</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_d2b84e18-355","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Elastic, the Search AI Company","sameAs":"https://www.elastic.co/","logo":"https://logos.yubhub.co/elastic.co.png"},"x-apply-url":"https://job-boards.greenhouse.io/elastic/jobs/7707951","x-work-arrangement":"remote","x-experience-level":"executive","x-job-type":"full-time","x-salary-range":"$113,300-$179,200 USD","x-skills-required":["Proven SaaS quota-carrying success","Expert discovery & qualification skills","Compelling value storytellers","Strong negotiation chops","Technical & cloud fluency"],"x-skills-preferred":["Open Source enthusiasm","Prior experience at an open-source or developer-centric infrastructure company","Familiarity with observability (logs, metrics, traces) or security analytics (SIEM/XDR) use cases"],"datePosted":"2026-04-18T15:53:00.849Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Florida, United States"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Sales","industry":"Technology","skills":"Proven SaaS quota-carrying success, Expert discovery & qualification skills, Compelling value storytellers, Strong negotiation chops, Technical & cloud fluency, Open Source enthusiasm, Prior experience at an open-source or developer-centric infrastructure company, Familiarity with observability (logs, metrics, traces) or security analytics (SIEM/XDR) use cases","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":113300,"maxValue":179200,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_5c23df84-652"},"title":"Obstetrics Care Provider (CNM or NP - TN License)","description":"<p><strong>Job Summary</strong></p>\n<p>As an Obstetrics Care Provider at Pomelo Care, you will provide direct patient care and clinical oversight that optimizes outcomes for pregnant people and newborns through population-based implementation of evidence-based care.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Be accountable for improving clinical outcomes for empaneled patients, by overseeing their medical care</li>\n<li>Lead daily clinical huddles focused on collaboration across a clinical pod, including RNs, LCSW, and RDs</li>\n<li>Review complex patient cases, develop care plans, and support other members of the clinical team in providing them with evidence-based care</li>\n<li>Monitor adverse events and hold clinical retros to identify any areas for improvement in Pomelo&#39;s protocols</li>\n<li>Lead development and review of evidence-based medical protocols and algorithms related to obstetric and women&#39;s health</li>\n<li>Actively participate in on-call schedules including overnight and on weekends</li>\n</ul>\n<p><strong>Requirements</strong></p>\n<ul>\n<li>Active APP License in Tennessee and active compact RN license</li>\n<li>Extensive obstetric experience (minimum 4 years experience), including treating high-risk patients, as well as some experience caring for infants</li>\n<li>A passion for and demonstrated effectiveness in optimizing evidence-based care and perinatal outcomes</li>\n<li>Experience using data to drive patient engagement, activation, and outcomes</li>\n<li>Experience leading successful teams, with track record of outstanding collaboration and teamwork</li>\n<li>A sense of urgency to improve outcomes coupled with exceptional organization and attention to detail</li>\n<li>A growth mindset with the ability to approach process change and ambiguous situations with enthusiasm, creativity, and accountability</li>\n<li>Facility using multiple tech platforms, with an eagerness for advising about platform improvements and adapting to new systems</li>\n<li>Eager to thrive in a fast-paced, metric-driven environment</li>\n<li>Phenomenal interpersonal and communication skills</li>\n</ul>\n<p><strong>Education and Training</strong></p>\n<ul>\n<li>NP or CNM with significant experience in obstetrics and some experience in infant care</li>\n<li>Active, unrestricted license to practice in TN and willingness to obtain licenses in all US states</li>\n</ul>\n<p><strong>Bonus Points</strong></p>\n<ul>\n<li>Telehealth and/or remote monitoring experience</li>\n<li>Experience in outpatient or home-based management of higher-risk patients</li>\n</ul>\n<p><strong>Why You Should Join Our Team</strong></p>\n<p>By joining Pomelo, you will get in on the ground floor of a fast-moving, well-funded, and mission-driven startup where you will have a profound impact on the patients we serve. And you&#39;ll learn, grow, be challenged, and have fun with your team while doing it.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_5c23df84-652","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Pomelo Care","sameAs":"https://www.pomelocare.com","logo":"https://logos.yubhub.co/pomelocare.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/pomelocare/jobs/5421821004","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["obstetrics","gynecology","pediatrics","telehealth","remote monitoring","data analysis","team leadership","communication","interpersonal skills"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:52:59.853Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"United States"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Healthcare","industry":"Healthcare","skills":"obstetrics, gynecology, pediatrics, telehealth, remote monitoring, data analysis, team leadership, communication, interpersonal skills"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_e8e9acc0-a63"},"title":"Technical Program Manager, Compute","description":"<p>As a Technical Program Manager on the Compute team, you will help drive the planning, coordination, and execution of programs that keep Anthropic&#39;s compute infrastructure running efficiently at scale.</p>\n<p>Our compute fleet is the foundation on which every model training run, evaluation, and inference workload depends. You&#39;ll join a small, high-impact TPM team and take ownership of critical workstreams across the compute lifecycle, from how supply is procured and brought online, to how capacity is allocated and utilized across teams.</p>\n<p>You&#39;ll partner with Infrastructure, Systems, Research, Finance, and Capacity Engineering to shape the processes, tooling, and coordination mechanisms that allow Anthropic to move fast while managing an increasingly complex compute environment.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Own and drive critical programs across the compute lifecycle, coordinating execution across multiple engineering, research, and operations teams</li>\n<li>Build and maintain operational visibility into the compute fleet, ensuring the organization has a clear picture of supply, demand, utilization, and health</li>\n<li>Lead cross-functional coordination for compute transitions: bringing new capacity online, migrating workloads, and managing decommissions across cloud providers and hardware platforms</li>\n<li>Partner with engineering and research leadership to navigate competing priorities and drive alignment on how compute resources are planned, allocated, and used</li>\n<li>Identify and close operational gaps across the compute pipeline, whether through new tooling, improved processes, or better cross-team communication</li>\n<li>Own trade-off discussions between utilization, cost, latency, and reliability, synthesizing inputs from technical and business stakeholders and communicating decisions to leadership</li>\n<li>Develop and improve the processes and frameworks the team uses to plan, track, and execute compute programs at increasing scale and complexity</li>\n</ul>\n<p>You may be a good fit if you:</p>\n<ul>\n<li>Have 7+ years of technical program management experience in infrastructure, platform engineering, or compute-intensive environments</li>\n<li>Have led complex, cross-functional programs involving multiple engineering teams with competing priorities and ambiguous requirements</li>\n<li>Have experience working with research or ML teams and translating their needs into operational plans and technical requirements</li>\n<li>Are comfortable diving deep into technical details (cloud infrastructure, cluster management, job scheduling, resource orchestration) while maintaining program-level visibility</li>\n<li>Thrive in ambiguous, fast-moving environments where you need to define scope and build processes from the ground up</li>\n<li>Have strong communication skills and can engage credibly with engineers, researchers, finance, and executive leadership</li>\n<li>Have a track record of building trust with engineering teams and driving changes through influence rather than authority</li>\n</ul>\n<p>Strong candidates may also have:</p>\n<ul>\n<li>Experience managing compute capacity across multiple cloud providers (AWS, GCP, Azure) or hybrid cloud/on-premises environments</li>\n<li>Familiarity with job scheduling, resource orchestration, or workload management systems (Kubernetes, Slurm, Borg, YARN, or custom schedulers)</li>\n<li>Experience with GPU or accelerator infrastructure, including the unique challenges of large-scale ML training and inference workloads</li>\n<li>Built or improved observability for infrastructure systems: dashboards, alerting, efficiency metrics, or cost attribution</li>\n<li>Capacity planning experience including demand forecasting, cost modeling, or hardware lifecycle management</li>\n<li>Scaled through hypergrowth in AI/ML, HPC, or large-scale cloud environments</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_e8e9acc0-a63","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com/","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5138044008","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$290,000-$365,000 USD","x-skills-required":["Technical Program Management","Compute Infrastructure","Cloud Providers","Job Scheduling","Resource Orchestration","Workload Management","GPU or Accelerator Infrastructure","Observability","Capacity Planning"],"x-skills-preferred":["Kubernetes","Slurm","Borg","YARN","Custom Schedulers","Demand Forecasting","Cost Modeling","Hardware Lifecycle Management"],"datePosted":"2026-04-18T15:52:47.770Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA | New York City, NY | Seattle, WA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Technical Program Management, Compute Infrastructure, Cloud Providers, Job Scheduling, Resource Orchestration, Workload Management, GPU or Accelerator Infrastructure, Observability, Capacity Planning, Kubernetes, Slurm, Borg, YARN, Custom Schedulers, Demand Forecasting, Cost Modeling, Hardware Lifecycle Management","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":290000,"maxValue":365000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_9a879854-4d5"},"title":"Manager, Software Engineering - Creation Engine","description":"<p>We&#39;re growing our team of passionate creatives and builders on a mission to make design accessible to all.</p>\n<p>Our Creation Engine teams work on some of the core technologies that power our real-time, browser-based Figma Design and FigJam products. These teams work mostly (but not exclusively) on client-side code that runs in the browser.</p>\n<p>Under the hood, Figma shares a lot of similarities with a game engine. We develop this C++/WebAssembly engine to ensure that internal and external developers can rapidly build new products and features that are fast and reliable by default.</p>\n<p>This team supports a broad scope of testing and logging frameworks used widely across Figma. This includes innovative performance testing tools with careful attention to signal-to-noise ratio as well as deep investment in observability systems.</p>\n<p>As a Manager, Software Engineering - Creation Engine, you will:</p>\n<p>Manage and support a team of experienced engineers to deliver best-in-class testing and observability frameworks for Figma client developers Partner with product, data science, and engineering leadership to set strategy, priorities and mission for teams and projects Roll up your sleeves as needed to get involved in the technical details and operational strategy Engage on broader company programs to up-level the team’s work on performance &amp; quality Build and support a culture of doing great work together for our engineering team by investing in team culture, mentorship, and meaningful work Grow your career in a collaborative and creative engineering community</p>\n<p>We’d love to hear from you if you have:</p>\n<p>4+ years of engineering management experience leading high-output, high-performing teams, with 4+ years as a hands-on engineer Proven leadership in building, mentoring, and motivating senior engineers while maintaining a high technical bar and recruiting top talent Deeply passionate about the testing, observability, and tooling space, with proven hands-on or leadership experience driving impact in these areas Demonstrated success delivering scalable, high-quality work and driving cross-functional initiatives in fast-paced, ambiguous environments Empathetic leader with strong organizational and execution skills, enabling platform teams that support and accelerate many others</p>\n<p>While it’s not required, it’s an added plus if you also have:</p>\n<p>Deep technical knowledge in the relevant domains, with existing understanding of best practices for testing and observability Experience managing other managers and growing teams Experience advocating for and successfully adopting a performance-mindset culture across an entire company</p>\n<p>At Figma, one of our values is Grow as you go. We believe in hiring smart, curious people who are excited to learn and develop their skills. If you’re excited about this role but your past experience doesn’t align perfectly with the points outlined in the job description, we encourage you to apply anyways. You may be just the right candidate for this or other roles.</p>\n<p>Pay Transparency Disclosure If based in Figma’s San Francisco or New York hub offices, this role has the annual base salary range stated below. Job level and actual compensation will be decided based on factors including, but not limited to, individual qualifications objectively assessed during the interview process (including skills and prior relevant experience, potential impact, and scope of role), market demands, and specific work location. The listed range is a guideline, and the range for this role may be modified. For roles that are available to be filled remotely, the pay range is localized according to employee work location by a factor of between 80% and 100% of range. Please discuss your specific work location with your recruiter for more information.</p>\n<p>Figma offers equity to employees, as well a competitive package of additional benefits, including health, dental &amp; vision, retirement with company contribution, parental leave &amp; reproductive or family planning support, mental health &amp; wellness benefits, generous PTO, company recharge days, a learning &amp; development stipend, a work from home stipend, and cell phone reimbursement. Figma also offers sales incentive pay for most sales roles and an annual bonus plan for eligible non-sales roles. Figma’s compensation and benefits are subject to change and may be modified in the future.</p>\n<p>Annual Base Salary Range: $258,000-$376,000 USD</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_9a879854-4d5","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Figma","sameAs":"https://www.figma.com/","logo":"https://logos.yubhub.co/figma.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/figma/jobs/5696366004","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$258,000-$376,000 USD","x-skills-required":["C++","WebAssembly","Testing","Observability","Performance","Quality","Leadership","Management","Engineering"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:52:46.768Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA • New York, NY • United States"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"C++, WebAssembly, Testing, Observability, Performance, Quality, Leadership, Management, Engineering","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":258000,"maxValue":376000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_c0569537-539"},"title":"Staff Backend Engineer, Gitlab Delivery: Upgrades","description":"<p>As a Staff Engineer on the GitLab Delivery - Upgrades team, you&#39;ll guide the technical direction for GitLab&#39;s self-managed deployment strategy so customers can deploy, upgrade, and run GitLab reliably in their own infrastructure with minimal disruption.</p>\n<p>You&#39;ll serve as a technical anchor for the team, working closely with your engineering manager, product manager, and partners across Site Reliability Engineering, Release, Security, and Development to shape cloud-native, operator-driven deployment patterns that reduce operational complexity and upgrade friction.</p>\n<p>In your first year, you&#39;ll help define the architecture for zero-downtime upgrades, strengthen observability and reliability practices, and guide the next generation of deployment automation for self-managed GitLab environments.</p>\n<p>Some examples of our projects:</p>\n<ul>\n<li>Evolving GitLab Operator and Helm charts to support zero-downtime upgrades for complex, stateful GitLab installations</li>\n</ul>\n<ul>\n<li>Advancing the GitLab Environment Toolkit to simplify large-scale, production-ready self-managed deployments</li>\n</ul>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Guide the technical vision and architecture for GitLab&#39;s cloud-native, self-managed deployments and upgrade workflows.</li>\n</ul>\n<ul>\n<li>Establish operational maturity standards, service integration patterns, and deployment models that help development teams manage the lifecycle of their components.</li>\n</ul>\n<ul>\n<li>Design and maintain Kubernetes Operators, Helm charts, and upgrade orchestration tooling for self-managed GitLab deployments across varied environments.</li>\n</ul>\n<ul>\n<li>Develop automation and integration frameworks for database migrations, rolling deployments, compatibility checks, and rollback paths.</li>\n</ul>\n<ul>\n<li>Define database and application lifecycle strategies, including safe PostgreSQL migration approaches and validation mechanisms that reduce downtime risk.</li>\n</ul>\n<ul>\n<li>Work with Product Management, GitLab.com Site Reliability Engineering, GitLab Dedicated, and development teams to align deployment patterns with customer needs.</li>\n</ul>\n<ul>\n<li>Mentor engineers and enable customer-facing teams through design reviews, code reviews, documentation, and runbooks.</li>\n</ul>\n<ul>\n<li>Drive observability, testing, performance, and resilience practices for self-managed deployments, and contribute to incident response and post-incident learning.</li>\n</ul>\n<p><strong>Requirements</strong></p>\n<ul>\n<li>Strong software engineering experience designing and delivering production systems that customers install and operate in their own infrastructure.</li>\n</ul>\n<ul>\n<li>Proficiency in Go for large, complex codebases, with familiarity with Ruby on Rails and Rails application architecture as a useful addition.</li>\n</ul>\n<ul>\n<li>Hands-on experience with Kubernetes in production, including building and maintaining Operators, designing Helm charts for stateful applications, and working with Custom Resource Definitions, admission controllers, and controller patterns.</li>\n</ul>\n<ul>\n<li>Knowledge of cloud-native systems and tooling, such as service mesh, observability stacks, infrastructure as code, and automation tools like Terraform or Ansible.</li>\n</ul>\n<ul>\n<li>Experience with stateful workloads and databases, including PostgreSQL schema design and migrations, persistent volumes, storage classes, and approaches for reducing downtime during upgrades.</li>\n</ul>\n<ul>\n<li>Understanding of Linux systems and production operations, including package management, systemd, system-level debugging, observability, incident response, and on-call participation.</li>\n</ul>\n<ul>\n<li>Ability to guide through influence, including writing clear technical proposals, documenting decisions, mentoring engineers, and working effectively across teams.</li>\n</ul>\n<ul>\n<li>Interest in open source infrastructure or deployment tooling, or transferable experience from adjacent domains, with the ability to explain technical concepts clearly to different audiences.</li>\n</ul>\n<p><strong>About the Team</strong></p>\n<p>The Delivery - Upgrades team sits within GitLab Delivery and focuses on delivering GitLab to self-managed users through supported, validated deployment tooling. We own and evolve the GitLab Omnibus package, Helm charts, GitLab Operator, and the GitLab Environment Toolkit, and we work asynchronously across regions with partners in Site Reliability Engineering, Release, Security, and Development.</p>\n<p>Our work centers on enabling zero-downtime upgrades, reducing operational complexity at scale, supporting GitLab’s cloud-native transition while continuing to serve existing deployments, and improving the upgrade experience for customers running GitLab in diverse environments.</p>\n<p>For more on how we work, see [Link: Team Handbook Page].</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_c0569537-539","directApply":true,"hiringOrganization":{"@type":"Organization","name":"GitLab","sameAs":"https://about.gitlab.com/","logo":"https://logos.yubhub.co/about.gitlab.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/gitlab/jobs/8463922002","x-work-arrangement":"remote","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Go","Ruby on Rails","Kubernetes","Cloud-native systems","Service mesh","Observability stacks","Infrastructure as code","Automation tools","Linux systems","Production operations","Package management","Systemd","System-level debugging","Incident response","On-call participation"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:52:40.073Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Remote, India"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Go, Ruby on Rails, Kubernetes, Cloud-native systems, Service mesh, Observability stacks, Infrastructure as code, Automation tools, Linux systems, Production operations, Package management, Systemd, System-level debugging, Incident response, On-call participation"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_09e766cb-2a4"},"title":"Software Engineer, Enterprise Integrations","description":"<p>Aboutfrica</p>\n<p>At Cloudflare, we are on a mission to help build a better Internet. We protect and accelerate any Internet application online without adding hardware, installing software, or changing a line of code. Internet properties powered by Cloudflare all have web traffic routed through its intelligent global network, which gets smarter with every request.</p>\n<p>Available Locations: Austin Texas</p>\n<p>About the Department</p>\n<p>Cloudflare&#39;s Enterprise Integrations Engineering Team designs, builds, and maintains integrations across a wide range of SaaS applications used throughout the organization. Our mission is to create scalable, reliable, and maintainable systems that ensure data flows securely and efficiently between platforms.</p>\n<p>What You&#39;ll Do</p>\n<p>We&#39;re looking for a software engineer to join our Enterprise Integrations Team. You&#39;ll work on building and maintaining integration workflows between Cloudflare and a variety of SaaS applications. This includes taking work from concept through implementation, including gathering requirements, writing technical specifications, development, testing, and deployment. You&#39;ll collaborate closely with internal teams to ensure integrations meet business needs and are built following engineering best practices. As you grow in the role, you&#39;ll have the opportunity to lead larger initiatives and own projects from end to end.</p>\n<p>Qualifications &amp; Skills Required:</p>\n<ul>\n<li>Bachelor’s degree in Computer Science or a related field, or equivalent work experience</li>\n<li>Minimum of 5 years of professional experience as a software engineer</li>\n<li>Experience working with internal stakeholders to solve business problems through integration solutions</li>\n<li>Proficiency in Golang</li>\n<li>Experience building RESTful APIs with proper service security practices</li>\n<li>Experience working with observability tools such as Grafana, Prometheus, Sentry, or Kibana</li>\n<li>Experience with Kubernetes</li>\n<li>Experience with GitLab or other CI/CD tools</li>\n</ul>\n<p>Nice to Have:</p>\n<ul>\n<li>Experience working with ERP systems such as Oracle or NetSuite</li>\n<li>Experience working in an Agile Scrum environment</li>\n<li>Familiarity with tools like Jira and Confluence</li>\n<li>Familiarity with integration patterns such as pub/sub, CDM (Common Data Model), and batch processing</li>\n<li>Experience working with PostgreSQL</li>\n<li>Experience with Cloudflare Developer’s Platform</li>\n</ul>\n<p>What Makes Cloudflare Special?</p>\n<p>We’re not just a highly ambitious, large-scale technology company. We’re a highly ambitious, large-scale technology company with a soul. Fundamental to our mission to help build a better Internet is protecting the free and open Internet.</p>\n<p>Project Galileo: Since 2014, we&#39;ve equipped more than 2,400 journalism and civil society organizations in 111 countries with powerful tools to defend themselves against attacks that would otherwise censor their work, technology already used by Cloudflare’s enterprise customers--at no cost.</p>\n<p>Athenian Project: In 2017, we created the Athenian Project to ensure that state and local governments have the highest level of protection and reliability for free, so that their constituents have access to election information and voter registration. Since the project, we&#39;ve provided services to more than 425 local government election websites in 33 states.</p>\n<p>1.1.1.1: We released 1.1.1.1 to help fix the foundation of the Internet by building a faster, more secure and privacy-centric public DNS resolver. This is available publicly for everyone to use - it is the first consumer-focused service Cloudflare has ever released.</p>\n<p>Here’s the deal - we don’t store client IP addresses never, ever. We will continue to abide by our privacy commitment and ensure that no user data is sold to advertisers or used to target consumers.</p>\n<p>Sound like something you’d like to be a part of? We’d love to hear from you!</p>\n<p>This position may require access to information protected under U.S. export control laws, including the U.S. Export Administration Regulations. Please note that any offer of employment may be conditioned on your authorization to receive software or technology controlled under these U.S. export laws without sponsorship for an export license.</p>\n<p>Cloudflare is proud to be an equal opportunity employer. We are committed to providing equal employment opportunity for all people and place great value in both diversity and inclusiveness. All qualified applicants will be considered for employment without regard to their, or any other person&#39;s, perceived or actual race, color, religion, sex, gender, gender identity, gender expression, sexual orientation, national origin, ancestry, citizenship, age, physical or mental disability, medical condition, family care status, or any other basis protected by law.</p>\n<p>We are an AA/Veterans/Disabled Employer. Cloudflare provides reasonable accommodations to qualified individuals with disabilities. Please tell us if you require a reasonable accommodation to apply for a job. Examples of reasonable accommodations include, but are not limited to, changing the application process, providing documents in an alternate format, using a sign language interpreter, or using specialized equipment. If you require a reasonable accommodation to apply for a job, please contact us via e-mail at hr@cloudflare.com or via mail at 101 Townsend St. San Francisco, CA 94107.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_09e766cb-2a4","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Cloudflare","sameAs":"https://www.cloudflare.com/","logo":"https://logos.yubhub.co/cloudflare.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/cloudflare/jobs/7336735","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Golang","RESTful APIs","Observability tools","Kubernetes","GitLab"],"x-skills-preferred":["ERP systems","Agile Scrum","Jira","Confluence","Integration patterns","PostgreSQL","Cloudflare Developer’s Platform"],"datePosted":"2026-04-18T15:52:36.450Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Hybrid"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Golang, RESTful APIs, Observability tools, Kubernetes, GitLab, ERP systems, Agile Scrum, Jira, Confluence, Integration patterns, PostgreSQL, Cloudflare Developer’s Platform"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_9c1bbf0d-969"},"title":"Backend Engineer","description":"<p>We&#39;re looking for a skilled Backend Engineer to join our team. As a Backend Engineer, you will work on xAI&#39;s production systems that power the API. You will design, implement, and maintain reliable and horizontally scalable distributed systems. Our backend infrastructure is written in Rust, so familiarity with a compiled language such as C++, Rust, or Go is highly beneficial.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Design, implement, and maintain reliable and horizontally scalable distributed systems</li>\n<li>Work closely with the team to identify and solve pain points</li>\n<li>Collaborate with the team to ensure high-quality code and architecture</li>\n<li>Participate in code reviews and contribute to the improvement of the codebase</li>\n</ul>\n<p>Requirements:</p>\n<ul>\n<li>Expert knowledge of either Rust or C++</li>\n<li>Experience in designing, implementing, and maintaining reliable and horizontally scalable distributed systems</li>\n<li>Knowledge of service observability and reliability best practices</li>\n<li>Experience in operating commonly used databases such as PostgreSQL, Clickhouse, and MongoDB</li>\n</ul>\n<p>Preferred Skills and Experience:</p>\n<ul>\n<li>Knowledge of Python</li>\n<li>Experience with Docker, Kubernetes, and containerized applications</li>\n<li>Expert knowledge of gRPC (unary, response streaming, bi-directional streaming, REST mapping)</li>\n<li>Hands-on experience with LLM APIs, embeddings, or RAG patterns</li>\n<li>Track record of delivering user-facing software at scale</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_9c1bbf0d-969","directApply":true,"hiringOrganization":{"@type":"Organization","name":"xAI","sameAs":"https://x.ai/","logo":"https://logos.yubhub.co/x.ai.png"},"x-apply-url":"https://job-boards.greenhouse.io/xai/jobs/4991448007","x-work-arrangement":"remote","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Rust","C++","Distributed Systems","Service Observability","Database Management"],"x-skills-preferred":["Python","Docker","Kubernetes","gRPC","LLM APIs"],"datePosted":"2026-04-18T15:52:25.607Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"London, UK"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Rust, C++, Distributed Systems, Service Observability, Database Management, Python, Docker, Kubernetes, gRPC, LLM APIs"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_32f598af-109"},"title":"Solution Architect","description":"<p>Are you looking to make a real impact and play a meaningful role in the growth of our company? As a Solutions Architect (SA) at Elastic, you will serve as a technical authority and trusted advisor to our sales team, customers, partners, and community. You will understand and solve our customers&#39; business issues with the Elastic Stack, and engage the regional Elastic community through events and programs.</p>\n<p>Your key responsibilities will include:</p>\n<ul>\n<li>Serving as the technical point of contact for some of our clients</li>\n<li>Developing a deep understanding of customers&#39; goals and objectives, and articulating how our offerings address their needs</li>\n<li>Creating and owning value-based relationships at all levels in customer organisations</li>\n<li>Actively participating in all phases of planning and execution for your territory, from initial discovery to the technical win</li>\n<li>Developing and maintaining a deep understanding of the Elastic products and solutions to demonstrate the value of our offerings in sales meetings and at events such as meetups and conferences</li>\n<li>Advising the sales team on effective ways of positioning Elastic products, solutions, and services</li>\n<li>Onboarding, educating, and enabling our partners, and supporting them in sales cycles</li>\n<li>Creating collateral, contributing to programs, and collaborating with other Elasticians to meet individual client needs</li>\n<li>Being the voice of the customer and community to communicate needs, gaps, and enhancements to our engineering and leadership teams</li>\n</ul>\n<p>In return, we offer a competitive salary and benefits package, including health coverage, flexible locations and schedules, generous vacation days, and a matching program for financial donations and service.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_32f598af-109","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Elastic","sameAs":"https://www.elastic.co/","logo":"https://logos.yubhub.co/elastic.co.png"},"x-apply-url":"https://job-boards.greenhouse.io/elastic/jobs/7346237","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["AI enterprise search","observability","cybersecurity","technical presales","customer relations"],"x-skills-preferred":["continuous learning","influencing","inspiring groups"],"datePosted":"2026-04-18T15:52:18.173Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"United Kingdom"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"AI enterprise search, observability, cybersecurity, technical presales, customer relations, continuous learning, influencing, inspiring groups"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_709b405a-48b"},"title":"Staff / Senior Software Engineer, AI Reliability","description":"<p>We&#39;re seeking a Staff / Senior Software Engineer, AI Reliability to join our team. As a key member of our AIRE (AI Reliability Engineering) team, you will partner with teams across Anthropic to improve reliability across our most critical serving paths. You will develop Service Level Objectives for large language model serving systems, design and implement monitoring and observability systems, assist in the design and implementation of high-availability serving infrastructure, lead incident response for critical AI services, and support the reliability of safeguard model serving.</p>\n<p>You may be a good fit for this role if you have strong distributed systems, infrastructure, or reliability backgrounds, are curious and brave, think holistically about how systems compose and where the seams are, can build lasting relationships across teams, care about users and feel ownership over outcomes, have excellent communication and collaboration skills, and bring diverse experience.</p>\n<p>Strong candidates may also have experience operating large-scale model serving or training infrastructure, experience with one or more ML hardware accelerators, understanding of ML-specific networking optimizations, expertise in AI-specific observability tools and frameworks, experience with chaos engineering and systematic resilience testing, and contributions to open-source infrastructure or ML tooling.</p>\n<p>We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues. We value impact and believe that the highest-impact AI research will be big science. We work as a single cohesive team on just a few large-scale research efforts and value communication skills.</p>\n<p>If you&#39;re interested in this role, please submit an application even if you don&#39;t believe you meet every single qualification. We encourage diversity and strive to include a range of diverse perspectives on our team.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_709b405a-48b","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com/","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5113224008","x-work-arrangement":"hybrid","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"$325,000-$485,000 USD","x-skills-required":["distributed systems","infrastructure","reliability","Service Level Objectives","monitoring and observability systems","high-availability serving infrastructure","incident response","safeguard model serving"],"x-skills-preferred":["large-scale model serving or training infrastructure","ML hardware accelerators","ML-specific networking optimizations","AI-specific observability tools and frameworks","chaos engineering and systematic resilience testing","open-source infrastructure or ML tooling"],"datePosted":"2026-04-18T15:52:16.313Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA | New York City, NY | Seattle, WA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"distributed systems, infrastructure, reliability, Service Level Objectives, monitoring and observability systems, high-availability serving infrastructure, incident response, safeguard model serving, large-scale model serving or training infrastructure, ML hardware accelerators, ML-specific networking optimizations, AI-specific observability tools and frameworks, chaos engineering and systematic resilience testing, open-source infrastructure or ML tooling","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":325000,"maxValue":485000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_15a29cc3-0bf"},"title":"Senior Production Engineer","description":"<p>CORPORATION</p>\n<p>CoreWeave is The Essential Cloud for AI™. Built for pioneers by pioneers, CoreWeave delivers a platform of technology, tools, and teams that enables innovators to build and scale AI with confidence. Trusted by leading AI labs, startups, and global enterprises, CoreWeave combines superior infrastructure performance with deep technical expertise to accelerate breakthroughs and turn compute into capability. Founded in 2017, CoreWeave became a publicly traded company (Nasdaq: CRWV) in March 2025.</p>\n<p><strong>About the Role</strong></p>\n<p>Production Engineering ensures CoreWeave’s cloud delivers world-class reliability, performance, and operational excellence. We are hiring a Senior Production Engineer to take direct, hands-on ownership of critical tooling that drives reliability and delivery success.</p>\n<p>In this role, you will work broadly across the cloud stack designing, implementing, deploying, and operating systems that improve delivery velocity, service availability, and operational safety. You’ll be responsible for leading end-to-end technical projects, maintaining long-lived systems the team owns, and strengthening our operational foundations through durable engineering investments.</p>\n<p>This is a role for someone who enjoys building, debugging, and operating production systems. You will collaborate closely with service owners, but your primary impact comes from the reliability, quality, and maturity of the systems you deliver and maintain over time.</p>\n<p><strong>What You’ll Do</strong></p>\n<ul>\n<li>Take hands-on ownership of critical systems and frameworks, driving their architecture, implementation, and long-term evolution.</li>\n</ul>\n<ul>\n<li>Lead end-to-end delivery of engineering projects that improve availability, scalability, operational automation, and failure recovery.</li>\n</ul>\n<ul>\n<li>Build and maintain observability, alerting, automated remediation, and resilience testing for the systems you support.</li>\n</ul>\n<ul>\n<li>Participate in incident response as a subject-matter expert; drive deep root-cause investigations and implement lasting fixes.</li>\n</ul>\n<ul>\n<li>Improve runbooks, sources of truth, deployment workflows, and operational tooling to harden production readiness.</li>\n</ul>\n<ul>\n<li>Eliminate single points of failure and reduce operational toil through automation, refactors, and system redesigns.</li>\n</ul>\n<ul>\n<li>Ship production code regularly in Python, Go, or similar languages, and participate in on-call rotations.</li>\n</ul>\n<ul>\n<li>Maintain and mature long-term projects and frameworks owned by the team, ensuring they remain reliable, well-instrumented, and easy to operate.</li>\n</ul>\n<ul>\n<li>Collaborate with platform teams to ensure new features and services integrate cleanly with our reliability best-practices and tooling.</li>\n</ul>\n<p><strong>What You’ve Worked On (Minimum Qualifications)</strong></p>\n<ul>\n<li>7+ years of engineering experience building and operating distributed systems or cloud platforms.</li>\n</ul>\n<ul>\n<li>Demonstrated ability to debug complex production issues end-to-end, across services, infrastructure layers, and automation.</li>\n</ul>\n<ul>\n<li>Strong programming or scripting ability (Python, Go, or similar), with experience shipping and operating production services and tools.</li>\n</ul>\n<ul>\n<li>Deep knowledge of cloud-native technologies and distributed system patterns, particularly Kubernetes.</li>\n</ul>\n<ul>\n<li>Experience with modern observability stacks: metrics, tracing, structured logs, SLOs/SLIs, and incident lifecycle practices.</li>\n</ul>\n<ul>\n<li>A track record of successfully delivering hands-on reliability improvements through engineering execution.</li>\n</ul>\n<p><strong>Preferred Qualifications</strong></p>\n<ul>\n<li>Experience building internal tooling, frameworks, or automation that supports high-availability cloud operations.</li>\n</ul>\n<ul>\n<li>Familiarity with DR/BCP, service tiering, capacity planning, or chaos engineering.</li>\n</ul>\n<ul>\n<li>Background operating or building large-scale AI or GPU-accelerated infrastructure.</li>\n</ul>\n<ul>\n<li>Experience maintaining multi-year ownership of foundational production systems.</li>\n</ul>\n<p>Why CoreWeave?</p>\n<p>At CoreWeave, we work hard, have fun, and move fast! We’re in an exciting stage of hyper-growth that you will not want to miss out on. We’re not afraid of a little chaos, and we’re constantly learning. Our team cares deeply about how we build our product and how we work together, which is represented through our core values:</p>\n<ul>\n<li>Be Curious at Your Core</li>\n</ul>\n<ul>\n<li>Act Like an Owner</li>\n</ul>\n<ul>\n<li>Empower Employees</li>\n</ul>\n<ul>\n<li>Deliver Best-in-Class Client Experiences</li>\n</ul>\n<ul>\n<li>Achieve More Together</li>\n</ul>\n<p>We support and encourage an entrepreneurial outlook and independent thinking. We foster an environment that encourages collaboration and enables the development of innovative solutions to complex problems. As we get set for takeoff, the organization&#39;s growth opportunities are constantly expanding. You will be surrounded by some of the best talent in the industry, who will want to learn from you, too. Come join us!</p>\n<p>The base salary range for this role is $139,000 to $204,000. The starting salary will be determined based on job-related knowledge, skills, experience, and market location. We strive for both market alignment and internal equity when determining compensation. In addition to base salary, our total rewards package includes a discretionary bonus, equity awards, and a comprehensive benefits program (all based on eligibility).</p>\n<p>What We Offer</p>\n<p>The range we’ve posted represents the typical compensation range for this role. To determine actual compensation, we review the market rate for each candidate which can include a variety of factors. These include qualifications, experience, interview performance, and location.</p>\n<p>In addition to a competitive salary, we offer a variety of benefits to support your needs, including:</p>\n<ul>\n<li>Medical, dental, and vision insurance - 100% paid for by CoreWeave</li>\n</ul>\n<ul>\n<li>Company-paid Life Insurance</li>\n</ul>\n<ul>\n<li>Voluntary supplemental life insurance</li>\n</ul>\n<ul>\n<li>Short and long-term disability insurance</li>\n</ul>\n<ul>\n<li>Flexible Spending Account</li>\n</ul>\n<ul>\n<li>Health Savings Account</li>\n</ul>\n<ul>\n<li>Tuition Reimbursement</li>\n</ul>\n<ul>\n<li>Ability to Participate in Employee Stock Purchase Program (ESPP)</li>\n</ul>\n<ul>\n<li>Mental Wellness Benefits through Spring Health</li>\n</ul>\n<ul>\n<li>Family-Forming support provided by Carrot</li>\n</ul>\n<ul>\n<li>Paid Parental Leave</li>\n</ul>\n<ul>\n<li>Flexible, full-service childcare support with Kinside</li>\n</ul>\n<ul>\n<li>401(k) with a generous employer match</li>\n</ul>\n<ul>\n<li>Flexible PTO</li>\n</ul>\n<ul>\n<li>Catered lunch each day in our office and data center locations</li>\n</ul>\n<ul>\n<li>A casual work environment</li>\n</ul>\n<ul>\n<li>A work culture focused on innovative disruption</li>\n</ul>\n<p>Our Workplace</p>\n<p>While we prioritize a hybrid work environment, remote work may be considered for candidates located more than 30 miles from an office, based on role requirements for specialized skill sets. New hires will be invited to attend onboarding at one of our hubs within their first month. Teams also gather quarterly to support collaboration.</p>\n<p>California Consumer Privacy Act - California applicants only</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_15a29cc3-0bf","directApply":true,"hiringOrganization":{"@type":"Organization","name":"CoreWeave","sameAs":"https://www.coreweave.com","logo":"https://logos.yubhub.co/coreweave.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/coreweave/jobs/4670172006","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$139,000 to $204,000","x-skills-required":["cloud computing","distributed systems","cloud platforms","Kubernetes","observability stacks","metrics","tracing","structured logs","SLOs/SLIs","incident lifecycle practices","Python","Go","programming","scripting","production services","tools"],"x-skills-preferred":["internal tooling","frameworks","automation","high-availability cloud operations","DR/BCP","service tiering","capacity planning","chaos engineering","large-scale AI","GPU-accelerated infrastructure"],"datePosted":"2026-04-18T15:52:09.786Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Livingston, NJ / New York, NY / Sunnyvale, CA / Bellevue, WA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"cloud computing, distributed systems, cloud platforms, Kubernetes, observability stacks, metrics, tracing, structured logs, SLOs/SLIs, incident lifecycle practices, Python, Go, programming, scripting, production services, tools, internal tooling, frameworks, automation, high-availability cloud operations, DR/BCP, service tiering, capacity planning, chaos engineering, large-scale AI, GPU-accelerated infrastructure","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":139000,"maxValue":204000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_dd44a200-1ac"},"title":"Director of Engineering (Service Foundations)","description":"<p>Job Title: Director of Engineering (Service Foundations)</p>\n<p>We are seeking a seasoned Director of Engineering to lead our Service Foundations team. As a key member of our executive engineering team, you will be responsible for building and operating distributed systems, driving company-wide efficiency, reliability, and automation.</p>\n<p>In this role, you will work closely with leaders across the company, within engineering, as well as with product management, field engineering, recruiting, and HR. You will lead critical infrastructure initiatives that integrate AI-driven tooling directly into the infrastructure itself to make it more adaptive, scalable, and intelligent.</p>\n<p>Key Responsibilities:</p>\n<ul>\n<li>Solve real business needs at a large scale by applying your software engineering expertise</li>\n<li>Ensure consistent delivery against milestones and strong alignment with the field working &#39;two-in-a-box&#39; with product leadership</li>\n<li>Evolve organisational structure to align with long-term initiatives, build strong &#39;5 ingredient&#39; teams with good comms architecture</li>\n<li>Manage technical debt, including long-term technical architecture decisions and balance product roadmap</li>\n<li>Lead and participate in technical, product, and design discussions</li>\n<li>Build, manage, and operate highly scalable services in the cloud</li>\n<li>Grow leaders on the team by providing coaching, mentorship, and growth opportunities</li>\n<li>Partner with other engineering and product leaders on planning, prioritisation, and staffing</li>\n<li>Create a culture of excellence on the team while leading with empathy</li>\n</ul>\n<p>Requirements:</p>\n<ul>\n<li>20+ years of industry experience building and operating large-scale distributed systems</li>\n<li>Proven ability to build, grow, and manage high-performing infrastructure teams, including developing managers and tech leads</li>\n<li>Deep experience running large-scale cloud infrastructure systems (AWS, Azure, or GCP), ideally across multiple clouds or regions</li>\n<li>Ability to translate requirements from internal engineering teams into clear priorities and execution plans</li>\n<li>Fluent across the infrastructure stack , storage, orchestration, observability, and developer platforms , with intuition for how these layers interact</li>\n<li>Ability to evaluate and evolve abstractions , knows when to unify, when to localise, and how to reduce cognitive load for product teams</li>\n<li>BS in Computer Science (Masters or PhD preferred)</li>\n</ul>\n<p>About Databricks</p>\n<p>Databricks is the data and AI company. More than 10,000 organisations worldwide , including Comcast, Condé Nast, Grammarly, and over 50% of the Fortune 500 , rely on the Databricks Data Intelligence Platform to unify and democratise data, analytics, and AI.</p>\n<p>Benefits</p>\n<p>At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of our employees. For specific details on the benefits offered in your region, click here.</p>\n<p>Our Commitment to Diversity and Inclusion</p>\n<p>At Databricks, we are committed to fostering a diverse and inclusive culture where everyone can excel. We take great care to ensure that our hiring practices are inclusive and meet equal employment opportunity standards.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_dd44a200-1ac","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8201768002","x-work-arrangement":"onsite","x-experience-level":"executive","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Cloud infrastructure systems","Distributed systems","Infrastructure as Code","Containerisation","Orchestration","Observability","Developer platforms"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:52:06.064Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Bengaluru, India"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Cloud infrastructure systems, Distributed systems, Infrastructure as Code, Containerisation, Orchestration, Observability, Developer platforms"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_fa9a54d7-549"},"title":"Senior Site Reliability Engineer, Data Infrastructure","description":"<p>As a Senior Site Reliability Engineer, you will own the reliability and performance of our Kubernetes-based data platform. You will design and operate highly available, multi-region systems, ensuring our services meet strict uptime and latency targets.</p>\n<p>Day-to-day, you’ll work on scaling infrastructure, improving deployment pipelines, and hardening our security posture. You’ll play a key role in evolving our DevSecOps practices while partnering closely with engineering teams to ensure services are built for reliability from day one.</p>\n<p>We operate with production-grade discipline, supporting mission-critical services with stringent uptime requirements and a focus on automation, observability, and resilience.</p>\n<p>The Platform &amp; Infrastructure Engineering team in the Data Infrastructure organization is responsible for the reliability, scalability, and security of the company’s data platform. The team builds and operates the foundational systems that power data ingestion, transformation, analytics, and internal AI workloads at scale.</p>\n<p>About the role:</p>\n<ul>\n<li>5+ years of experience in Site Reliability Engineering, Platform Engineering, or Infrastructure Engineering roles</li>\n<li>Deep expertise in Kubernetes and containerized software services, including cluster design, operations, and troubleshooting in production environments</li>\n<li>Strong experience building and operating CI/CD systems, including tools such as Argo CD and GitHub Actions</li>\n<li>Proven experience owning production systems with high availability requirements (≥99.99% uptime), including incident response, SLI/SLO/SLA definition, error budgets, and postmortems</li>\n<li>Hands-on experience designing and operating geo-replicated, multi-region, active-active systems, including traffic routing, failover strategies, and data consistency tradeoffs</li>\n<li>Strong experience building and owning observability components, including metrics, logging, and tracing (e.g., Prometheus, Grafana, OpenTelemetry).</li>\n<li>Experience with infrastructure as code (e.g., Helm, Terraform, Pulumi) and automated environment provisioning</li>\n<li>Strong understanding of system performance tuning, capacity planning, and resource optimization in distributed systems</li>\n<li>Experience implementing and operating security best practices in cloud-native environments (e.g., secrets management, network policies, vulnerability scanning)</li>\n</ul>\n<p>Preferred:</p>\n<ul>\n<li>Experience operating data platforms or data-intensive workloads (e.g., Spark, Airflow, Kafka, Flink)</li>\n<li>Familiarity with service mesh technologies (e.g., Istio, Linkerd)</li>\n<li>Experience working in regulated environments with compliance frameworks such as GDPR, SOC 2, HIPAA, or SOX</li>\n<li>Background in building internal developer platforms or self-service infrastructure</li>\n</ul>\n<p>Wondering if you’re a good fit?</p>\n<p>We believe in investing in our people, and value candidates who can bring their own diversified experiences to our teams – even if you aren’t a 100% skill or experience match.</p>\n<p>Here are a few qualities we’ve found compatible with our team. If some of this describes you, we’d love to talk.</p>\n<ul>\n<li>You love building highly reliable systems that operate at scale</li>\n<li>You’re curious about how to continuously improve system resilience, security, and operations</li>\n<li>You’re an expert in diagnosing and solving complex distributed systems problems</li>\n</ul>\n<p>Why CoreWeave?</p>\n<p>At CoreWeave, we work hard, have fun, and move fast! We’re in an exciting stage of hyper-growth that you will not want to miss out on. We’re not afraid of a little chaos, and we’re constantly learning.</p>\n<p>Our team cares deeply about how we build our product and how we work together, which is represented through our core values:</p>\n<ul>\n<li>Be Curious at Your Core</li>\n<li>Act Like an Owner</li>\n<li>Empower Employees</li>\n<li>Deliver Best-in-Class Client Experiences</li>\n<li>Achieve More Together</li>\n</ul>\n<p>We support and encourage an entrepreneurial outlook and independent thinking. We foster an environment that encourages collaboration and provides the opportunity to develop innovative solutions to complex problems.</p>\n<p>As we get set for take off, the growth opportunities within the organization are constantly expanding. You will be surrounded by some of the best talent in the industry, who will want to learn from you, too.</p>\n<p>Come join us!</p>\n<p>The base salary range for this role is $165,000 to $242,000. The starting salary will be determined based on job-related knowledge, skills, experience, and market location. We strive for both market alignment and internal equity when determining compensation.</p>\n<p>In addition to base salary, our total rewards package includes a discretionary bonus, equity awards, and a comprehensive benefits program (all based on eligibility).</p>\n<p>What We Offer</p>\n<p>The range we’ve posted represents the typical compensation range for this role. To determine actual compensation, we review the market rate for each candidate which can include a variety of factors. These include qualifications, experience, interview performance, and location.</p>\n<p>In addition to a competitive salary, we offer a variety of benefits to support your needs, including:</p>\n<ul>\n<li>Medical, dental, and vision insurance</li>\n<li>100% paid for by CoreWeave</li>\n<li>Company-paid Life Insurance</li>\n<li>Voluntary supplemental life insurance</li>\n<li>Short and long-term disability insurance</li>\n<li>Flexible Spending Account</li>\n<li>Health Savings Account</li>\n<li>Tuition Reimbursement</li>\n<li>Ability to Participate in Employee Stock Purchase Program (ESPP)</li>\n<li>Mental Wellness Benefits through Spring Health</li>\n<li>Family-Forming support provided by Carrot</li>\n<li>Paid Parental Leave</li>\n<li>Flexible, full-service childcare support with Kinside</li>\n<li>401(k) with a generous employer match</li>\n<li>Flexible PTO</li>\n<li>Catered lunch each day in our office and data center locations</li>\n<li>A casual work environment</li>\n<li>A work culture focused on innovative disruption</li>\n</ul>\n<p>Our Workplace</p>\n<p>While we prioritize a hybrid work environment, remote work may be considered for candidates located more than 30 miles from an office, based on role requirements for specialized skill sets.</p>\n<p>New hires will be invited to attend onboarding at one of our hubs within their first month.</p>\n<p>Teams also gather quarterly to support collaboration.</p>\n<p>California Consumer Privacy Act - California applicants only</p>\n<p>CoreWeave is an equal opportunity employer, committed to fostering an inclusive and supportive workplace.</p>\n<p>All qualified applicants and candidates will receive consideration for employment without regard to race, color, religion, sex, disability, age, sexual orientation, gender identity, national origin, veteran status, or genetic information.</p>\n<p>As part of this commitment and consistent with the Americans with Disabilities Act (ADA), CoreWeave will ensure that qualified applicants and candidates with disabilities are provided reasonable accommodations for the hiring process, unless such accommodation would cause an undue hardship.</p>\n<p>If reasonable accommodation is needed, please contact: careers@coreweave.com.</p>\n<p>Export Control Compliance</p>\n<p>This position requires access to export controlled information.</p>\n<p>To conform to U.S. Government export regulations applicable to that information, applicant must either be (A) a U.S. person, defined as a (i) U.S. citizen or national, (ii) U.S. lawful permanent resident (green card holder), (iii) refugee under 8 U.S.C. § 1157, or (iv) asylee under 8 U.S.C. § 1158, (B) eligible to access the export controlled information without restrictions, or (C) otherwise exempt from the export regulations.</p>\n<p>If you are not a U.S. person, you will be required to provide documentation of your eligibility to access the export controlled information before being considered for this position.</p>\n<p>Please note that CoreWeave is subject to the requirements of the U.S. Department of Commerce&#39;s Export Administration Regulations (EAR) and the U.S. Department of State&#39;s International Traffic in Arms Regulations (ITAR).</p>\n<p>By applying for this position, you acknowledge that you have read and understood the export control requirements and that you will comply with them.</p>\n<p>If you have any questions or concerns regarding the export control requirements, please contact: careers@coreweave.com.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_fa9a54d7-549","directApply":true,"hiringOrganization":{"@type":"Organization","name":"CoreWeave","sameAs":"https://www.coreweave.com","logo":"https://logos.yubhub.co/coreweave.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/coreweave/jobs/4671535006","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$165,000 to $242,000","x-skills-required":["Kubernetes","containerized software services","cluster design","operations","troubleshooting","CI/CD systems","Argo CD","GitHub Actions","production systems","high availability","incident response","SLI/SLO/SLA definition","error budgets","postmortems","geo-replicated","multi-region","active-active systems","traffic routing","failover strategies","data consistency tradeoffs","observability components","metrics","logging","tracing","Prometheus","Grafana","OpenTelemetry","infrastructure as code","Helm","Terraform","Pulumi","automated environment provisioning","system performance tuning","capacity planning","resource optimization","distributed systems","security best practices","cloud-native environments","secrets management","network policies","vulnerability scanning"],"x-skills-preferred":["Spark","Airflow","Kafka","Flink","service mesh technologies","Istio","Linkerd","regulated environments","compliance frameworks","GDPR","SOC 2","HIPAA","SOX","internal developer platforms","self-service infrastructure"],"datePosted":"2026-04-18T15:51:59.035Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"New York, NY / Bellevue, WA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Kubernetes, containerized software services, cluster design, operations, troubleshooting, CI/CD systems, Argo CD, GitHub Actions, production systems, high availability, incident response, SLI/SLO/SLA definition, error budgets, postmortems, geo-replicated, multi-region, active-active systems, traffic routing, failover strategies, data consistency tradeoffs, observability components, metrics, logging, tracing, Prometheus, Grafana, OpenTelemetry, infrastructure as code, Helm, Terraform, Pulumi, automated environment provisioning, system performance tuning, capacity planning, resource optimization, distributed systems, security best practices, cloud-native environments, secrets management, network policies, vulnerability scanning, Spark, Airflow, Kafka, Flink, service mesh technologies, Istio, Linkerd, regulated environments, compliance frameworks, GDPR, SOC 2, HIPAA, SOX, internal developer platforms, self-service infrastructure","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":165000,"maxValue":242000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_2ab9c635-07a"},"title":"Operations Engineer, Fleet Reliability","description":"<p>The Fleet Reliability Operations team is responsible for the day-to-day provisioning, management, and uptime of CoreWeave&#39;s ever-expanding fleet of server nodes. This team plays a central role in CoreWeave&#39;s growth strategy, configuring, updating, and remotely troubleshooting our highest-tier supercomputing clusters and their networking, delivery platforms, and tools dependencies.</p>\n<p>We are seeking curious, creative, and persistent problem solvers to join our Fleet Reliability Operations team to help drive batches of server nodes through our provisioning and validation processes while efficiently and effectively troubleshooting node or cluster problems as they arise.</p>\n<p>Key responsibilities include:</p>\n<ul>\n<li>Configuring and maintaining large-scale high-performance supercomputing clusters running state-of-the-art GPUs</li>\n<li>Troubleshooting hardware and software issues; escalating and coordinating as needed with data center, network, hardware, and platform teams to drive resolution</li>\n<li>Monitoring and analyzing system performance and taking appropriate remediation actions for cloud health</li>\n<li>Approaching work with flexibility and optimism, anticipating shifting business and technical priorities</li>\n<li>Creating and maintaining documentation of team processes, knowledge, and best practices for system management</li>\n<li>Thinking critically about day-to-day work and working collaboratively to improve team processes and efficiency</li>\n</ul>\n<p>As a member of our team, you will be part of a dynamic and fast-paced environment where you will have the opportunity to grow and develop your skills. We offer a competitive salary range of $83,000 to $110,000, as well as a comprehensive benefits package, including medical, dental, and vision insurance, company-paid life insurance, and flexible PTO.</p>\n<p>If you are a motivated and detail-oriented individual who is passionate about working with cutting-edge technology, we encourage you to apply for this exciting opportunity.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_2ab9c635-07a","directApply":true,"hiringOrganization":{"@type":"Organization","name":"CoreWeave","sameAs":"https://www.coreweave.com","logo":"https://logos.yubhub.co/coreweave.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/coreweave/jobs/4617382006","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":"$83,000 to $110,000","x-skills-required":["Linux system administration","Troubleshooting hardware and software issues","System maintenance tasks","Scripting languages (bash, python, powershell, etc)","Grafana, Prometheus, promsql queries or similar observability platforms"],"x-skills-preferred":["Kubernetes administration","HPC - administering GPU-related workloads","Data center environments including server racks, HVAC systems, fiber trays"],"datePosted":"2026-04-18T15:51:55.238Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"New York, NY /Plano, TX /  Bellevue, WA / Sunnyvale, CA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Linux system administration, Troubleshooting hardware and software issues, System maintenance tasks, Scripting languages (bash, python, powershell, etc), Grafana, Prometheus, promsql queries or similar observability platforms, Kubernetes administration, HPC - administering GPU-related workloads, Data center environments including server racks, HVAC systems, fiber trays","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":83000,"maxValue":110000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_a1ba5c28-9ce"},"title":"Senior Software Engineer, Observability","description":"<p>Join CoreWeave&#39;s Observability team, responsible for building the systems that give our customers and internal teams unparalleled visibility into complex AI workloads.</p>\n<p>Our team empowers engineers to understand, troubleshoot, and optimize high-performance infrastructure at massive scale.</p>\n<p>As a Senior Software Engineer on the Observability team, you will design, build, and maintain core observability infrastructure spanning metrics, logging, tracing, and telemetry pipelines.</p>\n<p>Your day-to-day will involve developing highly reliable and scalable systems, collaborating with internal engineering teams to embed observability best practices, and tackling performance and reliability challenges across clusters of thousands of GPUs.</p>\n<p>You&#39;ll also contribute to platform strategy and participate in on-call rotations to ensure critical production systems remain robust and operational.</p>\n<p>The base salary range for this role is $139,000 to $220,000.</p>\n<p>In addition to base salary, our total rewards package includes a discretionary bonus, equity awards, and a comprehensive benefits program (all based on eligibility).</p>\n<p>We offer a variety of benefits to support your needs, including medical, dental, and vision insurance, 100% paid for by CoreWeave, company-paid Life Insurance, voluntary supplemental life insurance, short and long-term disability insurance, flexible Spending Account, Health Savings Account, tuition reimbursement, ability to participate in Employee Stock Purchase Program (ESPP), mental wellness benefits through Spring Health, family-forming support provided by Carrot, paid parental leave, flexible, full-service childcare support with Kinside, 401(k) with a generous employer match, flexible PTO, catered lunch each day in our office and data center locations, a casual work environment, and a work culture focused on innovative disruption.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_a1ba5c28-9ce","directApply":true,"hiringOrganization":{"@type":"Organization","name":"CoreWeave","sameAs":"https://www.coreweave.com","logo":"https://logos.yubhub.co/coreweave.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/coreweave/jobs/4554201006","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$139,000 to $220,000","x-skills-required":["Go","Python","Kubernetes","containerization","microservices architectures","Helm","YAML-based configurations","automated testing","progressive release strategies","on-call rotations"],"x-skills-preferred":["designing, operating, or scaling logging, metrics, or tracing platforms","data streaming systems for observability pipelines","automating infrastructure provisioning","OpenTelemetry for unified telemetry collection and instrumentation","exposure to modern AI workloads and GPU-based infrastructure"],"datePosted":"2026-04-18T15:51:55.238Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"New York, NY / Sunnyvale, CA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Go, Python, Kubernetes, containerization, microservices architectures, Helm, YAML-based configurations, automated testing, progressive release strategies, on-call rotations, designing, operating, or scaling logging, metrics, or tracing platforms, data streaming systems for observability pipelines, automating infrastructure provisioning, OpenTelemetry for unified telemetry collection and instrumentation, exposure to modern AI workloads and GPU-based infrastructure","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":139000,"maxValue":220000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_272bd1ad-99d"},"title":"Software Engineer, Sandboxing","description":"<p><strong>About the Role</strong></p>\n<p>Anthropic&#39;s sandboxing infrastructure enables Claude to safely execute code and interact with external systems. As we expand Claude&#39;s capabilities, the reliability, security, and developer experience of this infrastructure becomes increasingly critical. We&#39;re looking for an engineer to join the sandboxing team and help shape both the client-side library/API and the underlying infrastructure.</p>\n<p>In this role, you&#39;ll combine deep infrastructure expertise with an obsession for developer experience. You&#39;ll help maintain and evolve a system that must be correct, performant, and intuitive to use. You&#39;ll work closely with internal teams to understand their needs, burn down errors and edge cases, and build a roadmap that anticipates where the product needs to go. This is a role for someone who finds satisfaction in both the craft of building reliable systems and the empathy required to serve developers and researchers well.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Contribute to the client library, API surface, and underlying infrastructure for Anthropic&#39;s sandboxing system, ensuring it is reliable, well-documented, and intuitive to use</li>\n<li>Drive down error rates and improve correctness through systematic debugging, monitoring, and proactive fixes</li>\n<li>Help develop and maintain a product roadmap for sandboxing capabilities, balancing immediate needs with long-term architectural improvements</li>\n<li>Partner closely with internal teams using the sandboxing system to understand their requirements, debug issues, and build tooling that serves their use cases</li>\n<li>Respond to incidents and production issues with urgency, conducting thorough root cause analysis and implementing preventive measures</li>\n<li>Build comprehensive testing, observability, and documentation to ensure the system meets a high quality bar</li>\n<li>Collaborate across the sandboxing team, flexing between client-side and infrastructure work as needed</li>\n</ul>\n<p><strong>You May Be a Good Fit If You</strong></p>\n<ul>\n<li>Have 5+ years of software engineering experience, with meaningful time spent maintaining libraries, SDKs, or developer-facing APIs</li>\n<li>Obsess over developer experience,you&#39;ve thought deeply about API design, error propagation, documentation, and the small details that make a library feel well-crafted</li>\n<li>Have experience operating complex distributed systems</li>\n<li>Bring a track record of systematically improving reliability,you&#39;ve burned down error budgets, built monitoring, and driven issues to resolution</li>\n<li>Can develop and articulate a long-term vision for a product, translating user feedback and technical constraints into a coherent roadmap</li>\n<li>Are comfortable with ambiguity and can context-switch between reactive incident work and proactive product development</li>\n<li>Communicate clearly with both technical and non-technical stakeholders</li>\n</ul>\n<p><strong>Strong Candidates May Also Have</strong></p>\n<ul>\n<li>Experience as a founder or early engineer at an infrastructure-focused startup, where you owned a product end-to-end</li>\n<li>Background in security, sandboxing, or isolation technologies (containers, VMs, seccomp, namespaces, etc.)</li>\n<li>Open-source contributions in the Python ecosystem</li>\n<li>Experience building developer tools, CLIs, or platforms used by other engineers</li>\n<li>History of working on incident response and on-call rotations for production systems</li>\n<li>Exposure to reinforcement learning or model training infrastructure</li>\n</ul>\n<p><strong>Representative Projects</strong></p>\n<p>These are examples of past work that would indicate a good fit,not a description of the role itself:</p>\n<ul>\n<li>Maintaining an open source SDK through multiple major version upgrades while minimizing breaking changes for users</li>\n<li>Leading an initiative to reduce P0 incidents by XX% through improved error handling, retries, and observability</li>\n<li>Building a developer platform at a startup from zero to product-market fit, iterating based on user feedback</li>\n<li>Embedding with an internal team for a quarter to deeply understand their workflows and shipping targeted improvements to a piece of infrastructure they rely on</li>\n<li>Developing a multi-quarter roadmap for a developer tools product, balancing user requests with technical debt reduction</li>\n</ul>\n<p><strong>Logistics</strong></p>\n<ul>\n<li>Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience</li>\n<li>Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience</li>\n<li>Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position</li>\n<li>Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</li>\n<li>Visa sponsorship: We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_272bd1ad-99d","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com/","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5083039008","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$300,000-$405,000 USD","x-skills-required":["software engineering","infrastructure expertise","developer experience","API design","error propagation","documentation","distributed systems","complex systems","reliability","monitoring","root cause analysis","preventive measures","testing","observability","collaboration","communication"],"x-skills-preferred":["founder","early engineer","security","sandboxing","isolation technologies","open-source contributions","developer tools","incident response","on-call rotations","reinforcement learning","model training infrastructure"],"datePosted":"2026-04-18T15:51:53.000Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA | New York City, NY"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"software engineering, infrastructure expertise, developer experience, API design, error propagation, documentation, distributed systems, complex systems, reliability, monitoring, root cause analysis, preventive measures, testing, observability, collaboration, communication, founder, early engineer, security, sandboxing, isolation technologies, open-source contributions, developer tools, incident response, on-call rotations, reinforcement learning, model training infrastructure","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":300000,"maxValue":405000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_a22d0242-435"},"title":"Enterprise Account Executive - Expand - Southeast","description":"<p>We&#39;re looking for a high-energy Enterprise Account Executive to drive net-new revenue and expansion within strategic Enterprise accounts. You&#39;ll be the owner of a defined territory where you&#39;ll build your own pipeline, tell the Elastic Search AI story, and close complex, multi-stakeholder deals in a consumption-based model.</p>\n<p>You&#39;ll be responsible for developing and executing a proactive outbound cadence that generates ≥50 % of your booked opportunities. You&#39;ll uncover pain, business impact, budget, and decision criteria using frameworks like MEDDPICC so you chase only the highest-confidence deals. You&#39;ll craft and deliver tailored narratives and live demos that map Elastic&#39;s Search, Observability, and Security capabilities to measurable business outcomes.</p>\n<p>You&#39;ll collaborate with customers to build formal close plans and keep your CRM up-to-date, maintaining ≥90 % forecast accuracy within ±10 %. You&#39;ll lead high-stakes contract and pricing discussions,defend your value, structure give/get trades, and land multi-year consumption commitments. You&#39;ll position Elastic as the Search AI platform of choice by speaking fluently about cloud economics, usage-based pricing, and modern data architectures.</p>\n<p>You&#39;ll work hand-in-glove with Solutions Architects, Customer Success, Marketing, and RevOps to accelerate deals and drive exceptional customer outcomes.</p>\n<p>To be successful in this role, you&#39;ll need to have proven SaaS quota-carrying success, expert discovery and qualification skills, compelling value storytelling abilities, strong negotiation chops, and technical and cloud fluency. You&#39;ll also need to be a collaborative mindset and a coachable individual who embodies Elastic&#39;s values of community and openness.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_a22d0242-435","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Elastic, the Search AI Company","sameAs":"https://www.elastic.co/","logo":"https://logos.yubhub.co/elastic.co.png"},"x-apply-url":"https://job-boards.greenhouse.io/elastic/jobs/7792993","x-work-arrangement":"remote","x-experience-level":"executive","x-job-type":"full-time","x-salary-range":"$113,300-$179,200 USD","x-skills-required":["SaaS quota-carrying success","Expert discovery and qualification skills","Compelling value storytelling abilities","Strong negotiation chops","Technical and cloud fluency"],"x-skills-preferred":["Prior experience at an open-source or developer-centric infrastructure company","Familiarity with observability (logs, metrics, traces) or security analytics (SIEM/XDR) use cases"],"datePosted":"2026-04-18T15:51:49.343Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Georgia, United States"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Sales","industry":"Technology","skills":"SaaS quota-carrying success, Expert discovery and qualification skills, Compelling value storytelling abilities, Strong negotiation chops, Technical and cloud fluency, Prior experience at an open-source or developer-centric infrastructure company, Familiarity with observability (logs, metrics, traces) or security analytics (SIEM/XDR) use cases","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":113300,"maxValue":179200,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_84de6292-2ef"},"title":"Obstetrics Care Provider (CNM or NP) - Multiple Schedules","description":"<p><strong>Job Summary</strong></p>\n<p>As an Obstetrics Care Provider at Pomelo Care, you will provide direct patient care and clinical oversight to optimize outcomes for pregnant people and newborns. You will work closely with a clinical pod, including RNs, LCSWs, and RDs, to develop care plans and support other members of the team.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Provide direct patient care and clinical oversight to optimize outcomes for pregnant people and newborns</li>\n<li>Oversee the medical care of empaneled patients and attend daily clinical huddles</li>\n<li>Review complex patient cases, develop care plans, and support other members of the clinical team</li>\n<li>Monitor adverse events and hold clinical retros to identify areas for improvement in Pomelo&#39;s protocols</li>\n<li>Lead the development and review of evidence-based medical protocols and algorithms related to obstetric and women&#39;s health</li>\n</ul>\n<p><strong>Requirements</strong></p>\n<ul>\n<li>Active compact RN license</li>\n<li>Minimum of 4 years of experience as an APP</li>\n<li>Extensive obstetric experience, including treating high-risk patients, and some experience caring for infants</li>\n<li>Passion for optimizing evidence-based care and perinatal outcomes</li>\n<li>Experience using data to drive patient engagement, activation, and outcomes</li>\n<li>Experience leading successful teams and collaborating with others</li>\n<li>Ability to approach process change and ambiguous situations with enthusiasm, creativity, and accountability</li>\n</ul>\n<p><strong>Education and Training</strong></p>\n<ul>\n<li>NP or CNM with significant experience in obstetrics and some experience in infant care</li>\n</ul>\n<p><strong>Bonus Points</strong></p>\n<ul>\n<li>Telehealth and/or remote monitoring experience</li>\n<li>Experience in outpatient or home-based management of higher-risk patients</li>\n</ul>\n<p><strong>Schedule Options</strong></p>\n<ul>\n<li>1.0 FTE: Day Shift - Monday - Friday, 9:00am - 6:00pm ET</li>\n<li>1.0 FTE: Evening Shift - Monday - Friday, 3:00pm - 11:00pm ET</li>\n<li>0.9 FTE: 3x12&#39;s - Rotating weekdays, 11:00am - 11:00pm ET</li>\n<li>0.9 FTE: 3x12&#39;s - Monday, Saturday, Sunday, 9am - 9pm ET</li>\n<li>0.9 FTE: 3x12&#39;s - Monday, Tuesday, Saturday, 9am - 9pm ET</li>\n</ul>\n<p><strong>Why Join Our Team</strong></p>\n<p>By joining Pomelo Care, you will be part of a fast-moving, well-funded, and mission-driven startup where you will have a profound impact on the patients we serve. You will learn, grow, be challenged, and have fun with your team while doing it.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_84de6292-2ef","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Pomelo Care","sameAs":"https://www.pomelocare.com","logo":"https://logos.yubhub.co/pomelocare.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/pomelocare/jobs/5689384004","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Active compact RN license","Minimum of 4 years of experience as an APP","Extensive obstetric experience","Passion for optimizing evidence-based care and perinatal outcomes","Experience using data to drive patient engagement, activation, and outcomes","Experience leading successful teams and collaborating with others"],"x-skills-preferred":["Telehealth and/or remote monitoring experience","Experience in outpatient or home-based management of higher-risk patients"],"datePosted":"2026-04-18T15:51:47.800Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"United States"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Healthcare","industry":"Healthcare","skills":"Active compact RN license, Minimum of 4 years of experience as an APP, Extensive obstetric experience, Passion for optimizing evidence-based care and perinatal outcomes, Experience using data to drive patient engagement, activation, and outcomes, Experience leading successful teams and collaborating with others, Telehealth and/or remote monitoring experience, Experience in outpatient or home-based management of higher-risk patients"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_0396ac1c-dad"},"title":"Senior Staff Engineer, Cloud Economics","description":"<p>Reddit is a community of communities. It&#39;s built on shared interests, passion, and trust, and is home to the most open and authentic conversations on the internet.</p>\n<p>The Ads Foundations organization is responsible for the technical backbone powering Ads Monetization at scale. Within this ecosystem, efficient resource utilization is critical.</p>\n<p>We are seeking a Senior Staff Engineer to serve as the Cloud Resources Technical Owner for the Ads Domain. You will be the primary engineering point of contact for the Senior Director in Ads and Cloud Operations/Resources (COR &amp; Opex) stakeholders.</p>\n<p><strong>Responsibilities</strong></p>\n<p>Technical Vision &amp; Strategy</p>\n<ul>\n<li>Define and drive the technical strategy for Cloud Resource management within Ad first, ensuring that cost accountability is built into the architecture of our systems.</li>\n<li>High-Fidelity Investment Modeling: Elevate cloud estimation from guesswork to a rigorous engineering discipline. You will lead the high-quality forecasting of new cloud investments and efficiency projects, designing data-driven models to validate technical ROI before builds happen</li>\n<li>Design and implement a roadmap for Cost Observability 2.0, moving beyond simple reporting to real-time, service/team-level spend attribution and automated anomaly detection.</li>\n</ul>\n<p>Engineering &amp; Tooling Leadership</p>\n<ul>\n<li>Design and build internal platforms that programmatically enforce PnL accountability. You will engineer (or collaborate with Core Infrastructure partners) to deliver the dashboards, alerts, and governance tools that every Ads team relies on to manage their cloud footprint.</li>\n<li>Architect automated frameworks for validating cost estimates and forecasting, replacing manual spreadsheets with data-driven software solutions.</li>\n</ul>\n<p>Scale &amp; Optimization</p>\n<ul>\n<li>Fight for observability by instrumenting deep telemetry into our cloud infrastructure. You will be hands-on in identifying inefficiencies (e.g., underutilized clusters, uncompressed data flows) and re-architecting critical paths for cost reduction.</li>\n<li>Lead the technical validation of vendor and 3rd-party tool integration, ensuring we extract maximum engineering value from every dollar spent.</li>\n</ul>\n<p>Cultural &amp; Technical Stewardship</p>\n<ul>\n<li>Act as a role model for the Ads domain and the wider company. You will set the standard for how engineering teams think about Cost as a Non Functional Requirement, eventually scaling these patterns to other domains.</li>\n<li>Partner with Finance and Engineering leadership to translate Cloud Spend into actionable engineering tasks (e.g., refactor Service X to use Spot instances).</li>\n</ul>\n<p><strong>Requirements</strong></p>\n<ul>\n<li>10+ years of software engineering experience, with a strong focus on public cloud infrastructure (AWS/GCP/Azure) and large-scale distributed systems.</li>\n<li>Engineer-First Mindset: You are comfortable writing code (Go, Python, Java) to solve infrastructure problems. You don&#39;t just ask for a report; you build the API that generates it.</li>\n<li>Deep Cloud Expertise: You have mastery over Kubernetes, container orchestration, and cloud-native storage, understanding exactly how architectural choices impact the bottom line.</li>\n<li>Operational Excellence: Proven track record of building observability pipelines (Prometheus, Grafana, Datadog) that drive operational and financial alerts.</li>\n<li>Influential Leader: Skilled at driving clarity in ambiguous spaces. You can convince a Principal Engineer to refactor their service for cost efficiency because you can prove the technical and business value.</li>\n</ul>\n<p><strong>Bonus Points</strong></p>\n<ul>\n<li>Experience building custom FinOps tooling or internal developer platforms.</li>\n<li>Background in performance engineering or capacity planning for high-traffic ad tech environments.</li>\n<li>Contributions to open-source projects related to cloud efficiency or observability.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_0396ac1c-dad","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Reddit Inc.","sameAs":"https://www.redditinc.com","logo":"https://logos.yubhub.co/redditinc.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/reddit/jobs/7628291","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$232,500-$325,500 USD","x-skills-required":["public cloud infrastructure","large-scale distributed systems","Kubernetes","container orchestration","cloud-native storage","observability pipelines","Prometheus","Grafana","Datadog"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:51:43.900Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Remote - United States"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"public cloud infrastructure, large-scale distributed systems, Kubernetes, container orchestration, cloud-native storage, observability pipelines, Prometheus, Grafana, Datadog","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":232500,"maxValue":325500,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_9537437b-e23"},"title":"Staff Backend Engineer, Knowledge Graph (Rust)","description":"<p>As a Staff Backend Engineer on the GitLab Knowledge Graph team, you&#39;ll help design, scale, and operate a high-impact graph data service that underpins agents, analytics, and architecture-level features across GitLab.com, Dedicated, and Self-Managed deployments.</p>\n<p>You&#39;ll partner with a small, senior Rust-first team to ship reliable graph capabilities and make them easy for other teams and agents to use. The Knowledge Graph service is a distributed SDLC indexing system. It builds a property graph from GitLab SDLC (software development lifecycle) and code data using ClickHouse, NATS JetStream, and the Data Insights Platform. It also exposes secure graph queries and MCP tools for AI agents and product features.</p>\n<p>In this role, you&#39;ll own core parts of the system end to end: shaping the architecture, hardening multi-tenant behavior and performance, and making it straightforward for other teams and agents to consume graph capabilities. In your first year, you&#39;ll take clear ownership of major areas of the service (for example, the graph query engine, SDLC indexing, or multi-tenant authorization), reduce single points of failure through better runbooks and shared context, and raise the bar on how we design, build, and operate analytical services across the stack.</p>\n<p>Key responsibilities include:</p>\n<ul>\n<li>Leading the design and evolution of core Knowledge Graph services in a production Rust codebase, including the graph query engine, SDLC and code indexing pipelines, and API/MCP surfaces that other GitLab teams and AI agents rely on.</li>\n</ul>\n<ul>\n<li>Owning complex, cross-cutting initiatives that span GitLab Rails, the Data Insights Platform (Siphon, NATS, ClickHouse), and GitLab Duo Agent Platform, from technical direction and design docs through implementation, rollout, and iteration.</li>\n</ul>\n<ul>\n<li>Driving system design decisions that improve reliability, scalability, and maintainability for analytical (OLAP-style) graph workloads. This includes multi-hop traversals, aggregations, and multi-tenant isolation. Document trade-offs so the broader team can move quickly and stay aligned.</li>\n</ul>\n<ul>\n<li>Defining and improving operational maturity for the service, including service level objectives (SLOs), observability, runbooks, incident response, capacity planning, and production readiness (PREP) for GitLab.com, Dedicated, and Self-Managed deployments.</li>\n</ul>\n<ul>\n<li>Collaborating asynchronously with product, data, infrastructure, security, and AI teams to sequence work, unblock platform-level dependencies, and land features in a way that is safe for customers and sustainable for the team.</li>\n</ul>\n<ul>\n<li>Applying AI-assisted development workflows responsibly (for example, using MCP-aware tools, Knowledge Graph-backed agents, and internal Duo tooling) and help establish practical norms for how the team uses AI while maintaining strong engineering judgment.</li>\n</ul>\n<ul>\n<li>Mentoring and supporting other engineers through pairing, technical design reviews, and knowledge-sharing, reinforcing shared ownership of the system and its operational sustainability.</li>\n</ul>\n<ul>\n<li>Contributing across the stack when needed, including occasional Ruby (Rails integration and authorization paths) or frontend work (for example, the Software Architecture Map UI) to close gaps and keep delivery moving.</li>\n</ul>\n<p>This role requires significant experience building and operating production backend systems, with a track record of owning reliability, maintainability, and on-call readiness for services that support other product teams or platforms. Strong engineering skills in Rust or clear evidence you can ramp quickly and deliver in a Rust-first, performance-sensitive backend codebase are essential. Additionally, strong system design skills, including making and explaining clear architectural decisions, documenting constraints, and aligning trade-offs with product and platform needs, are necessary.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_9537437b-e23","directApply":true,"hiringOrganization":{"@type":"Organization","name":"GitLab","sameAs":"https://about.gitlab.com/","logo":"https://logos.yubhub.co/about.gitlab.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/gitlab/jobs/8481945002","x-work-arrangement":"remote","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Rust","ClickHouse","NATS JetStream","Data Insights Platform","graph data modeling","query patterns","property graphs","Cypher/GQL","n-hop traversals","aggregations","multi-tenant isolation","service level objectives","observability","runbooks","incident response","capacity planning","production readiness","AI-assisted development workflows","MCP-aware tools","Knowledge Graph-backed agents","internal Duo tooling"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:51:38.397Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Remote, India"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Rust, ClickHouse, NATS JetStream, Data Insights Platform, graph data modeling, query patterns, property graphs, Cypher/GQL, n-hop traversals, aggregations, multi-tenant isolation, service level objectives, observability, runbooks, incident response, capacity planning, production readiness, AI-assisted development workflows, MCP-aware tools, Knowledge Graph-backed agents, internal Duo tooling"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_72ebb09d-b37"},"title":"Staff+ Software Engineer, Observability","description":"<p>We&#39;re seeking talented and experienced Software Engineers to join our Observability team within the Infrastructure organization. The Observability team owns the monitoring and telemetry infrastructure that every engineer and researcher at Anthropic depends on,from metrics and logging pipelines to distributed tracing, error analytics, alerting, and the dashboards and query interfaces that make it all actionable.</p>\n<p>As Anthropic scales its infrastructure across massive GPU, TPU, and Trainium clusters, the volume and complexity of operational data is growing by orders of magnitude. We&#39;re building next-generation observability systems,high-throughput ingest pipelines, cost-efficient columnar storage, unified query layers across signals, and agentic diagnostic tools,to ensure that engineers can detect, diagnose, and resolve issues in minutes rather than hours, even as the systems they operate become exponentially more complex.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Design and build scalable telemetry ingest and storage pipelines for metrics, logs, traces, and error data across Anthropic&#39;s multi-cluster infrastructure</li>\n<li>Own and evolve core observability platforms, driving migrations and architectural improvements that improve reliability, reduce cost, and scale with organisational growth</li>\n<li>Build instrumentation libraries, SDKs, and integrations that make it easy for engineering teams to emit high-quality telemetry from their services</li>\n<li>Drive alerting and SLO infrastructure that enables teams to define, monitor, and respond to reliability targets with minimal noise</li>\n<li>Reduce mean time to detection and resolution by building cross-signal correlation, unified query interfaces, and AI-assisted diagnostic tooling</li>\n<li>Partner with Research, Inference, Product, and Infrastructure teams to ensure observability solutions meet the unique needs of each organisation</li>\n</ul>\n<p>You May Be a Good Fit If You:</p>\n<ul>\n<li>Have 10+ years of relevant industry experience building and operating large-scale observability or monitoring infrastructure</li>\n<li>Have deep experience with at least one observability signal area (metrics, logging, tracing, or error analytics) and familiarity with the others</li>\n<li>Understand high-throughput data pipelines, columnar storage engines, and the tradeoffs involved in ingesting and querying telemetry data at scale</li>\n<li>Have experience operating or building on top of observability platforms such as Prometheus, Grafana, ClickHouse, OpenTelemetry, or similar systems</li>\n<li>Have strong proficiency in at least one of Python, Rust, or Go</li>\n<li>Have excellent communication skills and enjoy partnering with internal teams to improve their operational visibility and incident response capabilities</li>\n<li>Are excited about building foundational infrastructure and are comfortable working independently on ambiguous, high-impact technical challenges</li>\n</ul>\n<p>Strong Candidates May Also Have:</p>\n<ul>\n<li>Experience operating metrics systems at very high cardinality (hundreds of millions of active time series or more)</li>\n<li>Experience with log storage migrations or operating columnar databases (ClickHouse, BigQuery, or similar) for analytics workloads</li>\n<li>Experience with OpenTelemetry instrumentation, collector pipelines, and tail-based sampling strategies</li>\n<li>Experience building or operating alerting platforms, on-call tooling, or SLO frameworks at scale</li>\n<li>Experience with Kubernetes-native monitoring, eBPF-based observability, or continuous profiling</li>\n<li>Interest in applying AI/LLMs to operational workflows such as automated root cause analysis, anomaly detection, or intelligent alerting</li>\n</ul>\n<p>The annual compensation range for this role is $405,000-$485,000 USD.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_72ebb09d-b37","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com/","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5139910008","x-work-arrangement":"hybrid","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"$405,000-$485,000 USD","x-skills-required":["observability","monitoring","telemetry","metrics","logging","tracing","error analytics","alerting","SLO infrastructure","cross-signal correlation","unified query interfaces","AI-assisted diagnostic tooling","Python","Rust","Go","Prometheus","Grafana","ClickHouse","OpenTelemetry"],"x-skills-preferred":["high-throughput data pipelines","columnar storage engines","operating system administration","cloud computing","containerization","DevOps"],"datePosted":"2026-04-18T15:51:29.494Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA | New York City, NY | Seattle, WA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"observability, monitoring, telemetry, metrics, logging, tracing, error analytics, alerting, SLO infrastructure, cross-signal correlation, unified query interfaces, AI-assisted diagnostic tooling, Python, Rust, Go, Prometheus, Grafana, ClickHouse, OpenTelemetry, high-throughput data pipelines, columnar storage engines, operating system administration, cloud computing, containerization, DevOps","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":405000,"maxValue":485000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_3799893d-192"},"title":"Principal Engineer, Gemini App Infrastructure","description":"<p>As the Principal Engineer, you will focus on architecting and building the flagship Gemini App infrastructure. You will serve as the technical anchor for the application and orchestration layer, owning the code quality, architectural decisions, and system design of new design systems and functionality.</p>\n<p>Key responsibilities include:</p>\n<ul>\n<li>Architecting the Gemini app serving and orchestration layers, writing design docs, and defining interfaces to ensure the codebase is scalable, modular, and capable of supporting rapid innovation.</li>\n<li>Designing and implementing robust CI/CD pipelines and experimentation platforms, building tooling that enables the wider engineering team to utilize A/B testing and feature flags to safely and quickly iterate.</li>\n<li>Driving application performance initiatives, debugging complex production issues, and advocating for code quality standards to ensure the infrastructure scales to our product needs.</li>\n<li>Acting as the strategic technical counterpart to product and design leadership, assessing feasibility of ambitious concepts, and proposing technical solutions that turn AI capabilities into reality.</li>\n<li>Mentoring staff and senior engineers, leading code reviews, and fostering a culture of technical accuracy, psychological safety, and user-centricity.</li>\n</ul>\n<p>In order to set you up for success, we look for the following skills and experience:</p>\n<ul>\n<li>Bachelor&#39;s degree in Computer Science or Engineering, or equivalent practical experience.</li>\n<li>15 years of experience in software engineering, building and working with systems in the technology organization.</li>\n</ul>\n<p>In addition, the following would be an advantage:</p>\n<ul>\n<li>Experience building large-scale serving infrastructure.</li>\n<li>Experience implementing observability, telemetry, and real-time monitoring strategies.</li>\n<li>Ability to design and refactor complex server-side architectures that have scaled, ideally at the &gt;1 billion user scale.</li>\n<li>Ability to analyze data to identify bottlenecks and drive technical decisions regarding performance optimizations.</li>\n<li>Ability to unblock teams by solving the hardest technical problems, balancing technical debt with feature work, and driving predictable delivery through architectural clarity.</li>\n<li>Ability to drive technical consensus across multiple teams and stakeholders, translating technical constraints into clear options for leadership.</li>\n</ul>\n<p>The US base salary range for this full-time position is between $307,000 - $427,000 + bonus + equity + benefits.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_3799893d-192","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Google DeepMind","sameAs":"https://deepmind.com/","logo":"https://logos.yubhub.co/deepmind.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/deepmind/jobs/7793048","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$307,000 - $427,000 + bonus + equity + benefits","x-skills-required":["Bachelor's degree in Computer Science or Engineering","15 years of experience in software engineering","Experience building large-scale serving infrastructure","Experience implementing observability, telemetry, and real-time monitoring strategies","Ability to design and refactor complex server-side architectures"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:51:27.870Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Mountain View, California, US"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Bachelor's degree in Computer Science or Engineering, 15 years of experience in software engineering, Experience building large-scale serving infrastructure, Experience implementing observability, telemetry, and real-time monitoring strategies, Ability to design and refactor complex server-side architectures","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":307000,"maxValue":427000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_0ae48270-bef"},"title":"Senior Software Engineer, Storage Engineer","description":"<p>The Storage Engine Organisation at CoreWeave is responsible for the product capabilities and data plane function of CoreWeave&#39;s managed storage products.</p>\n<p>We build reliable, scalable storage solutions with segment leading performance. Storage engine works with engineering teams across infrastructure, compute, and platform to ensure our storage services meet the needs of the world&#39;s most demanding AI workloads.</p>\n<p>The role involves designing and implementing distributed storage solutions to support scaling data-intensive AI workloads, contributing to the development of exabyte-scale, S3-compatible object storage, and integrating dedicated storage clusters into diverse customer environments.</p>\n<p>Key responsibilities include working with technologies such as RDMA, GPU Direct Storage, and distributed filesystems protocols like NFS or FUSE to optimise storage performance and efficiency, participating in efforts to improve the reliability, durability, and observability of our storage stack, collaborating with operations teams to monitor, troubleshoot, and improve storage systems in production environments, and helping develop metrics and dashboards to provide visibility into storage performance and health.</p>\n<p>The ideal candidate will have a strong background in storage systems engineering or infrastructure, with experience working with object storage or distributed filesystems in production environments, proficiency in a systems programming language like Go, C, or Rust, and familiarity with storage observability tools and telemetry pipelines.</p>\n<p>As a senior software engineer, you will be responsible for designing, developing, and deploying scalable and efficient storage solutions, working closely with cross-functional teams to ensure seamless integration with other components of the platform, and mentoring junior engineers to help them grow in their roles.</p>\n<p>If you are passionate about building high-performance storage solutions and have a strong background in software engineering, we encourage you to apply for this exciting opportunity.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_0ae48270-bef","directApply":true,"hiringOrganization":{"@type":"Organization","name":"CoreWeave","sameAs":"https://www.coreweave.com","logo":"https://logos.yubhub.co/coreweave.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/coreweave/jobs/4643524006","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$139,000 to $204,000","x-skills-required":["Storage systems engineering","Infrastructure","Object storage","Distributed filesystems","RDMA","GPU Direct Storage","NFS","FUSE","Systems programming languages (Go, C, Rust)","Storage observability tools","Telemetry pipelines"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:51:26.395Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Livingston, NJ/ New York , NY / Sunnyvale, CA / Bellevue, WA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Storage systems engineering, Infrastructure, Object storage, Distributed filesystems, RDMA, GPU Direct Storage, NFS, FUSE, Systems programming languages (Go, C, Rust), Storage observability tools, Telemetry pipelines","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":139000,"maxValue":204000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_f2c6f765-eca"},"title":"Staff Engineer, Storage Control Plane","description":"<p>We&#39;re looking for a Staff Storage Engineer to play a key role in designing, building, and operating the control plane for our high-performance AI storage platform. You&#39;ll help evolve CoreWeave&#39;s storage systems by building reliable, scalable, and high-throughput solutions that power some of the largest and innovative AI workloads in the world.</p>\n<p>This role involves close collaboration with teams across infrastructure, compute, and platform to ensure our storage services scale automatically and seamlessly while maximizing performance and reliability.</p>\n<p>About the role:</p>\n<ul>\n<li>Design and implement a highly scalable multi-tenant control plane that supports CoreWeave&#39;s growing AI storage and cloud infrastructure needs.</li>\n</ul>\n<ul>\n<li>Contribute to the development of exabyte-scale, S3-compatible object storage, distributed file system and integrate dedicated storage clusters into diverse customer environments.</li>\n</ul>\n<ul>\n<li>Work with technologies such as RDMA, GPU Direct Storage, RoCE, InfiniBand, SPDK, and distributed filesystems to optimize storage performance and efficiency.</li>\n</ul>\n<ul>\n<li>Participate in efforts to improve the reliability, durability, and observability of our storage stack.</li>\n</ul>\n<ul>\n<li>Collaborate with operations teams to monitor, analyze, and optimize storage systems using telemetry, metrics, and dashboards to improve performance, latency, and resilience.</li>\n</ul>\n<ul>\n<li>Work cross-functionally with platform, product, and infrastructure teams to deliver seamless storage capabilities across the stack.</li>\n</ul>\n<ul>\n<li>Share your knowledge and mentor other engineers on best practices in building distributed, high-performance systems.</li>\n</ul>\n<p>Who You Are:</p>\n<ul>\n<li>Bachelor&#39;s or Master&#39;s degree in Computer Science, Engineering, or a related field.</li>\n</ul>\n<ul>\n<li>10+ years of experience working in storage systems engineering or infrastructure.</li>\n</ul>\n<ul>\n<li>Strong hands-on experience with object storage or distributed filesystems in production environments.</li>\n</ul>\n<ul>\n<li>Experience with one or more storage protocols (e.g. S3, NFS) and file systems such as Ceph, DAOS, or similar.</li>\n</ul>\n<ul>\n<li>Proficiency in a systems programming language such as Go, C++, or Rust.</li>\n</ul>\n<ul>\n<li>Familiarity with storage observability tools and telemetry pipelines (e.g., ClickHouse, Prometheus, Grafana).</li>\n</ul>\n<ul>\n<li>Solid understanding of cloud-native infrastructure, Kubernetes, and scalable system architecture.</li>\n</ul>\n<ul>\n<li>Strong debugging and problem-solving skills in distributed, high-performance environments.</li>\n</ul>\n<ul>\n<li>Clear communicator, able to work collaboratively across teams and share technical insights effectively.</li>\n</ul>\n<p>Wondering if you&#39;re a good fit? We believe in investing in our people, and value candidates who can bring their own diversified experiences to our teams – even if you aren&#39;t a 100% skill or experience match. Here are a few qualities we&#39;ve found compatible with our team. If some of this describes you, we&#39;d love to talk.</p>\n<p>Why CoreWeave?</p>\n<p>At CoreWeave, we work hard, have fun, and move fast! We&#39;re in an exciting stage of hyper-growth that you will not want to miss out on. We&#39;re not afraid of a little chaos, and we&#39;re constantly learning. Our team cares deeply about how we build our product and how we work together, which is represented through our core values:</p>\n<ul>\n<li>Be Curious at Your Core</li>\n</ul>\n<ul>\n<li>Act Like an Owner</li>\n</ul>\n<ul>\n<li>Empower Employees</li>\n</ul>\n<ul>\n<li>Deliver Best-in-Class Client Experiences</li>\n</ul>\n<ul>\n<li>Achieve More Together</li>\n</ul>\n<p>We support and encourage an entrepreneurial outlook and independent thinking. We foster an environment that encourages collaboration and provides the opportunity to develop innovative solutions to complex problems. As we get set for take off, the growth opportunities within the organization are constantly expanding. You will be surrounded by some of the best talent in the industry, who will want to learn from you, too. Come join us!</p>\n<p>The base salary range for this role is $165,000 to $242,000. The starting salary will be determined based on job-related knowledge, skills, experience, and market location. We strive for both market alignment and internal equity when determining compensation. In addition to base salary, our total rewards package includes a discretionary bonus, equity awards, and a comprehensive benefits program (all based on eligibility).</p>\n<p>What We Offer</p>\n<p>The range we&#39;ve posted represents the typical compensation range for this role. To determine actual compensation, we review the market rate for each candidate which can include a variety of factors. These include qualifications, experience, interview performance, and location. In addition to a competitive salary, we offer a variety of benefits to support your needs, including:</p>\n<ul>\n<li>Medical, dental, and vision insurance</li>\n</ul>\n<ul>\n<li>100% paid for by CoreWeave</li>\n</ul>\n<ul>\n<li>Company-paid Life Insurance</li>\n</ul>\n<ul>\n<li>Voluntary supplemental life insurance</li>\n</ul>\n<ul>\n<li>Short and long-term disability insurance</li>\n</ul>\n<ul>\n<li>Flexible Spending Account</li>\n</ul>\n<ul>\n<li>Health Savings Account</li>\n</ul>\n<ul>\n<li>Tuition Reimbursement</li>\n</ul>\n<ul>\n<li>Ability to Participate in Employee Stock Purchase Program (ESPP)</li>\n</ul>\n<ul>\n<li>Mental Wellness Benefits through Spring Health</li>\n</ul>\n<ul>\n<li>Family-Forming support provided by Carrot</li>\n</ul>\n<ul>\n<li>Paid Parental Leave</li>\n</ul>\n<ul>\n<li>Flexible, full-service childcare support with Kinside</li>\n</ul>\n<ul>\n<li>401(k) with a generous employer match</li>\n</ul>\n<ul>\n<li>Flexible PTO</li>\n</ul>\n<ul>\n<li>Catered lunch each day in our office and data center locations</li>\n</ul>\n<ul>\n<li>A casual work environment</li>\n</ul>\n<ul>\n<li>A work culture focused on innovative disruption</li>\n</ul>\n<p>Our Workplace</p>\n<p>While we prioritize a hybrid work environment, remote work may be considered for candidates located more than 30 miles from an office, based on role requirements for specialized skill sets. New hires will be invited to attend onboarding at one of our hubs within their first month. Teams also gather quarterly to support collaboration.</p>\n<p>California Consumer Privacy Act - California applicants only</p>\n<p>CoreWeave is an equal opportunity employer, committed to fostering an inclusive and supportive workplace. All qualified applicants and candidates will receive consideration for employment without regard to race, color, religion, sex, disability, age, sexual orientation, gender identity, national origin, veteran status, or genetic information. As part of this commitment and consistent with the Americans with Disabilities Act (ADA), CoreWeave will ensure that qualified applicants and candidates with disabilities are provided reasonable accommodations for the hiring process, unless such accommodation would cause an undue hardship. If reasonable accommodation is needed, please contact: careers@coreweave.com.</p>\n<p>Export Control Compliance</p>\n<p>This position requires access to export controlled information. To conform to U.S. Government export regulations applicable to that information, applicant must either be (A) a U.S. person, defined as a (i) U.S. citizen or national, (ii) U.S. lawful permanent resident (green card holder), (iii) refugee under 8 U.S.C. § 1157, or (iv) asylee under 8 U.S.C. § 1158, (B) eligible to access the export controlled information without a required export authorization, or (C) eligible and reasonably likely to obtain the required export authorization from the applicable U.S. government agency. CoreWeave may, for legitimate business reasons, decline to pursue any export licensing process.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_f2c6f765-eca","directApply":true,"hiringOrganization":{"@type":"Organization","name":"CoreWeave","sameAs":"https://www.coreweave.com","logo":"https://logos.yubhub.co/coreweave.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/coreweave/jobs/4669836006","x-work-arrangement":"hybrid","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"$165,000 to $242,000","x-skills-required":["object storage","distributed filesystems","storage protocols","file systems","cloud-native infrastructure","Kubernetes","scalable system architecture","systems programming language","Go","C++","Rust","storage observability tools","telemetry pipelines"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:51:06.353Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Livingston, NJ / New York, NY / Sunnyvale, CA / Bellevue, WA / Dallas, TX"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"object storage, distributed filesystems, storage protocols, file systems, cloud-native infrastructure, Kubernetes, scalable system architecture, systems programming language, Go, C++, Rust, storage observability tools, telemetry pipelines","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":165000,"maxValue":242000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_86696218-8f0"},"title":"Staff Backend Engineer (Ruby on Rails/AI), Verify","description":"<p>As a Staff Backend Engineer (AI) in the Verify stage at GitLab, you&#39;ll help shape and scale the core infrastructure behind GitLab CI. You&#39;ll play a central role in how we integrate AI into CI/CD workflows. Your work will impact performance, reliability, and usability for people running millions of CI jobs, from small teams to the largest enterprises.</p>\n<p>In this role, you&#39;ll go beyond using AI tools and help define how we design, build, and iterate on AI-assisted and agentic CI experiences. You&#39;ll set standards for what good looks like across our AI agent portfolio, including how we measure success, how we instrument behavior in production, and how we account for large language model limitations. You&#39;ll also help responsibly integrate GitLab&#39;s Duo Agent Platform into CI workflows at scale, on a foundation that&#39;s fast, reliable, secure, and observable.</p>\n<p>We have ambitious goals for Agentic CI in FY27. As a Staff Engineer, you will:</p>\n<ul>\n<li>Partner with Engineering, Product, and UX leadership to pressure-test our priorities: where we can move faster, where we&#39;re missing data, and where there&#39;s whitespace to innovate. Part of this includes learning and growing with the Engineering team you will collaborate closely with.</li>\n</ul>\n<ul>\n<li>Define what success looks like across our agent portfolio and make sure we&#39;re tracking against it , not just shipping, but learning.</li>\n</ul>\n<ul>\n<li>Bring a sharp eye to the competitive landscape, helping us understand what it takes to keep GitLab CI best-in-class in an increasingly agentic world.</li>\n</ul>\n<p>Examples of Agentic CI work we have planned for the upcoming year:</p>\n<ul>\n<li>AI Pipeline Builder, the foundational CI agent that auto-creates pipelines for new projects and serves as the launchpad for onboarding new CI users.</li>\n</ul>\n<ul>\n<li>Automate the Fix a Failing Pipeline flow at scale – from dogfooding on internal GitLab projects through to safe, controlled rollout for customers, solving real infrastructure and scalability challenges.</li>\n</ul>\n<ul>\n<li>Build the instrumentation and observability layer that makes agentic CI trustworthy , trigger volume dashboards, retry rates, cost safeguards , so we can measure what&#39;s working, catch what isn&#39;t, and iterate with confidence.</li>\n</ul>\n<ul>\n<li>Harden the CI pipeline execution infrastructure that these agents depend on: database access patterns, background processing, and job orchestration built to handle the additional load that AI-driven automation introduces at enterprise scale.</li>\n</ul>\n<p>You&#39;ll shape and scale GitLab CI backend infrastructure to improve performance, reliability, and usability for users running jobs at high volume. You&#39;ll design and implement AI-powered features for Agentic CI, including agents, agentic flows, and LLM-backed tooling that integrates with GitLab&#39;s Duo Agent Platform. You&#39;ll define what success looks like for AI in CI before you build, including baselines, measurable outcomes, and clear signals that help the team learn and iterate. You&#39;ll build the instrumentation and observability needed to make AI-assisted CI trustworthy in production, including feature behavior metrics, dashboards, and safeguards. You&#39;ll own and drive measurable performance improvements across CI systems (for example, database access patterns, background processing, and job orchestration) by forming hypotheses, running experiments, and validating results with data. You&#39;ll write secure, well-tested, maintainable Ruby on Rails code in a large monolith, improving existing features while reducing technical debt and operational risk. You&#39;ll lead cross-functional technical work with Product, UX, and Infrastructure, influencing architecture and execution across the Verify stage. You&#39;ll share standards, patterns, and learnings with other engineers, raising the bar for responsible AI integration and evidence-driven engineering across CI.</p>\n<p>This role requires advanced proficiency with Ruby and Ruby on Rails, with experience building and maintaining reliable backend services in a large codebase. You should have strong PostgreSQL skills, including data modeling, query tuning, and scaling large tables through proactive performance investigation and remediation. You should have hands-on experience building, running, and debugging high-traffic production systems, ideally in CI, workflow orchestration, or adjacent infrastructure-heavy domains. You should have practical experience designing and shipping AI-powered backend features and integrations, including sound judgment about large language model limitations and responsible use in production. You should have a data-driven approach to engineering: defining hypotheses, establishing baseline metrics, instrumenting changes, and measuring outcomes against clear success criteria. You should have familiarity with observability patterns and tools (metrics, logging, tracing) to diagnose issues, improve reliability, and guide iteration. You should have strong backend architecture and delivery practices, including secure design, well-tested code, and strategies for safe rollouts and zero-downtime changes. You should have clear written and verbal communication skills, including writing technical proposals and documentation, and collaborating effectively in a remote, asynchronous, cross-functional environment.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_86696218-8f0","directApply":true,"hiringOrganization":{"@type":"Organization","name":"GitLab","sameAs":"https://about.gitlab.com/","logo":"https://logos.yubhub.co/about.gitlab.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/gitlab/jobs/8448283002","x-work-arrangement":"remote","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Ruby","Ruby on Rails","PostgreSQL","Data modeling","Query tuning","Scaling large tables","High-traffic production systems","CI","Workflow orchestration","Infrastructure-heavy domains","AI-powered backend features","Large language model limitations","Responsible use in production","Data-driven approach to engineering","Observability patterns","Metrics","Logging","Tracing","Backend architecture","Delivery practices","Secure design","Well-tested code","Safe rollouts","Zero-downtime changes"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:50:58.310Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Remote, APAC; Remote, Canada; Remote, Ireland; Remote, Netherlands; Remote, United Kingdom; Remote, US; Remote, US-Southeast"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Ruby, Ruby on Rails, PostgreSQL, Data modeling, Query tuning, Scaling large tables, High-traffic production systems, CI, Workflow orchestration, Infrastructure-heavy domains, AI-powered backend features, Large language model limitations, Responsible use in production, Data-driven approach to engineering, Observability patterns, Metrics, Logging, Tracing, Backend architecture, Delivery practices, Secure design, Well-tested code, Safe rollouts, Zero-downtime changes"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_a9d5360b-229"},"title":"Staff Platform Engineer - Infra + DevOps","description":"<p>We&#39;re looking for a seasoned Platform Engineer to join our team. As a leader in aging care innovation, Honor provides technology, tools, and services that empower older adults to live life on their own terms. Our platform engineering team builds and manages the infrastructure &amp; core services that powers Honor&#39;s Care Platform. We&#39;re seeking someone with at least 6 years of professional experience in a platform engineering team within a product-centric company. You will be responsible for designing, implementing, and maintaining scalable distributed systems &amp; infrastructure. Your expertise should include cloud platforms, advanced software design patterns &amp; architecture, operations and automation, and containerization technologies like Kubernetes. You will be joining a small team of highly-skilled, enthusiastic, and passionate engineers with an opportunity to create an outsized impact in contributing to the future evolution of Honor&#39;s Care Platform.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Design and implement foundational patterns and libraries for Python applications, across a range of technologies from API services to event processing</li>\n<li>Utilize Infrastructure as Code (IaC) tools to ensure reproducible and scalable environment setups</li>\n<li>Design and implement infrastructure for applications hosted on AWS, supporting event-driven systems, containerized services on Kubernetes, and serverless functions</li>\n<li>Develop and maintain robust CI/CD pipelines using tools such as Jenkins, ArgoCD</li>\n<li>Have experience automating the lifecycle management of code from development through production, including code promotion and configuration management</li>\n<li>Instrument observability through tools such as CloudWatch and DataDog to monitor and optimize application performance across multiple environments</li>\n<li>Scale infrastructure to meet increasing demand while managing cost effectively</li>\n<li>Have experience defining, instrumenting and measuring standards for quality, security, scalability, and availability with a focus on delivering business value</li>\n<li>Have passion for delivering turn-key developer experience for local development</li>\n<li>Keen interest in developing talent through mentorship</li>\n<li>Strong written and verbal communication, tailored to a variety of audiences</li>\n<li>A strategic thinker with a product-first approach and customer obsession</li>\n</ul>\n<p>Requirements:</p>\n<ul>\n<li>At least 6 years of professional experience in a platform engineering team within a product-centric company</li>\n<li>Experience working with an RPC architecture</li>\n<li>Experience working at or having worked at a technology startup and familiar with the challenges of evolving platform maturity</li>\n<li>First-hand experience navigating multiple distributed architecture patterns</li>\n</ul>\n<p>Our range reflects the hiring range for this position. We use national average to determine pay as we are a remote first company. Individual pay is based on a number of factors including qualifications, skills, experience, education, and training. Base pay is just a part of our total rewards program. Honor offers generous equity packages that increase with position level and responsibilities, and a 401K with up to a 4% employer match. We provide medical, dental and vision coverage including zero cost plans for employees. Short Term Disability, Long Term Disability and Life Insurance are fully employer paid with a voluntary additional Life Insurance option. We offer a generous time off program, mental health benefits, wellness program, and discount program.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_a9d5360b-229","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Honor Technology","sameAs":"https://www.honortech.com/","logo":"https://logos.yubhub.co/honortech.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/honor/jobs/8297124002","x-work-arrangement":"remote","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"$200,700-$223,000 USD","x-skills-required":["cloud platforms","advanced software design patterns & architecture","operations and automation","containerization technologies like Kubernetes","Infrastructure as Code (IaC)","AWS","event-driven systems","serverless functions","CI/CD pipelines","Jenkins","ArgoCD","observability","CloudWatch","DataDog","quality","security","scalability","availability"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:50:55.286Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Remote Position"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"cloud platforms, advanced software design patterns & architecture, operations and automation, containerization technologies like Kubernetes, Infrastructure as Code (IaC), AWS, event-driven systems, serverless functions, CI/CD pipelines, Jenkins, ArgoCD, observability, CloudWatch, DataDog, quality, security, scalability, availability","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":200700,"maxValue":223000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_6a24f057-4f1"},"title":"Staff Production Engineer","description":"<p>The Production Engineering Tools team builds and operates foundational platforms that make CoreWeave&#39;s cloud reliable, observable, and scalable. We are hiring a Staff Production Engineer to design, build, and own the foundational platforms and frameworks that underpin operational excellence across CoreWeave.</p>\n<p>In this role, you will combine deep technical leadership with hands-on engineering to create systems that improve availability, resiliency, and delivery velocity at scale. This is a high-impact role with broad organisational influence. You will develop a deep understanding of CoreWeave&#39;s infrastructure and services, shape architecture and tooling decisions, and partner closely with service owners to operationalise reliability through automation and paved paths rather than manual process or advocacy.</p>\n<p>Success requires the ability to pivot quickly between hot incidents, multi-team programs, and initiatives at all levels of the organisation. You will design, build, and own foundational platforms and frameworks from architecture through adoption and operation. You will lead technical strategy and execution for internal tooling that reduces manual operations, improves delivery velocity, and supports CoreWeave&#39;s revenue growth through faster, more reliable datacentre delivery.</p>\n<p>You will partner with service owners and platform teams to translate reliability and operational requirements into automation, self-service capabilities, and opinionated paved paths. You will build and evolve systems for observability, alerting, automated remediation, resiliency testing, and authoritative sources of truth, operationalising best practices through tooling rather than manual enforcement.</p>\n<p>You will participate in incident response for critical outages with the explicit goal of improving systems, tooling, and defaults to reduce future operational load,not as a long-term escalation path. You will ship production code, participate in on-call rotations as needed, and mentor engineers on platform ownership, operational design, and sustainable production practices.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_6a24f057-4f1","directApply":true,"hiringOrganization":{"@type":"Organization","name":"CoreWeave","sameAs":"https://www.coreweave.com","logo":"https://logos.yubhub.co/coreweave.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/coreweave/jobs/4644302006","x-work-arrangement":"hybrid","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"$188,000 to $275,000","x-skills-required":["distributed systems","cloud platforms","Kubernetes","observability","incident practices","metrics","tracing","structured logs","SLIs/SLOs","PIRs"],"x-skills-preferred":["foundational internal platforms","service tiering","disaster recovery","chaos engineering","structured resilience programs"],"datePosted":"2026-04-18T15:50:55.257Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Livingston, NJ / New York, NY / Sunnyvale, CA / Bellevue, WA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"distributed systems, cloud platforms, Kubernetes, observability, incident practices, metrics, tracing, structured logs, SLIs/SLOs, PIRs, foundational internal platforms, service tiering, disaster recovery, chaos engineering, structured resilience programs","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":188000,"maxValue":275000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_6d4cb202-a44"},"title":"Operations Program Manager, Managed Operations","description":"<p>We&#39;re seeking an experienced Operations Program Manager to join our team. As an Operations Program Manager, you will collaborate with cross-functional teams to build and scale operational infrastructure that supports the launch of innovative products. You will lead multiple projects simultaneously, managing timelines, resources, and stakeholder expectations to deliver impactful results that align with Stripe&#39;s goals.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Collaborate with Engineering, Product Management, Design and other cross-functional teams to build and scale operational infrastructure that supports the launch of innovative products, ensuring a seamless user experience.</li>\n<li>Lead multiple projects simultaneously, managing timelines, resources, and stakeholder expectations to deliver impactful results that align with Stripe&#39;s goals.</li>\n<li>Ensure the operations team continuously delivers high quality customer outcomes, through quality monitoring, training, metric observability and process improvement.</li>\n<li>Partner directly with customers to develop Standard Operating Procedures for supporting new Stripe lines of business.</li>\n<li>Lead operational reviews with customers to communicate operational performance and uncover customer concerns.</li>\n<li>Identify critical operational requirements from partner banks and card networks and ensure we can fulfil them.</li>\n</ul>\n<p>Requirements:</p>\n<ul>\n<li>Bachelor’s degree or foreign equivalent in Business, Management Science, Computing, Finance or a related field, plus 2 years of experience in product or project strategy in a high-tech growth environment.</li>\n<li>2 years of experience in executing and delivering complex operational projects working directly with diverse, distributed teams and their cross-functional stakeholders.</li>\n<li>2 years of experience in working in a fast-paced work environment crafting rapid and strategic fixes to high intensity, sensitive problems.</li>\n<li>2 years of experience in crafting written and verbal communications designed to solve user problems.</li>\n</ul>\n<p>Salary: $140,400 - $210,600/yr.</p>\n<p>This salary range represents the base salary range for the role and any sales commissions/sales bonuses targets, if applicable, would be in addition to the base salary.</p>\n<p>40 hrs/week.</p>\n<p>50% telecommuting permitted.</p>\n<p>Travel Requirement: 5% for meetings and onsites.</p>\n<p>Additional benefits for this role may include: equity, company bonus or sales commissions/bonuses; 401(k) plan; medical, dental, and vision benefits; and wellness stipends.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_6d4cb202-a44","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Stripe, LLC.","sameAs":"https://stripe.com/","logo":"https://logos.yubhub.co/stripe.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/stripe/jobs/7812209","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":"$140,400 - $210,600/yr.","x-skills-required":["project management","operational infrastructure","cross-functional teams","product launch","customer outcomes","quality monitoring","training","metric observability","process improvement","standard operating procedures","customer reviews","operational performance","critical operational requirements"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:50:45.055Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"New York, NY"}},"employmentType":"FULL_TIME","occupationalCategory":"Operations","industry":"Finance","skills":"project management, operational infrastructure, cross-functional teams, product launch, customer outcomes, quality monitoring, training, metric observability, process improvement, standard operating procedures, customer reviews, operational performance, critical operational requirements","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":140400,"maxValue":210600,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_5b6f9322-a9a"},"title":"Staff Engineer, Storage Engine","description":"<p>CoreWeave is seeking a Staff Engineer, Storage Engine to join their team. The successful candidate will design and implement distributed storage solutions to support scaling data-intensive AI workloads. They will contribute to the development of exabyte-scale, S3-compatible object storage and integrate dedicated storage clusters into diverse customer environments.</p>\n<p>Key responsibilities include:</p>\n<ul>\n<li>Designing and implementing distributed storage solutions to support scaling data-intensive AI workloads</li>\n<li>Contributing to the development of exabyte-scale, S3-compatible object storage</li>\n<li>Integrating dedicated storage clusters into diverse customer environments</li>\n<li>Working with technologies such as RDMA, GPU Direct Storage, and distributed filesystems protocols such as NFS or FUSE to optimize storage performance and efficiency</li>\n<li>Leading efforts to improve the reliability, durability, security, and observability of the storage stack</li>\n<li>Collaborating with operations teams to monitor, troubleshoot, and improve storage systems in production environments</li>\n<li>Setting the bar for developing metrics and dashboards to provide visibility into storage performance and health</li>\n<li>Analyzing telemetry and system data to drive improvements in throughput, latency, and resilience</li>\n<li>Working cross-functionally with platform, product, and infrastructure teams to deliver seamless storage capabilities across the stack</li>\n<li>Sharing knowledge and mentoring other engineers on best practices in building distributed, high-performance systems</li>\n</ul>\n<p>Requirements include:</p>\n<ul>\n<li>Bachelor&#39;s, Master&#39;s, or PhD degree in Computer Science, Engineering, or a related field</li>\n<li>8-10+ years of experience working in storage systems engineering or infrastructure</li>\n<li>Strong hands-on experience with object storage or distributed filesystems in production environments</li>\n<li>Experience with one or more storage protocols (e.g. S3, NFS) and file systems such as Ceph, DAOS, or similar</li>\n<li>Proficiency in a systems programming language such as Go, C, or Rust</li>\n<li>Proficiency leveraging AI tools to augment software development</li>\n<li>Familiarity with storage observability tools and telemetry pipelines (e.g., ClickHouse, Prometheus, Grafana)</li>\n<li>Experience working with cloud-native infrastructure, Kubernetes, and scalable system architectures</li>\n</ul>\n<p>The base salary range for this role is $188,000 to $275,000.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_5b6f9322-a9a","directApply":true,"hiringOrganization":{"@type":"Organization","name":"CoreWeave","sameAs":"https://www.coreweave.com","logo":"https://logos.yubhub.co/coreweave.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/coreweave/jobs/4612047006","x-work-arrangement":"hybrid","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"$188,000 to $275,000","x-skills-required":["distributed storage","object storage","S3-compatible object storage","RDMA","GPU Direct Storage","distributed filesystems protocols","NFS","FUSE","storage performance and efficiency","reliability","durability","security","observability","telemetry","system data","throughput","latency","resilience","cloud-native infrastructure","Kubernetes","scalable system architectures"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:50:33.024Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Livingston, NJ / New York, NY / Sunnyvale, CA / Bellevue, WA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"distributed storage, object storage, S3-compatible object storage, RDMA, GPU Direct Storage, distributed filesystems protocols, NFS, FUSE, storage performance and efficiency, reliability, durability, security, observability, telemetry, system data, throughput, latency, resilience, cloud-native infrastructure, Kubernetes, scalable system architectures","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":188000,"maxValue":275000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_3e220ef6-c60"},"title":"Enterprise Account Executive - Expand - North Central","description":"<p>We are looking for a high-energy Enterprise Account Executive to drive net-new revenue and expansion within strategic Enterprise accounts. You will be the owner of a defined territory where you will build your own pipeline, tell the Elastic Search AI story, and close complex, multi-stakeholder deals in a consumption-based model.</p>\n<p>This role sits at the intersection of sales execution, technical fluency, and cross-functional collaboration,and is critical to our growth in the Enterprise segment.</p>\n<p>Key Responsibilities:</p>\n<ul>\n<li>Own your territory &amp; build pipeline: Develop and execute a proactive outbound cadence (email, call, social) that generates ≥50 % of your booked opportunities.</li>\n<li>Deep discovery &amp; qualification: Uncover pain, business impact, budget, and decision criteria using frameworks like MEDDPICC so you chase only the highest-confidence deals.</li>\n<li>Value storytelling &amp; demos: Craft and deliver tailored narratives and live demos that map Elastic’s Search, Observability, and Security capabilities to measurable business outcomes.</li>\n<li>Mutual deal strategy &amp; forecast accuracy: Collaborate with customers to build formal close plans and keep your CRM up-to-date, maintaining ≥90 % forecast accuracy within ±10 %.</li>\n<li>Executive negotiation &amp; closing: Lead high-stakes contract and pricing discussions,defend your value, structure give/get trades, and land multi-year consumption commitments.</li>\n<li>Domain &amp; cloud acumen: Position Elastic as the Search AI platform of choice by speaking fluently about cloud economics, usage-based pricing, and modern data architectures.</li>\n<li>Cross-functional partnership: Work hand-in-glove with Solutions Architects, Customer Success, Marketing, and RevOps to accelerate deals and drive exceptional customer outcomes.</li>\n</ul>\n<p>Requirements:</p>\n<ul>\n<li>Proven SaaS quota-carrying success: 5+ years closing complex Enterprise deals, consistently overachieving targets in a consumption-based or usage-model environment.</li>\n<li>Expert discovery &amp; qualification skills: Demonstrated ability to apply MEDDPICC or equivalent frameworks to drive disciplined pipeline and eliminate low-probability deals.</li>\n<li>Compelling value storytellers: Track record of delivering executive-level presentations and demos that tie product capabilities to real dollars saved, revenue gained, or risk mitigated.</li>\n<li>Strong negotiation chops: History of landing multi-year, high-ACV contracts while protecting margin and securing executive stakeholder buy-in.</li>\n<li>Technical &amp; cloud fluency: Comfortable discussing a broad range of technical topics including observability, security, vector/traditional search, and cloud cost optimization.</li>\n<li>Collaborative mindset &amp; coachability: A learner who partners effectively with internal teams, incorporates feedback, and embodies Elastic’s values of community and openness.</li>\n<li>Open Source enthusiasm: Genuine appreciation for open-source communities and the Elastic model,bonus if you’ve sold or advocated in an OSS context.</li>\n</ul>\n<p>Bonus Points:</p>\n<ul>\n<li>Prior experience at an open-source or developer-centric infrastructure company.</li>\n<li>Familiarity with observability (logs, metrics, traces) or security analytics (SIEM/XDR) use cases.</li>\n</ul>\n<p>If you’re driven to build your own pipeline, master complex deal cycles, and help customers unlock the power of Search AI, we’d love to talk. Apply today!</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_3e220ef6-c60","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Elastic","sameAs":"https://www.elastic.co/","logo":"https://logos.yubhub.co/elastic.co.png"},"x-apply-url":"https://job-boards.greenhouse.io/elastic/jobs/7782450","x-work-arrangement":"remote","x-experience-level":"executive","x-job-type":"full-time","x-salary-range":"$113,300-$179,200 USD","x-skills-required":["SaaS quota-carrying success","Expert discovery & qualification skills","Compelling value storytellers","Strong negotiation chops","Technical & cloud fluency"],"x-skills-preferred":["Open Source enthusiasm","Prior experience at an open-source or developer-centric infrastructure company","Familiarity with observability (logs, metrics, traces) or security analytics (SIEM/XDR) use cases"],"datePosted":"2026-04-18T15:50:31.490Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"United States"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Sales","industry":"Technology","skills":"SaaS quota-carrying success, Expert discovery & qualification skills, Compelling value storytellers, Strong negotiation chops, Technical & cloud fluency, Open Source enthusiasm, Prior experience at an open-source or developer-centric infrastructure company, Familiarity with observability (logs, metrics, traces) or security analytics (SIEM/XDR) use cases","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":113300,"maxValue":179200,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_a7d0cf0f-a3a"},"title":"Senior Engineer- Data Platforms","description":"<p>The Data Platform Team serves as the experts on managing data infrastructure for CoreWeave. Our data infrastructure includes managed databases, data ingestion, data flow, data lakes, and other data retrieval for CoreWeave and its customers.</p>\n<p>We are seeking senior software engineers with specialization in database and stream processing who can help us fulfill the goal of our global datastore strategy and establish communication models for our data flow. This individual will work with a team of mixed skilled engineers and have the opportunity to work on the full range of rewarding challenges that come with the business of building a cloud in a communicative, supportive, and high-performing environment.</p>\n<p>As a member of the Data Platform Team you will have the opportunity to:</p>\n<ul>\n<li>Design and implement the platform to deliver data to teams with a focus on providing managed solutions through APIs</li>\n<li>Participate in operations and scaling of relational data platforms</li>\n<li>Develop a stream processing architecture and solve for scalability and reliability</li>\n<li>Improve the performance, security, reliability, and scalability of our data platforms and related services, and participate in the team’s on-call rotation</li>\n<li>Establish guidelines and guard rails for data access and storage for stakeholder teams</li>\n<li>Ensure compliance with standards for data protection regulation</li>\n<li>Grow, change, invest in your teammates, be invested-in, share your ideas, listen to others, be curious, have fun, and, above all, be yourself</li>\n</ul>\n<p>The ideal candidate will have 5+ years of experience in a software or infrastructure engineering industry, with experience operating services in production and at scale and familiarity with reliability engineering concepts such as different types of testing, progressive deployments, error budgets, observability, and fault-tolerant design.</p>\n<p>The base salary range for this role is $175,000 to $210,000. The starting salary will be determined based on job-related knowledge, skills, experience, and market location.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_a7d0cf0f-a3a","directApply":true,"hiringOrganization":{"@type":"Organization","name":"CoreWeave","sameAs":"https://www.coreweave.com","logo":"https://logos.yubhub.co/coreweave.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/coreweave/jobs/4562276006","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$175,000 to $210,000","x-skills-required":["database and stream processing","managed databases","data ingestion","data flow","data lakes","APIs","operational experience","reliability engineering","testing","progressive deployments","error budgets","observability","fault-tolerant design"],"x-skills-preferred":["Kubernetes","Go","Linux distributions","shell scripting","Linux storage and networking stacks"],"datePosted":"2026-04-18T15:50:18.835Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Bellevue, WA / Sunnyvale, CA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"database and stream processing, managed databases, data ingestion, data flow, data lakes, APIs, operational experience, reliability engineering, testing, progressive deployments, error budgets, observability, fault-tolerant design, Kubernetes, Go, Linux distributions, shell scripting, Linux storage and networking stacks","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":175000,"maxValue":210000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_c078633c-28c"},"title":"Senior Engineer, Core API - W&B","description":"<p>You will be responsible for building and evolving the core backend systems and shared infrastructure that power our platform.</p>\n<p>A significant portion of backend logic is shared across services, and this role will help define, maintain, and scale that foundation.</p>\n<p>You will own and improve internal schema and code generation tooling that ensures consistency and correctness across services.</p>\n<p>You will work on and extend our custom job scheduler, improving reliability, observability, and execution guarantees for distributed workloads.</p>\n<p>You will contribute to safely execute large-scale concurrent and distributed operations.</p>\n<p>You will play a key role in defining and maintaining API standards across teams, ensuring performance, backward compatibility, and clear evolution strategies.</p>\n<p>You will collaborate closely with Product and various Engineering teams to design systems that are reliable, scalable, and maintainable over time.</p>\n<p>The Core Systems team is responsible for the foundational backend infrastructure that powers Weights &amp; Biases within CoreWeave.</p>\n<p>Much of the platform&#39;s critical logic is shared across services, and this role sits at the center of that foundation.</p>\n<p>You will work on the systems that other engineers build upon , from execution frameworks and schedulers to schema tooling and API standards.</p>\n<p>This is a high-leverage role focused on durability, scalability, and long-term maintainability.</p>\n<p>The systems you design and evolve will directly impact reliability, developer velocity, and the ability of the platform to scale with growing workloads.</p>\n<p>You&#39;ll collaborate across teams to ensure that shared backend abstractions remain clean, performant, and consistent as we continue to expand our adoption of technologies like GraphQL and gRPC.</p>\n<p>If you enjoy owning deep technical infrastructure, shaping engineering standards, and building systems that other engineers depend on every day, this role offers meaningful scope and impact.</p>\n<p>You will be surrounded by some of the best talent in the industry, who will want to learn from you, too.</p>\n<p>Come join us!</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_c078633c-28c","directApply":true,"hiringOrganization":{"@type":"Organization","name":"CoreWeave","sameAs":"https://www.coreweave.com","logo":"https://logos.yubhub.co/coreweave.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/coreweave/jobs/4658736006","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$165,000 to $242,000","x-skills-required":["backend engineering experience","designing and maintaining distributed systems","hands-on experience designing and evolving APIs","strong proficiency in Go, Python, or a comparable backend systems language","experience implementing concurrency and parallelism patterns in production systems"],"x-skills-preferred":["familiarity with schema management, code generation tools, or interface definition systems","experience building or operating custom job schedulers, workflow engines, or execution frameworks","experience defining cross-team API standards and governance models","background in high-scale data or ML infrastructure systems","experience improving reliability through observability, metrics, and SLO-driven development practices"],"datePosted":"2026-04-18T15:50:16.703Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Livingston, NJ / New York, NY / Sunnyvale, CA / Bellevue, WA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"backend engineering experience, designing and maintaining distributed systems, hands-on experience designing and evolving APIs, strong proficiency in Go, Python, or a comparable backend systems language, experience implementing concurrency and parallelism patterns in production systems, familiarity with schema management, code generation tools, or interface definition systems, experience building or operating custom job schedulers, workflow engines, or execution frameworks, experience defining cross-team API standards and governance models, background in high-scale data or ML infrastructure systems, experience improving reliability through observability, metrics, and SLO-driven development practices","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":165000,"maxValue":242000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_b9e4093f-593"},"title":"Security Software Engineer - Endpoint Security","description":"<p>We&#39;re seeking a Security Software Engineer to develop novel security tooling for securing embedded Linux systems and Android devices. The ideal candidate can develop, test, and debug an endpoint detection and response agent with mission-critical security responsibilities.</p>\n<p>Design and develop cybersecurity tools for real-time embedded, embedded Linux, and Android systems. Implement an endpoint detection and response agent for use on Anduril products. Develop thorough testing and qualification procedures for security-critical components. Collaborate with cross-functional teams to identify specific security needs and implement solutions. Conduct code reviews and ensure adherence to security best practices. Stay updated on the latest security threats and technologies.</p>\n<p>Required qualifications:</p>\n<ul>\n<li>2+ years of software development experience in some combination of Golang, Rust, or C/C++.</li>\n<li>Experience with Linux observability and eBPF.</li>\n<li>Strong understanding of Linux security internals.</li>\n<li>Experience debugging and optimising performance of Linux software.</li>\n<li>Experience with CI/CD and test automation, including for mobile and embedded devices.</li>\n<li>Solid understanding of cybersecurity principles and practices.</li>\n<li>Ability to obtain and hold a U.S. Secret security clearance.</li>\n</ul>\n<p>Preferred qualifications:</p>\n<ul>\n<li>Knowledge of security frameworks and compliance standards.</li>\n<li>Experience in mobile development, specifically on Android platforms.</li>\n<li>Experience implementing EDR tooling.</li>\n<li>Experience with SOC operations, forensics, and incident response practices.</li>\n<li>Strong problem-solving and analytical skills.</li>\n<li>Excellent communication and teamwork abilities.</li>\n</ul>\n<p>US Salary Range $126,000-$191,000 USD</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_b9e4093f-593","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anduril","sameAs":"https://www.anduril.com/","logo":"https://logos.yubhub.co/anduril.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/andurilindustries/jobs/5086964007","x-work-arrangement":"onsite","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":"$126,000-$191,000 USD","x-skills-required":["Golang","Rust","C/C++","Linux observability","eBPF","Linux security internals","CI/CD","test automation","cybersecurity principles","U.S. Secret security clearance"],"x-skills-preferred":["security frameworks","compliance standards","mobile development","EDR tooling","SOC operations","forensics","incident response practices"],"datePosted":"2026-04-18T15:50:01.094Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Atlanta, Georgia, United States"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Golang, Rust, C/C++, Linux observability, eBPF, Linux security internals, CI/CD, test automation, cybersecurity principles, U.S. Secret security clearance, security frameworks, compliance standards, mobile development, EDR tooling, SOC operations, forensics, incident response practices","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":126000,"maxValue":191000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_f14ee3e5-931"},"title":"Software Engineer, UI Platform","description":"<p>As a Software Engineer on the UI Platform team at Anthropic, you will be hands-on building the platform that other engineers depend on every day.scope of work includes designing and shipping shared components and design-system-level abstractions, evolving the backend-for-frontend (BFF) APIs that power our client applications, and improving the build, deploy, and observability systems that keep Claude.ai running smoothly across surfaces.</p>\n<p>This is a great fit if you care deeply about developer experience and want your engineering work to have outsized leverage: instead of shipping one feature, you&#39;re building the tools and systems that make dozens of features possible.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Design and build shared UI components, libraries, and abstractions that product teams across Anthropic use to ship consistently and efficiently on web and mobile</li>\n</ul>\n<ul>\n<li>Contribute to the BFF API layer that powers Claude.ai&#39;s client applications,thinking carefully about clean contracts, performance, and reliability at the boundary between frontend and backend</li>\n</ul>\n<ul>\n<li>Improve developer velocity across the organization by reducing friction in our build, deploy, and testing pipelines</li>\n</ul>\n<ul>\n<li>Work on performance and reliability: identify and resolve latency issues, improve observability, and help establish high standards that the rest of the platform team can build on</li>\n</ul>\n<ul>\n<li>Partner closely with product engineering teams to understand their needs, unblock them when possible, and shape platform investments around where the most impact is</li>\n</ul>\n<ul>\n<li>Help maintain and evolve documentation and tooling that make the platform approachable for engineers joining or building on top of it</li>\n</ul>\n<p>You may be a good fit if you:</p>\n<ul>\n<li>Have 5+ years of software engineering experience, with significant time spent building shared platforms, developer tools, or infrastructure that other engineers rely on</li>\n</ul>\n<ul>\n<li>Have strong practical skills in modern web technologies (React, TypeScript, Next.js) and experience designing or consuming APIs that serve frontend applications</li>\n</ul>\n<ul>\n<li>Care about developer experience and have a track record of building things that make other engineers more productive</li>\n</ul>\n<ul>\n<li>Have solid instincts around reliability, observability, and performance,and enjoy operationalizing those instincts in production systems</li>\n</ul>\n<ul>\n<li>Thrive in fast-paced, collaborative environments and enjoy working closely with cross-functional partners</li>\n</ul>\n<ul>\n<li>Pick up slack, even if it goes outside your job description</li>\n</ul>\n<p>Strong candidates may also have experience with:</p>\n<ul>\n<li>Building shared component libraries or design systems for multiple surfaces (web, mobile, desktop)</li>\n</ul>\n<ul>\n<li>BFF architectures and API patterns that balance flexibility with consistency across client platforms</li>\n</ul>\n<ul>\n<li>Performance optimization and latency reduction in consumer-facing applications</li>\n</ul>\n<ul>\n<li>CI/CD, build systems, and deployment automation</li>\n</ul>\n<ul>\n<li>Observability and monitoring (metrics, logging, tracing)</li>\n</ul>\n<ul>\n<li>Working on AI/ML products or in rapidly evolving product environments</li>\n</ul>\n<p>Candidates need not have:</p>\n<ul>\n<li>100% of the skills needed to perform the job</li>\n</ul>\n<ul>\n<li>Formal certifications or education credentials</li>\n</ul>\n<p>The annual compensation range for this role is $320,000-$405,000 USD.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_f14ee3e5-931","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com/","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/4673416008","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$320,000-$405,000 USD","x-skills-required":["software engineering","UI platform","shared components","design-system-level abstractions","backend-for-frontend (BFF) APIs","build","deploy","observability systems","developer experience","React","TypeScript","Next.js","APIs"],"x-skills-preferred":["performance optimization","latency reduction","CI/CD","build systems","deployment automation","observability and monitoring"],"datePosted":"2026-04-18T15:50:00.740Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA | New York City, NY"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"software engineering, UI platform, shared components, design-system-level abstractions, backend-for-frontend (BFF) APIs, build, deploy, observability systems, developer experience, React, TypeScript, Next.js, APIs, performance optimization, latency reduction, CI/CD, build systems, deployment automation, observability and monitoring","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":320000,"maxValue":405000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_db7b0f51-7df"},"title":"Senior Cloud Support Engineer","description":"<p>As a Senior Cloud Support Engineer at CoreWeave, you&#39;ll be on the front lines of a technological revolution, empowering our customers to harness the full potential of our advanced Kubernetes-powered HPC cloud infrastructure.</p>\n<p>You&#39;ll be hands-on, collaborating with engineers and researchers to resolve issues that impact high-profile, mission-critical applications and cutting-edge AI training workloads. Your contributions will be pivotal in ensuring seamless performance, reliability, and success for our customers, positioning you at the very core of transformative technologies reshaping industries worldwide at a company that is truly one of a kind.</p>\n<p>In this role, you will:</p>\n<ul>\n<li>Guide and mentor team members in developing their technical skills and troubleshooting capabilities across all disciplines supported by CoreWeave.</li>\n<li>Provide real-time feedback and coaching, reviewing tickets to identify opportunities for improvement and ensure quality assurance (QA).</li>\n<li>Develop and deliver training sessions to improve the team&#39;s proficiency and efficiency in resolving customer issues.</li>\n<li>Use technical expertise to investigate, debug, and resolve customer-impacting issues with the curiosity required to uncover and understand root causes.</li>\n<li>Maintain high customer satisfaction through swift, accurate, and empathetic high-touch support communications, as well as established best practices.</li>\n<li>Help design and implement troubleshooting best practices to ensure fast, accurate client resolutions.</li>\n<li>Contribute to refining processes, workflows, and playbooks for handling complex customer challenges.</li>\n<li>Serve as a technical escalation point for high-priority escalations or complex cases, modeling effective problem-solving approaches.</li>\n<li>Lead the creation of knowledge-sharing resources, including documentation, tutorials, and how-to guides.</li>\n<li>Enhance the support team&#39;s knowledge of CoreWeave&#39;s products and services through continuous learning initiatives.</li>\n</ul>\n<p>Who You Are:</p>\n<ul>\n<li>Have a Bachelor&#39;s degree in Information Science / Information Technology, Data Science, Computer Science, Engineering, Mathematics, Physics, or a related field, OR equivalent experience in a technical position</li>\n<li>At least 5+ years of experience in cloud support, systems administration, or related technical support-focused roles</li>\n<li>Proven hands-on work experience with Kubernetes</li>\n<li>Experience with networking, load balancing, storage volumes, observability, node management, High-Performance Computing (HPC), and Linux system administration</li>\n<li>Proven ability to mentor team members, foster technical growth, and improve team-wide capabilities through guidance and feedback</li>\n<li>Experience with observability tools such as Grafana</li>\n<li>Strong troubleshooting skills, with experience resolving complex customer issues and driving quality assurance through ticket reviews or similar processes</li>\n<li>Demonstrated success collaborating with cross-functional teams to refine workflows, implement best practices, and advocate for necessary tools or process changes</li>\n<li>Excellent written and verbal communication skills, with a track record of simplifying complex concepts for diverse audiences</li>\n<li>Strong technical presentation skills, with experience delivering precise, engaging, and informative presentations to technical and non-technical audiences, effectively showcasing complex concepts and solutions</li>\n</ul>\n<p>Preferred:</p>\n<ul>\n<li>CKA Certified</li>\n<li>Demonstrated experience with training, coaching, and creating onboarding materials.</li>\n<li>Operates in a fast-paced, global, 24/7 support team environment</li>\n<li>Ability to collaborate across different time zones</li>\n<li>On-site office environment, hybrid, or remote options depending on location</li>\n<li>Flexible to travel up to 10% (~25 days/year)</li>\n</ul>\n<p>Why CoreWeave?</p>\n<p>At CoreWeave, we work hard, have fun, and move fast! We&#39;re in an exciting stage of hyper-growth that you will not want to miss out on. We&#39;re not afraid of a little chaos, and we&#39;re constantly learning. Our team cares deeply about how we build our product and how we work together, which is represented through our core values:</p>\n<ul>\n<li>Be Curious at Your Core</li>\n<li>Act Like an Owner</li>\n<li>Empower Employees</li>\n<li>Deliver Best-in-Class Client Experiences</li>\n<li>Achieve More Together</li>\n</ul>\n<p>We support and encourage an entrepreneurial outlook and independent thinking. We foster an environment that encourages collaboration and provides the opportunity to develop innovative solutions to complex problems. As we get set for take off, the growth opportunities within the organization are constantly expanding. You will be surrounded by some of the best talent in the industry, who will want to learn from you, too.</p>\n<p>Come join us!</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_db7b0f51-7df","directApply":true,"hiringOrganization":{"@type":"Organization","name":"CoreWeave","sameAs":"https://www.coreweave.com","logo":"https://logos.yubhub.co/coreweave.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/coreweave/jobs/4568136006","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$122,000 to $163,000","x-skills-required":["cloud support","systems administration","Kubernetes","networking","load balancing","storage volumes","observability","node management","High-Performance Computing (HPC)","Linux system administration"],"x-skills-preferred":["CKA Certified","training","coaching","onboarding materials","fast-paced global support team environment","collaboration across different time zones"],"datePosted":"2026-04-18T15:49:50.841Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Livingston, NJ / New York, NY / Sunnyvale, CA / Bellevue, WA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"cloud support, systems administration, Kubernetes, networking, load balancing, storage volumes, observability, node management, High-Performance Computing (HPC), Linux system administration, CKA Certified, training, coaching, onboarding materials, fast-paced global support team environment, collaboration across different time zones","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":122000,"maxValue":163000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_5d1ad433-760"},"title":"Part-Time Obstetrics Care Provider (CNM or NP)","description":"<p><strong>Job Summary</strong></p>\n<p>As a Part-Time Obstetrics Care Provider (CNM or NP) at Pomelo Care, you will provide direct patient care and clinical oversight that optimizes outcomes for pregnant people and newborns through population-based implementation of evidence-based care.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Be accountable for improving clinical outcomes for empaneled patients, by overseeing their medical care</li>\n<li>Attend daily clinical huddles focused on collaboration across a clinical pod, including RNs, LCSW, and RDs</li>\n<li>Review complex patient cases, develop care plans, and support other members of the clinical team in providing them with evidence-based care</li>\n<li>Monitor adverse events and hold clinical retros to identify any areas for improvement in Pomelo’s protocols</li>\n<li>Lead development and review of evidence-based medical protocols and algorithms related to obstetric and women’s health</li>\n</ul>\n<p><strong>Requirements</strong></p>\n<ul>\n<li>Must have active compact RN license</li>\n<li>Minimum of 4 years of experience as an APP</li>\n<li>Extensive obstetric experience, including treating high-risk patients, as well as some experience caring for infants</li>\n<li>A passion for and demonstrated effectiveness in optimizing evidence-based care and perinatal outcomes</li>\n<li>Experience using data to drive patient engagement, activation, and outcomes</li>\n<li>Experience leading successful teams, with track record of outstanding collaboration and teamwork</li>\n<li>A sense of urgency to improve outcomes coupled with exceptional organization and attention to detail</li>\n<li>A growth mindset with the ability to approach process change and ambiguous situations with enthusiasm, creativity, and accountability</li>\n<li>Facility using multiple tech platforms, with an eagerness for advising about platform improvements and adapting to new systems</li>\n<li>Eager to thrive in a fast-paced, metric-driven environment</li>\n<li>Phenomenal interpersonal and communication skills</li>\n</ul>\n<p><strong>Education and Training</strong></p>\n<ul>\n<li>NP or CNM with significant experience in obstetrics and some experience in infant care</li>\n</ul>\n<p><strong>Bonus Points</strong></p>\n<ul>\n<li>Telehealth and/or remote monitoring experience</li>\n<li>Experience in outpatient or home-based management of higher-risk patients</li>\n</ul>\n<p><strong>Schedule</strong></p>\n<ul>\n<li>2x12 - 11am - 11pm ET</li>\n<li>Set schedule, 2 weekdays per week</li>\n<li>No weekends required</li>\n</ul>\n<p><strong>Why You Should Join Our Team</strong></p>\n<p>By joining Pomelo, you will get in on the ground floor of a fast-moving, well-funded, and mission-driven startup where you will have a profound impact on the patients we serve. And you&#39;ll learn, grow, be challenged, and have fun with your team while doing it.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_5d1ad433-760","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Pomelo Care","sameAs":"https://www.pomelocare.com/","logo":"https://logos.yubhub.co/pomelocare.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/pomelocare/jobs/5825161004","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"part-time","x-salary-range":null,"x-skills-required":["Active compact RN license","Minimum of 4 years of experience as an APP","Extensive obstetric experience","A passion for and demonstrated effectiveness in optimizing evidence-based care and perinatal outcomes","Experience using data to drive patient engagement, activation, and outcomes","Experience leading successful teams","A sense of urgency to improve outcomes","Exceptional organization and attention to detail","A growth mindset","Facility using multiple tech platforms"],"x-skills-preferred":["Telehealth and/or remote monitoring experience","Experience in outpatient or home-based management of higher-risk patients"],"datePosted":"2026-04-18T15:49:48.694Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"United States"}},"jobLocationType":"TELECOMMUTE","employmentType":"PART_TIME","occupationalCategory":"Healthcare","industry":"Healthcare","skills":"Active compact RN license, Minimum of 4 years of experience as an APP, Extensive obstetric experience, A passion for and demonstrated effectiveness in optimizing evidence-based care and perinatal outcomes, Experience using data to drive patient engagement, activation, and outcomes, Experience leading successful teams, A sense of urgency to improve outcomes, Exceptional organization and attention to detail, A growth mindset, Facility using multiple tech platforms, Telehealth and/or remote monitoring experience, Experience in outpatient or home-based management of higher-risk patients"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_f6f9b8da-ade"},"title":"Senior Account Executive","description":"<p>We&#39;re looking for a high-energy Senior Account Executive to drive net-new revenue and expansion with both enterprise and mid-market accounts in Sweden. You&#39;ll be the owner of a defined territory where you&#39;ll build your own pipeline, tell the Elastic Search AI story, and close complex, multi-stakeholder deals in a consumption-based model.</p>\n<p>As a Senior Account Executive, you&#39;ll be responsible for developing and executing a proactive outbound cadence that generates ≥50% of your booked opportunities. You&#39;ll uncover pain, business impact, budget, and decision criteria using frameworks like MEDDPICC so you chase only the highest-confidence deals. You&#39;ll craft and deliver tailored narratives and live demos that map Elastic&#39;s Search, Observability, and Security capabilities to measurable business outcomes.</p>\n<p>You&#39;ll collaborate with customers to build formal close plans and keep your CRM up-to-date, maintaining ≥90% forecast accuracy within ±10%. You&#39;ll lead high-stakes contract and pricing discussions,defend your value, structure give/get trades, and land multi-year consumption commitments. You&#39;ll position Elastic as the Search AI platform of choice by speaking fluently about cloud economics, usage-based pricing, and modern data architectures.</p>\n<p>As a member of our team, you&#39;ll work hand-in-glove with Solutions Architects, Customer Success, Marketing, and RevOps to accelerate deals and drive exceptional customer outcomes. You&#39;ll be a learner who partners effectively with internal teams, incorporates feedback, and embodies Elastic&#39;s values of community and openness.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_f6f9b8da-ade","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Elastic","sameAs":"https://www.elastic.co/","logo":"https://logos.yubhub.co/elastic.co.png"},"x-apply-url":"https://job-boards.greenhouse.io/elastic/jobs/7673166","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Proven SaaS quota-carrying success","Expert discovery and qualification skills","Compelling value storyteller","Strong negotiation chops","Technical and cloud fluency"],"x-skills-preferred":["Prior experience at an open-source or developer-centric infrastructure company","Familiarity with observability (logs, metrics, traces) or security analytics (SIEM/XDR) use cases"],"datePosted":"2026-04-18T15:49:39.861Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Sweden"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Sales","industry":"Technology","skills":"Proven SaaS quota-carrying success, Expert discovery and qualification skills, Compelling value storyteller, Strong negotiation chops, Technical and cloud fluency, Prior experience at an open-source or developer-centric infrastructure company, Familiarity with observability (logs, metrics, traces) or security analytics (SIEM/XDR) use cases"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_f838587f-1ee"},"title":"Software Engineer, Kubernetes","description":"<p>We&#39;re looking for a skilled Software Engineer to join our team and help us build and scale our Kubernetes environment. As a Software Engineer, you will play a key part in ensuring the availability, reliability, and scalability of our cloud infrastructure. You will drive operational excellence, implement robust automation, and help shape the systems that keep our cloud running smoothly.</p>\n<p>Key Responsibilities:</p>\n<ul>\n<li>Build, operate, and scale Kubernetes-based production infrastructure that delivers our products with high reliability and performance.</li>\n<li>Develop automation, tooling, and infrastructure as code in Go and other infrastructure-focused languages to enable zero-touch operations, rapid recovery, and seamless deployments.</li>\n<li>Design, implement, and maintain monitoring, alerting, and observability solutions,leveraging the Grafana ecosystem and related tools,to proactively identify and resolve production issues.</li>\n<li>Drive incident response efforts, participate in on-call rotations, and lead root cause analysis to prevent recurrence and improve incident handling processes.</li>\n<li>Partner with internal and cross-functional teams to ensure platform capabilities meet rigorous operational requirements and customer SLAs.</li>\n<li>Engineer for resiliency, implementing best practices for redundancy, fault tolerance, and disaster recovery across complex distributed systems.</li>\n<li>Advocate for security, reliability, and performance improvements throughout the stack, continuously seeking opportunities to strengthen operational standards.</li>\n<li>Contribute to the development of custom Kubernetes operators and intelligent orchestration frameworks that optimize AI workload performance and resource utilization at scale.</li>\n</ul>\n<p>Requirements:</p>\n<ul>\n<li>3+ years of experience in production engineering, SRE, or large-scale infrastructure/platform roles.</li>\n<li>Knowledgeable in Kubernetes administration, container orchestration, and microservices architectures, with a bias for automating every aspect of operations.</li>\n<li>Proven track record managing high-uptime, customer-facing systems in a fast-moving environment, with experience delivering measurable improvements in reliability and performance.</li>\n<li>Experience in monitoring, observability, and incident management using tools like Prometheus, Grafana, Datadog, Splunk, Loki, or VictoriaMetrics.</li>\n<li>Deep understanding of Linux systems and infrastructure-focused programming, especially in Go and Bash.</li>\n<li>Strong analytical skills and ability to troubleshoot complex production issues.</li>\n<li>Excellent communication skills and ability to share knowledge with technical and non-technical stakeholders.</li>\n</ul>\n<p>What Success Looks Like:</p>\n<ul>\n<li>Deliver stable, robust, and highly-available systems that consistently meet or exceed uptime and performance targets.</li>\n<li>Champion initiatives that drive automation, reduce operational toil, and increase the efficiency of incident response.</li>\n<li>Actively contribute to a blameless culture of learning, mentoring others in operational best practices and production engineering principles.</li>\n<li>Help CoreWeave maintain industry leadership through flawless execution in supporting demanding, AI-powered workloads at scale.</li>\n</ul>\n<p>Why CoreWeave?</p>\n<ul>\n<li>We work hard, have fun, and move fast!</li>\n<li>We&#39;re in an exciting stage of hyper-growth that you won&#39;t want to miss out on.</li>\n<li>We&#39;re not afraid of a little chaos, and we&#39;re constantly learning.</li>\n<li>Our team cares deeply about how we build our product and how we work together, which is represented through our core values:</li>\n</ul>\n<ul>\n<li>Be Curious at Your Core</li>\n<li>Act Like an Owner</li>\n<li>Empower Employees</li>\n<li>Deliver Best-in-Class Client Experiences</li>\n<li>Achieve More Together</li>\n</ul>\n<p>We support and encourage an entrepreneurial outlook and independent thinking. We foster an environment that encourages collaboration and enables the development of innovative solutions to complex problems. As we get set for takeoff, the organization&#39;s growth opportunities are constantly expanding. You will be surrounded by some of the best talent in the industry, who will want to learn from you, too. Come join us!</p>\n<p>The base salary range for this role is $120,000 to $176,000. The starting salary will be determined based on job-related knowledge, skills, experience, and market location. We strive for both market alignment and internal equity when determining compensation. In addition to base salary, our total rewards package includes a discretionary bonus, equity awards, and a comprehensive benefits program (all based on eligibility).</p>\n<p>What We Offer:</p>\n<ul>\n<li>The range we&#39;ve posted represents the typical compensation range for this role. To determine actual compensation, we review the market rate for each candidate which can include a variety of factors. These include qualifications, experience, interview performance, and location.</li>\n<li>In addition to a competitive salary, we offer a variety of benefits to support your needs, including:</li>\n</ul>\n<ul>\n<li>Medical, dental, and vision insurance - 100% paid for by CoreWeave</li>\n<li>Company-paid Life Insurance</li>\n<li>Voluntary supplemental life insurance</li>\n<li>Short and long-term disability insurance</li>\n<li>Flexible Spending Account</li>\n<li>Health Savings Account</li>\n<li>Tuition Reimbursement</li>\n<li>Ability to Participate in Employee Stock Purchase Program (ESPP)</li>\n<li>Mental Wellness Benefits through Spring Health</li>\n<li>Family-Forming support provided by Carrot</li>\n<li>Paid Parental Leave</li>\n<li>Flexible, full-service childcare support with Kinside</li>\n<li>401(k) with a generous employer match</li>\n<li>Flexible PTO</li>\n<li>Catered lunch each day in our office and data center locations</li>\n<li>A casual work environment</li>\n<li>A work culture focused on innovative disruption</li>\n</ul>\n<p>Our Workplace:</p>\n<ul>\n<li>While we prioritize a hybrid work environment, remote work may be considered for candidates located more than 30 miles from an office, based on role requirements for specialized skill sets. New hires will be invited to attend onboarding at one of our hubs within their first month. Teams also gather quarterly to support collaboration.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_f838587f-1ee","directApply":true,"hiringOrganization":{"@type":"Organization","name":"CoreWeave","sameAs":"https://www.coreweave.com","logo":"https://logos.yubhub.co/coreweave.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/coreweave/jobs/4577764006","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$120,000 to $176,000","x-skills-required":["Kubernetes administration","container orchestration","microservices architectures","Go","Bash","Linux systems","monitoring","observability","incident management","Prometheus","Grafana","Datadog","Splunk","Loki","VictoriaMetrics"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:49:38.881Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Livingston, NJ / New York, NY / Sunnyvale, CA / Bellevue, WA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Kubernetes administration, container orchestration, microservices architectures, Go, Bash, Linux systems, monitoring, observability, incident management, Prometheus, Grafana, Datadog, Splunk, Loki, VictoriaMetrics","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":120000,"maxValue":176000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_d0aa9e42-473"},"title":"Manager Customer Architecture - EMEA Central","description":"<p>We are actively seeking a Manager for our Customer Architects (CA) in EMEA Central with demonstrable experience in leading successful teams. You will have an understanding of technology and hands-on experience in key IT domains, notably Observability, Cybersecurity, and Enterprise Search. This key role involves not only driving the consumption of our Elastic solutions by aligning them with customer business objectives but also onboarding customers, securing adoption, and facilitating expansion.</p>\n<p>Key responsibilities include:</p>\n<ul>\n<li>Leading and growing a team of CA&#39;s in EMEA Central region.</li>\n<li>Owning a full portfolio of Enterprise customers, you will be responsible for Renewal Rates, Customer Consumption, and a measure of Expansion.</li>\n<li>Leading and guiding the entire post-sales customer lifecycle, including onboarding, ongoing initiatives, and renewal phases.</li>\n<li>Strategizing with the direct sales organization on account planning, growth, and renewals.</li>\n<li>Acting as an escalation point for renewals strategy, critical account issues, and ongoing account planning.</li>\n<li>Accurately forecasting renewals and upsells.</li>\n<li>Providing leadership and vision to Global Leadership Team, spearheading a number of strategic initiatives.</li>\n</ul>\n<p>To be successful in this role, you will bring dynamic leadership skills, demonstrable experience leading and growing Customer Success teams, and a track record of developing close relationships with sales and other related organizations.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_d0aa9e42-473","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Elastic","sameAs":"https://www.elastic.co/","logo":"https://logos.yubhub.co/elastic.co.png"},"x-apply-url":"https://job-boards.greenhouse.io/elastic/jobs/7738809","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Observability","Cybersecurity","Enterprise Search","Software Development life cycles","Project management skills","Big Data","Cloud","NoSql","Search","Logging products"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:49:34.302Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Germany"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Observability, Cybersecurity, Enterprise Search, Software Development life cycles, Project management skills, Big Data, Cloud, NoSql, Search, Logging products"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_fbd265ea-621"},"title":"Software Engineer, Workers Deploy & Config","description":"<p>Join the Workers Deploy &amp; Config team, the engine behind Cloudflare&#39;s unique serverless, edge-computing developer platform. This isn&#39;t just another backend role; you&#39;ll be building the critical, large-scale systems that empower developers worldwide to deploy everything - from a personal static site to full-stack applications serving millions of users.</p>\n<p>In fact, you&#39;ll be building the very foundation that the rest of our developer platform,from Pages to R2,is built upon. You will tackle the complex challenges of distributed systems and high-traffic APIs every single day. Your mission? To build and scale the platform that lets customers upload, configure, and manage their Workers, ensuring it&#39;s incredibly fast, extremely resilient, and scales effortlessly.</p>\n<p>You’ll drive projects from the initial idea to global release, delivering solutions at every layer of the stack. You’ll get to master a diverse and modern tech stack, writing high-performance Go, architecting APIs, optimizing storage interactions, building Workers with JavaScript/TypeScript, and managing it all on Kubernetes.</p>\n<p>We&#39;re looking for engineers who are obsessed with the developer experience and thrive on solving large-scale problems with a track record to prove it. If you care as much about the quality of the user&#39;s experience as you do about the quality of your code, and you want to join a high-impact, fast-growing team helping to build a better Internet, we want to talk to you.</p>\n<p>This role is about solving some of the most challenging problems in large scale, distributed systems. You&#39;ll be making a massive, direct impact on the broader developer community. Build &amp; Architect for Massive Scale - Own the core architecture of the Workers control plane, the system that deploys and configures millions of applications globally.</p>\n<p>Proactively identify and eliminate performance bottlenecks, re-architecting critical services to handle exponential growth. Design and implement resilient database schemas and read/write patterns built to support exponential platform growth and long-term usage.</p>\n<p>Evolve our services into a true developer platform, building the foundational capabilities that unlock future products.</p>\n<p>Drive for Extreme Performance &amp; Reliability - Obsess over the developer experience, with a relentless focus on reducing API latency and increasing API availability.</p>\n<p>Own the reliability of one of Cloudflare’s most critical, customer-facing systems. Take pride in production ownership by participating in an on-call rotation to ensure our platform is always on.</p>\n<p>Lead, Collaborate, &amp; Innovate - Partner directly with Product Managers and customers to translate complex problems into simple, elegant, and scalable solutions.</p>\n<p>Lead technical design from the ground up, collaborating with a brilliant, globally-distributed team of engineers.</p>\n<p>Act as a mentor and knowledge-sharer, leveling up the entire team.</p>\n<p>Constantly research, prototype, and introduce cutting-edge technologies to solve new classes of problems.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_fbd265ea-621","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Cloudflare","sameAs":"https://www.cloudflare.com/","logo":"https://logos.yubhub.co/cloudflare.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/cloudflare/jobs/7377424","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Strong experience using Go","Experience with Javascript and Typescript","Experience with metrics and observability tools such as Prometheus and Grafana","Experience with SQL and common relational database systems such as PostgreSQL","Experience with Kubernetes or similar deployment tools","Experience with distributed systems","Proven ability to drive projects independently, from concept to implementation – gathering requirements, writing technical specifications, implementing, testing, and releasing","Familiarity with implementing and consuming RESTful APIs"],"x-skills-preferred":["Experience with C++ or Rust","Experience scaling systems to meet increasing performance and usability demands","Experience working on a control and/or data plane","Experience using Cloudflare Workers or Pages","Experience working in frontend frameworks such as React","Experience managing interns or mentoring junior engineers","Product mindset and comfortable talking to customers and partners","Familiarity with GraphQL","Familiarity with RPC"],"datePosted":"2026-04-18T15:49:32.037Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Hybrid"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Strong experience using Go, Experience with Javascript and Typescript, Experience with metrics and observability tools such as Prometheus and Grafana, Experience with SQL and common relational database systems such as PostgreSQL, Experience with Kubernetes or similar deployment tools, Experience with distributed systems, Proven ability to drive projects independently, from concept to implementation – gathering requirements, writing technical specifications, implementing, testing, and releasing, Familiarity with implementing and consuming RESTful APIs, Experience with C++ or Rust, Experience scaling systems to meet increasing performance and usability demands, Experience working on a control and/or data plane, Experience using Cloudflare Workers or Pages, Experience working in frontend frameworks such as React, Experience managing interns or mentoring junior engineers, Product mindset and comfortable talking to customers and partners, Familiarity with GraphQL, Familiarity with RPC"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_0dd8524d-7d1"},"title":"Security Software Engineer - Endpoint Security","description":"<p>We&#39;re seeking a Security Software Engineer to develop novel security tooling for securing embedded Linux systems and Android devices. The ideal candidate can develop, test, and debug an endpoint detection and response agent with mission-critical security responsibilities.</p>\n<p>Design and develop cybersecurity tools for real-time embedded, embedded Linux, and Android systems. Implement an endpoint detection and response agent for use on Anduril products. Develop thorough testing and qualification procedures for security-critical components. Collaborate with cross-functional teams to identify specific security needs and implement solutions. Conduct code reviews and ensure adherence to security best practices. Stay updated on the latest security threats and technologies.</p>\n<p>Required qualifications include 2+ years of software development experience in Golang, Rust, or C/C++, experience with Linux observability and eBPF, strong understanding of Linux security internals, and experience debugging and optimizing performance of Linux software.</p>\n<p>Preferred qualifications include knowledge of security frameworks and compliance standards, experience in mobile development, specifically on Android platforms, experience implementing EDR tooling, and experience with SOC operations, forensics, and incident response practices.</p>\n<p>US Salary Range $166,000-$253,000 USD</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_0dd8524d-7d1","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anduril","sameAs":"https://www.anduril.com","logo":"https://logos.yubhub.co/anduril.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/andurilindustries/jobs/5002801007","x-work-arrangement":"onsite","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":"$166,000-$253,000 USD","x-skills-required":["Golang","Rust","C/C++","Linux observability","eBPF","Linux security internals","Debugging and optimizing performance of Linux software"],"x-skills-preferred":["Security frameworks and compliance standards","Mobile development","EDR tooling","SOC operations","Forensics","Incident response practices"],"datePosted":"2026-04-18T15:49:28.382Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Costa Mesa, California, United States"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Golang, Rust, C/C++, Linux observability, eBPF, Linux security internals, Debugging and optimizing performance of Linux software, Security frameworks and compliance standards, Mobile development, EDR tooling, SOC operations, Forensics, Incident response practices","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":166000,"maxValue":253000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_5e83c146-7a0"},"title":"Enterprise Account Executive - Digital Natives","description":"<p>We&#39;re looking for a high-energy Enterprise Account Executive to drive net-new revenue and expansion across Digital Natives within South Korea. You&#39;ll be the owner of a defined territory where you&#39;ll build your own pipeline, tell the Elastic Search AI story, and close complex, multi-stakeholder deals in a consumption-based model.</p>\n<p>As an Enterprise Account Executive, you&#39;ll be responsible for developing and executing a proactive outbound cadence, deep discovery and qualification, value storytelling and demos, mutual deal strategy and forecast accuracy, executive negotiation and closing, domain and cloud acumen, and cross-functional partnership.</p>\n<p>This role requires proven SaaS quota-carrying success, expert discovery and qualification skills, compelling value storytellers, strong negotiation chops, technical and cloud fluency, and a collaborative mindset.</p>\n<p>If you&#39;re driven to build your own pipeline, master complex deal cycles, and help customers unlock the power of Search AI, we&#39;d love to talk.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_5e83c146-7a0","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Elastic","sameAs":"https://www.elastic.co/","logo":"https://logos.yubhub.co/elastic.co.png"},"x-apply-url":"https://job-boards.greenhouse.io/elastic/jobs/7457910","x-work-arrangement":"remote","x-experience-level":"executive","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["SaaS quota-carrying success","Expert discovery and qualification skills","Compelling value storytellers","Strong negotiation chops","Technical and cloud fluency"],"x-skills-preferred":["Prior experience at an open-source or developer-centric infrastructure company","Familiarity with observability (logs, metrics, traces) or security analytics (SIEM/XDR) use cases"],"datePosted":"2026-04-18T15:49:17.007Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Seoul, South Korea"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Sales","industry":"Technology","skills":"SaaS quota-carrying success, Expert discovery and qualification skills, Compelling value storytellers, Strong negotiation chops, Technical and cloud fluency, Prior experience at an open-source or developer-centric infrastructure company, Familiarity with observability (logs, metrics, traces) or security analytics (SIEM/XDR) use cases"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_c4c51b67-4d1"},"title":"Manufacturing Engineering Manager, Fury","description":"<p>As a Manufacturing Engineering Manager, you will lead and manage a manufacturing engineering team to drive process improvements, ensure the highest quality of production, and support the development and implementation of new products. You will play a crucial role in optimising manufacturing processes, reducing costs, and increasing efficiencies on existing technologies while also developing new products and scaling to full-scale production.</p>\n<p>Key responsibilities include:</p>\n<ul>\n<li>Leading the manufacturing engineering team building Fury from initial prototyping through large full-scale production</li>\n<li>Developing and executing strategic plans for production capabilities, incorporating new technologies and methodologies</li>\n<li>Interface with key external customers in communicating the strategic vision and plans for building group 5 aircraft from development to full rate production</li>\n<li>Owning documentation required for successfully manufacturing product group hardware at scale, such as work instructions, inspection plans &amp; requirements, etc.</li>\n<li>Collaborating with product development teams to ensure manufacturability of new designs and smooth transition from prototype to production</li>\n<li>Assisting in the selection, installation, and commissioning of new manufacturing equipment and processes, ensuring optimal integration into production lines</li>\n<li>Driving continuous improvement initiatives that result in cost reduction, quality enhancement, and throughput maximization</li>\n<li>Managing budgets and forecast future manufacturing needs, including personnel, equipment, and materials</li>\n<li>Establishing and maintaining robust training and documentation programs to enhance team skills and promote cross-functional knowledge sharing</li>\n<li>Fostering a culture of innovation, teamwork, and accountability within the manufacturing engineering department</li>\n<li>Serving as the technical expert and point of contact for manufacturing engineering issues, providing guidance and support to resolve complex problems</li>\n<li>Collaborating with cross-functional teams to ensure successful product launches, from initial design through to full-scale production</li>\n<li>Liaising with vendors and suppliers to ensure the timely delivery of equipment and materials and maintain quality standards</li>\n<li>Analysing production metrics and performance data to identify trends and opportunities for improvement</li>\n<li>Ensuring compliance with all safety, quality, and regulatory requirements within the manufacturing process</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_c4c51b67-4d1","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anduril Industries","sameAs":"https://www.anduril.com/","logo":"https://logos.yubhub.co/anduril.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/andurilindustries/jobs/5065649007","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$113,000-$149,000 USD","x-skills-required":["Manufacturing Engineering","Mechanical Engineering","Aerospace Engineering","Electrical Engineering","Industrial Engineering","Project Management","Leadership","Communication","Problem-Solving","Analysis"],"x-skills-preferred":["CAD Software","Modern Manufacturing Technologies","Automation","Low Observable Coating","Energetics","Mission Systems","Propulsion Manufacturing"],"datePosted":"2026-04-18T15:49:15.481Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Ashville, Ohio, United States"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Manufacturing Engineering, Mechanical Engineering, Aerospace Engineering, Electrical Engineering, Industrial Engineering, Project Management, Leadership, Communication, Problem-Solving, Analysis, CAD Software, Modern Manufacturing Technologies, Automation, Low Observable Coating, Energetics, Mission Systems, Propulsion Manufacturing","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":113000,"maxValue":149000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_18646b21-352"},"title":"Senior Enterprise Account Executive - W&B","description":"<p>At CoreWeave, we&#39;re looking for a Senior Enterprise Account Executive to join our team. As a quota-carrying, enterprise software sales position, you will be responsible for meeting and exceeding sales goals through generating and closing new opportunities while increasing awareness of Weights &amp; Biases in the marketplace.</p>\n<p>Your primary focus will be on driving new business and account expansion into the San Francisco/West Coast Enterprise territory. You will develop and implement a sales strategy aligned to regional and industry needs to help drive awareness, engagement, and growth. You will also collaborate with technology ecosystem and alliance partners to accelerate new opportunity discovery.</p>\n<p>As a Senior Enterprise Account Executive, you will manage opportunities through the sales cycle from initial inquiry/outbound interaction through to forecasted pipeline. You will meet quarterly and annual revenue objectives for the territory, while reporting on sales, activities, and progress on a regular basis through CRM and sales forecasting tools.</p>\n<p>We are looking for motivated, focused, and coachable sales professionals with experience across the full spectrum of the software sales cycle – prospecting, defining and articulating value proposition, pilot process management, business case development, negotiation, and closing.</p>\n<p>Requirements:</p>\n<ul>\n<li>5+ years of experience in B2B sales and/or account management roles</li>\n<li>Minimum of 7 years direct enterprise selling experience</li>\n<li>Track record of success in closing business</li>\n<li>Excellent negotiation, analytical, financial, and organizational capabilities</li>\n<li>Able to thrive in an evolving, entrepreneurial structure and environment</li>\n<li>Outstanding verbal and written communication skills</li>\n<li>Ability to work at both a tactical and strategic level</li>\n<li>Must possess a can-do, self-starter mentality in a highly collaborative atmosphere</li>\n</ul>\n<p>Preferred:</p>\n<ul>\n<li>Experience selling developer tools/technical platforms/observability tools to builders (developers/engineering/platform/DevOps/data/AI/ML)</li>\n<li>Experience selling to AI/ML leaders and builders</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_18646b21-352","directApply":true,"hiringOrganization":{"@type":"Organization","name":"CoreWeave","sameAs":"https://www.coreweave.com","logo":"https://logos.yubhub.co/coreweave.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/coreweave/jobs/4650861006","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$130,000 to $160,000","x-skills-required":["B2B sales","account management","software sales cycle","negotiation","analytical skills","financial skills","organizational skills","communication skills"],"x-skills-preferred":["developer tools","technical platforms","observability tools","AI/ML leadership","AI/ML sales"],"datePosted":"2026-04-18T15:49:10.999Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA"}},"employmentType":"FULL_TIME","occupationalCategory":"Sales","industry":"Technology","skills":"B2B sales, account management, software sales cycle, negotiation, analytical skills, financial skills, organizational skills, communication skills, developer tools, technical platforms, observability tools, AI/ML leadership, AI/ML sales","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":130000,"maxValue":160000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_2d198020-3d5"},"title":"Sr. Engineer, Storage","description":"<p>The Storage Engine Team at CoreWeave is responsible for the product capabilities and data plane function of CoreWeave&#39;s managed storage products. We build reliable, scalable storage solutions with segment leading performance. Storage engine works with engineering teams across infrastructure, compute, and platform to ensure our storage services meet the needs of the world&#39;s most demanding AI workloads.</p>\n<p>The primary responsibilities of this role include designing and implementing distributed storage solutions to support scaling data-intensive AI workloads, contributing to the development of exabyte-scale, S3-compatible object storage, and integrating dedicated storage clusters into diverse customer environments. Additionally, the successful candidate will work with technologies such as RDMA, GPU Direct Storage, and distributed filesystems protocols such as NFS or FUSE to optimize storage performance and efficiency.</p>\n<p>Key responsibilities also include leading efforts to improve the reliability, durability, security, and observability of our storage stack, collaborating with operations teams to monitor, troubleshoot, and improve storage systems in production environments, setting the bar for developing metrics and dashboards to provide visibility into storage performance and health, analyzing telemetry and system data to drive improvements in throughput, latency, and resilience, and working cross-functionally with platform, product, and infrastructure teams to deliver seamless storage capabilities across the stack.</p>\n<p>A key aspect of this role is sharing knowledge and mentoring other engineers on best practices in building distributed, high-performance systems.</p>\n<p>To be successful in this role, the ideal candidate will have a strong background in storage systems engineering or infrastructure, with a minimum of 8-10 years of experience. They will also have hands-on experience with object storage or distributed filesystems in production environments, as well as proficiency in a systems programming language such as Go, C, or Rust. Additionally, they will have experience working with cloud-native infrastructure, Kubernetes, and scalable system architectures, and familiarity with storage observability tools and telemetry pipelines.</p>\n<p>If you&#39;re a motivated and experienced engineer looking to join a dynamic team and contribute to the development of cutting-edge storage solutions, we encourage you to apply for this exciting opportunity.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_2d198020-3d5","directApply":true,"hiringOrganization":{"@type":"Organization","name":"CoreWeave","sameAs":"https://www.coreweave.com","logo":"https://logos.yubhub.co/coreweave.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/coreweave/jobs/4664429006","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$143,000 to $210,000","x-skills-required":["storage systems engineering","infrastructure","object storage","distributed filesystems","RDMA","GPU Direct Storage","NFS","FUSE","cloud-native infrastructure","Kubernetes","scalable system architectures","storage observability tools","telemetry pipelines"],"x-skills-preferred":["Go","C","Rust","distributed systems","high-performance systems","storage performance and efficiency"],"datePosted":"2026-04-18T15:49:07.662Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Livingston, NJ / New York, NY / Sunnyvale, CA / Bellevue, WA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"storage systems engineering, infrastructure, object storage, distributed filesystems, RDMA, GPU Direct Storage, NFS, FUSE, cloud-native infrastructure, Kubernetes, scalable system architectures, storage observability tools, telemetry pipelines, Go, C, Rust, distributed systems, high-performance systems, storage performance and efficiency","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":143000,"maxValue":210000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_7d1caf46-d74"},"title":"Security Software Engineer - Endpoint Security","description":"<p>We&#39;re seeking a Security Software Engineer to develop novel security tooling for securing embedded Linux systems and Android devices. The ideal candidate can develop, test, and debug an endpoint detection and response agent with mission-critical security responsibilities.</p>\n<p>Design and develop cybersecurity tools for real-time embedded, embedded Linux, and Android systems. Implement an endpoint detection and response agent for use on Anduril products. Develop thorough testing and qualification procedures for security-critical components. Collaborate with cross-functional teams to identify specific security needs and implement solutions. Conduct code reviews and ensure adherence to security best practices. Stay updated on the latest security threats and technologies.</p>\n<p>Required qualifications:</p>\n<ul>\n<li>2+ years of software development experience in some combination of Golang, Rust, or C/C++.</li>\n<li>Experience with Linux observability and eBPF.</li>\n<li>Strong understanding of Linux security internals.</li>\n<li>Experience debugging and optimizing performance of Linux software.</li>\n<li>Experience with CI/CD and test automation, including for mobile and embedded devices.</li>\n<li>Solid understanding of cybersecurity principles and practices.</li>\n<li>Ability to obtain and hold a U.S. Secret security clearance.</li>\n</ul>\n<p>Preferred qualifications:</p>\n<ul>\n<li>Knowledge of security frameworks and compliance standards.</li>\n<li>Experience in mobile development, specifically on Android platforms.</li>\n<li>Experience implementing EDR tooling.</li>\n<li>Experience with SOC operations, forensics, and incident response practices.</li>\n<li>Strong problem-solving and analytical skills.</li>\n<li>Excellent communication and teamwork abilities.</li>\n</ul>\n<p>US Salary Range $166,000-$253,000 USD</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_7d1caf46-d74","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anduril","sameAs":"https://www.anduril.com/","logo":"https://logos.yubhub.co/anduril.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/andurilindustries/jobs/5086960007","x-work-arrangement":"onsite","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":"$166,000-$253,000 USD","x-skills-required":["Golang","Rust","C/C++","Linux observability","eBPF","Linux security internals","CI/CD","test automation","cybersecurity principles","U.S. Secret security clearance"],"x-skills-preferred":["security frameworks","compliance standards","mobile development","EDR tooling","SOC operations","forensics","incident response practices"],"datePosted":"2026-04-18T15:48:57.295Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Boston, Massachusetts, United States"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Golang, Rust, C/C++, Linux observability, eBPF, Linux security internals, CI/CD, test automation, cybersecurity principles, U.S. Secret security clearance, security frameworks, compliance standards, mobile development, EDR tooling, SOC operations, forensics, incident response practices","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":166000,"maxValue":253000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_1e275c7d-4a3"},"title":"Staff Systems Engineer, Identity","description":"<p>As a Staff Systems Engineer, Identity, you will serve as the primary technical owner of CoreWeave&#39;s enterprise identity ecosystem, with a focus on Okta and Opal. You will design, build, and operate identity lifecycle systems that are secure, automated, and scalable.</p>\n<p>This is a highly visible, high-impact role where identity sits at the center of security. You will define how access is granted, changed, and removed across the organization, enabling business velocity while enforcing least privilege and strong governance.</p>\n<p>Key responsibilities include:</p>\n<p><strong>Design and scale enterprise identity architecture that minimizes access sprawl and enforces least privilege</strong></p>\n<p><strong>Own and improve Joiner, Mover, and Leaver (JML) lifecycle processes across all critical systems</strong></p>\n<p><strong>Build and operate identity governance and administration (IGA) capabilities including birthright access models, role-based access control (RBAC), approval workflows and policy enforcement, access reviews and certification processes</strong></p>\n<p><strong>Administer and enhance Okta capabilities (SSO, MFA, adaptive policies, lifecycle management, SCIM, integrations)</strong></p>\n<p><strong>Build and scale access request workflows in Opal and integrated systems</strong></p>\n<p><strong>Integrate new applications into the identity ecosystem (SAML, OIDC, SCIM, role mapping)</strong></p>\n<p><strong>Develop automation and infrastructure-as-code to improve reliability and reduce manual effort</strong></p>\n<p><strong>Partner with Security to strengthen identity as a core control plane (Zero Trust, authentication, authorization)</strong></p>\n<p><strong>Align identity systems with PeopleOps and organizational changes</strong></p>\n<p><strong>Monitor and improve identity system health, observability, and performance</strong></p>\n<p><strong>Troubleshoot complex authentication, provisioning, and authorization issues</strong></p>\n<p><strong>Maintain documentation, runbooks, and architectural standards</strong></p>\n<p><strong>Serve as an escalation point for identity-related incidents</strong></p>\n<p><strong>Drive continuous improvement in identity architecture, governance, and user experience</strong></p>\n<p>Requirements include:</p>\n<p><strong>7–10+ years of experience in IT systems engineering, identity engineering, or systems architecture</strong></p>\n<p><strong>Deep hands-on experience with Okta in a complex enterprise environment</strong></p>\n<p><strong>Strong expertise in identity and access concepts (SSO, MFA, SAML, OAuth, OIDC, SCIM, RBAC, Zero Trust)</strong></p>\n<p><strong>Proven experience designing lifecycle automation (JML) and access governance frameworks</strong></p>\n<p><strong>Experience with IGA or access request platforms such as Opal</strong></p>\n<p><strong>Strong automation and infrastructure-as-code experience (Terraform, APIs, Python/PowerShell/Golang)</strong></p>\n<p><strong>Ability to integrate enterprise applications into centralized identity platforms</strong></p>\n<p><strong>Strong troubleshooting skills across identity, federation, and provisioning systems</strong></p>\n<p><strong>Excellent communication skills with the ability to influence cross-functional stakeholders</strong></p>\n<p>Preferred qualifications include familiarity with Active Directory, Entra ID, HRIS systems, and SaaS ecosystems, experience building identity observability and reporting, and relevant certifications (Okta, cloud, or security).</p>\n<p>Why CoreWeave?</p>\n<p>At CoreWeave, we work hard, have fun, and move fast! We&#39;re in an exciting stage of hyper-growth that you will not want to miss out on. We&#39;re not afraid of a little chaos, and we&#39;re constantly learning. Our team cares deeply about how we build our product and how we work together, which is represented through our core values:</p>\n<p><strong>Be Curious at Your Core</strong></p>\n<p><strong>Act Like an Owner</strong></p>\n<p><strong>Empower Employees</strong></p>\n<p><strong>Deliver Best-in-Class Client Experiences</strong></p>\n<p><strong>Achieve More Together</strong></p>\n<p>Why This Role Matters</p>\n<p>Identity is one of the most critical control planes in a modern enterprise. In this role, you will define how secure access is managed across CoreWeave, ensuring identity remains a foundational pillar of security, compliance, and scale.</p>\n<p>The base salary range for this role is $188,000 to $275,000. The starting salary will be determined based on job-related knowledge, skills, experience, and market location. We strive for both market alignment and internal equity when determining compensation. In addition to base salary, our total rewards package includes a discretionary bonus, equity awards, and a comprehensive benefits program (all based on eligibility).</p>\n<p>What We Offer</p>\n<p>The range we&#39;ve posted represents the typical compensation range for this role. To determine actual compensation, we review the market rate for each candidate which can include a variety of factors. These include qualifications, experience, interview performance, and location.</p>\n<p>In addition to a competitive salary, we offer a variety of benefits to support your needs, including medical, dental, and vision insurance, company-paid life insurance, voluntary supplemental life insurance, short and long-term disability insurance, flexible spending account, health savings account, tuition reimbursement, ability to participate in employee stock purchase program (ESPP), mental wellness benefits through Spring Health, family-forming support provided by Carrot, paid parental leave, flexible, full-service childcare support with Kinside, 401(k) with a generous employer match, flexible PTO, catered lunch each day in our office and data center locations, a casual work environment, and a work culture focused on innovative disruption.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_1e275c7d-4a3","directApply":true,"hiringOrganization":{"@type":"Organization","name":"CoreWeave","sameAs":"https://www.coreweave.com","logo":"https://logos.yubhub.co/coreweave.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/coreweave/jobs/4668575006","x-work-arrangement":"hybrid","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"Base salary range: $188,000 to $275,000","x-skills-required":["Okta","Opal","identity lifecycle systems","security","automation","infrastructure-as-code","Terraform","APIs","Python","PowerShell","Golang","identity and access concepts","SSO","MFA","SAML","OAuth","OIDC","SCIM","RBAC","Zero Trust","lifecycle automation","access governance frameworks","IGA","access request platforms","SaaS ecosystems","Active Directory","Entra ID","HRIS systems"],"x-skills-preferred":["identity observability and reporting","relevant certifications","cloud"],"datePosted":"2026-04-18T15:48:46.456Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Livingston, NJ / New York, NY / Sunnyvale, CA /Dallas, TX"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Okta, Opal, identity lifecycle systems, security, automation, infrastructure-as-code, Terraform, APIs, Python, PowerShell, Golang, identity and access concepts, SSO, MFA, SAML, OAuth, OIDC, SCIM, RBAC, Zero Trust, lifecycle automation, access governance frameworks, IGA, access request platforms, SaaS ecosystems, Active Directory, Entra ID, HRIS systems, identity observability and reporting, relevant certifications, cloud","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":188000,"maxValue":275000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_67b4ccd7-51d"},"title":"Senior Software Engineer, Observability Insights","description":"<p>Join CoreWeave&#39;s Observability team, where we are building the next-generation insights layer for AI systems.</p>\n<p>Our team empowers internal and external users to understand, troubleshoot, and optimize complex AI workloads by transforming telemetry into actionable insights.</p>\n<p>As a Senior Software Engineer on the Observability Insights team, you will lead the development of agentic interfaces and product experiences that sit atop CoreWeave&#39;s telemetry layer.</p>\n<p>You&#39;ll design multi-tenant APIs, managed Grafana experiences, and MCP-based tool servers to help customers and internal teams interact with data in innovative ways.</p>\n<p>Collaborating closely with PMs and engineering leadership, your work will shape the end-to-end observability experience and influence how people engage with cutting-edge AI infrastructure.</p>\n<p><strong>About the role</strong></p>\n<ul>\n<li>6+ years of experience in software or infrastructure engineering building production-grade backend systems and distributed APIs.</li>\n</ul>\n<ul>\n<li>Strong focus on developer-facing infrastructure, with a customer-obsessed approach to SDKs, CLIs, and APIs.</li>\n</ul>\n<ul>\n<li>Proficient in reliability engineering, including fault-tolerant design, SLOs, error budgets, and multi-tenant system resilience.</li>\n</ul>\n<ul>\n<li>Familiar with observability systems such as ClickHouse, Loki, VictoriaMetrics, Prometheus, and Grafana.</li>\n</ul>\n<ul>\n<li>Experienced in agentic applications or LLM-based features, including grounding, tool calling, and operational safety.</li>\n</ul>\n<ul>\n<li>Comfortable writing production code primarily in Go, with the ability to integrate Python components when needed.</li>\n</ul>\n<ul>\n<li>Collaborative experience in agile teams delivering end-to-end telemetry-to-insights pipelines.</li>\n</ul>\n<p><strong>Preferred</strong></p>\n<ul>\n<li>Experience operating Kubernetes clusters at scale, especially for AI workloads.</li>\n</ul>\n<ul>\n<li>Hands-on experience with logging, tracing, and metrics platforms in production, with deep knowledge of cardinality, indexing, and query optimization.</li>\n</ul>\n<ul>\n<li>Experienced in running distributed systems or API services at cloud scale, including event streaming and data pipeline management.</li>\n</ul>\n<ul>\n<li>Familiarity with LLM frameworks, MCP, and agentic tooling (e.g., Langchain, AgentCore).</li>\n</ul>\n<p><strong>Why CoreWeave?</strong></p>\n<p>At CoreWeave, we work hard, have fun, and move fast!</p>\n<p>We&#39;re in an exciting stage of hyper-growth that you will not want to miss out on.</p>\n<p>We&#39;re not afraid of a little chaos, and we&#39;re constantly learning.</p>\n<p>Our team cares deeply about how we build our product and how we work together, which is represented through our core values:</p>\n<ul>\n<li>Be Curious at Your Core</li>\n</ul>\n<ul>\n<li>Act Like an Owner</li>\n</ul>\n<ul>\n<li>Empower Employees</li>\n</ul>\n<ul>\n<li>Deliver Best-in-Class Client Experiences</li>\n</ul>\n<ul>\n<li>Achieve More Together</li>\n</ul>\n<p>We support and encourage an entrepreneurial outlook and independent thinking.</p>\n<p>We foster an environment that encourages collaboration and enables the development of innovative solutions to complex problems.</p>\n<p>As we get set for takeoff, the organization&#39;s growth opportunities are constantly expanding.</p>\n<p>You will be surrounded by some of the best talent in the industry, who will want to learn from you, too.</p>\n<p>Come join us!</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_67b4ccd7-51d","directApply":true,"hiringOrganization":{"@type":"Organization","name":"CoreWeave","sameAs":"https://www.coreweave.com","logo":"https://logos.yubhub.co/coreweave.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/coreweave/jobs/4650163006","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$165,000 to $242,000","x-skills-required":["software engineering","infrastructure engineering","backend systems","distributed APIs","reliability engineering","fault-tolerant design","SLOs","error budgets","multi-tenant system resilience","observability systems","ClickHouse","Loki","VictoriaMetrics","Prometheus","Grafana","agentic applications","LLM-based features","grounding","tool calling","operational safety","Go","Python","Kubernetes","logging","tracing","metrics platforms","cardinality","indexing","query optimization","event streaming","data pipeline management","LLM frameworks","MCP","agent tooling"],"x-skills-preferred":["operating Kubernetes clusters"],"datePosted":"2026-04-18T15:48:46.219Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"New York, NY / Sunnyvale, CA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"software engineering, infrastructure engineering, backend systems, distributed APIs, reliability engineering, fault-tolerant design, SLOs, error budgets, multi-tenant system resilience, observability systems, ClickHouse, Loki, VictoriaMetrics, Prometheus, Grafana, agentic applications, LLM-based features, grounding, tool calling, operational safety, Go, Python, Kubernetes, logging, tracing, metrics platforms, cardinality, indexing, query optimization, event streaming, data pipeline management, LLM frameworks, MCP, agent tooling, operating Kubernetes clusters","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":165000,"maxValue":242000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_8f4ab428-1e7"},"title":"Security Technology Deployment Specialist","description":"<p>As a Security Technology Deployment Specialist at Anthropic, you will own the validation, standardization, and deployment of physical security technology across our rapidly expanding global office portfolio. This role bridges the gap between technology selection and production-ready operation , ensuring that every security platform deployed is rigorously tested, properly integrated with enterprise infrastructure, fully documented, and built for scale.</p>\n<p>You&#39;ll define the installation standards, configuration baselines, and deployment processes that the broader team executes against , from access control migrations and intercom replacements to AI analytics onboarding and new application integrations. You&#39;ll work across InfoSec, IT, Networking, and Identity Management to ensure every security application passes review, integrates with SSO, and is supported within Anthropic&#39;s infrastructure before going live. Your work will directly determine whether Anthropic&#39;s security technology stack scales reliably as the company grows from dozens of locations to a global enterprise footprint.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Validate and deploy new and replacement security technology platforms including access control systems, intercom solutions, video management, visitor management, and AI/analytics tools across all Anthropic locations</li>\n</ul>\n<ul>\n<li>Build and maintain staging environments for pre-production testing and validation of all security applications, hardware, firmware, and system configurations</li>\n</ul>\n<ul>\n<li>Define installation standards, configuration baselines, licensing structures, update procedures, and maintenance requirements for every deployed security platform</li>\n</ul>\n<ul>\n<li>Deploy integrations between security applications, validating that platforms communicate and share data correctly before transitioning to production</li>\n</ul>\n<ul>\n<li>Support colleagues&#39; security applications through InfoSec review processes, ensuring new tools meet Anthropic&#39;s information security and compliance requirements</li>\n</ul>\n<ul>\n<li>Coordinate SSO integration for newly deployed security applications with Identity Management and IT teams</li>\n</ul>\n<ul>\n<li>Transition applications requiring custom integration or data pipeline development to the IT Engineering team with documented technical requirements for roadmap inclusion</li>\n</ul>\n<ul>\n<li>Initiate onboarding of deployed hardware and systems into Anthropic&#39;s health monitoring platform to ensure operational visibility from day one</li>\n</ul>\n<ul>\n<li>Develop standardized deployment playbooks, checklists, configuration templates, and handoff documentation that enable repeatable installations across all current and future sites</li>\n</ul>\n<ul>\n<li>Evaluate security platforms for scalability, identifying capacity constraints, single points of failure, and architectural limitations before they impact operations at scale</li>\n</ul>\n<ul>\n<li>Coordinate with Networking, IT Infrastructure, and Facilities teams to ensure all infrastructure prerequisites (network, power, rack space, cloud resources) are met prior to deployment</li>\n</ul>\n<ul>\n<li>Execute structured handoffs to Project Management (for site programming), Break-Fix Support (for maintenance), and Access Control Administration (for ongoing system management), ensuring each team has the standards and documentation to execute independently</li>\n</ul>\n<p>Requirements:</p>\n<ul>\n<li>5+ years of hands-on experience deploying, validating, and managing enterprise physical security technology across a large or rapidly growing organization</li>\n</ul>\n<ul>\n<li>Experience working across InfoSec, IT, Networking, and Identity Management teams to onboard and integrate security applications into enterprise environments</li>\n</ul>\n<ul>\n<li>Strong technical communication skills, with the ability to define standards clearly enough that PMs, integrators, and service teams execute against them without ambiguity</li>\n</ul>\n<ul>\n<li>Experience with IP networking, VLANs, PoE, and infrastructure requirements for security devices</li>\n</ul>\n<ul>\n<li>Comfortable with 25% travel for site deployments, commissioning, and validation</li>\n</ul>\n<p>Preferred Qualifications:</p>\n<ul>\n<li>Previous experience at a hyper-growth technology company or managing security technology programs for high-profile corporate environments</li>\n</ul>\n<ul>\n<li>Experience with Anthropic&#39;s specific technology stack: Genetec Security Center, Axis cameras, Wavelynx, Commend Symphony Cloud, Alcatraz.ai, Ambient.ai, SureView, Envoy</li>\n</ul>\n<ul>\n<li>Industry certifications: Genetec, Axis, CCNA, PSP, CPP, or PMP</li>\n</ul>\n<ul>\n<li>Experience with OSDP, modern credential technologies, and encryption protocols for physical security systems</li>\n</ul>\n<ul>\n<li>Familiarity with scripting or automation (Python, PowerShell) for configuration management and deployment automation</li>\n</ul>\n<ul>\n<li>Experience with health monitoring and observability platforms</li>\n</ul>\n<ul>\n<li>Experience with change management, configuration control, and version-controlled infrastructure documentation</li>\n</ul>\n<p>Salary Range: $175,000-$220,000 USD</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_8f4ab428-1e7","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com/","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5123587008","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$175,000-$220,000 USD","x-skills-required":["security technology deployment","physical security technology","access control systems","intercom solutions","video management","visitor management","AI/analytics tools","InfoSec","IT","Networking","Identity Management","SSO integration","custom integration","data pipeline development","health monitoring platform","deployment playbooks","checklists","configuration templates","handoff documentation","scalability analysis","infrastructure prerequisites","structured handoffs"],"x-skills-preferred":["Genetec Security Center","Axis cameras","Wavelynx","Commend Symphony Cloud","Alcatraz.ai","Ambient.ai","SureView","Envoy","OSDP","modern credential technologies","encryption protocols","scripting","automation","Python","PowerShell","health monitoring","observability platforms","change management","configuration control","version-controlled infrastructure documentation"],"datePosted":"2026-04-18T15:48:43.816Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Remote-Friendly (Travel-Required) | San Francisco, CA | Seattle, WA | New York City, NY"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"security technology deployment, physical security technology, access control systems, intercom solutions, video management, visitor management, AI/analytics tools, InfoSec, IT, Networking, Identity Management, SSO integration, custom integration, data pipeline development, health monitoring platform, deployment playbooks, checklists, configuration templates, handoff documentation, scalability analysis, infrastructure prerequisites, structured handoffs, Genetec Security Center, Axis cameras, Wavelynx, Commend Symphony Cloud, Alcatraz.ai, Ambient.ai, SureView, Envoy, OSDP, modern credential technologies, encryption protocols, scripting, automation, Python, PowerShell, health monitoring, observability platforms, change management, configuration control, version-controlled infrastructure documentation","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":175000,"maxValue":220000,"unitText":"YEAR"}}}]}