<?xml version="1.0" encoding="UTF-8"?>
<source>
  <jobs>
    <job>
      <externalid>1963e2d1-add</externalid>
      <Title>Cloud DevOps Engineer</Title>
      <Description><![CDATA[<p>We are seeking a skilled Cloud DevOps Engineer to join our Commodities Technology team. As a Cloud DevOps Engineer, you will work closely with quants, portfolio managers, risk managers, and other engineers to develop data-intensive and multi-asset analytics for our Commodities platform.</p>
<p>Responsibilities:</p>
<ul>
<li>Collaborate with cross-functional teams to gather requirements and user feedback</li>
<li>Design, build, and refactor robust software applications with clean and concise code following Agile and continuous delivery practices</li>
<li>Automate system maintenance tasks, end-of-day processing jobs, data integrity checks, and bulk data loads/extracts</li>
<li>Stay up-to-date with industry trends, new platforms, and tools, and develop a business case to adopt new technologies</li>
<li>Develop new tools and infrastructure using Python (Flask/FastAPI) or Java (Spring Boot) and a relational data backend (AWS – Aurora/Redshift/Athena/S3)</li>
<li>Support users and operational flows for quantitative risk, senior management, and portfolio management teams using the tools developed</li>
</ul>
<p>Qualifications:</p>
<ul>
<li>Advanced degree in computer science or another scientific field</li>
<li>3+ years of experience with CI/CD tools such as TeamCity, Jenkins, Octopus Deploy, and ArgoCD</li>
<li>AWS Cloud infrastructure design, implementation, and support</li>
<li>Experience with multiple AWS services</li>
<li>Infrastructure as Code experience deploying cloud infrastructure with Terraform or CloudFormation</li>
<li>Knowledge of Python (Flask/FastAPI/Django)</li>
<li>Demonstrated expertise in containerizing applications and orchestrating them in Kubernetes environments</li>
<li>Experience working on at least one monitoring/observability stack (Datadog, ELK, Splunk, Loki, Grafana)</li>
<li>Strong knowledge of Unix or Linux</li>
<li>Strong communication skills to collaborate with various stakeholders</li>
<li>Able to work independently in a fast-paced environment</li>
<li>Detail-oriented, organized, demonstrating thoroughness and strong ownership of work</li>
<li>Experience working in a production environment</li>
<li>Some experience with relational and non-relational databases</li>
</ul>
<p>Nice to have:</p>
<ul>
<li>Experience with a messaging middleware platform like Solace, Kafka, or RabbitMQ</li>
<li>Experience with Snowflake and distributed processing technologies (e.g., Hadoop, Flink, Spark)</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>CI/CD tools like TeamCity, Jenkins, Octopus Deploy, and ArgoCD, AWS Cloud infrastructure design, implementation, and support, Infrastructure as Code deploying cloud infrastructure using Terraform or CloudFormation, Python (Flask/FastAPI/Django), Containerization for applications and their subsequent orchestration within Kubernetes environments</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>FIC &amp; Risk Technology</Employername>
      <Employerlogo>https://logos.yubhub.co/mlp.eightfold.ai.png</Employerlogo>
      <Employerdescription>FIC &amp; Risk Technology is a global hedge fund with a strong commitment to leveraging innovations in technology and data science to solve complex problems for the business.</Employerdescription>
      <Employerwebsite>https://mlp.eightfold.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://mlp.eightfold.ai/careers/job/755955154859</Applyto>
      <Location>Miami, Florida, United States of America</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>978310df-422</externalid>
<Title>Staff Full Stack Software Engineer (Forward Deployed), GPS</Title>
<Description><![CDATA[<p>We&#39;re seeking a Full Stack Software Engineer to join our International Public Sector team. As a Full Stack Software Engineer, you&#39;ll collaborate directly with public sector counterparts to quickly build full-stack AI applications that solve their most pressing challenges and achieve meaningful impact for citizens.</p>
<p>Our core work consists of creating custom AI applications that will impact millions of citizens, generating high-quality training data for custom LLMs, and providing upskilling and advisory services to spread the impact of AI.</p>
<p>You will serve as the lead technical strategist for public sector engagements, converting ambiguous mission requirements into robust architectural roadmaps and guiding onsite implementation.</p>
<ul>
<li>Architect the fundamental frameworks for production-grade AI applications, setting the gold standard for how interactive UIs, backend systems, and AI models are integrated at scale to deliver reliable outcomes</li>
<li>Guide the evolution of cloud infrastructure, ensuring security, global scalability, and long-term system integrity across all environments</li>
<li>Direct the development of core platforms and shared services, ensuring they solve cross-cutting needs for diverse global client use cases</li>
<li>Partner with cross-functional leadership to steer the technical roadmap, mentoring senior and junior staff and ensuring all products align with a cohesive, future-proof technical architecture</li>
<li>Bridge the gap between the field and the core platform by turning real-world client lessons into the reusable patterns that power the entire engineering team</li>
</ul>
<p>Ideally, you&#39;d have a Master&#39;s or PhD in Computer Science or equivalent deep industry experience in architecting complex, distributed systems.</p>
<ul>
<li>10+ years of full-stack expertise across Python, Node.js, and React, with a proven track record of designing high-scale architectures on Kubernetes and global cloud infrastructures (AWS/Azure/GCP)</li>
<li>Expert ability to design and oversee production-grade ecosystems, ensuring world-class standards for system integrity, security, and long-term scalability</li>
<li>Extensive experience deploying and troubleshooting sophisticated end-to-end solutions directly within complex, high-security client environments</li>
<li>A self-driven leader capable of resolving extreme ambiguity, mentoring senior staff, and setting the technical vision for the organization</li>
<li>A driver of asynchronous workflows and documentation-first cultures to streamline global engineering velocity and reduce friction</li>
<li>Proficiency in Arabic</li>
</ul>
<p>Nice to have:</p>
<ul>
<li>Past experience working at a startup as a CTO or founding engineer, or in a forward deployed engineer / dedicated customer engineer role</li>
<li>Experience working cross-functionally with operations</li>
<li>A proven track record of building LLM-driven solutions with the strategic foresight to anticipate landscape shifts and architect future-proof systems</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, Node.js, React, Kubernetes, Cloud infrastructure, AI, LLMs, Cloud computing, Security, Scalability, Distributed systems, Arabic, Startup experience, CTO experience, Founding engineer experience, Forward deployed engineer experience, Customer engineer experience, Operations experience, LLM-driven solutions</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions.</Employerdescription>
      <Employerwebsite>https://scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4673314005</Applyto>
      <Location>Doha, Qatar</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>61e346b2-915</externalid>
      <Title>Sr. Software Engineer, Inference</Title>
      <Description><![CDATA[<p>Our Inference team is responsible for building and maintaining the critical systems that serve Claude to millions of users worldwide. We bring Claude to life by serving our models via the industry&#39;s largest compute-agnostic inference deployments. We are responsible for the entire stack from intelligent request routing to fleet-wide orchestration across diverse AI accelerators.</p>
<p>The team has a dual mandate: maximizing compute efficiency to serve our explosive customer growth, while enabling breakthrough research by giving our scientists the high-performance inference infrastructure they need to develop next-generation models. We tackle complex, distributed systems challenges across multiple accelerator families and emerging AI hardware running in multiple cloud platforms.</p>
<p>Strong candidates may also have experience with:</p>
<ul>
<li>High-performance, large-scale distributed systems</li>
<li>Implementing and deploying machine learning systems at scale</li>
<li>Load balancing, request routing, or traffic management systems</li>
<li>LLM inference optimization, batching, and caching strategies</li>
<li>Kubernetes and cloud infrastructure (AWS, GCP)</li>
<li>Python or Rust</li>
</ul>
<p>You may be a good fit if you:</p>
<ul>
<li>Have significant software engineering experience, particularly with distributed systems</li>
<li>Are results-oriented, with a bias towards flexibility and impact</li>
<li>Pick up slack, even if it goes outside your job description</li>
<li>Want to learn more about machine learning systems and infrastructure</li>
<li>Thrive in environments where technical excellence directly drives both business results and research breakthroughs</li>
<li>Care about the societal impacts of your work</li>
</ul>
<p>Representative projects across the org:</p>
<ul>
<li>Designing intelligent routing algorithms that optimize request distribution across thousands of accelerators</li>
<li>Autoscaling our compute fleet to dynamically match supply with demand across production, research, and experimental workloads</li>
<li>Building production-grade deployment pipelines for releasing new models to millions of users</li>
<li>Integrating new AI accelerator platforms to maintain our hardware-agnostic competitive advantage</li>
<li>Contributing to new inference features (e.g., structured sampling, prompt caching)</li>
<li>Supporting inference for new model architectures</li>
<li>Analyzing observability data to tune performance based on real-world production workloads</li>
<li>Managing multi-region deployments and geographic routing for global customers</li>
</ul>
<p>Deadline to apply: None. Applications will be reviewed on a rolling basis.</p>
<p>The annual compensation range for this role is £225,000-£325,000 GBP.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>£225,000-£325,000 GBP</Salaryrange>
      <Skills>High-performance, large-scale distributed systems, Implementing and deploying machine learning systems at scale, Load balancing, request routing, or traffic management systems, LLM inference optimization, batching, and caching strategies, Kubernetes and cloud infrastructure (AWS, GCP), Python or Rust</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation focused on creating reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5152348008</Applyto>
      <Location>London, UK</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>1be1fd1e-8f3</externalid>
      <Title>Principal Architect</Title>
      <Description><![CDATA[<p>We are seeking a Principal Architect to drive the design, development, and deployment of our agentic AI products in a fast-paced, collaborative environment. In this role, you will lead a team of 50+ engineers, providing both strategic and technical guidance. You’ll be responsible for high-impact architectural decisions, cross-company collaboration, and executive level engagements.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Lead and mentor a high-performing engineering team of 50+, fostering a culture of technical excellence and ownership</li>
<li>Guide your team through complex challenges involving LLMs, AI agents, and large-scale distributed systems</li>
<li>Represent Scale AI in high-stakes negotiations and strategic discussions with senior external partners, demonstrating strong technical competence and credibility</li>
<li>Develop and communicate a compelling vision for Scale AI’s technology applied to your program</li>
<li>Provide regular updates to senior leadership and key stakeholders on progress, risks, and opportunities</li>
<li>Foster a culture of speed, unity of purpose, resilience, and teamwork</li>
</ul>
<p>Requirements:</p>
<ul>
<li>10+ years of software engineering experience, including 5+ years in a technical leadership or staff role</li>
<li>Deep understanding of modern AI/ML technologies, including experience working with LLMs and AI agents</li>
<li>Proficiency in one or more modern programming languages (Python, JavaScript/TypeScript)</li>
<li>Hands-on experience with Kubernetes and cloud infrastructure (AWS, GCP, or Azure)</li>
<li>Strong product and business sense, with a track record of aligning engineering efforts with company goals</li>
<li>Ability to operate effectively in ambiguous, fast-changing environments and guide your team to do the same</li>
<li>Experience in executive-level engagement with industry partners and Public Sector customers</li>
</ul>
<p>Success Metrics, within 6 months:</p>
<ul>
<li>Successful demonstration of agentic AI’s mission value in high-stakes customer demonstrations</li>
<li>Establish Scale AI as the preferred agentic AI partner for the PEO</li>
<li>Establish a high-velocity, agile engineering cadence both internally and with our industry partners</li>
</ul>
<p>Within 12–18 months:</p>
<ul>
<li>Secure follow-on contract award with expanded scope for Scale</li>
<li>Position Scale AI as the global AI leader in this mission area</li>
<li>Establish developed solutions as Scale product offerings to deliver on future contracts</li>
</ul>
<p>Compensation packages at Scale for eligible roles include base salary, equity, and benefits. The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position, determined by work location and additional factors, including job-related skills, experience, interview performance, and relevant education or training.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$257,000-$321,000 USD</Salaryrange>
      <Skills>software engineering, technical leadership, AI/ML technologies, LLMs, AI agents, Kubernetes, cloud infrastructure, Python, JavaScript/TypeScript</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions.</Employerdescription>
      <Employerwebsite>https://scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4599202005</Applyto>
      <Location>Washington, DC</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>6365e7d7-511</externalid>
      <Title>Senior Forward Deployed Data Scientist/Engineer</Title>
      <Description><![CDATA[<p>We&#39;re hiring a Senior Forward Deployed Data Scientist / Engineer to work directly with customers on ambiguous, high-impact problems at the intersection of data science, product development, and AI deployment.</p>
<p>This is not a traditional analytics role. On this team, data scientists do the core statistical and modeling work, but they also build real tools and products: evaluation explorers, operator workflows, decision-support systems, experimentation surfaces, and customer-specific AI/data applications that get used in production.</p>
<p>The right candidate is strong in first-principles problem solving, rigorous measurement, and technical execution. They know how to define metrics, design experiments, diagnose failures, and build systems that people actually use. They are also comfortable using modern AI-assisted development tools to prototype and iterate quickly without sacrificing reliability, observability, or judgment. Python and SQL matter in this role, but primarily as execution fluency in service of building better products and making better decisions.</p>
<p>Responsibilities:</p>
<ul>
<li>Partner directly with enterprise customers to understand workflows, operational pain points, constraints, and success criteria</li>
<li>Turn ambiguous business and product problems into measurable solutions with clear metrics, technical designs, and deployment plans</li>
<li>Design and build internal and customer-facing data products, including evaluation tools, workflow applications, decision-support systems, and thin product layers on top of data/ML systems</li>
<li>Build end-to-end solutions across data ingestion, transformation, experimentation, statistical modeling, deployment, monitoring, and iteration</li>
<li>Design evaluation frameworks, benchmarks, and feedback loops for ML/LLM systems, human-in-the-loop workflows, and model-assisted operations</li>
<li>Apply rigorous statistical thinking to experimentation, causal inference, metric design, forecasting, segmentation, diagnostics, and performance measurement</li>
<li>Use AI-assisted development workflows to accelerate prototyping and product iteration, while maintaining strong engineering discipline</li>
<li>Diagnose failure modes across data quality, model behavior, retrieval, workflow design, and user experience, and drive fixes into production</li>
<li>Act as the voice of the customer to Product, Engineering, and Data Science, using field learnings to shape roadmap and platform capabilities</li>
</ul>
<p>Requirements:</p>
<ul>
<li>5+ years of experience in data science, machine learning, quantitative engineering, or another highly analytical technical role</li>
<li>Proven track record of shipping data, ML, or AI systems that delivered measurable business or product impact</li>
<li>Exceptional ability to structure ambiguous problems, define the right success metrics, and translate them into executable technical plans</li>
<li>Strong foundation in statistics, experimentation, causal reasoning, and measurement</li>
<li>Experience building tools or products, not just analyses: for example, internal workflow tools, evaluation systems, operator-facing products, experimentation platforms, or customer-specific applications</li>
<li>Hands-on fluency in Python, SQL, and modern data/AI tooling; able to inspect data, prototype quickly, debug deeply, and productionize solutions that work</li>
<li>Comfort using AI-assisted coding and development workflows to move from idea to usable product quickly</li>
<li>Strong communication and stakeholder management skills; able to work effectively with customers, engineers, product teams, and executives</li>
<li>High ownership and bias toward shipping in fast-moving environments with incomplete information</li>
</ul>
<p>Preferred qualifications:</p>
<ul>
<li>Experience in a forward deployed, solutions, consulting, or other client-facing technical role</li>
<li>Experience designing evaluation frameworks for LLMs, retrieval systems, agentic workflows, or other AI-enabled products</li>
<li>Experience with large-scale data processing and distributed systems such as Spark, Ray, or Airflow</li>
<li>Experience with cloud infrastructure and modern data platforms such as AWS, GCP, Snowflake, or BigQuery</li>
<li>Experience building lightweight applications, APIs, internal tools, or workflow software on top of data/ML systems</li>
<li>Familiarity with marketplace experimentation, causal inference, forecasting, optimization, or advanced statistical modeling</li>
<li>Strong product instinct and the judgment to know when the right answer is a model, an experiment, a tool, or a workflow redesign</li>
</ul>
<p>What success looks like: Success in this role means taking a messy, high-stakes customer problem and turning it into a deployed system that is actually used. Sometimes that system is a model. Sometimes it is an evaluation framework. Sometimes it is an operator-facing tool or a lightweight data product that changes how decisions get made. In all cases, success is defined by measurable impact, rigorous evaluation, and reliable execution.</p>
<p>Compensation packages at Scale for eligible roles include base salary, equity, and benefits. The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position, determined by work location and additional factors, including job-related skills, experience, interview performance, and relevant education or training. Scale employees in eligible roles are also granted equity based compensation, subject to Board of Director approval. Your recruiter can share more about the specific salary range for your preferred location during the hiring process, and confirm whether the hired role will be eligible for equity grant. You’ll also receive benefits including, but not limited to: Comprehensive health, dental and vision coverage, retirement benefits, a learning and development stipend, and generous PTO. Additionally, this role may be eligible for additional benefits such as a commuter stipend.</p>
<p>Salary Range: $167,200-$209,000 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$167,200-$209,000 USD</Salaryrange>
      <Skills>Python, SQL, Modern data/AI tooling, Statistics, Experimentation, Causal reasoning, Measurement, Data science, Machine learning, Quantitative engineering, Experience in a forward deployed, solutions, consulting, or other client-facing technical role, Experience designing evaluation frameworks for LLMs, retrieval systems, agentic workflows, or other AI-enabled products, Experience with large-scale data processing and distributed systems such as Spark, Ray, or Airflow, Experience with cloud infrastructure and modern data platforms such as AWS, GCP, Snowflake, or BigQuery, Experience building lightweight applications, APIs, internal tools, or workflow software on top of data/ML systems, Familiarity with marketplace experimentation, causal inference, forecasting, optimization, or advanced statistical modeling, Strong product instinct and the judgment to know when the right answer is a model, an experiment, a tool, or a workflow redesign</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale AI</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale AI develops reliable AI systems for the world&apos;s most important decisions.</Employerdescription>
      <Employerwebsite>https://scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4636227005</Applyto>
      <Location>San Francisco, CA; New York, NY</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>acd28d99-495</externalid>
      <Title>Manager, Sales Development - EMEA</Title>
      <Description><![CDATA[<p>As a Sales Development Manager for EMEA at Anthropic, you will lead and scale our business development function across Europe, the Middle East, and Africa. You will build and manage a team of 6-8 BDRs primarily in Dublin. This role requires exceptional agility, cultural fluency across diverse European markets, and the ability to develop segment-specific strategies while navigating complex regulatory environments and regional nuances. You will be instrumental in establishing Anthropic&#39;s regional presence and building the foundation for long-term growth in EMEA.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Build, lead, and scale a team of 6-8 BDRs across EMEA markets including SEU, NEU, and DACH.</li>
<li>Develop and execute region-specific prospecting strategies that account for local market dynamics, cultural nuances, and competitive landscapes across diverse European markets</li>
<li>Support all sales segments (Startups, Commercial, Enterprise) with agility to shift resources based on regional opportunities</li>
<li>Partner with regional AEs and sales leadership to align pipeline generation with territory plans and revenue targets</li>
<li>Establish KPIs and tracking mechanisms that account for regional differences while maintaining global consistency</li>
<li>Create localized training programs and enablement materials that resonate with diverse European business cultures</li>
<li>Build and maintain relationships with regional marketing teams to optimize lead quality and campaign effectiveness</li>
<li>Own regional Pipeline Reviews with sales leadership covering market-specific insights and growth opportunities</li>
<li>Navigate complex hiring and employment regulations across multiple European countries, partnering with HR and Legal</li>
<li>Coach and develop BDRs on region-specific prospecting techniques and career progression</li>
</ul>
<p>Qualifications:</p>
<ul>
<li>3-6 years of experience managing sales development or inside sales teams in EMEA</li>
<li>Proven track record of growing and scaling teams across multiple European countries/offices</li>
<li>Experience managing distributed teams across different time zones and cultures within EMEA</li>
<li>Strong understanding of business practices, sales cycles, and decision-making processes in key EMEA markets</li>
<li>Experience adapting global sales processes for European markets while maintaining consistency</li>
<li>Strong analytical skills with ability to identify and act on regional market opportunities</li>
<li>Experience with Salesforce and sales technology stack</li>
<li>Excellent communication skills with ability to operate effectively across European cultures</li>
<li>Bachelor&#39;s degree or equivalent work experience</li>
</ul>
<p>Preferred Experience:</p>
<ul>
<li>Experience at US-headquartered technology companies expanding in EMEA</li>
<li>Background in AI/ML, cloud infrastructure, or developer platforms</li>
<li>Track record of building BDR/SDR functions from scratch in new European markets</li>
<li>Experience managing both velocity (Startup/Commercial) and strategic (Enterprise) sales motions</li>
<li>Fluency in German, French, Spanish or other major European languages</li>
<li>Network of talent for BDR hiring across EMEA markets</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
<Salaryrange>€170,000-€225,000 EUR</Salaryrange>
      <Skills>Sales Development, Team Management, Strategic Planning, Market Analysis, Communication, Sales Technology Stack, Analytical Skills, AI/ML, Cloud Infrastructure, Developer Platforms, Fluency in European Languages</Skills>
      <Category>Sales</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation that creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5121912008</Applyto>
      <Location>Dublin, IE</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>dab43521-cfa</externalid>
      <Title>Software Engineer, Robotics &amp; Autonomous Systems</Title>
      <Description><![CDATA[<p>In this role, you&#39;ll be a key contributor building production systems for robotics data collection, model training pipelines, and evaluation infrastructure. You&#39;ll have the opportunity to own critical parts of our robotics platform, work directly with cutting-edge robotics and AV customers, and shape the future of embodied AI systems.</p>
<p>Your responsibilities will include:</p>
<ul>
<li>Owning and architecting large-scale data processing pipelines for robotics and autonomous vehicle datasets</li>
<li>Building ML training and fine-tuning pipelines using Scale&#39;s robotics data</li>
<li>Working across backend (Python, Node.js, C++) and frontend (React, TypeScript) stacks to build end-to-end solutions</li>
<li>Developing tools and systems for robotics data collection, teleoperation, and model evaluation</li>
<li>Interacting directly with robotics and AV stakeholders to understand their technical needs and drive product development</li>
<li>Building real-time systems for robotic control, sensor fusion, and perception pipelines</li>
<li>Designing comprehensive monitoring and evaluation frameworks for robotics models and data quality</li>
<li>Collaborating with ML engineers and researchers to bring robotics research into production</li>
<li>Delivering features at high velocity while maintaining system reliability and performance</li>
</ul>
<p>Ideally, you have:</p>
<ul>
<li>3+ years of software engineering experience in robotics, autonomous vehicles, or related fields</li>
<li>Strong programming skills in Python and TypeScript/Node.js for production systems</li>
<li>Experience with React and modern frontend development for 3D interfaces</li>
<li>Practical experience with robotics frameworks (ROS/ROS2), simulation environments, or AV systems</li>
<li>Understanding of distributed systems, workflow orchestration, and cloud infrastructure (AWS, Temporal, Kubernetes, Docker)</li>
<li>Experience with databases (MongoDB, PostgreSQL) and data processing at scale</li>
<li>Track record of working with cross-functional teams including ML engineers, researchers, and customers</li>
<li>Strong communication skills and ability to operate with high autonomy</li>
</ul>
<p>Nice to have:</p>
<ul>
<li>Experience with C++</li>
<li>Experience with robotics hardware platforms (robotic arms, mobile robots, perception systems) with a focus on time synchronization</li>
<li>Background in computer vision, SLAM, motion planning, or imitation learning</li>
<li>Familiarity with autonomous vehicle data, lidar technologies, or 3D data processing</li>
<li>Experience with ML model deployment and serving frameworks</li>
<li>Knowledge of teleoperation systems (ALOHA, UMI, hand tracking) or VR interfaces</li>
<li>Experience with workflow orchestration systems (Temporal, Airflow)</li>
<li>Published research or open-source contributions in robotics or autonomous systems</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$180,000-$225,000 USD</Salaryrange>
      <Skills>Python, TypeScript, Node.js, React, C++, ROS/ROS2, simulation environments, AV systems, distributed systems, workflow orchestration, cloud infrastructure, databases, data processing, robotics hardware platforms, computer vision, SLAM, motion planning, imitation learning, autonomous vehicle data, lidar technologies, 3D data processing, ML model deployment, serving frameworks, teleoperation systems, VR interfaces, workflow orchestration systems</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions, providing high-quality data and full-stack technologies to power leading models.</Employerdescription>
      <Employerwebsite>https://www.scale.com/</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>180000</Compensationmin>
      <Compensationmax>225000</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4618065005</Applyto>
      <Location>San Francisco, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>15092e66-444</externalid>
      <Title>Strategic Account Executive, GSI</Title>
      <Description><![CDATA[<p><strong>About the Role</strong></p>
<p>As a Strategic Account Executive on the GSI team, you&#39;ll own a named book of accounts and the full revenue outcome for each. You&#39;ll develop a point of view on where Claude creates the most value across a firm&#39;s practice areas, advisory services, delivery teams, and internal operations, build relationships with the partners and executives who sponsor transformation at that scale, and expand the partnership well beyond the original buyer.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Own all revenue outcomes for a named book of GSI accounts, driving both new logo acquisition and multi-practice expansion through complex, multi-quarter sales cycles involving partner-led approval, global procurement, and custom commercial terms</li>
<li>Develop a clear thesis for each priority firm (where Claude creates value across knowledge management, advisory workflows, deliverable generation, and client engagements) and execute a sequenced engagement plan across practices, regions, and stakeholders</li>
<li>Build and independently advance executive relationships with Managing Partners, Practice Leads, MDs, CIOs, CTOs, and Heads of AI/Digital, anchoring every conversation to their strategic priorities: utilization, leverage, realization, and billable productivity</li>
<li>Proactively create demand in unengaged practice areas and regions, using early wins as proof points to open new doors across decentralized, partner-led organizations</li>
<li>Build quantified, firm-specific business cases mapped to the GSI operating model, using their own language and metrics, that shape deals rather than justify them after the fact</li>
<li>Identify and close lighthouse partnerships that become references across the GSI landscape and set up the future sell-with motion</li>
<li>Partner cross-functionally with Product, Applied AI, Engineering, and Partnerships to inform the roadmap based on GSI buyer needs, and contribute to the playbook, proof points, and commercial structures that become the repeatable GSI motion</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>8+ years of enterprise software sales experience with a track record of owning named accounts at large, complex, partner-led organizations (global SIs, strategy consultancies), managing multi-quarter sales cycles through technical evaluations, partner-led approval, and global procurement</li>
<li>Demonstrated ability to independently build and advance relationships at the Partner, MD, and C-suite level, including practice leadership and innovation/digital executives, and hold credible conversations across both technical and business audiences</li>
<li>Experience building firm-specific business cases grounded in the firm&#39;s own operating metrics (utilization, leverage, realization, margin) and defending commercial terms through complex negotiations</li>
<li>Background selling platform, API, cloud infrastructure, or emerging technology into enterprises evaluating a new category</li>
<li>Genuine interest in AI and strong alignment with Anthropic&#39;s mission of responsible AI development</li>
<li>A history of growing accounts meaningfully beyond the original engagement by proactively creating demand across new practice areas, regions, and use cases</li>
</ul>
<p><strong>What Will Make You Stand Out</strong></p>
<ul>
<li>Direct experience selling into Global SIs or strategy consultancies, and fluency in how partner-led firms operate and measure success</li>
<li>Experience as an early AE in a vertical or segment, where you helped build the sales motion rather than inherit it</li>
<li>Background selling developer platforms, cloud infrastructure, or AI/ML tooling into traditional partner-led services firms</li>
</ul>
<p><strong>Logistics</strong></p>
<ul>
<li>Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience</li>
<li>Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience</li>
<li>Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position</li>
<li>Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</li>
<li>Visa sponsorship: We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$290,000-$435,000 USD</Salaryrange>
      <Skills>Enterprise software sales, Named account ownership, Complex sales cycles, Partner-led approval, Global procurement, Firm-specific business cases, Commercial terms negotiation, Platform, API, cloud infrastructure, or emerging technology sales, AI interest and alignment with Anthropic&apos;s mission, Direct experience selling into Global SIs or strategy consultancies, Experience as an early AE in a vertical or segment, Background selling developer platforms, cloud infrastructure, or AI/ML tooling</Skills>
      <Category>Sales</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a company that develops artificial intelligence systems. It has a team of researchers, engineers, and business leaders working together to build beneficial AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>290000</Compensationmin>
      <Compensationmax>435000</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5176036008</Applyto>
      <Location>New York City, NY; San Francisco, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>d6fc00c5-564</externalid>
      <Title>Software Engineer, Robotics</Title>
      <Description><![CDATA[<p>We&#39;re seeking a skilled Software Engineer to join our Robotics business unit, focused on solving the data bottleneck in Physical AI across Robotics, Autonomous Vehicles, and Computer Vision. As a key contributor, you&#39;ll own and architect large-scale data processing pipelines, build ML training and fine-tuning pipelines, and develop tools and real-time systems for robotics data collection, teleoperation, model evaluation, data curation, and data annotation.</p>
<p>In this role, you&#39;ll interact directly with robotics and AV stakeholders to understand their technical needs and drive product development. You&#39;ll also design comprehensive monitoring and evaluation frameworks for robotics models and data quality, and collaborate with ML engineers and researchers to bring robotics research into production.</p>
<p>To succeed, you&#39;ll need at least 6 years of high-proficiency software engineering experience, with a strong background in complex systems and the ability to independently research, analyze, and unblock hard technical problems. You should have strong programming skills in Python and TypeScript/Node.js for production systems, experience with React and modern frontend development for 3D interfaces, and concurrent and real-time systems expertise.</p>
<p>We&#39;re looking for someone who can deliver features at high velocity while maintaining system reliability and performance, and has a track record of working with cross-functional teams including ML engineers, researchers, and customers. Strong communication skills and the ability to operate with high autonomy are essential.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, TypeScript/Node.js, React, Concurrent and real-time systems, Distributed systems, Workflow orchestration, Cloud infrastructure, Databases, Data processing at large scale, C++, Robotics hardware platforms, Computer vision, SLAM, Motion planning, Imitation learning, Autonomous vehicle data, Lidar technologies, 3D data processing, ML model deployment and serving frameworks, Teleoperation systems, VR interfaces, Workflow orchestration systems</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions, providing high-quality data and full-stack technologies to power leading models.</Employerdescription>
      <Employerwebsite>https://scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4612282005</Applyto>
      <Location>Argentina; Uruguay</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>7d4c3fc5-2ed</externalid>
      <Title>Senior Software Engineer, Inference</Title>
      <Description><![CDATA[<p>About the role:</p>
<p>Our Inference team is responsible for building and maintaining the critical systems that serve Claude to millions of users worldwide. We bring Claude to life by serving our models via the industry&#39;s largest compute-agnostic inference deployments. We are responsible for the entire stack from intelligent request routing to fleet-wide orchestration across diverse AI accelerators.</p>
<p>The team has a dual mandate: maximizing compute efficiency to serve our explosive customer growth, while enabling breakthrough research by giving our scientists the high-performance inference infrastructure they need to develop next-generation models. We tackle complex, distributed systems challenges across multiple accelerator families and emerging AI hardware running in multiple cloud platforms.</p>
<p>Strong candidates may also have experience with:</p>
<ul>
<li>High-performance, large-scale distributed systems</li>
<li>Implementing and deploying machine learning systems at scale</li>
<li>Load balancing, request routing, or traffic management systems</li>
<li>LLM inference optimization, batching, and caching strategies</li>
<li>Kubernetes and cloud infrastructure (AWS, GCP)</li>
<li>Python or Rust</li>
</ul>
<p>You may be a good fit if you:</p>
<ul>
<li>Have significant software engineering experience, particularly with distributed systems</li>
<li>Are results-oriented, with a bias towards flexibility and impact</li>
<li>Pick up slack, even if it goes outside your job description</li>
<li>Want to learn more about machine learning systems and infrastructure</li>
<li>Thrive in environments where technical excellence directly drives both business results and research breakthroughs</li>
<li>Care about the societal impacts of your work</li>
</ul>
<p>Representative projects across the org:</p>
<ul>
<li>Designing intelligent routing algorithms that optimize request distribution across thousands of accelerators</li>
<li>Autoscaling our compute fleet to dynamically match supply with demand across production, research, and experimental workloads</li>
<li>Building production-grade deployment pipelines for releasing new models to millions of users</li>
<li>Integrating new AI accelerator platforms to maintain our hardware-agnostic competitive advantage</li>
<li>Contributing to new inference features (e.g., structured sampling, prompt caching)</li>
<li>Supporting inference for new model architectures</li>
<li>Analyzing observability data to tune performance based on real-world production workloads</li>
<li>Managing multi-region deployments and geographic routing for global customers</li>
</ul>
<p>Annual compensation range for this role is €235,000-€295,000 EUR.</p>
<p>Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience</p>
<p>Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience</p>
<p>Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position</p>
<p>Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</p>
<p>Visa sponsorship: We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</p>
<p>We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you&#39;re interested in this work.</p>
<p>Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you&#39;re ever unsure about a communication, don&#39;t click any links; visit anthropic.com/careers directly for confirmed position openings.</p>
<p>How we&#39;re different:</p>
<p>We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact (advancing our long-term goals of steerable, trustworthy AI) rather than work on smaller, more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We&#39;re an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.</p>
<p>The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI &amp; Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.</p>
<p>Come work with us!</p>
<p>Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>€235,000-€295,000 EUR</Salaryrange>
      <Skills>High-performance, large-scale distributed systems, Implementing and deploying machine learning systems at scale, Load balancing, request routing, or traffic management systems, LLM inference optimization, batching, and caching strategies, Kubernetes and cloud infrastructure (AWS, GCP), Python or Rust</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation that creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency>EUR</Compensationcurrency>
      <Compensationmin>235000</Compensationmin>
      <Compensationmax>295000</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/4641822008</Applyto>
      <Location>Dublin, IE</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>ad5c420d-b2d</externalid>
      <Title>Senior Solutions Architect - Lakebase</Title>
      <Description><![CDATA[<p>The Solutions Architect (Lakebase) team executes on Databricks&#39; strategic Product Operating Model, which provides enhanced focus on earlier-stage, highly prioritised product lines in order to establish product-market fit and set the course for rapid revenue growth.</p>
<p>They are part of a global go-to-market team mandate, though each will individually cover a specific, local region. Clients may span one or more business units and verticals.</p>
<p>Working in partnership with direct account teams, they will jointly engage clients, foster the necessary relationships, and position the specific product line in depth, providing compelling reasons for clients to adopt and grow their usage of the product.</p>
<p>The Solutions Architect (Lakebase) is paired with an Account Executive aligned to the product line, with specific targets accordingly. Together, they will devise and implement a strategy across their assigned set of accounts, and develop and deliver presentations, demos, and other assets so that clients can make an informed decision about adopting the product line in a meaningful way.</p>
<p>The Lakebase product-line requires the following core technical competencies:</p>
<ul>
<li>10+ years of transactional database (OLTP) expertise across engineering, product development, administration, and pre-sales, with a proven track record of designing and delivering client-facing solutions.</li>
<li>Credibility in influencing OLTP products with the market insight needed to shape and prioritise roadmap capabilities.</li>
<li>Experience architecting solutions that integrate transactional data systems within broader Big Data, Lakehouse, and AI ecosystems.</li>
<li>Infrastructure, platform and administration expertise around disaster recovery, high availability, backup and recovery, scale-out methods, identity and security management, and migrations (vendor-to-vendor, on-prem to cloud).</li>
</ul>
<p>Impact</p>
<ul>
<li>Collaborate with GTM leadership and account teams to design and execute high-impact engagement strategies across your territory.</li>
<li>As a trusted advisor, serve as an expert Solutions Architect and &quot;champion,&quot; building technical credibility with stakeholders to drive product adoption and vision.</li>
<li>Enable clients at scale through workshops and developing customer-facing collateral that helps increase technical knowledge and thought leadership.</li>
<li>Influence product roadmap by translating field-derived, data-driven insights into strategic recommendations for Product and Engineering teams.</li>
<li>Handle the most complex technical challenges in this product line by acting as the tier-3 escalation point for the field, ensuring customer success in mission-critical environments.</li>
</ul>
<p>Competencies &amp; Responsibilities</p>
<ul>
<li>6+ years in a customer-facing, pre-sales or consulting role influencing technical executives, driving high-level data strategy and product adoption.</li>
<li>Proven ability to co-plan large territories with Account Executives and operate in a highly coordinated, cross-functional effort across GTM and R&amp;D teams.</li>
<li>Experience collaborating with Global System Integrators (GSIs) and third-party consulting organisations to drive customer outcomes.</li>
<li>Proficient in programming, debugging, and problem-solving using SQL and Python.</li>
<li>Hands-on experience building solutions within major public cloud environments (AWS, Azure, or GCP).</li>
<li>Broad experience and understanding across two or more of the following fields: data engineering, data warehousing, AI, ML, governance, transactional systems, app development, and streaming.</li>
<li>Undergraduate degree (or higher) in a technical field such as Computer Science, Applied Mathematics, Engineering or similar.</li>
<li>A track record of driving complex projects to completion in fast-paced, customer-facing environments.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Transactional database (OLTP), Cloud infrastructure, Data engineering, Data warehousing, AI, ML, Governance, Transactional systems, App development, Streaming</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified and democratized data, analytics, and AI platform.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8407181002</Applyto>
      <Location>London, United Kingdom</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>fc62d58e-581</externalid>
      <Title>International Readiness Lead</Title>
      <Description><![CDATA[<p>As International Readiness Lead, you&#39;ll drive the cross-functional work that makes Claude deployable, compliant, and commercially viable in Anthropic&#39;s priority markets. You&#39;ll contribute to Anthropic&#39;s international compute strategy, develop a framework for evaluating and sequencing data residency and sovereign deployment requests, and identify and document international customer requirements for product localization.</p>
<p>You&#39;ll translate infrastructure and product capabilities into commercial propositions, partnering with Sales and Marketing to ensure international enterprise and government customers understand what Anthropic can deliver, and when. You&#39;ll serve as the internal subject matter expert on international readiness requirements, advising on deals, partnerships, and policy positions as they arise.</p>
<p>You&#39;ll build scalable processes for capturing, triaging, and acting on international product feedback so it doesn’t get lost in HQ product cycles. You&#39;ll serve as the GTM strategist for Anthropic’s mission-oriented international programs, including our approach to responsible AI deployment in democratic allied nations and our strategy for expanding access and affordability in Global South markets.</p>
<p>You&#39;ll partner with Policy, Beneficial Deployments, and Global Affairs to ensure mission programs have a viable commercial and infrastructure foundation, not just a policy framework. You&#39;ll track and synthesise the competitive landscape for sovereign AI and national AI programs, surfacing implications for Anthropic’s positioning and commercial strategy.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Contribute to Anthropic’s international compute strategy</li>
<li>Develop a framework for evaluating and sequencing data residency and sovereign deployment requests</li>
<li>Identify and document international customer requirements for product localization</li>
<li>Translate infrastructure and product capabilities into commercial propositions</li>
<li>Serve as the internal subject matter expert on international readiness requirements</li>
<li>Build scalable processes for capturing, triaging, and acting on international product feedback</li>
<li>Serve as the GTM strategist for Anthropic’s mission-oriented international programs</li>
<li>Partner with Policy, Beneficial Deployments, and Global Affairs to ensure mission programs have a viable commercial and infrastructure foundation</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>5–7 years in product, technical GTM, solutions engineering, or strategy roles with meaningful international scope</li>
<li>Strong working knowledge of cloud infrastructure, data residency frameworks, and enterprise compliance requirements</li>
<li>Experience working with or selling to government customers or regulated enterprises</li>
<li>Ability to synthesise complex technical, regulatory, and geopolitical constraints into clear commercial and strategic recommendations</li>
<li>Comfortable building internal processes from scratch</li>
<li>High autonomy and strong written communication</li>
<li>Direct experience with sovereign cloud programs, regulated data environments, or government AI initiatives is a plus</li>
<li>Familiarity with EU AI Act, India DPDP Act, or similar regulatory frameworks shaping enterprise AI deployment internationally is a plus</li>
<li>Experience at a hyperscaler, cloud provider, or enterprise SaaS company navigating international infrastructure decisions is a plus</li>
<li>An interest in the intersection of AI, democratic governance, and responsible technology deployment is a plus</li>
</ul>
<p>Annual salary: £120,000-£170,000 GBP / $190,000-$270,000 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>£120,000-£170,000 GBP / $190,000-$270,000 USD</Salaryrange>
      <Skills>Cloud infrastructure, Data residency frameworks, Enterprise compliance requirements, Government customers, Regulated enterprises, Complex technical, regulatory, and geopolitical constraints, Commercial and strategic recommendations, Internal processes, High autonomy, Strong written communication, Sovereign cloud programs, Regulated data environments, Government AI initiatives, EU AI Act, India DPDP Act, Hyperscalers, Cloud providers, Enterprise SaaS companies, International infrastructure decisions, AI, democratic governance, and responsible technology deployment</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a company that aims to create reliable, interpretable, and steerable AI systems. It has a team of researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5151939008</Applyto>
      <Location>London, UK; San Francisco, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>cd3b618b-96d</externalid>
      <Title>Security Labs Engineer</Title>
      <Description><![CDATA[<p>Job Title: Security Labs Engineer</p>
<p>About Anthropic</p>
<p>Anthropic&#39;s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole.</p>
<p>About the Role</p>
<p>Security at Anthropic is not a compliance exercise. It is a core part of how we stay safe as we build increasingly capable systems. Our Responsible Scaling Policy commits us to launching structured security R&amp;D projects: ambitious, time-boxed experiments designed to resolve high-uncertainty questions about our long-term security posture.</p>
<p>Each project runs for roughly 6 months with defined exit criteria. Some will succeed and move toward production. Others will fail, and we&#39;ll treat that as a useful signal. The questions these projects are designed to answer include:</p>
<ul>
<li>Can our core research workflows survive extreme isolation?</li>
<li>Can we get cryptographic guarantees where we currently rely on trust?</li>
<li>Can AI become our most effective security control?</li>
</ul>
<p>As a Security Labs Engineer, you own one or more projects end-to-end: scoping the experiment, building the infrastructure, coordinating across teams, running the pilot, documenting results, and where the experiment succeeds, helping scale it into production. This is 0-to-1 and 1-to-10 work.</p>
<p>Current Project Areas</p>
<p>The portfolio evolves based on what we learn. Current areas include:</p>
<ul>
<li>Designing and operating a mock high-assurance research environment: simulating what our infrastructure would look like under extreme isolation and physical security controls, with real measurement of productivity impact</li>
<li>Exploring cryptographic verification of model integrity using techniques like zero-knowledge proofs to provide mathematical guarantees about what is running in production</li>
<li>Assessing the feasibility of confidential computing across the full model lifecycle (note: this is an open question, not a committed roadmap item)</li>
<li>Piloting AI-assisted security tooling including vulnerability discovery, automated patching, anomaly detection, and adaptive behavioral monitoring</li>
<li>Prototyping API-only access regimes where even internal research workflows never touch raw model weights</li>
</ul>
<p>Part of your job is helping shape what comes next based on gaps uncovered in the current round.</p>
<p>Responsibilities</p>
<ul>
<li>Own the end-to-end execution of a Security Labs project: refine the hypothesis, design the experiment, build the prototype, run the pilot, and write up the results</li>
<li>Build novel security infrastructure under real time pressure: isolated clusters, hardened access controls, cryptographic verification layers, with a bias toward learning fast</li>
<li>Where experiments succeed, drive them toward production scale. An experiment that works on one cluster but not a hundred is not a finished result.</li>
<li>Work embedded with research teams (Pretraining, RL, Inference) to stress-test whether their core workflows can function under extreme security controls, and document precisely where they break</li>
<li>Evaluate and integrate emerging security technologies through coordination with external vendors and research groups</li>
<li>Turn experimental results into clear, decision-ready writeups that inform Anthropic&#39;s long-term security architecture and RSP commitments</li>
<li>Maintain a pain-point registry and feasibility assessment for each project, feeding directly into the design of production high-assurance environments</li>
<li>Help scope and prioritize the next wave of Labs projects based on what the current round uncovers</li>
</ul>
<p>Requirements</p>
<ul>
<li>7+ years of software or security engineering experience, with a solid foundation in production systems</li>
<li>Some of that time spent on pilots, prototypes, or applied research work where shipping a working answer to a hard question was the explicit goal</li>
<li>Strong programming skills in Python and at least one systems language (Go, Rust, or C/C++)</li>
<li>Hands-on experience with cloud infrastructure (AWS, GCP, or Azure), Kubernetes, and networking fundamentals sufficient to stand up and tear down isolated environments quickly</li>
<li>A track record of cross-functional execution: you can walk into a room with ML researchers, infrastructure engineers, and vendors and leave with a shared plan</li>
<li>Clear written communication: you know how to turn six weeks of experimentation into a two-page memo someone can act on</li>
<li>Comfort with ambiguity and iteration, having run experiments that failed, extracted the lesson, and moved forward</li>
<li>Genuine curiosity about what it would actually take to defend against a nation-state-level adversary</li>
<li>Passion for AI safety and a real understanding of the role security plays in making frontier AI development go well</li>
<li>Bachelor&#39;s degree in Computer Science, a related field, or equivalent industry experience required</li>
</ul>
<p>Preferred Qualifications</p>
<ul>
<li>Prior experience in offensive security, red teaming, or security research, having thought adversarially about systems and knowing which threats actually matter</li>
<li>Familiarity with airgapped or high-side environments (classified networks, ICS/SCADA, financial trading infrastructure, or similar) and the operational realities of working inside them</li>
<li>Knowledge of applied cryptography: zero-knowledge proofs, attestation protocols, secure enclaves, TPMs, or confidential computing primitives</li>
<li>Experience with ML infrastructure (training pipelines, inference serving, model packaging) sufficient for grounded conversations with researchers about what their workflows actually need</li>
<li>Background building or operating security systems in environments that demand rapid iteration rather than rigid change control</li>
<li>Prior work at a startup, on an innovation team, or in an applied research group where shipping a working v0 to answer a real question was explicitly the goal</li>
</ul>
<p>Location</p>
<p>This role is based in our San Francisco office (500 Howard St). Several Labs projects involve physical secure facilities on-site, so expect to be in-office more frequently than Anthropic&#39;s standard 25% hybrid baseline.</p>
<p>What We Offer</p>
<ul>
<li>Competitive salary and equity package</li>
<li>Comprehensive health insurance and retirement plans</li>
<li>Flexible work arrangements, including remote work options</li>
<li>Professional development opportunities, including training and conference attendance</li>
<li>Collaborative and dynamic work environment</li>
<li>Access to cutting-edge technology and resources</li>
<li>Opportunity to work on challenging and impactful projects</li>
<li>Recognition and rewards for outstanding performance</li>
</ul>
<p>If you&#39;re excited about the opportunity to join our team and contribute to the development of secure and beneficial AI systems, please submit your application. We can&#39;t wait to hear from you!</p>
<p>Deadline to Apply</p>
<p>None, applications will be received on a rolling basis.</p>
<p>Annual Compensation Range</p>
<p>$405,000 - $485,000 USD</p>
<p>Logistics</p>
<p>Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience</p>
<p>Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience</p>
<p>Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position</p>
<p>Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</p>
<p>Visa sponsorship: We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with the process.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$405,000 - $485,000 USD</Salaryrange>
      <Skills>Python, Go, Rust, C/C++, Cloud infrastructure, Kubernetes, Networking fundamentals, Cross-functional execution, Clear written communication, Comfort with ambiguity and iteration, Genuine curiosity about what it would actually take to defend against a nation-state-level adversary, Passion for AI safety, Real understanding of the role security plays in making frontier AI development go well, Offensive security, Red teaming, Security research, Applied cryptography, ML infrastructure, Background building or operating security systems in environments that demand rapid iteration rather than rigid change control, Prior work at a startup, on an innovation team, or in an applied research group where shipping a working v0 to answer a real question was explicitly the goal</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a technology company that specializes in developing artificial intelligence systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5153564008</Applyto>
      <Location>San Francisco, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>ef6605f2-fe0</externalid>
      <Title>Software Engineer, Robotics</Title>
      <Description><![CDATA[<p>We&#39;re looking for a skilled Software Engineer to join our Robotics business unit. As a key contributor, you&#39;ll build production systems for robotics data collection, model training pipelines, and evaluation infrastructure. You&#39;ll have the opportunity to own critical parts of our robotics platform, work directly with cutting-edge robotics and AV customers, and shape the future of embodied AI systems.</p>
<p>Your responsibilities will include:</p>
<ul>
<li>Owning and architecting large-scale data processing pipelines for robotics and autonomous vehicle datasets</li>
<li>Building ML training and fine-tuning pipelines using Scale&#39;s robotics data</li>
<li>Working across backend (Python, Node.js, C++) and frontend (React, TypeScript) stacks to build end-to-end solutions</li>
<li>Developing tools and real-time systems for robotics data collection, teleoperation, model evaluation, data curation, and data annotation</li>
<li>Interacting directly with robotics and AV stakeholders to understand their technical needs and drive product development</li>
<li>Designing comprehensive monitoring and evaluation frameworks for robotics models and data quality</li>
</ul>
<p>Ideal candidates will have:</p>
<ul>
<li>3+ years of high-proficiency software engineering experience, with a strong background in complex systems and the ability to independently research, analyze, and unblock hard technical problems</li>
<li>Strong programming skills in Python and TypeScript/Node.js for production systems</li>
<li>Experience with React and modern frontend development for 3D interfaces</li>
<li>Experience with concurrent and real-time systems, with special attention to timing constraints</li>
<li>Understanding of distributed systems, workflow orchestration, and cloud infrastructure (AWS, Temporal, Kubernetes, Docker)</li>
<li>Experience with databases (MongoDB, PostgreSQL) and data processing at large scale</li>
<li>Track record of working with cross-functional teams including ML engineers, researchers, and customers</li>
<li>Strong communication skills and ability to operate with high autonomy</li>
</ul>
<p>Nice to have:</p>
<ul>
<li>Experience with C++</li>
<li>Experience with robotics hardware platforms (robotic arms, mobile robots, perception systems) with a focus on time synchronization</li>
<li>Background in computer vision, SLAM, motion planning, or imitation learning</li>
<li>Familiarity with autonomous vehicle data, lidar technologies, or 3D data processing</li>
<li>Experience with ML model deployment and serving frameworks</li>
<li>Knowledge of teleoperation systems (ALOHA, UMI, hand tracking) or VR interfaces</li>
<li>Experience with workflow orchestration systems (Temporal, Airflow)</li>
<li>Published research or open-source contributions in robotics or autonomous systems</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, TypeScript, Node.js, C++, React, Distributed systems, Workflow orchestration, Cloud infrastructure, Databases, Data processing, Robotics hardware platforms, Computer vision, SLAM, Motion planning, Imitation learning, Autonomous vehicle data, Lidar technologies, 3D data processing, ML model deployment, Serving frameworks, Teleoperation systems, VR interfaces, Workflow orchestration systems</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions, providing high-quality data and full-stack technologies.</Employerdescription>
      <Employerwebsite>https://scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4655050005</Applyto>
      <Location>Mexico City, MX</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>5ff592ac-9d8</externalid>
      <Title>Sr. Software Engineer, Inference</Title>
      <Description><![CDATA[<p>We are seeking a Senior Software Engineer to join our Inference team, responsible for building and maintaining critical systems that serve Claude to millions of users worldwide. The team has a dual mandate: maximizing compute efficiency to serve our explosive customer growth, while enabling breakthrough research by giving our scientists the high-performance inference infrastructure they need to develop next-generation models.</p>
<p>As a Senior Software Engineer, you will be responsible for designing, implementing, and deploying large-scale distributed systems, including intelligent request routing, fleet-wide orchestration, and load balancing. You will work closely with our research team to develop new inference features and integrate new AI accelerator platforms.</p>
<p>To succeed in this role, you should have significant software engineering experience, particularly with distributed systems, and be results-oriented with a bias towards flexibility and impact. You should also be able to pick up slack, even if it goes outside your job description, and thrive in environments where technical excellence directly drives both business results and research breakthroughs.</p>
<p>Responsibilities:</p>
<ul>
<li>Design and implement large-scale distributed systems, including intelligent request routing, fleet-wide orchestration, and load balancing</li>
<li>Work closely with our research team to develop new inference features and integrate new AI accelerator platforms</li>
<li>Collaborate with cross-functional teams to ensure seamless deployment and operation of our systems</li>
<li>Analyze observability data to tune performance based on real-world production workloads</li>
<li>Manage multi-region deployments and geographic routing for global customers</li>
</ul>
<p>Requirements:</p>
<ul>
<li>Bachelor&#39;s degree or equivalent combination of education, training, and/or experience</li>
<li>Significant software engineering experience, particularly with distributed systems</li>
<li>Results-oriented with a bias towards flexibility and impact</li>
<li>Ability to pick up slack, even if it goes outside your job description</li>
<li>Thrives in environments where technical excellence directly drives both business results and research breakthroughs</li>
</ul>
<p>Preferred Qualifications:</p>
<ul>
<li>Experience with Kubernetes and cloud infrastructure (AWS, GCP)</li>
<li>Familiarity with machine learning systems and infrastructure</li>
<li>Strong communication and collaboration skills</li>
</ul>
<p>Benefits:</p>
<ul>
<li>Competitive compensation and benefits</li>
<li>Optional equity donation matching</li>
<li>Generous vacation and parental leave</li>
<li>Flexible working hours</li>
<li>Lovely office space in which to collaborate with colleagues</li>
</ul>
<p>Guidance on Candidates&#39; AI Usage: Learn about our policy for using AI in our application process</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>£225,000-£325,000 GBP</Salaryrange>
      <Skills>Distributed systems, Kubernetes, Cloud infrastructure, Machine learning systems, Infrastructure engineering, Python, Rust, Java, C++, Go</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation that creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5152348008</Applyto>
      <Location>London, UK</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>3ba73370-831</externalid>
      <Title>Internal Audit IT Manager</Title>
      <Description><![CDATA[<p>Ready to be pushed beyond what you think you’re capable of?</p>
<p>At Coinbase, our mission is to increase economic freedom in the world.</p>
<p>We’re seeking a very specific candidate who is passionate about our mission and who believes in the power of crypto and blockchain technology to update the financial system.</p>
<p>As an Internal Audit IT Manager, you will own end-to-end delivery of complex IT and security audits across our cloud infrastructure, security operations, and crypto-native systems.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Owning end-to-end delivery of IT and security audits, from risk assessment and scoping through planning, fieldwork, testing, reporting, and issue validation, covering cloud infrastructure (AWS, GCP), security operations, identity and access management, data protection, IT asset management, vendor/third-party risk, and key in-scope products and services including blockchain infrastructure, centralized and self-hosted wallets, and cold storage.</li>
<li>Driving AI-enabled audit execution, designing and implementing data analytics, automation, and Generative AI solutions to modernize how we audit (e.g., continuous monitoring, anomaly detection, automated evidence retrieval, AI-assisted workpaper drafting), while maintaining rigorous human-in-the-loop validation to ensure accuracy and audit-quality conclusions.</li>
<li>Executing audits aligned with the multi-year IT and security audit roadmap, coordinating coverage with co-sourced partners and cross-functional risk initiatives while ensuring alignment with Coinbase&#39;s enterprise risk profile, technology strategy, and regulatory expectations across regions (US, EMEA, APAC).</li>
<li>Driving high-quality, risk-based findings and executive-level reporting, distilling key themes, emerging risks, and root causes into clear, concise materials for senior management and the Chief Audit Executive, ensuring findings are appropriately documented and supported by evidence.</li>
<li>Partnering with technology and security leadership across Engineering, Security, Infrastructure, Product, and Operations to build trusted relationships, challenge control design, and advise on pragmatic, risk-based, scalable remediation while maintaining third-line independence.</li>
<li>Driving disciplined issue management, ensuring timely, risk-based remediation by management, high-quality root cause analysis, and validation of remediation activities, escalating delays or thematic concerns to senior leadership as needed.</li>
<li>Evaluating and developing talent, assessing candidates and helping build a high-performing, technically credible audit team.</li>
</ul>
<p>Requirements include:</p>
<ul>
<li>7+ years of experience in IT/security internal audit, technology risk, or first-line security/engineering roles with significant controls exposure.</li>
<li>Experience working in a fast-paced, cloud-native, or engineering-driven environment where technology and security practices evolve rapidly.</li>
<li>Hands-on audit experience with cloud platforms (AWS, GCP), including IAM policies, security configurations, logging/monitoring, and CI/CD pipelines.</li>
<li>AI-forward mindset with demonstrated experience applying Python, SQL, or AI tools to audit or security work, building workflows rather than just prompting.</li>
<li>Relevant professional certifications (e.g., CISA, CISSP, CIA, CISM) required; CPA or CFE a plus.</li>
<li>Working knowledge of key frameworks such as NIST CSF, COBIT, SOC 2, and ITIL.</li>
<li>High EQ and collaborative style.</li>
<li>Proven ability to translate complex technical findings into clear, executive-ready narratives for both technical and non-technical audiences.</li>
<li>Ability to manage multiple audits and initiatives across time zones (EMEA, APAC) with minimal oversight.</li>
<li>Demonstrated leadership and team-development experience, including mentoring, coaching, and managing direct reports.</li>
<li>Demonstrated ability to responsibly use generative AI tools and copilots (e.g., LibreChat, Gemini, Glean) in daily workflows, continuously learn as tools evolve, and apply human-in-the-loop practices to deliver business-ready outputs and drive measurable improvements in efficiency, cost, and quality.</li>
</ul>
<p>Nice to have:</p>
<ul>
<li>Experience auditing or building blockchain infrastructure, crypto custody, or wallet systems (hot/cold storage).</li>
<li>Background in a high-growth or rapidly scaling environment with complex, evolving technology stacks.</li>
<li>Experience with GRC platforms (Workiva, Archer, AuditBoard) or building custom audit automation tooling.</li>
<li>Familiarity with DORA, MiCA, or crypto-specific regulatory frameworks.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$166,345-$195,700 USD</Salaryrange>
      <Skills>IT security, Cloud infrastructure, Security operations, Identity and access management, Data protection, IT asset management, Vendor/third-party risk, Blockchain infrastructure, Centralized and self-hosted wallets, Cold storage, AI-enabled audit execution, Data analytics, Automation, Generative AI, Continuous monitoring, Anomaly detection, Automated evidence retrieval, AI-assisted workpaper drafting, Cloud platforms, IAM policies, Security configurations, Logging/monitoring, CI/CD pipelines, Python, SQL, AI tools, NIST CSF, COBIT, SOC 2, ITIL, CISA, CISSP, CIA, CISM, CPA, CFE</Skills>
      <Category>Finance</Category>
      <Industry>Finance</Industry>
      <Employername>Coinbase</Employername>
      <Employerlogo>https://logos.yubhub.co/coinbase.com.png</Employerlogo>
      <Employerdescription>Coinbase is a digital currency exchange and wallet service provider.</Employerdescription>
      <Employerwebsite>https://www.coinbase.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coinbase/jobs/7755116</Applyto>
      <Location>Remote - USA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>262aa1cb-01c</externalid>
      <Title>Head of Corporate Engineering</Title>
<Description><![CDATA[<p>As Head of Corporate Engineering, you will be responsible for enterprise engineering and operations globally. You will build and manage a highly technical enterprise engineering team, develop first-principles-based strategies, and enable strong enterprise security.</p>
<p>Key responsibilities include engineering, securing, and optimizing cloud infrastructure, Identity and Access Management, endpoints, and collaboration tools, and ensuring compliance with SOX, PCI DSS, and FedRAMP. The Head of Corporate Engineering will work closely with R&amp;D on managing engineering tools like Jira, Confluence, and GitHub, driving efficient adoption and integration.</p>
<p>Strong technical and influencing leadership, coupled with the ability to manage a complex, scaling, and fast-moving enterprise environment, is essential. This role reports directly to the Vice President, Infrastructure and Operations.</p>
<p>Responsibilities:</p>
<p>In this influential role, you will be responsible for:</p>
<p>Securing the Enterprise: Working closely with the Enterprise Security organization to harden and secure our cloud environments, secret management, collaboration tools, endpoints, SaaS environments, IAM tools, and more. Success is measured by continuous improvement of our enterprise security hardening standards</p>
<p>Building and Scaling our Cloud Infrastructure: Your team will be responsible for establishing and implementing enterprise cloud infrastructure including establishing Infrastructure Provisioning, SRE services, 24/7 on-call support, Infra as Code, observability, and more. In addition, you will be responsible for managing cloud budgets, vendor management, and establishing cost optimization initiatives. Success is measured in increased developer velocity while securing &amp; scaling the cloud infrastructure</p>
<p>Engineering Tooling: Partner closely with R&amp;D teams to establish policies, configurations, run-books, SLAs, hardening, scalability and availability of engineering tools like Github, Jira, Atlassian, and more</p>
<p>Endpoint Engineering: Enable extreme automation for endpoint management with zero-touch deployment, observability (synthetic and real-time), provisioning/de-provisioning, and establishing standards / SLAs. Enforce security policies, configure &amp; manage security settings and ensure compliance across all endpoints and mobile devices. Success is measured in terms of end-user satisfaction and % of manual touch</p>
<p>Collaboration Management: Ensure we provide world class tools to our employees to be extremely productive and collaborative. This would include but not be limited to managing and scaling internal workplace products like Gmail, Slack, Atlassian, Moveworks, Glean, and more. Success is measured by user satisfaction</p>
<p>Identity &amp; Access Management: Manage the IAM team across IAM implementation, access standards enforcement, SLA management, and compliance with standards like FedRAMP, IL5, PCI, and more. This includes managing both internal and external identity providers. Success is measured by compliance, identity governance, and availability</p>
<p>Desired Success Outcomes</p>
<p>A high-performing enterprise engineering team capable of handling complex technical projects with agility and high quality</p>
<p>Well defined cloud strategy ensuring the stability, scalability, and security of cloud infrastructure. Overhaul of current processes and workflows to address inefficiencies and increase team velocity</p>
<p>Robust endpoint security, with implementation of comprehensive security measures for all endpoints, including Mac, Windows, and mobile devices</p>
<p>Deliver high-quality employee experience with productivity tools (Gmail, Slack, Atlassian tools, Moveworks, GitHub) with a robust forward-looking roadmap</p>
<p>Efficient operational support for Tier 3 IT services with minimized production incidents. Implementation of robust incident and change management processes with mature operational practice</p>
<p>Efficient and mature processes for system integrations related to Mergers and Acquisitions (M&amp;As), ensuring timely, smooth transitions during M&amp;A integrations</p>
<p>Development and implementation of automation tools and frameworks, and identification of automation opportunities to reduce manual toil and improve accuracy</p>
<p>Qualifications:</p>
<p>10 years of experience managing cloud infrastructure at large enterprises. Extensive experience managing public cloud implementations in AWS. Experience with GCP and Azure is a plus</p>
<p>In-depth understanding of Cloud native technologies to lead and guide the team. Must have hands-on experience in troubleshooting and debugging issues in production environments</p>
<p>Working experience managing DevOps/SRE practices: OKRs (Objectives and Key Results), Agile development, infrastructure-as-code, SRE (Site Reliability Engineering), and DevOps measurement such as DORA KPIs</p>
<p>In-depth understanding of each collaboration tool&#39;s features, functionalities, and configurations (e.g., Gmail for email, Slack for messaging). Ability to identify, integrate, and optimize the use of various tools for seamless collaboration (e.g., connecting Jira with GitHub for dev metrics)</p>
<p>Experience leading a team of senior professionals working asynchronously in a remote, distributed team. Strong communication skills, with clear verbal communication and written communication skills</p>
<p>Collaborative style: partners well with cross-functional teams to solve hard problems and to complete complex deliverables with quality and business outcomes</p>
<p>Provide mentorship and guidance to team members to ensure that their skills and knowledge are kept up-to-date</p>
<p>Pay Range Transparency</p>
<p>Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role is listed below and represents the expected salary range for non-commissionable roles or on-target earnings for commissionable roles. Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location. Based on the factors above, Databricks anticipates utilizing the full width of the range. The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above. For more information regarding which range your location is in visit our page here.</p>
<p>Zone 1 Pay Range $265,000-$364,300 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>executive</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$265,000-$364,300 USD</Salaryrange>
      <Skills>Cloud infrastructure, Identity and Access Management, Endpoint security, Collaboration tools, DevOps, Site Reliability Engineering, Agile development, Infrastructure as Code, Observability, Automation, Scripting languages, Cloud native technologies, Public cloud implementations, AWS, GCP, Azure, Jira, Confluence, GitHub, Atlassian, Moveworks, Glean, Slack, Gmail, Microsoft Office</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data, analytics, and AI.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/7293607002</Applyto>
      <Location>San Francisco, California</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>da439e6e-91e</externalid>
      <Title>Senior Commercial Account Executive, Israel</Title>
<Description><![CDATA[<p>About Us</p>
<p>At Cloudflare, we are on a mission to help build a better Internet. Today the company runs one of the world’s largest networks that powers millions of websites and other Internet properties for customers ranging from individual bloggers to SMBs to Fortune 500 companies.</p>
<p>We protect and accelerate any Internet application online without adding hardware, installing software, or changing a line of code. Internet properties powered by Cloudflare all have web traffic routed through its intelligent global network, which gets smarter with every request. As a result, they see significant improvement in performance and a decrease in spam and other attacks.</p>
<p>About this Role</p>
<p>The Senior Commercial Account Executive owns the full sales cycle, from prospecting to negotiating and closing sales with new &amp; existing customers, in line with business plans. You will identify and progress cross-sell opportunities to maximise revenue goals, selling new products and generating additional sales revenue through effective sales outreach activity.</p>
<p>Main Responsibilities:</p>
<ul>
<li>Develop and execute a comprehensive account/territory plan to achieve quarterly sales and annual revenue targets in a defined territory and/or account list.</li>
<li>Drive new business acquisition (new customer logos), customer expansion (upsell and cross-sell Cloudflare solutions), and renewal within your territory.</li>
<li>Build a robust sales pipeline through continual engagement and nurturing of key prospect accounts.</li>
<li>Understand customer use-cases and how they pair with Cloudflare’s portfolio solutions in order to identify new sales opportunities.</li>
<li>Craft and communicate compelling value propositions for Cloudflare services. Drive awareness through regular outbound campaigns on product and feature roadmap updates.</li>
<li>Effectively scale the territory with partners - Accurately forecast commercial outcomes by running a consistent sales process, including driving next step expectations and contract negotiations.</li>
<li>As a trusted advisor, build long-term strategic relationships with key accounts, to ensure customer adoption, retention and expansion. Regularly evaluate usage trends and articulate value to show Cloudflare impact and provide strategic recommendations during business reviews.</li>
<li>Network across different business units with each of your accounts, and multi-thread to identify and engage new divisional buyers.</li>
<li>Position Cloudflare&#39;s platform in each of your target customers, including Cloudflare One and the Connectivity Cloud to realise our full potential in every customer.</li>
<li>Operate internally as a liaison with cross-functional teams to share key customer feedback and insights to improve customer experience and further investments with Cloudflare.</li>
</ul>
<p>Direct B2B sales experience, adept at new business acquisition and account management. Experience selling a technical, cloud-based product or service, working knowledge of the cloud infrastructure and security space, and a solid understanding of computer networking and how the Internet functions. Keenness for learning technical concepts and terms; a technical background in engineering, computer science, or MIS is advantageous.</p>
<p>Knowledge/Experience:</p>
<ul>
<li>Fluency in Hebrew</li>
<li>6+ years of B2B selling experience, selling Enterprise Software or SaaS (network security preferred) or Hardware solutions and services to Mid-Enterprise/Enterprise customers</li>
<li>Relevant direct experience, track record, and relationships within enterprise and mid-market accounts in the territory</li>
<li>New business &amp; expansion experience</li>
<li>Experience managing longer, complex sales cycles</li>
<li>Comfort in a fast-paced environment</li>
<li>Enterprise IT/Cyber Security background</li>
<li>Aptitude for learning technical concepts/terms (technical background in engineering, computer science, or MIS a plus)</li>
</ul>
<p>What Makes Cloudflare Special?</p>
<p>We’re not just a highly ambitious, large-scale technology company. We’re a highly ambitious, large-scale technology company with a soul. Fundamental to our mission to help build a better Internet is protecting the free and open Internet.</p>
<p>Project Galileo: Since 2014, we&#39;ve equipped more than 2,400 journalism and civil society organisations in 111 countries with powerful tools to defend themselves against attacks that would otherwise censor their work, technology already used by Cloudflare’s enterprise customers, at no cost.</p>
<p>Athenian Project: In 2017, we created the Athenian Project to ensure that state and local governments have the highest level of protection and reliability for free, so that their constituents have access to election information and voter registration. Since the project began, we&#39;ve provided services to more than 425 local government election websites in 33 states.</p>
<p>1.1.1.1: We released 1.1.1.1 to help fix the foundation of the Internet by building a faster, more secure, and privacy-centric public DNS resolver. It is available publicly for everyone to use and is the first consumer-focused service Cloudflare has ever released.</p>
<p>Here’s the deal: we never, ever store client IP addresses. We will continue to abide by our privacy commitment and ensure that no user data is sold to advertisers or used to target consumers. Sound like something you’d like to be a part of? We’d love to hear from you!</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>B2B sales experience, New business acquisition and account management, Technical, cloud-based product or service, Cloud infrastructure and security space, Computer networking and Internet functioning, Fluency in Hebrew, Enterprise Software or SaaS (network security preferred), Hardware solutions and services to Mid-Enterprise/ Enterprise customers, Technical background in engineering, computer science, or MIS</Skills>
      <Category>Sales</Category>
      <Industry>Technology</Industry>
      <Employername>Cloudflare</Employername>
      <Employerlogo>https://logos.yubhub.co/cloudflare.com.png</Employerlogo>
      <Employerdescription>Cloudflare is a technology company that provides a network that powers millions of websites and other Internet properties.</Employerdescription>
      <Employerwebsite>https://www.cloudflare.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/cloudflare/jobs/7095765</Applyto>
      <Location>Hybrid</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>d5f768d1-df6</externalid>
      <Title>Full-Stack Engineer, AI Data Platform</Title>
      <Description><![CDATA[<p>Shape the Future of AI</p>
<p>At Labelbox, we&#39;re building the critical infrastructure that powers breakthrough AI models at leading research labs and enterprises. Since 2018, we&#39;ve been pioneering data-centric approaches that are fundamental to AI development, and our work becomes even more essential as AI capabilities expand exponentially.</p>
<p>We&#39;re the only company offering three integrated solutions for frontier AI development:</p>
<ul>
<li>Enterprise Platform &amp; Tools: Advanced annotation tools, workflow automation, and quality control systems that enable teams to produce high-quality training data at scale</li>
<li>Frontier Data Labeling Service: Specialized data labeling through Alignerr, leveraging subject matter experts for next-generation AI models</li>
<li>Expert Marketplace: Connecting AI teams with highly skilled annotators and domain experts for flexible scaling</li>
</ul>
<p>Why Join Us</p>
<ul>
<li>High-Impact Environment: We operate like an early-stage startup, focusing on impact over process. You&#39;ll take on expanded responsibilities quickly, with career growth directly tied to your contributions.</li>
<li>Technical Excellence: Work at the cutting edge of AI development, collaborating with industry leaders and shaping the future of artificial intelligence.</li>
<li>Innovation at Speed: We celebrate those who take ownership, move fast, and deliver impact. Our environment rewards high agency and rapid execution.</li>
<li>Continuous Growth: Every role requires continuous learning and evolution. You&#39;ll be surrounded by curious minds solving complex problems at the frontier of AI.</li>
<li>Clear Ownership: You&#39;ll know exactly what you&#39;re responsible for and have the autonomy to execute. We empower people to drive results through clear ownership and metrics.</li>
</ul>
<p>Role Overview</p>
<p>We’re looking for a Full-Stack AI Engineer to join our team, where you’ll build the next generation of tools for developing, evaluating, and training state-of-the-art AI systems. You will own features end to end, from user-facing experiences and APIs to backend services, data models, and infrastructure.</p>
<p>You’ll be at the heart of our applied AI efforts, with a particular focus on human-in-the-loop systems used to generate high-quality training data for Large Language Models (LLMs) and AI agents. This includes building a platform that enables us and our customers to create and evaluate data, as well as systems that leverage LLMs to assist with reviewing, scoring, and improving human submissions.</p>
<p>Your Impact</p>
<ul>
<li>Own End-to-End Product Features: Design, build, and ship complete workflows spanning frontend UI, APIs, backend services, databases, and production infrastructure.</li>
<li>Enable Human-in-the-Loop AI Training: Build systems that allow humans to efficiently create, review, and curate high-quality training and evaluation data used in AI model development.</li>
<li>Support RLHF and Preference Data Workflows: Design and implement tooling that supports RLHF-style pipelines, including task generation, human review, scoring, aggregation, and dataset versioning.</li>
<li>Leverage LLMs in the Review Loop: Build systems that use LLMs to assist human reviewers, such as automated checks, critiques, ranking suggestions, or quality signals, while maintaining human oversight.</li>
<li>Advance AI Evaluation: Design and implement evaluation frameworks and interactive tools for LLMs and AI agents across multiple data modalities (text, images, audio, video).</li>
<li>Create Intuitive, Reviewer-Focused Interfaces: Build thoughtful, efficient user interfaces (e.g., in React) optimized for high-throughput human review, quality control, and operational workflows.</li>
<li>Architect Scalable Data &amp; Service Layers: Design APIs, backend services, and data schemas that support large-scale data creation, review, and iteration with strong guarantees around correctness and traceability.</li>
<li>Solve Ambiguous, Real-World Problems: Translate loosely defined operational and research needs into practical, scalable, end-to-end systems.</li>
<li>Ensure System Reliability: Participate in on-call rotations to monitor, troubleshoot, and resolve issues across the full stack.</li>
<li>Elevate the Team: Improve engineering practices, development processes, and documentation. Share knowledge through technical writing and design discussions.</li>
</ul>
<p>What You Bring</p>
<ul>
<li>Bachelor’s degree in Computer Science, Data Engineering, or a related field.</li>
<li>2+ years of experience in a software or machine learning engineering role.</li>
<li>A proactive, product-focused mindset and a high degree of ownership, with a passion for building solutions that empower users.</li>
<li>Experience using frontend frameworks like React/Redux and backend systems and technologies like Python, Java, and GraphQL; familiarity with NodeJS and NestJS is a plus.</li>
<li>Knowledge of designing and managing scalable database systems, including relational databases (e.g., PostgreSQL, MySQL), NoSQL stores (e.g., MongoDB, Cassandra), and cloud-native solutions (e.g., Google Spanner, AWS DynamoDB).</li>
<li>Familiarity with cloud infrastructure like GCP (GCS, PubSub) and containerization (Kubernetes) is a plus.</li>
<li>Excellent communication and collaboration skills.</li>
<li>High proficiency in leveraging AI tools for daily development (e.g., Cursor, GitHub Copilot).</li>
<li>Comfort and enthusiasm for working in a fast-paced, agile environment where rapid problem-solving is key.</li>
</ul>
<p>Bonus Points</p>
<ul>
<li>Experience building tools for AI/ML applications, particularly for data annotation, monitoring, or agent evaluation.</li>
<li>Familiarity with data infrastructure components such as data pipelines, streaming systems, and storage architectures (e.g., Cloud Buckets, Key-Value Stores).</li>
<li>Previous experience with search engines (e.g., ElasticSearch).</li>
<li>Experience in optimizing databases for performance (e.g., schema design, indexing, query tuning) and integrating them with broader data workflows.</li>
</ul>
<p>Engineering at Labelbox</p>
<p>At Labelbox Engineering, we&#39;re building a comprehensive platform that powers the future of AI development. Our team combines deep technical expertise with a passion for innovation, working at the intersection of AI infrastructure, data systems, and user experience. We believe in pushing technical boundaries while maintaining high standards of code quality and system reliability. Our engineering culture emphasizes autonomous decision-making, rapid iteration, and collaborative problem-solving. We&#39;ve cultivated an environment where engineers can take ownership of significant challenges, experiment with cutting-edge technologies, and see their solutions directly impact how leading AI labs and enterprises build the next generation of AI systems.</p>
<p>Our Technology Stack</p>
<p>Our engineering team works with a modern tech stack designed for scalability, performance, and developer efficiency:</p>
<ul>
<li>Frontend: React.js with Redux, TypeScript</li>
<li>Backend: Node.js, TypeScript, Python, some Java &amp; Kotlin</li>
<li>APIs: GraphQL</li>
<li>Cloud &amp; Infrastructure: Google Cloud Platform (GCP), Kubernetes</li>
<li>Databases: MySQL, Spanner, PostgreSQL</li>
<li>Queueing / Streaming: Kafka, PubSub</li>
</ul>
<p>Labelbox strives to ensure pay parity across the organization and to discuss compensation transparently. The expected annual base salary range for United States-based candidates is below. This range is not inclusive of any potential equity packages or additional benefits. Exact compensation varies based on a variety of factors, including skills and competencies, experience, and geographical location.</p>
<p>Annual base salary range $130,000-$200,000 USD</p>
<p>Life at Labelbox</p>
<ul>
<li>Location: Join our dedicated tech hubs in San Francisco or Wrocław, Poland</li>
<li>Work Style: Hybrid model with 2 days per week in office, combining collaboration and flexibility</li>
<li>Environment: Fast-paced and high-intensity, perfect for ambitious individuals who thrive on ownership and quick decision-making</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$130,000-$200,000 USD</Salaryrange>
      <Skills>React, Redux, Node.js, TypeScript, Python, Java, GraphQL, MySQL, PostgreSQL, Spanner, Kafka, PubSub, GCP, Kubernetes, Cloud computing, Containerization, Database management, Cloud infrastructure, API design, Backend services, Data models, Infrastructure, AI tools, Cursor, GitHub Copilot, Data annotation, Monitoring, Agent evaluation, Data infrastructure, Data pipelines, Streaming systems, Storage architectures, Search engines, ElasticSearch, Database optimization, Schema design, Indexing, Query tuning</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Labelbox</Employername>
      <Employerlogo>https://logos.yubhub.co/labelbox.com.png</Employerlogo>
      <Employerdescription>Labelbox is a company that provides data-centric approaches for AI development.</Employerdescription>
      <Employerwebsite>https://www.labelbox.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/labelbox/jobs/5019254007</Applyto>
      <Location>San Francisco Bay Area</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>0f05d190-fce</externalid>
      <Title>Sr. Manager, Field Engineering - Digital Native Business</Title>
      <Description><![CDATA[<p>As the manager of the Digital Natives Solutions Architect (SA) team, you will focus on growing and developing a team of SAs, driving the adoption of the Databricks Platform at the fastest-growing tech companies.</p>
<p>You&#39;ll be responsible for leading the team in establishing best practices throughout the full lifecycle of the customers&#39; workloads. You will help each team member achieve success, productivity, and career growth. You will also represent Databricks as a technical leader with some of its most important customers.</p>
<p>This role will work in close collaboration with sales, services, product, and engineering to drive solutions and outcomes for these highly technical customers. You will utilize excellent communication skills to clearly explain and demonstrate complex solutions to both internal and external stakeholders.</p>
<p>A key responsibility of this role is to hire and develop a team of deeply technical Solutions Architects capable of guiding digital native customers across a wide range of data, analytical, and AI workloads.</p>
<p>Responsibilities:</p>
<ul>
<li>Hire and develop a team of deeply technical Solutions Architects capable of guiding digital native customers across a wide range of data, analytical, and AI workloads.</li>
<li>Adapt the SA team&#39;s skills and engagement model to match the needs of digital native customers.</li>
<li>Consistently meet or exceed targets by making sure the SA team knows how to technically qualify workloads, identify important use cases, build proof of concepts, and establish themselves as trusted advisors throughout the customer life-cycle.</li>
<li>Travel to customer sites for executive sessions, technical workshops, and relationship building.</li>
<li>Establish relationships across internal organizations (engineering, product, services, sales, etc.) to ensure the success of the customers and team.</li>
<li>Stay current with emerging Data and AI trends in the digital native tech sector.</li>
</ul>
<p>What we look for:</p>
<ul>
<li>7+ years of experience in the data space with a technical product (e.g., data warehousing, big data, cloud infrastructure, or machine learning).</li>
<li>5+ years of experience building and leading technical customer-facing teams: hiring, onboarding, and supporting team members in a high-growth environment.</li>
<li>A history of building a territory, growing strategic accounts, and exceeding targets.</li>
<li>The ability to inspire a team vision about the unique nature of the digital natives business.</li>
<li>A history of execution by managing workloads and consumption with sales, product, and engineering counterparts.</li>
<li>Experience owning executive alignment in accounts to guide strategic decisions.</li>
</ul>
<p>Pay Range Transparency</p>
<p>Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role is listed below and represents the expected salary range for non-commissionable roles or on-target earnings for commissionable roles.</p>
<p>Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location.</p>
<p>Based on the factors above, Databricks anticipates utilizing the full width of the range.</p>
<p>The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above.</p>
<p>For more information regarding which range your location is in, visit our page here.</p>
<p>Local Pay Range $192,100-$264,175 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$192,100-$264,175 USD</Salaryrange>
      <Skills>data warehousing, big data, cloud infrastructure, machine learning, technical product, digital native customers, data, analytical, and AI workloads, Solutions Architects, customer-facing teams, hiring, onboarding, and supporting team members, high-growth environment, executive alignment, accounts that guide strategic decisions</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data, analytics, and AI. It was founded by the original creators of Lakehouse, Apache Spark, Delta Lake, and MLflow.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8496009002</Applyto>
      <Location>Colorado; Remote - California; Remote - Oregon; Remote - Washington</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>e22b8bd1-f7a</externalid>
      <Title>Staff Product Manager, Serverless Workspaces</Title>
<Description><![CDATA[<p>At Databricks, we are building the world&#39;s best data and AI infrastructure platform to enable data teams to solve the world&#39;s toughest problems. The Serverless Workspaces team is the engine behind Databricks&#39; shift from a &#39;configure-first&#39; to a &#39;use-now&#39; platform. We are redefining the customer onboarding experience by removing the heavy lifting of cloud infrastructure: no complicated networking, storage, or cluster configuration, just instant access to data and AI.</p>
<p>You will own the strategy for this next-generation platform layer, balancing the simplicity of a SaaS experience with the control enterprise customers demand. The impact you will have:</p>
<ul>
<li>Drive the transition to Serverless: Lead the strategy to unify the onboarding journey across serverless and classic workspaces and drive 10X serverless usage in the next year</li>
<li>Democratize Workspace Creation: Design and ship flows that allow users to spin up workspaces instantly with little friction while maintaining strict governance guardrails and company policies</li>
<li>Redefine the &#39;Getting Started&#39; experience: Lower the barrier to entry by removing the requirement for customers to manage detailed cloud infrastructure configurations before using Databricks, while allowing them to dial those in when they&#39;re ready</li>
<li>Solve &#39;Workspace Proliferation&#39;: Help define the tools and policies that allow Admins to confidently govern a growing number of workspaces across the enterprise</li>
<li>Unify the Data Estate: Work closely with the Unity Catalog and Identity teams to ensure that these new serverless environments seamlessly integrate with a customer&#39;s existing data and security models</li>
</ul>
<p>What we look for:</p>
<ul>
<li>7+ years of experience as a Product Manager working on cloud infrastructure, developer platforms, or SaaS foundations</li>
<li>Technical depth in Cloud Infrastructure: Familiarity with AWS, Azure, or GCP resource management (e.g. networking, compute, identity) and how to abstract that complexity for end-users</li>
<li>Passion for simplification: A track record of taking complex technical workflows (like configuring a VPC or peering) and turning them into &#39;one-click&#39; consumer-grade experiences</li>
<li>Data-driven mindset: Comfortable defining and tracking KPIs, such as &#39;Time to First Workspace&#39; or &#39;Serverless Adoption Rate,&#39; to measure success</li>
</ul>
<p>Pay Range Transparency</p>
<p>Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role is listed below and represents the expected salary range for non-commissionable roles or on-target earnings for commissionable roles. Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location. Based on the factors above, Databricks anticipates utilizing the full width of the range. The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$181,700-$249,800 USD</Salaryrange>
      <Skills>Cloud Infrastructure, Developer Platforms, SaaS Foundations, AWS, Azure, GCP, Networking, Compute, Identity</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a data and AI infrastructure platform for customers to use deep data insights to improve their business.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8420607002</Applyto>
      <Location>San Francisco, California</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>193a44d6-056</externalid>
      <Title>Staff Full-Stack Software Engineer, (Forward Deployed), GPS</Title>
<Description><![CDATA[<p>We&#39;re seeking a Full Stack Software Engineer to join our Global Public Sector team. As a key member of our team, you&#39;ll collaborate directly with public sector counterparts to quickly build full-stack AI applications that solve their most pressing challenges and achieve meaningful impact for citizens.</p>
<p>You will:</p>
<ul>
<li>Serve as the lead technical strategist for public sector engagements, converting ambiguous mission requirements into robust architectural roadmaps and guiding onsite implementation</li>
<li>Architect the fundamental frameworks for production-grade AI applications, setting the gold standard for how interactive UIs, backend systems, and AI models are integrated at scale to deliver reliable outcomes</li>
<li>Guide the evolution of cloud infrastructure, ensuring security, global scalability, and long-term system integrity across all environments</li>
<li>Direct the development of core platforms and shared services, ensuring they solve cross-cutting needs for diverse global client use cases</li>
<li>Partner with cross-functional leadership to steer the technical roadmap, mentoring senior and junior staff and ensuring all products align with a cohesive, future-proof technical architecture</li>
<li>Bridge the gap between the field and the core platform by turning real-world client lessons into the reusable patterns that power the entire engineering team</li>
</ul>
<p>Ideally you&#39;d have:</p>
<ul>
<li>Master&#39;s or PhD in Computer Science, or equivalent deep industry experience architecting complex, distributed systems</li>
<li>10+ years of full-stack expertise across Python, Node.js, and React, with a proven track record of designing high-scale architectures on Kubernetes and global cloud infrastructures (AWS/Azure/GCP)</li>
<li>Expert ability to design and oversee production-grade ecosystems, ensuring world-class standards for system integrity, security, and long-term scalability</li>
<li>Extensive experience deploying and troubleshooting sophisticated end-to-end solutions directly within complex, high-security client environments</li>
<li>A self-driven leader capable of resolving extreme ambiguity, mentoring senior staff, and setting the technical vision for the organization</li>
<li>A driver of asynchronous workflows and documentation-first cultures to streamline global engineering velocity and reduce friction</li>
<li>Proficient in Arabic</li>
</ul>
<p>Nice to haves:</p>
<ul>
<li>Past experience working at a startup as a CTO or founding engineer or in a forward deployed engineer / dedicated customer engineer role</li>
<li>Experience working cross functionally with operations</li>
<li>Proven track record of building LLM-driven solutions with the strategic foresight to anticipate landscape shifts and architect future-proof systems.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, Node.js, React, Kubernetes, Cloud infrastructure, AI, Machine learning, Distributed systems, Cloud computing, Security, Arabic, LLM-driven solutions, Startup experience, CTO or founding engineer experience, Cross-functional experience with operations</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions.</Employerdescription>
      <Employerwebsite>https://scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4676610005</Applyto>
      <Location>London, UK</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>588dfb0e-611</externalid>
      <Title>Solutions Architect - Kubernetes</Title>
      <Description><![CDATA[<p>As a Solutions Architect at CoreWeave, you will play a vital role in helping customers succeed with our cloud infrastructure offerings, focusing on Kubernetes solutions within high-performance compute (HPC) environments.</p>
<p>Your responsibilities will include:</p>
<ul>
<li>Serving as the primary technical point of contact for customers, establishing strong technical relationships and ensuring their success with CoreWeave&#39;s cloud infrastructure offerings</li>
<li>Collaborating closely with customers to understand their unique business needs and creating, prototyping, and deploying tailored solutions that align with their requirements</li>
<li>Leading proof-of-concept initiatives to showcase the value and viability of CoreWeave&#39;s solutions within specific environments</li>
<li>Driving technical leadership and direction during customer meetings, presentations, and workshops, addressing any technical queries or concerns that arise</li>
<li>Acting as a virtual member of CoreWeave&#39;s Kubernetes product and engineering teams, identifying opportunities for product enhancement and collaborating with engineers to implement your suggestions</li>
<li>Offering valuable insights on product features, functionality, and performance, contributing regularly to discussions about product strategy and architecture</li>
<li>Conducting periodic technical reviews and assessments of customer workloads, pinpointing opportunities for workload optimization and suggesting suitable solutions</li>
<li>Staying informed of the latest developments and trends in Kubernetes, cloud computing, and infrastructure, sharing your thought leadership with customers and internal stakeholders</li>
<li>Leading the prototyping and initiation of research and development efforts for emerging products and solutions, delivering prototypes and key insights for internal consumption</li>
<li>Representing CoreWeave at conferences and industry events, with occasional travel as required</li>
</ul>
<p>To be successful in this role, you will need:</p>
<ul>
<li>A B.S. in Computer Science or a related technical discipline, or equivalent experience</li>
<li>7+ years of proven experience as a Solutions Architect, engineer, researcher, or technical account manager in cloud infrastructure, focusing on building distributed systems or HPC/cloud services, with expertise in scalable Kubernetes solutions</li>
<li>Fluency in cloud computing concepts, architecture, and technologies, with hands-on experience designing and implementing cloud solutions</li>
<li>A proven track record of building customer relationships, communicating clearly, and breaking down complex technical concepts for both technical and non-technical audiences</li>
<li>Familiarity with NVIDIA GPUs typically used in AI/ML applications and associated technologies such as InfiniBand and the NVIDIA Collective Communications Library (NCCL)</li>
<li>Experience running large-scale Artificial Intelligence/Machine Learning (AI/ML) training and inference workloads on technologies such as Slurm and Kubernetes</li>
</ul>
<p>Preferred qualifications include code contributions to open-source inference frameworks, experience with scripting and automation related to Kubernetes clusters and workloads, experience building solutions across multi-cloud environments, and client- or customer-facing publications/talks on latency, optimization, or advanced model-server architectures.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$165,000 to $220,000</Salaryrange>
      <Skills>Kubernetes, Cloud Computing, High-Performance Compute (HPC), Distributed Systems, Cloud Infrastructure, Scalable Solutions, NVIDIA GPUs, Infiniband, NVIDIA Collective Communications Library (NCCL), Slurm, Kubernetes Clusters, Code Contributions to Open-Source Inference Frameworks, Scripting and Automation Related to Kubernetes Clusters and Workloads, Building Solutions Across Multi-Cloud Environments, Client or Customer-Facing Publications/Talks on Latency, Optimization, or Advanced Model-Server Architectures</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave is a cloud infrastructure provider that offers a platform for building and scaling AI workloads.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4557835006</Applyto>
      <Location>Livingston, NJ / New York, NY / Sunnyvale, CA / Bellevue, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>af586166-0a0</externalid>
      <Title>Technical Solutions Specialist, Data Operations</Title>
      <Description><![CDATA[<p>In Data Operations on the Strategic Data Partnerships team at Anthropic, you will support a cross-functional team in implementing partnership strategies to improve Anthropic’s products. You’ll ensure data meets our standards and reaches the right teams, build systems to track compliance and data usage across the portfolio, and coordinate across Research, Product, Legal, and external partners to remove barriers and accelerate impact.</p>
<p>This role requires operational excellence combined with technical hands-on execution, and is a great fit for someone who wants to apply those skills in a high-impact, fast-growth context.</p>
<p>Responsibilities:</p>
<p>Data Opportunity Assessment and Processing</p>
<ul>
<li>Analyze and review incoming or prospective data to verify it is useful and strategic for Anthropic</li>
<li>Own and maintain Python-based ETL pipelines that process large partner datasets, applying filtering criteria and deduplicating against existing data</li>
<li>Write and optimize SQL queries against large relational databases to support filtering and analysis workflows</li>
<li>Refine processing logic as requirements evolve across new data types and formats</li>
</ul>
<p>Data Delivery Infrastructure, Tooling, and Support</p>
<ul>
<li>Own end-to-end data delivery workflows, ensuring data moves seamlessly from partners to internal teams to accelerate time-to-impact</li>
<li>Manage AWS and GCP resources for receiving and organizing partner data deliveries</li>
<li>Troubleshoot delivery issues, coordinate with partners on formatting and transfer protocols, and resolve technical escalations from partners and internal teams</li>
<li>Build and maintain internal systems, scripts, and automation that support the team’s workflows</li>
<li>Support occasional research evaluation tasks as needed</li>
</ul>
<p>Data Operations and Governance</p>
<ul>
<li>Develop and maintain Anthropic&#39;s preferred standards for receiving, consuming and cataloging data, ensuring alignment with Product and Engineering&#39;s evolving needs</li>
<li>Contribute to systems for monitoring data usage and compliance with partner agreements</li>
<li>Partner with teammates and cross-functional stakeholders to build out governance practices as the team scales</li>
</ul>
<p>You May Be a Good Fit If You Have</p>
<ul>
<li>A Bachelor’s degree in Engineering, Computer Science, a related field, or equivalent practical experience</li>
<li>5-7+ years of experience with data pipelines or data engineering workflows</li>
<li>A background in solutions engineering, partner engineering, or a related role at a large tech company</li>
<li>5+ years of experience in technical troubleshooting or writing code in one or more programming languages</li>
<li>Proficiency in Python and SQL, including writing, debugging, and optimizing scripts and queries against large datasets</li>
<li>Hands-on experience with cloud infrastructure (AWS, GCP, or Azure), including managing storage, configuring access, and working from the CLI</li>
<li>Excellent problem-solving skills with a track record of debugging technical issues, whether at the code level or within a broader system</li>
<li>Some experience interacting with external third parties delivering data</li>
</ul>
<p>Strong Candidates Will Have</p>
<ul>
<li>Experience working alongside technical teams (research, engineering, or product) to solve ambiguous problems</li>
<li>Ability to translate technical concepts into clear, actionable guidance for non-technical stakeholders or external partners</li>
<li>Experience owning or maintaining a production service or system with uptime expectations</li>
<li>Familiarity with data governance, compliance, or rights management</li>
<li>Ability to manage multiple, time-sensitive projects simultaneously and the drive to take a project from an initial idea to full completion</li>
<li>Experience leveraging AI to automate workflows</li>
</ul>
<p>Candidates Need Not Have</p>
<ul>
<li>Deep expertise in AI or machine learning</li>
<li>A pure software engineering background</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$205,000-$240,000 USD</Salaryrange>
      <Skills>Python, SQL, Cloud infrastructure (AWS, GCP, or Azure), Data pipelines, Data engineering workflows, Solutions engineering, Partner engineering</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a technology company focused on creating reliable, interpretable, and steerable AI systems. It employs a team of researchers, engineers, policy experts, and business leaders.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5056499008</Applyto>
      <Location>San Francisco, CA | New York City, NY</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>fd6d120d-6ff</externalid>
      <Title>Senior Platform Software Engineer, Transport</Title>
      <Description><![CDATA[<p>About Us</p>
<p>We&#39;re looking for a Senior Platform Software Engineer to join our Transport team, which is at the core of our evolution towards a resilient and scalable cloud future. As a member of this team, you&#39;ll design, build, and operate the foundational platform that allows our services to run in an isolated, highly available, and globally distributed fashion.</p>
<p>As a Senior Platform Software Engineer, you&#39;ll have an outsized impact on every dbt Labs customer, tackling complex distributed systems problems while collaborating across product engineering, security, and infrastructure teams. This is a hands-on role where whatever you work on touches all of dbt Cloud and all of our customers at the same time.</p>
<p>In this role, you can expect to:</p>
<ul>
<li>Join a senior, distributed team: Become part of a close-knit group of senior engineers at the intersection of application and infrastructure, working asynchronously with ongoing communication in public Slack channels.</li>
<li>Architect and build platform infrastructure: Design, build, and operate foundational components of our multi-cell platform, including service routing, cloud networking, and the control plane for managing account lifecycles.</li>
<li>Drive seamless migrations: Develop and automate the tooling to migrate customer accounts from legacy environments to the new multi-cell architecture at scale.</li>
<li>Develop scalable backend services: Write robust, high-quality backend services and infrastructure code, primarily in Go and Python, with opportunities to work with Rust.</li>
<li>Tackle cloud networking challenges: Collaborate on network architecture design, including VPC management, load balancing, DNS, PrivateLink, and service mesh configurations to support single-tenant and multi-tenant deployments.</li>
<li>Automate for scale: Design and implement automation using tools like Argo Workflows, Kubernetes, and Terraform to enhance the reliability, efficiency, and scalability of our platform.</li>
<li>Collaborate and mentor: Work closely with product engineering teams, security, and customer support to unblock feature conformance, define technical direction, and mentor other engineers.</li>
<li>Own and troubleshoot: Take strong ownership of distributed systems, troubleshoot complex issues across application and network layers, and participate in an on-call rotation to maintain high availability.</li>
</ul>
<p>You are a good fit if you:</p>
<ul>
<li>Have worked asynchronously as part of a fully-remote, distributed team.</li>
<li>Are an experienced backend or platform engineer, proficient in languages like Go or Python, with a history of building large-scale distributed systems.</li>
<li>Have deep expertise in modern cloud infrastructure, including extensive hands-on experience with a major cloud provider (AWS, GCP, or Azure), containerization (Docker, Kubernetes), and Infrastructure as Code (Terraform).</li>
<li>Thrive at the intersection of product and infrastructure, with a passion for building internal platforms and automation that enhance developer productivity and platform reliability.</li>
<li>Bring familiarity with cloud networking concepts, including load balancing, DNS, VPCs, proxies, and service mesh technologies, or have a strong desire to learn and grow in this domain.</li>
<li>Take strong ownership of your work from end to end, demonstrating a systematic, customer-focused approach to problem-solving and a track record of contributing to complex technical projects.</li>
<li>Are a proactive and collaborative communicator, skilled at articulating technical concepts to both technical and non-technical partners and working effectively across team boundaries.</li>
</ul>
<p>You&#39;ll have an edge if you have:</p>
<ul>
<li>Direct experience with cell-based or multi-tenant architectures, particularly with building tooling for large-scale account migrations.</li>
<li>A proven track record of building internal developer platforms or self-service infrastructure that empowers other engineers.</li>
<li>Hands-on experience with cloud networking tools such as nginx, Istio, Envoy, AWS Transit Gateway, PrivateLink, or Kubernetes CNI/service mesh implementations.</li>
<li>Deep expertise in multi-cloud strategies, including tools for cross-cloud management and cost optimization.</li>
<li>Advanced proficiency with our core technologies, including extensive professional experience with both Go and Python, and an interest in or exposure to Rust.</li>
<li>Advanced industry certifications (e.g., AWS Certified Solutions Architect – Professional, AWS Advanced Networking Specialty, Certified Kubernetes Administrator) or contributions to open-source cloud-native projects.</li>
</ul>
<p>Qualifications</p>
<ul>
<li>5+ years of professional software engineering experience, particularly in platform, infrastructure, or backend roles supporting SaaS applications.</li>
<li>A Bachelor&#39;s degree in Computer Science or a related technical field is preferred, though equivalent practical experience or bootcamp completion with relevant work history will be considered.</li>
</ul>
<p><strong>Compensation &amp; Benefits</strong></p>
<p>Salary: We offer competitive compensation packages commensurate with experience, including salary, equity, and where applicable, performance-based pay. Our Talent Acquisition Team can answer questions around dbt Labs&#39; total rewards during your interview process.</p>
<p>In select locations (including Boston, Chicago, Denver, Los Angeles, Philadelphia, New York Metro, San Francisco, DC Metro, Seattle, Austin), an alternate range may apply, as specified below.</p>
<ul>
<li>The typical starting salary range for this role is: $147,000 - $178,000 USD</li>
<li>The typical starting salary range for this role in the select locations listed is: $163,000 - $198,000 USD</li>
</ul>
<p>Equity Stake Benefits</p>
<ul>
<li>dbt Labs offers: unlimited vacation, 401k w/3% guaranteed contribution, excellent healthcare, paid parental leave, wellness stipend, home office stipend, and more!</li>
<li>Equity or comparable benefits may be offered depending on legal limitations</li>
</ul>
<p><strong>Our Hiring Process (All Video Interviews)</strong></p>
<ul>
<li>Interview with a Talent Acquisition Partner (30 Mins)</li>
<li>Technical Interview with Hiring Manager (60 Mins)</li>
<li>Team Interviews with Cross Collaborators (4 rounds, 45 Mins each)</li>
<li>Final Values Interview (30 Mins)</li>
</ul>
<p>dbt Labs is an equal opportunity employer, committed to building an inclusive team that welcomes diverse perspectives, backgrounds, and experiences. Even if your experience doesn’t perfectly align with the job description, we encourage you to apply; we value potential just as much as a perfect resume. Want to learn more about our focus on Diversity, Equity and Inclusion at dbt Labs? Check out our DEI page.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$147,000 - $178,000 USD</Salaryrange>
      <Skills>Go, Python, Rust, Cloud infrastructure, Containerization, Infrastructure as Code, Cloud networking, Load balancing, DNS, VPCs, Proxies, Service mesh technologies, Cell-based or multi-tenant architectures, Building tooling for large-scale account migrations, Cloud networking tools, Multi-cloud strategies, Cross-cloud management and cost optimization</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>dbt Labs</Employername>
      <Employerlogo>https://logos.yubhub.co/getdbt.com.png</Employerlogo>
      <Employerdescription>dbt Labs is a pioneering analytics engineering platform that helps data teams transform raw data into reliable, actionable insights. It has grown from an open source project into a leading platform used by over 90,000 teams every week.</Employerdescription>
      <Employerwebsite>https://www.getdbt.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/dbtlabsinc/jobs/4685888005</Applyto>
      <Location>US - Remote</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>c38cbb6f-4b7</externalid>
      <Title>Staff Software Engineer, Inference</Title>
<Description><![CDATA[<p>Job Title: Staff Software Engineer, Inference</p>
<p>Location: Dublin, IE</p>
<p>Department: Software Engineering - Infrastructure</p>
<p>About Anthropic</p>
<p>Anthropic&#39;s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole.</p>
<p>About the role:</p>
<p>Our Inference team is responsible for building and maintaining the critical systems that serve Claude to millions of users worldwide. We bring Claude to life by serving our models via the industry&#39;s largest compute-agnostic inference deployments. We are responsible for the entire stack, from intelligent request routing to fleet-wide orchestration across diverse AI accelerators.</p>
<p>The team has a dual mandate: maximizing compute efficiency to serve our explosive customer growth, while enabling breakthrough research by giving our scientists the high-performance inference infrastructure they need to develop next-generation models. We tackle complex distributed systems challenges across multiple accelerator families and emerging AI hardware running in multiple cloud platforms.</p>
<p>As a Staff Software Engineer on our Inference team, you will work end to end, identifying and addressing key infrastructure blockers to serve Claude to millions of users while enabling breakthrough AI research. Strong candidates should have familiarity with performance optimization, distributed systems, large-scale service orchestration, and intelligent request routing. Familiarity with LLM inference optimization, batching strategies, and multi-accelerator deployments is highly encouraged but not strictly necessary.</p>
<p>Strong candidates may also have experience with:</p>
<ul>
<li>High-performance, large-scale distributed systems</li>
<li>Implementing and deploying machine learning systems at scale</li>
<li>Load balancing, request routing, or traffic management systems</li>
<li>LLM inference optimization, batching, and caching strategies</li>
<li>Kubernetes and cloud infrastructure (AWS, GCP)</li>
<li>Python or Rust</li>
</ul>
<p>You may be a good fit if you:</p>
<ul>
<li>Have significant software engineering experience, particularly with distributed systems</li>
<li>Are results-oriented, with a bias towards flexibility and impact</li>
<li>Pick up slack, even if it goes outside your job description</li>
<li>Want to learn more about machine learning systems and infrastructure</li>
<li>Thrive in environments where technical excellence directly drives both business results and research breakthroughs</li>
<li>Care about the societal impacts of your work</li>
</ul>
<p>Representative projects across the org:</p>
<ul>
<li>Designing intelligent routing algorithms that optimize request distribution across thousands of accelerators</li>
<li>Autoscaling our compute fleet to dynamically match supply with demand across production, research, and experimental workloads</li>
<li>Building production-grade deployment pipelines for releasing new models to millions of users</li>
<li>Integrating new AI accelerator platforms to maintain our hardware-agnostic competitive advantage</li>
<li>Contributing to new inference features (e.g., structured sampling, prompt caching)</li>
<li>Supporting inference for new model architectures</li>
<li>Analyzing observability data to tune performance based on real-world production workloads</li>
<li>Managing multi-region deployments and geographic routing for global customers</li>
</ul>
<p>Deadline to apply: None. Applications will be reviewed on a rolling basis.</p>
<p>The annual compensation range for this role is listed below. For sales roles, the range provided is the role&#39;s On Target Earnings (&quot;OTE&quot;) range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.</p>
<p>Annual Salary: €295.000-€355.000 EUR</p>
<p>Logistics</p>
<p>Minimum education: Bachelor&#39;s degree or an equivalent combination of education, training, and/or experience</p>
<p>Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience</p>
<p>Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position</p>
<p>Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</p>
<p>Visa sponsorship: We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</p>
<p>We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you&#39;re interested in this work. We think AI systems like the ones we&#39;re building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.</p>
<p>Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you&#39;re ever unsure about a communication, don&#39;t click any links; visit anthropic.com/careers directly for confirmed position openings.</p>
<p>How we&#39;re different</p>
<p>We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact, advancing our long-term goals of steerable, trustworthy AI, rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We&#39;re an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.</p>
<p>The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI &amp; Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.</p>
<p>Come work with us!</p>
<p>Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues.</p>
<p>Guidance on Candidates&#39; AI Usage: Learn about our policy for using AI in our application process</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>€295.000-€355.000 EUR</Salaryrange>
      <Skills>performance optimization, distributed systems, large-scale service orchestration, intelligent request routing, LLM inference optimization, batching strategies, multi-accelerator deployments, Kubernetes, cloud infrastructure, Python, Rust</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation that creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5150472008</Applyto>
      <Location>Dublin, IE</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>07b35bd1-4bf</externalid>
      <Title>Forward Deployed AI Engineering Manager, GenAI Applications</Title>
      <Description><![CDATA[<p>At Scale AI, we are not just building AI tools. We are pioneering the next era of enterprise AI.</p>
<p>As businesses rush to harness the potential of Generative AI, Scale is leading the way, transforming workflows, automating complex processes, and driving real-world impact for the world’s largest enterprises and government organizations.</p>
<p>Our Scale Generative AI Platform (SGP) powers production-grade GenAI applications with foundational services, APIs, and infrastructure that accelerate adoption across industries.</p>
<p>We are looking for a technical and strategic Engineering Manager to lead our European FDE team.</p>
<p>This is a high-ownership role at a pivotal moment. You will be responsible for delivering high-impact GenAI solutions in production, leading a team that works directly with customers, and ensuring we solve real problems with clarity, speed, and excellence.</p>
<p>Why this role is unique:</p>
<ul>
<li>Right place, right time: We are moving from prototypes to production at scale. Our FDE team is on the front lines of this transition, helping customers adopt AI faster and with more confidence.</li>
</ul>
<ul>
<li>Customer-first mindset: You will foster a culture of deep customer empathy and practical problem-solving. From scoping use cases to shipping solutions, your team will be responsible for every step of the delivery lifecycle.</li>
</ul>
<ul>
<li>Strategic influence: The lessons from forward-deployed efforts directly inform our core product roadmap. You will work closely with Product and Platform teams to identify patterns, prioritize improvements, and shape the evolution of SGP.</li>
</ul>
<ul>
<li>Operational excellence: You will bring structure to delivery, improve execution, and scale our engineering operations in a fast-moving environment.</li>
</ul>
<p>This is a rare opportunity to help define how the next generation of AI applications is built and deployed.</p>
<p>If you are excited by the pace of innovation in GenAI, passionate about solving real-world problems, and ready to lead a team that is redefining enterprise AI delivery, we want to hear from you.</p>
<p>At Scale, we do not just follow AI breakthroughs. We deliver them. Join us and be part of the team shaping the future of AI in the enterprise.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>engineering management, Generative AI, cloud infrastructure, DevOps, scalable platform architecture, strategic thinking, operational rigor, communication and collaboration skills, hands-on experience building or deploying AI-powered systems, understanding of how model behavior shapes user experience, leadership presence</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale AI</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale AI develops reliable AI systems for the world&apos;s most important decisions.</Employerdescription>
      <Employerwebsite>https://scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4589592005</Applyto>
      <Location>Berlin, Germany; London, UK</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>78ae8204-779</externalid>
      <Title>Senior Staff Software Engineer, Solana Staking Protocol</Title>
      <Description><![CDATA[<p>Ready to be pushed beyond what you think you’re capable of?</p>
<p>At Coinbase, our mission is to increase economic freedom in the world.</p>
<p>We&#39;re seeking a Senior Staff Software Engineer to serve as Coinbase&#39;s Solana Staking Protocol CTO, the definitive technical authority on all things Solana staking across the company.</p>
<p>This is not a typical engineering role. You will combine deep Solana protocol mastery with strategic technical leadership to shape Coinbase&#39;s Solana staking trajectory for years to come.</p>
<p>You will own the technical strategy across validator operations, staking integrations, and protocol evolution, partnering directly with engineering leadership, product teams, and external ecosystem players including the Solana Foundation.</p>
<p>You will represent Coinbase on the world stage as a recognized Solana expert, speaking at conferences, engaging with the validator community, and influencing protocol direction.</p>
<p>Internally, you will be the go-to expert for any Solana staking technical decision, from runtime-level optimizations to cross-product integration strategy.</p>
<p><strong>Responsibilities</strong></p>
<p><strong>Define Solana Staking Strategy</strong></p>
<p>Own and drive Coinbase&#39;s multi-year technical strategy for Solana staking across validator performance, protocol participation, and product integration.</p>
<p>Connect engineering decisions to business outcomes including yield optimization, cost efficiency, and customer growth.</p>
<p><strong>Maximize Validator Performance</strong></p>
<p>Lead the engineering effort to achieve industry-leading APY through validator optimization, including vote accuracy, block production, MEV strategies, commission tuning, and stake distribution.</p>
<p>Build systems and tooling that give Coinbase a durable performance edge.</p>
<p><strong>Own Protocol Expertise</strong></p>
<p>Serve as Coinbase&#39;s foremost authority on the Solana runtime, consensus mechanism, staking economics, and validator client landscape (Agave, Firedancer, etc.).</p>
<p>Evaluate protocol upgrades (e.g., SIMD proposals), assess risks, and proactively position Coinbase for changes before they land.</p>
<p><strong>Drive Cross-Product Integration</strong></p>
<p>Partner with Retail Staking and Institutional Staking product and engineering teams to architect scalable staking integrations across Coinbase&#39;s product surface area.</p>
<p>Ensure Solana staking is deeply embedded and differentiated in every Coinbase staking product.</p>
<p><strong>Build External Presence &amp; Influence</strong></p>
<p>Represent Coinbase in the Solana ecosystem.</p>
<p>Maintain deep relationships with the Solana Foundation, core development teams, other major validators, and ecosystem partners.</p>
<p>Speak at major conferences (Breakpoint, etc.) and contribute to protocol governance.</p>
<p>Be Coinbase&#39;s voice on Solana staking.</p>
<p><strong>Lead Technical Execution</strong></p>
<p>Write production code.</p>
<p>Design and build critical infrastructure for validator operations, monitoring, automation, and reliability.</p>
<p>Set the technical bar for the team: code reviews, architecture decisions, incident response.</p>
<p><strong>Expand Beyond Staking</strong></p>
<p>Serve as a technical advisor on non-staking Solana initiatives where deep protocol knowledge is required (e.g., Solana tax infrastructure, token programs, new Solana-based products).</p>
<p><strong>Mentor and Scale the Team</strong></p>
<p>Elevate a team of strong engineers (IC4-IC5) through mentorship, architectural guidance, and raising the bar on Solana-specific domain expertise.</p>
<p>Define what great Solana engineering looks like at Coinbase.</p>
<p><strong>Requirements</strong></p>
<p><strong>Deep Solana Protocol Expertise</strong></p>
<p>You have extensive, hands-on experience with Solana&#39;s architecture, e.g. the runtime, validator mechanics, staking economics, consensus (Tower BFT), Turbine, Gulf Stream, and the validator client ecosystem.</p>
<p>You understand Solana at the source-code level, not just the API level.</p>
<p><strong>Technical Authority &amp; Execution</strong></p>
<p>You are a strong IC7-caliber engineer.</p>
<p>You design and build complex distributed systems.</p>
<p>You write production code in Rust and/or Go.</p>
<p>You have deep experience with infrastructure at scale: bare metal, cloud, networking, observability.</p>
<p><strong>Strategic Vision</strong></p>
<p>You can define year-long technical strategies and connect them to business goals.</p>
<p>You break down ambiguous, large-scope problems into executable plans with measurable milestones.</p>
<p>You think in terms of competitive advantage, not just engineering correctness.</p>
<p><strong>Ecosystem Presence &amp; Influence</strong></p>
<p>You are a known figure in the Solana ecosystem.</p>
<p>You have existing relationships with the Solana Foundation, core contributor teams, and major validators.</p>
<p>You have a track record of public speaking, community engagement, or protocol governance participation.</p>
<p><strong>Cross-Functional Leadership</strong></p>
<p>You partner effectively with product, business, and executive stakeholders.</p>
<p>You translate complex protocol dynamics into business-relevant terms for non-technical audiences.</p>
<p>You drive alignment across multiple teams and functions.</p>
<p><strong>Passion for Solana</strong></p>
<p>This isn&#39;t a role for a generalist who happens to know some Solana.</p>
<p>You are genuinely passionate about the Solana ecosystem, follow protocol developments closely, and have a strong thesis on where Solana staking is headed.</p>
<p><strong>Ability to Responsibly Use Generative AI Tools</strong></p>
<p>Demonstrates the ability to responsibly use generative AI tools and copilots (e.g., LibreChat, Gemini, Glean) in daily workflows, continuously learn as tools evolve, and apply human-in-the-loop practices to deliver business-ready outputs and drive measurable improvements in efficiency, cost, and quality.</p>
<p><strong>Nice to Have</strong></p>
<p><strong>Core Contributor to Solana Validator Clients</strong></p>
<p>Core contributor to Solana validator clients (Agave, Firedancer) or significant Solana ecosystem projects.</p>
<p><strong>Experience Operating in Highly Regulated Industries</strong></p>
<p>Experience operating in highly regulated industries or security-first cultures.</p>
<p><strong>Background in Financial Services</strong></p>
<p>Background in financial services, fintech, or crypto custody.</p>
<p><strong>Track Record of Publishing Technical Content</strong></p>
<p>Track record of publishing technical content (blog posts, research, conference talks) on Solana or Blockchain in general.</p>
<p><strong>Experience with Solana&#39;s Evolving Staking Landscape</strong></p>
<p>Experience with Solana&#39;s evolving staking landscape: liquid staking, stake pools, restaking protocols.</p>
<p><strong>Familiarity with Other PoS Protocol Staking Operations</strong></p>
<p>Familiarity with other PoS protocol staking operations (Ethereum, Cosmos ecosystem) for comparative perspective.</p>
<p><strong>Pay Transparency Notice</strong></p>
<p>Depending on your work location, the target annual base salary for this position can range as detailed below.</p>
<p>Total compensation may also include equity, bonus eligibility, and benefits (including medical and dental).</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Solana, Rust, Go, Distributed Systems, Cloud Infrastructure, Networking, Observability, Validator Operations, Staking Integrations, Protocol Evolution, Cross-Product Integration, Technical Leadership, Strategic Vision, Cross-Functional Leadership, Generative AI Tools</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Coinbase</Employername>
      <Employerlogo>https://logos.yubhub.co/coinbase.com.png</Employerlogo>
      <Employerdescription>Coinbase is a cryptocurrency exchange and wallet service provider that operates globally.</Employerdescription>
      <Employerwebsite>https://www.coinbase.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coinbase/jobs/7684298</Applyto>
      <Location>Remote - USA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>86363ae6-10f</externalid>
      <Title>Manager, Field Engineering - Strategic Digital Native Business</Title>
      <Description><![CDATA[<p>As the manager of the Digital Natives Solutions Architect (SA) team, you will focus on growing and developing a team of SAs, driving the adoption of the Databricks Platform at the largest, fastest-growing tech companies.</p>
<p>You&#39;ll be responsible for leading the team in establishing best practices throughout the full lifecycle of the customers&#39; workloads. You will help each team member achieve success, productivity, and career growth. You will also represent Databricks as a technical leader with some of its most important customers.</p>
<p>This role will work in close collaboration with sales, services, product, and engineering to drive solutions and outcomes for these highly technical customers. You will utilize excellent communication skills to clearly explain and demonstrate complex solutions to both internal and external stakeholders.</p>
<p>Responsibilities:</p>
<ul>
<li>Hire and develop a team of deeply technical Solutions Architects capable of guiding digital native customers across a wide range of data, analytical, and AI workloads.</li>
<li>Adapt the SA team&#39;s skills and engagement model to match the needs of digital native customers.</li>
<li>Consistently meet or exceed targets by making sure the SA team knows how to technically qualify workloads, identify important use cases, build proofs of concept, and establish themselves as trusted advisors throughout the customer lifecycle.</li>
<li>Travel to customer sites for executive sessions, technical workshops, and relationship building.</li>
<li>Establish relationships across internal organizations (engineering, product, services, sales, etc.) to ensure the success of the customers and team.</li>
<li>Stay current with emerging Data and AI trends in the digital native tech sector.</li>
</ul>
<p>What we look for:</p>
<ul>
<li>4+ years of experience in the data space with a technical product (e.g. data warehousing, big data, cloud infrastructure, or machine learning)</li>
<li>3+ years of experience building and leading technical customer-facing teams: hiring, onboarding, and supporting team members in a high-growth environment</li>
<li>A history of building a territory, growing strategic accounts, and exceeding targets</li>
<li>An ability to inspire a team vision around the unique nature of the digital natives business</li>
<li>A history of execution, managing workloads and consumption with sales, product, and engineering counterparts</li>
<li>Experience owning executive alignment in accounts to guide strategic decisions</li>
</ul>
<p>Pay Range Transparency</p>
<p>Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role is listed below and represents the expected salary range for non-commissionable roles or on-target earnings for commissionable roles. Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location. Based on the factors above, Databricks anticipates utilizing the full width of the range. The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above.</p>
<p>For more information regarding which range your location is in visit our page here.</p>
<p>Local Pay Range $172,500-$237,150 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$172,500-$237,150 USD</Salaryrange>
      <Skills>data warehousing, big data, cloud infrastructure, machine learning, data analysis, AI</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a data intelligence platform. It was founded by the original creators of Apache Spark, Delta Lake, and MLflow, and pioneered the lakehouse architecture.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8458032002</Applyto>
      <Location>Remote - California; Remote - Colorado; Remote - Oregon; Remote - Washington</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>32c0c69a-037</externalid>
      <Title>Staff Software Engineer, Inference</Title>
      <Description><![CDATA[<p><strong>About the role:</strong></p>
<p>Our Inference team is responsible for building and maintaining the critical systems that serve Claude to millions of users worldwide. We bring Claude to life by serving our models via the industry&#39;s largest compute-agnostic inference deployments. We are responsible for the entire stack from intelligent request routing to fleet-wide orchestration across diverse AI accelerators.</p>
<p>As a Staff Software Engineer on our Inference team, you will work end to end, identifying and addressing key infrastructure blockers to serve Claude to millions of users while enabling breakthrough AI research. Strong candidates should have familiarity with performance optimization, distributed systems, large-scale service orchestration, and intelligent request routing. Familiarity with LLM inference optimization, batching strategies, and multi-accelerator deployments is highly encouraged but not strictly necessary.</p>
<p><strong>Responsibilities:</strong></p>
<ul>
<li>Work end to end on identifying and addressing key infrastructure blockers to serve Claude to millions of users while enabling breakthrough AI research</li>
<li>Collaborate with the team to design and implement solutions to complex problems</li>
<li>Develop and maintain large-scale distributed systems</li>
<li>Implement and deploy machine learning systems at scale</li>
<li>Build load balancing, request routing, and traffic management systems</li>
<li>Apply LLM inference optimization, batching, and caching strategies</li>
<li>Operate Kubernetes and cloud infrastructure (AWS, GCP)</li>
<li>Write production code in Python or Rust</li>
</ul>
<p><strong>Requirements:</strong></p>
<ul>
<li>Significant software engineering experience, particularly with distributed systems</li>
<li>Results-oriented, with a bias towards flexibility and impact</li>
<li>Willing to pick up slack, even if it goes outside your job description</li>
<li>Eager to learn more about machine learning systems and infrastructure</li>
<li>Able to thrive in environments where technical excellence directly drives both business results and research breakthroughs</li>
<li>Attentive to the societal impacts of your work</li>
</ul>
<p><strong>Benefits:</strong></p>
<ul>
<li>Competitive compensation and benefits</li>
<li>Optional equity donation matching</li>
<li>Generous vacation and parental leave</li>
<li>Flexible working hours</li>
<li>Lovely office space in which to collaborate with colleagues</li>
</ul>
<p><strong>Application Instructions:</strong></p>
<p>If you&#39;re interested in this role, please submit your application through our website. We look forward to hearing from you!</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>€295,000-€355,000 EUR</Salaryrange>
      <Skills>performance optimization, distributed systems, large-scale service orchestration, intelligent request routing, LLM inference optimization, batching strategies, multi-accelerator deployments, Kubernetes, cloud infrastructure, Python, Rust</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation that creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5150472008</Applyto>
      <Location>Dublin, IE</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>fe04c8cc-782</externalid>
      <Title>Forward Deployed Engineering Manager</Title>
      <Description><![CDATA[<p>Shape the Future of AI</p>
<p>At Labelbox, we&#39;re building the critical infrastructure that powers breakthrough AI models at leading research labs and enterprises. Since 2018, we&#39;ve been pioneering data-centric approaches that are fundamental to AI development, and our work becomes even more essential as AI capabilities expand exponentially.</p>
<p>We&#39;re the only company offering three integrated solutions for frontier AI development:</p>
<p>Enterprise Platform &amp; Tools: Advanced annotation tools, workflow automation, and quality control systems that enable teams to produce high-quality training data at scale</p>
<p>Frontier Data Labeling Service: Specialized data labeling through Alignerr, leveraging subject matter experts for next-generation AI models</p>
<p>Expert Marketplace: Connecting AI teams with highly skilled annotators and domain experts for flexible scaling</p>
<p>Why Join Us</p>
<p>High-Impact Environment: We operate like an early-stage startup, focusing on impact over process. You&#39;ll take on expanded responsibilities quickly, with career growth directly tied to your contributions.</p>
<p>Technical Excellence: Work at the cutting edge of AI development, collaborating with industry leaders and shaping the future of artificial intelligence.</p>
<p>Innovation at Speed: We celebrate those who take ownership, move fast, and deliver impact. Our environment rewards high agency and rapid execution.</p>
<p>Continuous Growth: Every role requires continuous learning and evolution. You&#39;ll be surrounded by curious minds solving complex problems at the frontier of AI.</p>
<p>Clear Ownership: You&#39;ll know exactly what you&#39;re responsible for and have the autonomy to execute. We empower people to drive results through clear ownership and metrics.</p>
<p>The role</p>
<p>We’re hiring a Forward Deployed Engineering Manager to lead the design, development, and delivery of reinforcement learning environments for agentic AI systems.</p>
<p>You’ll manage a team responsible for building sandboxed, reproducible environments (terminal-based workflows, browser automation, and computer-use simulations) that power both model training and human-in-the-loop evaluation. This is a hands-on leadership role where you’ll set technical direction, guide execution, and stay close to architecture and critical systems.</p>
<p>What You’ll Do</p>
<p>Lead, hire, and develop a high-performing team of Forward Deployed Engineers, setting a high bar for ownership, velocity, and technical quality</p>
<p>Own the RL environment roadmap, aligning team execution with customer needs and evolving model capabilities</p>
<p>Oversee development of sandboxed environments (terminal, browser, tool-augmented workspaces) that support deterministic execution and multi-step agent interaction</p>
<p>Ensure reliability, observability, and data integrity through strong instrumentation (logging, trajectory capture, state snapshotting)</p>
<p>Drive infrastructure excellence across containerization, sandboxing, CI/CD, automated testing, and monitoring</p>
<p>Partner cross-functionally with data operations, product, and leading AI labs to define task design, evaluation protocols, and environment requirements</p>
<p>Enable rapid prototyping and iteration, helping the team move from ambiguous requirements to production-ready systems quickly</p>
<p>Stay close to the technical details: reviewing architecture, unblocking complex issues, and guiding design decisions</p>
<p>What We’re Looking For</p>
<p>5+ years of software engineering experience (Python)</p>
<p>2+ years of experience managing or leading engineers in fast-paced environments</p>
<p>Strong experience with containerization and sandboxing (Docker, Firecracker, or similar)</p>
<p>Solid understanding of reinforcement learning fundamentals (MDPs, reward design, episode structure, observation/action spaces)</p>
<p>Background in infrastructure, developer tooling, or distributed systems</p>
<p>Strong debugging skills and systems thinking across layered, containerized environments</p>
<p>Ability to operate in ambiguity and translate loosely defined problems into clear execution plans</p>
<p>Excellent communication and stakeholder management skills</p>
<p>Preferred</p>
<p>Experience building or working with RL environments (Gym, PettingZoo) or agent benchmarks (SWE-bench, WebArena, OSWorld, TerminalBench)</p>
<p>Familiarity with cloud infrastructure (GCP or AWS)</p>
<p>Prior experience in AI/ML platforms, data companies, or research environments</p>
<p>Contributions to open-source projects in RL, agents, or developer tooling</p>
<p>Why This Role Matters</p>
<p>RL environment quality is a critical bottleneck in advancing agentic AI. Poorly designed or unreliable environments introduce noise into training loops and directly impact model performance.</p>
<p>In this role, you’ll lead the team building the environments that define how models learn, working across a range of cutting-edge projects with leading AI labs. Alignerr offers the speed and ownership of a startup with the scale and resources of Labelbox, giving you the opportunity to have outsized impact on the future of AI.</p>
<p>About Alignerr</p>
<p>Alignerr is Labelbox’s human data organization, powering next-generation AI through high-quality training data, reinforcement learning environments, and evaluation systems. We partner directly with leading AI labs to build the data and infrastructure that push model capabilities forward.</p>
<p>Life at Labelbox</p>
<p>Location: Join our dedicated tech hubs in San Francisco or Wrocław, Poland</p>
<p>Work Style: Hybrid model with 2 days per week in office, combining collaboration and flexibility</p>
<p>Environment: Fast-paced and high-intensity, perfect for ambitious individuals who thrive on ownership and quick decision-making</p>
<p>Growth: Career advancement opportunities directly tied to your impact</p>
<p>Vision: Be part of building the foundation for humanity&#39;s most transformative technology</p>
<p>Our Vision</p>
<p>We believe data will remain crucial in achieving artificial general intelligence. As AI models become more sophisticated, the need for high-quality, specialized training data will only grow. Join us in developing new products and services that enable the next generation of AI breakthroughs.</p>
<p>Labelbox is backed by leading investors including SoftBank, Andreessen Horowitz, B Capital, Gradient Ventures, Databricks Ventures, and Kleiner Perkins. Our customers include Fortune 500 enterprises and leading AI labs.</p>
<p>Any emails from Labelbox team members will originate from a @labelbox.com email address. If you encounter anything that raises suspicions during your interactions, we encourage you to exercise caution and suspend or discontinue communications.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$180,000-$220,000 USD</Salaryrange>
      <Skills>Software engineering experience (Python), Containerization and sandboxing (Docker, Firecracker, or similar), Reinforcement learning fundamentals (MDPs, reward design, episode structure, observation/action spaces), Infrastructure, developer tooling, or distributed systems, Debugging skills and systems thinking, Experience building or working with RL environments (Gym, PettingZoo) or agent benchmarks (SWE-bench, WebArena, OSWorld, TerminalBench), Familiarity with cloud infrastructure (GCP or AWS), Prior experience in AI/ML platforms, data companies, or research environments, Contributions to open-source projects in RL, agents, or developer tooling</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Labelbox</Employername>
      <Employerlogo>https://logos.yubhub.co/labelbox.com.png</Employerlogo>
      <Employerdescription>Labelbox is a data-centric AI development company that provides critical infrastructure for breakthrough AI models.</Employerdescription>
      <Employerwebsite>https://www.labelbox.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/labelbox/jobs/5101195007</Applyto>
      <Location>San Francisco Bay Area</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>e948a283-667</externalid>
      <Title>Staff Software Engineer, Platform Security</Title>
      <Description><![CDATA[<p>We are seeking a Staff Software Engineer to join our Platform Security Engineering team. As a key member of this team, you will be responsible for advancing our mission through security expertise, software development, and operational excellence.</p>
<p>In this technical leadership role, you will articulate and pursue the most leveraged opportunities to reduce security risk across Engineering, designing and building lovable &#39;paved paths&#39; for managing identities and access, shipping code, configuring cloud infrastructure, and operating services.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Developing and applying best-in-class secure baselines for cloud infrastructure</li>
<li>Securing first- and third-party software supply chains, from the dev environment through CI/CD and into production</li>
<li>Building and owning identity and access management (IAM) systems that are user-friendly and promote least privilege</li>
<li>Managing infrastructure vulnerabilities while supporting rapid growth for Engineering</li>
<li>Consulting on risk assessments, architectural designs, threat models, code reviews, and more, pragmatically balancing security with other business considerations</li>
</ul>
<p>Example projects include:</p>
<ul>
<li>Supporting IAM with scalable platform solutions</li>
<li>Building tooling to prevent and address vulnerabilities across our infrastructure</li>
<li>Integrating service-to-service authentication and authorization into Discord&#39;s internal developer platform</li>
</ul>
<p>What we look for in a candidate includes:</p>
<ul>
<li>5+ years of experience building and operating production systems or infrastructure</li>
<li>5+ years of experience writing software in a general-purpose programming language</li>
<li>4+ years of experience securing systems with millions of users</li>
<li>Experience mentoring junior ICs and leading technical projects involving multiple engineers and spanning multiple quarters</li>
<li>Experience designing and building software for customers (internal or external) beyond your immediate team</li>
<li>Experience securing cloud environments</li>
<li>Experience defining and orchestrating containers</li>
<li>Familiarity with build and CI/CD technologies</li>
<li>Understanding of modern authentication and authorization concepts</li>
</ul>
<p>Bonus points if you have experience developing and debugging distributed systems atop GCP and Cloudflare, leading complex migrations or risk management programs across an engineering organization, or managing and securing VMs or bare-metal hosts.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$248,000 to $279,000 + equity + benefits</Salaryrange>
      <Skills>cloud infrastructure, identity and access management, software development, operational excellence, security expertise, container orchestration, build and CI/CD technologies, modern authentication and authorization concepts, distributed systems, GCP and Cloudflare</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Discord</Employername>
      <Employerlogo>https://logos.yubhub.co/discord.com.png</Employerlogo>
      <Employerdescription>Discord is a communication platform used by over 200 million people every month for various purposes, including gaming.</Employerdescription>
      <Employerwebsite>https://discord.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/discord/jobs/8177912002</Applyto>
      <Location>San Francisco Bay Area or Remote</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>cd99a123-cbf</externalid>
      <Title>Security Software Engineer - Crypto Services</Title>
      <Description><![CDATA[<p>We&#39;re seeking a Security Software Engineer with a specialization in crypto services and key management to develop novel security tooling for securing our suite of products. The ideal candidate can develop, test, and debug embedded software with mission-critical security responsibilities.</p>
<ul>
<li>Design and develop cybersecurity tools for real-time embedded, embedded Linux, and Android systems</li>
<li>Design and develop resilient software supporting all phases of key handling on embedded systems, from key load through sanitization</li>
<li>Develop thorough testing and qualification procedures for security-critical components</li>
<li>Collaborate with cross-functional teams to identify specific security needs and implement solutions</li>
<li>Conduct code reviews and ensure adherence to security best practices</li>
<li>Stay updated on the latest security threats and technologies</li>
</ul>
<p>The salary range for this role is $146,000-$220,000 USD.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$146,000-$220,000 USD</Salaryrange>
      <Skills>Golang, Rust, C/C++, Embedded HSMs and Secure Elements, CI/CD and test automation, Debugging embedded systems, Security frameworks and compliance standards, Mobile development, Cloud infrastructure management, US Government key handling requirements</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anduril</Employername>
      <Employerlogo>https://logos.yubhub.co/anduril.com.png</Employerlogo>
      <Employerdescription>Anduril is a defense technology company that builds advanced hardware and software products, including security tooling for its suite of mission-critical systems.</Employerdescription>
      <Employerwebsite>https://www.anduril.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/andurilindustries/jobs/5086896007</Applyto>
      <Location>Boston, Massachusetts, United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>56a72069-e42</externalid>
      <Title>Staff+ Software Engineer, Backend</Title>
      <Description><![CDATA[<p><strong>About the Role</strong></p>
<p>Anthropic is looking for experienced, product-minded engineers to own the backend systems that power user experiences across our API, Claude Code, and Claude.ai.</p>
<p>You&#39;ll independently scope complex, multi-month projects through ambiguous problem spaces and lead peers through technical and product decisions; you&#39;ll drive alignment with product, peer engineering teams, and research to identify capability gaps and translate frontier model improvements into shipped products.</p>
<p>You&#39;ll make architectural decisions that affect the reliability and scalability of systems serving hundreds of thousands of global users (including internal teams), and design processes that help your team operate effectively and never fail the same way twice, all while staying hands-on with the code and our models.</p>
<p><strong>Responsibilities</strong></p>
<p><strong>API Core</strong></p>
<p>You&#39;ll build and scale the foundation of the Claude API: the systems that deliver Claude&#39;s intelligence to every developer, from startups to enterprise. You&#39;ll own the performance, reliability, and efficiency of our core serving path, ensuring users get the most speed and value from our models. You&#39;ll partner closely with inference and safeguards to optimise the full stack.</p>
<p><strong>API Capabilities</strong></p>
<p>You&#39;ll bring frontier model capabilities to developers through the Claude API, owning core features like vision, tool use, and computer use. You&#39;ll launch new models and ship the primitives that make Claude more capable with every release. You&#39;ll partner directly with research and inference to productionise what&#39;s next.</p>
<p><strong>API Knowledge</strong></p>
<p>You&#39;ll focus on transforming Claude into a true knowledge worker by ensuring the model has access to and understanding of the right knowledge at the right time. You&#39;ll work on making it possible for developers to securely give Claude access to their data while automatically processing and retrieving relevant information. You&#39;ll partner directly with research to bring state-of-the-art retrieval advancements to developers.</p>
<p><strong>Developer Experience</strong></p>
<p>You&#39;ll focus on building products and tools that enable developers to harness the full power of LLMs to create successful, reliable, and groundbreaking applications with ease. You&#39;ll build the tools to accelerate developers from idea to deployment. You&#39;ll help figure out how to leverage Claude to improve developers&#39; usage of the API, such as generating and evaluating prompts, while collaborating closely with the teams above to bring Claude&#39;s current and future capabilities to developers.</p>
<p><strong>API Agents</strong></p>
<p>You&#39;ll focus on building the infrastructure and APIs that enable developers to create powerful agentic applications within the Claude API. You&#39;ll help developers with agent orchestration through capabilities like tool use, multi-step reasoning, and long-running task execution that allow Claude to take actions and accomplish complex goals on behalf of users. You&#39;ll partner with research to bring cutting-edge agent capabilities to production.</p>
<p><strong>Enterprise Foundations</strong></p>
<p>We&#39;re looking for a software engineer to join our Enterprise Foundations team: the team that makes Claude enterprise-ready at scale. When a Fortune 500 company wants to roll out Claude to 100,000 employees, we&#39;re the team that makes it possible. You&#39;ll build the foundational systems that large organisations require before they can deploy AI at scale: user and permissions management, security and compliance features, and analytics infrastructure. This work directly converts product-market fit into revenue by removing the deployment blockers that prevent large organisations from adopting Claude broadly.</p>
<p><strong>Requirements</strong></p>
<ul>
<li>Have 8+ years of relevant experience as a backend or product engineer, with a track record of leading complex, multi-month projects or teams as a tech lead or equivalent</li>
<li>Have strong coding fundamentals and are comfortable working across backend systems, APIs, and integrations, and can reach into the frontend when needed to ship an effective solution</li>
<li>Have led the design and delivery of large-scale backend systems in production that power high-adoption B2B or consumer-facing products</li>
<li>Are skilled at driving alignment across technical and non-technical teams; you communicate clearly, influence technical decisions beyond your immediate team, and help others ramp effectively on your systems</li>
<li>Take a product-focused approach to your work and care about building solutions that are robust, scalable, and easy to use</li>
<li>Care deeply about investing in the mentorship and growth of your peers</li>
<li>Have experience with distributed systems, API design, and cloud infrastructure at scale</li>
<li>Thrive in fast-paced environments and can navigate ambiguity to find the highest-leverage path forward</li>
</ul>
<p><strong>Preferred Qualifications</strong></p>
<ul>
<li>Served as a technical lead or architect on a product or API platform, owning both the technical vision and execution end-to-end</li>
<li>Experience designing and scaling APIs with a focus on developer experience, consistency, and reliability, including API design review processes</li>
<li>Deep experience building enterprise SaaS platforms, including permissions infrastructure, billing and pricing systems, or compliance frameworks for regulated industries (SOC 2, HIPAA)</li>
<li>Background in a specific industry vertical (financial services, healthcare, or legal technology) with a track record of building products that handle sensitive, domain-specific data</li>
<li>Experience partnering with ML/AI research teams to productise model capabilities or identify and address model failure modes in production</li>
<li>Experience building agentic systems, orchestration frameworks, or developer tools, including CLI tools, IDE integrations, or AI-assisted coding environments</li>
<li>Experience building products where adoption and activation are core challenges: instrumenting funnels, diagnosing drop-off, and shipping the product changes that close gaps</li>
<li>Experience designing operational processes (incident response, on-call rotations, postmortem review) for production systems serving large-scale developer or enterprise audiences</li>
</ul>
<p><strong>Salary</strong></p>
<p>The annual compensation range for this role is listed below. For sales roles, the range provided is the role&#39;s On Target Earnings (&quot;OTE&quot;) range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.</p>
<p>Annual Salary: $405,000-$485,000 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$405,000-$485,000 USD</Salaryrange>
      <Skills>backend, product engineer, API design, cloud infrastructure, distributed systems, API design review processes, permissions infrastructure, billing and pricing systems, compliance frameworks, regulated industries, HIPAA, SOC 2, ML/AI research teams, model capabilities, model failure modes, agentic systems, orchestration frameworks, developer tools, CLI tools, IDE integrations, AI-assisted coding environments, adoption and activation, funnel instrumentation, drop-off diagnosis, product changes, operational processes, incident response, on-call rotations, postmortem review, technical lead, architect, product platform, API platform, technical vision, execution end-to-end, developer experience, consistency, reliability, enterprise SaaS platforms</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic creates reliable, interpretable, and steerable AI systems. It is a quickly growing organisation with a team of researchers, engineers, policy experts, and business leaders.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5174755008</Applyto>
      <Location>San Francisco, CA | New York City, NY | Seattle, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>6d4292d1-227</externalid>
      <Title>Software Engineer, Sandboxing (Systems)</Title>
      <Description><![CDATA[<p>We are seeking a Linux OS and System Programming Subject Matter Expert to join our Infrastructure team. In this role, you&#39;ll work on accelerating and optimizing our virtualization and VM workloads that power our AI infrastructure.</p>
<p>Your expertise in low-level system programming, kernel optimization, and virtualization technologies will be crucial in ensuring Anthropic can scale our compute infrastructure efficiently and reliably for training and serving frontier AI models.</p>
<p>Responsibilities:</p>
<ul>
<li>Optimize our virtualization stack, improving performance, reliability, and efficiency of our VM environments</li>
<li>Design and implement kernel modules, drivers, and system-level components to enhance our compute infrastructure</li>
<li>Investigate and resolve performance bottlenecks in virtualized environments</li>
<li>Collaborate with cloud engineering teams to optimize interactions between our workloads and underlying hardware</li>
<li>Develop tooling for monitoring and improving virtualization performance</li>
<li>Work with our ML engineers to understand their computational needs and optimize our systems accordingly</li>
<li>Contribute to the design and implementation of our next-generation compute infrastructure</li>
<li>Share knowledge with team members on low-level systems programming and Linux kernel internals</li>
<li>Partner with cloud providers to influence hardware and platform features for AI workloads</li>
</ul>
<p>You may be a good fit if you:</p>
<ul>
<li>Have experience with Linux kernel development, system programming, or related low-level software engineering</li>
<li>Understand virtualization technologies (KVM, Xen, QEMU, etc.) and their performance characteristics</li>
<li>Have experience optimizing system performance for compute-intensive workloads</li>
<li>Are familiar with modern CPU architectures and memory systems</li>
<li>Have strong C/C++ programming skills and ideally experience with systems languages like Rust</li>
<li>Understand Linux resource management, scheduling, and memory management</li>
<li>Have experience profiling and debugging system-level performance issues</li>
<li>Are comfortable diving into unfamiliar codebases and technical domains</li>
<li>Are results-oriented, with a bias towards practical solutions and measurable impact</li>
<li>Care about the societal impacts of AI and are passionate about building safe, reliable systems</li>
</ul>
<p>Strong candidates may also have experience with:</p>
<ul>
<li>GPU virtualization and acceleration technologies</li>
<li>Cloud infrastructure at scale (AWS, GCP)</li>
<li>Container technologies and their underlying implementation (Docker, containerd, runc, OCI)</li>
<li>eBPF programming and kernel tracing tools</li>
<li>OS-level security hardening and isolation techniques</li>
<li>Developing custom scheduling algorithms for specialized workloads</li>
<li>Performance optimization for ML/AI-specific workloads</li>
<li>Network stack optimization and high-performance networking</li>
<li>TPUs, custom ASICs, or other ML accelerators</li>
</ul>
<p>Representative projects:</p>
<ul>
<li>Optimizing kernel parameters and VM configurations to reduce inference latency for large language models</li>
<li>Implementing custom memory management schemes for large-scale distributed training</li>
<li>Developing specialized I/O schedulers to prioritize ML workloads</li>
<li>Creating lightweight virtualization solutions tailored for AI inference</li>
<li>Building monitoring and instrumentation tools to identify system-level bottlenecks</li>
<li>Enhancing communication between VMs for distributed training workloads</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$300,000-$405,000 USD</Salaryrange>
      <Skills>Linux kernel development, System programming, Virtualization technologies, C/C++ programming, Rust programming, Linux resource management, Scheduling, Memory management, GPU virtualization, Cloud infrastructure, Container technologies, eBPF programming, Kernel tracing tools, OS-level security hardening, Custom scheduling algorithms, Performance optimization for ML/AI, Network stack optimization</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation that aims to create reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5025591008</Applyto>
      <Location>San Francisco, CA | New York City, NY</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>0f3a04da-d45</externalid>
      <Title>Software Engineer, Platform</Title>
      <Description><![CDATA[<p><strong>About the role</strong></p>
<p>We are looking for software engineers to join our Platform organisation. We build the foundational primitives that accelerate product development across Anthropic, and own infrastructure and systems that teams depend on to ship reliably and at scale.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Architect and optimise the critical development infrastructure that powers our AI product development, including dev environments, observability, and CI/CD pipelines.</li>
<li>Partner closely with product teams to understand their development workflow and eliminate friction points.</li>
<li>Work on problems where reliability and enterprise trust are the bar: token refresh at scale, admin controls that let IT govern what agents can do, proxy infrastructure that stays up when partner servers don&#39;t.</li>
</ul>
<p><strong>Platforms</strong></p>
<ul>
<li>Platform Acceleration: We work on maximising the developer productivity of product engineers at Anthropic.</li>
<li>Service Infra: We build and maintain the core infrastructure that powers Anthropic&#39;s engineering organisation, from service mesh and observability systems to deployment pipelines and shared libraries.</li>
<li>Multicloud: We build and maintain the infrastructure that enables Anthropic to operate across multiple cloud providers.</li>
<li>Auth &amp; Identity: We build and maintain the critical infrastructure that powers identity and authentication across Anthropic&#39;s product suite.</li>
<li>Connectivity: Our mission is to make Claude the most connected AI.</li>
<li>API Distributability: The Claude API today is a rapidly growing platform serving developers and enterprises at scale.</li>
<li>Platform Intelligence: We build the training systems that adapt Claude to specific customer workloads.</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>Have a minimum of 5 years of practical experience building backend product or platform systems: distributed systems, cloud-native products, developer tools, or external developer-facing products.</li>
<li>Have strong fundamentals in service-oriented architectures, networking, and systems design.</li>
<li>Are proficient in Python, Go, Rust, or similar systems languages.</li>
<li>Have experience with cloud infrastructure (GCP, AWS, or Azure), container orchestration (Kubernetes), and/or multi-cloud networking.</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Annual compensation: $320,000 USD.</li>
<li>Visa sponsorship available.</li>
<li>Flexible work arrangements, including remote work options.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$320,000 USD</Salaryrange>
      <Skills>Python, Go, Rust, Cloud infrastructure, Container orchestration, Multi-cloud networking, Service-oriented architectures, Networking, Systems design</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a company that creates reliable, interpretable, and steerable AI systems. It has a team of researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5157844008</Applyto>
      <Location>San Francisco, CA | New York City, NY | Seattle, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>b2637f59-e14</externalid>
      <Title>Full-Stack Software Engineer, Reinforcement Learning</Title>
      <Description><![CDATA[<p>As a Full-Stack Software Engineer in RL, you&#39;ll build the platforms, tools, and interfaces that power environment creation, data collection, and training observability. The quality of Claude&#39;s next generation depends on the quality of the data we train it on, and the systems you build are what make that data possible. You&#39;ll own product surfaces end-to-end, from backend services and APIs to the web UIs that researchers, external vendors, and thousands of data labelers use every day.</p>
<p>You don&#39;t need a background in ML research. What matters is that you can take an ambiguous, high-stakes problem and ship a polished, reliable product against it, fast. This team moves very quickly. Claude writes a lot of the code we commit, which means the bottleneck isn&#39;t typing; it&#39;s judgment, taste, and the ability to react to what researchers need next.</p>
<p>You&#39;ll iterate on data collection strategies to distill the knowledge of thousands of human experts around the world into our models, and you&#39;ll do it in a loop that closes in hours and days, not quarters or months.</p>
<p>Anthropic&#39;s Reinforcement Learning organization leads the research and development that trains Claude to be capable, reliable, and safe. We&#39;ve contributed to every Claude model, with significant impact on the autonomy and coding capabilities of our most advanced models.</p>
<p>Our work spans teaching models to use computers effectively, advancing code generation through RL, pioneering fundamental RL research for large language models, and building the scalable training methodologies behind our frontier production models.</p>
<p>The RL org is organized around four goals: solving the science of long-horizon tasks and continual learning, scaling RL data and environments to be comprehensive and diverse, automating software engineering end-to-end, and training the frontier production model.</p>
<p>Our engineering teams build the environments, evaluation systems, data pipelines, and tooling that make all of this possible, from realistic agentic training environments and scalable code data generation to human data collection platforms and production training operations.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Build and extend web platforms for RL environment creation, management, and quality review, including environment configuration, versioning, and validation workflows</li>
<li>Develop vendor-facing interfaces and tooling that let external partners create, submit, and iterate on training environments with minimal friction</li>
<li>Design and implement platforms for human data collection at scale, including labeling workflows, quality assurance systems, and feedback mechanisms that surface reward signal integrity issues early</li>
<li>Build evaluation dashboards and observability UIs that give researchers real-time insight into environment quality, training run health, and reward hacking</li>
<li>Create backend services and APIs that connect environment authoring tools, data collection systems, and RL training infrastructure</li>
<li>Build and expand scalable code data generation pipelines, producing diverse programming tasks with robust reward signals across languages and difficulty levels</li>
<li>Develop onboarding automation and documentation tooling so new vendors and internal users ramp up in hours, not weeks</li>
<li>Partner closely with RL researchers, data operations, and vendor management to translate ambiguous requirements into well-scoped, well-designed products</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>Strong software engineering fundamentals and real full-stack range: you&#39;re comfortable owning a surface from database schema to frontend</li>
<li>Proficient in Python and a modern web stack (React, TypeScript, or similar)</li>
<li>Track record of shipping systems that solved a hard problem, not just shipped on time; e.g. you built the thing that made your team 10x faster, or the internal tool nobody thought was possible</li>
<li>Operate with high agency: you identify what needs to be done and drive it forward without waiting for a ticket</li>
<li>Found yourself wondering &quot;why isn&#39;t this moving faster?&quot; in previous roles, and then have done something about it</li>
<li>Care about UX and can build interfaces that are intuitive for both technical researchers and non-technical labelers</li>
<li>Communicate clearly with researchers, operations teams, and engineers, and can turn vague asks into well-scoped work</li>
<li>Thrive in a fast-moving environment where priorities shift, Claude is your pair programmer, and the next problem is often one nobody has solved before</li>
<li>Care about Anthropic&#39;s mission to build safe, beneficial AI and want your work to contribute directly to it</li>
</ul>
<p><strong>Nice to Have</strong></p>
<ul>
<li>Built data collection, labeling, or annotation platforms, ideally ones that had to scale across many vendors or many task types</li>
<li>Background building multi-tenant platforms with role-based access, audit trails, and vendor management workflows</li>
<li>Experience with cloud infrastructure (GCP or AWS), Docker, and CI/CD pipelines</li>
<li>Familiarity with LLM training, fine-tuning, or evaluation workflows</li>
<li>Experience with async Python (Trio, asyncio) or high-throughput API design</li>
<li>Background in dashboards, monitoring, or observability tooling</li>
<li>Experience working directly with external vendors or partners on technical integrations</li>
<li>A background that isn&#39;t a straight line, e.g. math or physics into SWE, competitive programming, research into engineering, or a side project that outgrew its scope</li>
</ul>
<p><strong>Representative Projects</strong></p>
<ul>
<li>Building a unified platform for human data collection that integrates labeling workflows, vendor management, and QA for complex agentic tasks</li>
<li>Developing vendor onboarding automation that handles Docker registry access, API token management, and environment validation</li>
<li>Creating evaluation and observability dashboards that catch reward hacks, measure environment difficulty, and give real-time feedback during production training</li>
<li>Building environment quality review workflows that let researchers browse, grade, and provide feedback on training environments</li>
<li>Developing automated environment quality pipelines that validate correctness and difficulty calibration before environments hit production training</li>
<li>Building internal tools for browsing and analyzing training run results, environment statistics, and data collection progress</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$300,000-$405,000 USD</Salaryrange>
      <Skills>Python, Modern web stack, React, TypeScript, Strong software engineering fundamentals, Full-stack range, Database schema, Frontend, Cloud infrastructure, Docker, CI/CD pipelines, LLM training, Fine-tuning, Evaluation workflows, Async Python, High-throughput API design, Dashboards, Monitoring, Observability tooling, Data collection, Labeling, Annotation platforms, Multi-tenant platforms, Role-based access, Audit trails, Vendor management workflows</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a company working on developing artificial intelligence systems. It has a quickly growing team of researchers, engineers, and business leaders.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5186067008</Applyto>
      <Location>San Francisco, CA | New York City, NY</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>9be280f4-cbc</externalid>
      <Title>Software Engineer, Data Infrastructure</Title>
      <Description><![CDATA[<p>We&#39;re looking for an engineer to join our small, high-impact team responsible for architecting and scaling the core infrastructure behind distributed training pipelines, multimodal data catalogs, and intelligent processing systems that operate over petabytes of data.</p>
<p>As a software engineer on our data infrastructure team, you&#39;ll design, build, and operate scalable, fault-tolerant infrastructure for LLM Research: distributed compute, data orchestration, and storage across modalities. You&#39;ll develop high-throughput systems for data ingestion, processing, and transformation, including training data catalogs, deduplication, quality checks, and search. You&#39;ll also build systems for traceability, reproducibility, and robust quality control at every stage of the data lifecycle.</p>
<p>You&#39;ll collaborate with research teams to unlock new features, improve data quality, and accelerate training cycles. You&#39;ll implement and maintain monitoring and alerting to support platform reliability and performance.</p>
<p>If you&#39;re excited by distributed systems, large-scale data mining, open-source tools like Spark, Kafka, Beam, Ray, and Delta Lake, and enjoy building from the ground up, we&#39;d love to hear from you.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>entry|mid|senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$350,000 - $475,000 USD</Salaryrange>
      <Skills>backend language (Python or Rust), distributed compute frameworks (Apache Spark or Ray), cloud infrastructure, data lake architectures, batch and streaming pipelines, Kafka, dbt, Terraform, Airflow, web crawler, deduplication, data mining, search, file formats and storage systems</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Thinking Machines Lab</Employername>
      <Employerlogo>https://logos.yubhub.co/thinkingmachines.ai.png</Employerlogo>
      <Employerdescription>Thinking Machines Lab is a research organisation that focuses on developing collaborative general intelligence.</Employerdescription>
      <Employerwebsite>https://thinkingmachines.ai/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/thinkingmachines/jobs/5013919008</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>8f6ef3b1-c9b</externalid>
      <Title>Technical Program Manager, Compute</Title>
      <Description><![CDATA[<p>As a Technical Program Manager on the Compute team, you will help drive the planning, coordination, and execution of programs that keep Anthropic&#39;s compute infrastructure running efficiently at scale.</p>
<p>Our compute fleet is the foundation on which every model training run, evaluation, and inference workload depends. You&#39;ll join a small, high-impact TPM team and take ownership of critical workstreams across the compute lifecycle, from how supply is procured and brought online, to how capacity is allocated and utilized across teams.</p>
<p>You&#39;ll partner with Infrastructure, Systems, Research, Finance, and Capacity Engineering to shape the processes, tooling, and coordination mechanisms that allow Anthropic to move fast while managing an increasingly complex compute environment.</p>
<p>Responsibilities:</p>
<ul>
<li>Own and drive critical programs across the compute lifecycle, coordinating execution across multiple engineering, research, and operations teams</li>
<li>Build and maintain operational visibility into the compute fleet, ensuring the organization has a clear picture of supply, demand, utilization, and health</li>
<li>Lead cross-functional coordination for compute transitions: bringing new capacity online, migrating workloads, and managing decommissions across cloud providers and hardware platforms</li>
<li>Partner with engineering and research leadership to navigate competing priorities and drive alignment on how compute resources are planned, allocated, and used</li>
<li>Identify and close operational gaps across the compute pipeline, whether through new tooling, improved processes, or better cross-team communication</li>
<li>Own trade-off discussions between utilization, cost, latency, and reliability, synthesizing inputs from technical and business stakeholders and communicating decisions to leadership</li>
<li>Develop and improve the processes and frameworks the team uses to plan, track, and execute compute programs at increasing scale and complexity</li>
</ul>
<p>You may be a good fit if you:</p>
<ul>
<li>Have 7+ years of technical program management experience in infrastructure, platform engineering, or compute-intensive environments</li>
<li>Have led complex, cross-functional programs involving multiple engineering teams with competing priorities and ambiguous requirements</li>
<li>Have experience working with research or ML teams and translating their needs into operational plans and technical requirements</li>
<li>Are comfortable diving deep into technical details (cloud infrastructure, cluster management, job scheduling, resource orchestration) while maintaining program-level visibility</li>
<li>Thrive in ambiguous, fast-moving environments where you need to define scope and build processes from the ground up</li>
<li>Have strong communication skills and can engage credibly with engineers, researchers, finance, and executive leadership</li>
<li>Have a track record of building trust with engineering teams and driving changes through influence rather than authority</li>
</ul>
<p>Strong candidates may also have:</p>
<ul>
<li>Experience managing compute capacity across multiple cloud providers (AWS, GCP, Azure) or hybrid cloud/on-premises environments</li>
<li>Familiarity with job scheduling, resource orchestration, or workload management systems (Kubernetes, Slurm, Borg, YARN, or custom schedulers)</li>
<li>Experience with GPU or accelerator infrastructure, including the unique challenges of large-scale ML training and inference workloads</li>
<li>Built or improved observability for infrastructure systems: dashboards, alerting, efficiency metrics, or cost attribution</li>
<li>Capacity planning experience including demand forecasting, cost modeling, or hardware lifecycle management</li>
<li>Scaled through hypergrowth in AI/ML, HPC, or large-scale cloud environments</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$290,000-$365,000 USD</Salaryrange>
      <Skills>Technical Program Management, Cloud Infrastructure, Cluster Management, Job Scheduling, Resource Orchestration, Compute Capacity Management, GPU or Accelerator Infrastructure, Observability for Infrastructure Systems, Capacity Planning, Kubernetes, Slurm, Borg, YARN, Custom Schedulers, Demand Forecasting, Cost Modeling, Hardware Lifecycle Management</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation that creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5138044008</Applyto>
      <Location>San Francisco, CA | New York City, NY | Seattle, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>bc54ed6c-ca0</externalid>
      <Title>Full-Stack Engineer, Core Services (Senior Level)</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Full-Stack Engineer to join our Core Services team. As a senior-level engineer, you&#39;ll design, build, and optimise the core systems and management platforms that power the Instabase platform.</p>
<p>This is a high-impact role for a &#39;product-minded engineer&#39;. In our Core Services team, we treat our platform as a product. Because we operate with a lean team, you will have end-to-end ownership: from writing Product Requirement Documents (PRDs) to building the high-performance backend services and scalable infrastructure that support them.</p>
<p>Responsibilities:</p>
<ul>
<li>Full Stack Development: You will function as a product-minded engineer for our internal platform. This involves architecting secure infrastructure (Kubernetes, Docker) and backend services (Go, Python, PostgreSQL), while also building the frontend interfaces (React, TypeScript) to support features.</li>
<li>Developer Experience: Create the internal platforms and dashboards that improve developer velocity, reliability, and observability across the entire organisation.</li>
<li>Technical Leadership: Act as a technical leader who mentors junior engineers, contributes to the entire infrastructure codebase, and identifies root causes of critical system issues.</li>
</ul>
<p>About you:</p>
<ul>
<li>Education: BS, MS, or PhD in Computer Science, or equivalent experience in a technical field such as Physics or Mathematics.</li>
<li>Experience: 5+ years of professional software development experience with a strong foundation in CS fundamentals.</li>
<li>Backend Expertise: Proficiency in Go and Python, with a deep understanding of building scalable backend services and APIs.</li>
<li>Frontend Expertise: Strong experience with React, TypeScript, and JavaScript for building complex, data-rich web applications.</li>
<li>Infrastructure &amp; Orchestration: Proficiency with Docker, Kubernetes, and cloud infrastructure (AWS, GCP, or Azure).</li>
<li>Product Thinking &amp; UI Design: You are comfortable functioning as your own PM and designer, writing technical specs (PRDs) to define how users interact with infrastructure.</li>
<li>Communication: Excellent communication skills to represent technical and product decisions to the wider engineering team.</li>
</ul>
<p>Good to have:</p>
<ul>
<li>Experience with React Native for mobile or cross-platform applications.</li>
<li>Prior experience in a startup environment where you handled multi-functional responsibilities (Dev, PM, and Design).</li>
</ul>
<p>Compensation: The base salary range for this role is $190,000 to $205,000 + bonus, equity and US benefits.</p>
<p>US Benefits:</p>
<ul>
<li>Flexible PTO: Because life is better when you actually live it!</li>
<li>Comprehensive Coverage: Top-notch medical, dental, and vision insurance.</li>
<li>401(k) with Matching: We’ve got your back for a secure future.</li>
<li>Parental Leave &amp; Fertility Benefits: Supporting you in growing your family, your way.</li>
<li>Therapy Sessions Covered: Mental health matters: 10 free sessions through Samata Health.</li>
<li>Wellness Stipend: For gym memberships, fitness tech, or whatever keeps you thriving.</li>
<li>Lunch on Us: Enjoy a lunch credit when you&#39;re in the office.</li>
</ul>
<p>#LI-Hybrid</p>
<p>Instabase is an Equal Opportunity Employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender perception or identity, national origin, age, marital status, protected veteran status, or disability status.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$190,000 to $205,000 + bonus, equity and US benefits</Salaryrange>
      <Skills>Go, Python, PostgreSQL, Kubernetes, Docker, React, TypeScript, JavaScript, Cloud infrastructure (AWS, GCP, or Azure)</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Instabase</Employername>
      <Employerlogo>https://logos.yubhub.co/instabase.com.png</Employerlogo>
      <Employerdescription>Instabase provides a platform for organisations to solve unstructured data problems using AI. Its customers include large and complex organisations worldwide.</Employerdescription>
      <Employerwebsite>https://www.instabase.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/instabase/jobs/8186577002</Applyto>
      <Location>San Francisco, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>b0e99a49-d99</externalid>
      <Title>Senior Engineering Manager - Infrastructure</Title>
      <Description><![CDATA[<p>About Us</p>
<p>We&#39;re looking for an Infrastructure Senior Engineering Manager to help us build a seamless, reliable infrastructure platform for dbt across AWS, Azure, and GCP.</p>
<p>Our team&#39;s mission is to create a seamless developer experience by providing a stable, observable, and easy-to-use infrastructure platform. Over the past year, we&#39;ve designed and operationalized a next-gen cell-based architecture, scaling the dbt platform across all three cloud providers. Now, we&#39;re focused on automation, self-service, and improving developer velocity through better tooling, processes, and infrastructure design.</p>
<p>As a Senior Engineering Manager, you&#39;ll lead your team on infrastructure projects to refine our platform while ensuring performance, reliability, and an excellent developer experience. You&#39;ll collaborate across teams, tackle real infrastructure challenges, and help shape the future of the multi-cloud dbt platform.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Build, lead, and coach a team of 8-12 engineers to manage the infrastructure for the dbt platform, reporting to the Director of Infrastructure</li>
<li>Empower your team to achieve big goals by giving them product and business context and supporting team ownership of the roadmap, product development lifecycle, and technical excellence</li>
<li>Dive deep into our product to frame tradeoffs and make decisions about what, how, and when we build</li>
<li>Partner with Product Marketing, Solutions Architecture, and Customer Support to build delightful migration experiences, helping our customers seamlessly move off legacy deployments</li>
<li>Coach engineers in product thinking, quality, and software engineering. Build individualized growth plans and match interests and capabilities to team goals</li>
<li>Work with peer managers to evolve organizational processes like product training, technical decision making, project execution, and planning</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>5+ years in people management with a software or infrastructure engineering team</li>
<li>Experience managing senior individual contributors (Staff+ level)</li>
<li>Experience supporting a cloud-based infrastructure with complex resource requirements and global deployment strategy</li>
<li>Deep understanding of Terraform and cloud infrastructure state management</li>
</ul>
<p><strong>Nice to Have</strong></p>
<ul>
<li>Experience leading teams through all parts of the product development lifecycle</li>
<li>Have successfully partnered across teams and departments to coordinate cross-cutting initiatives</li>
<li>You are interested in our mission and values. You are inspired to drive progress in the data and analytics ecosystem</li>
</ul>
<p><strong>Compensation &amp; Benefits</strong></p>
<p>Salary: We offer competitive compensation packages commensurate with experience, including salary, equity, and where applicable, performance-based pay. Our Talent Acquisition Team can answer questions around dbt Labs&#39; total rewards during your interview process.</p>
<p>In select locations (including Boston, Chicago, Denver, Los Angeles, Philadelphia, New York Metro, San Francisco, DC Metro, Seattle, Austin), an alternate range may apply, as specified below.</p>
<p>The typical starting salary range for this role is: $223,000 - $270,000 USD</p>
<p>The typical starting salary range for this role in the select locations listed is: $248,000 - $300,000 USD</p>
<p><strong>Equity Stake &amp; Benefits</strong></p>
<ul>
<li>dbt Labs offers: unlimited vacation, 401k w/3% guaranteed contribution, excellent healthcare, paid parental leave, wellness stipend, home office stipend, and more!</li>
</ul>
<p><strong>Our Hiring Process</strong></p>
<ul>
<li>Interview with a Talent Acquisition Partner (30 Mins)</li>
<li>Technical Interview with Hiring Manager (60 Mins)</li>
<li>Team Interviews (3 rounds, 45 Mins each)</li>
<li>Final Values Interview (30 Mins)</li>
</ul>
<p>If you’re passionate about building well-designed, high-impact software, we’d love to hear from you!</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$223,000 - $270,000 USD</Salaryrange>
      <Skills>Terraform, Cloud infrastructure state management, People management, Software engineering, Infrastructure engineering, Product development lifecycle, Technical decision making, Project execution, Process improvement</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>dbt Labs</Employername>
      <Employerlogo>https://logos.yubhub.co/getdbt.com.png</Employerlogo>
      <Employerdescription>dbt Labs is a leading analytics engineering platform, used by over 90,000 teams every week, with over $100 million in annual recurring revenue.</Employerdescription>
      <Employerwebsite>https://www.getdbt.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/dbtlabsinc/jobs/4686309005</Applyto>
      <Location>US - Remote</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>065005a6-23e</externalid>
      <Title>OIC Developer</Title>
      <Description><![CDATA[<p>We are looking for an expert Oracle Integration Developer to join our Arsenal (Enterprise Systems) team. Your immediate mission: take ownership of our critical enterprise integrations connecting Oracle Fusion ERP with our upstream and downstream systems. These integrations, built on Oracle Integration Cloud, form the digital backbone that governs how we manage our business operations, from product data and procurement to manufacturing and financial processes.</p>
<p>As the world enters an era of strategic competition, Anduril is committed to bringing cutting-edge autonomy, AI, computer vision, sensor fusion, and networking technology to the military in months, not years.</p>
<p>Long-term, you will be the subject matter expert responsible for architecting and scaling our enterprise integration landscape. This is a high-impact role for someone who thrives on solving complex data challenges and wants to build the operational foundation that enables Anduril to scale its mission.</p>
<p><strong>Key Responsibilities:</strong></p>
<ul>
<li>Stabilize &amp; Optimize: Dive deep into existing Oracle Fusion ERP integrations across manufacturing, supply chain, finance, and engineering systems. Diagnose root causes of instability, re-architect weak points, and implement robust error handling and monitoring to achieve mission-critical reliability.</li>
<li>Architect &amp; Build: Design and develop new, scalable enterprise integrations using Oracle Integration Cloud (OIC). Translate complex business requirements for product data, multi-level Bills of Material (BOMs), procurement, inventory, work orders, and financial transactions into resilient and efficient integration flows.</li>
<li>Own the Integration Lifecycle: Manage the end-to-end process from design and development through testing (unit, SIT, UAT) and deployment, utilizing CI/CD best practices. Proactively tune and maintain integrations to ensure peak performance as data volumes grow.</li>
<li>Ensure Data Integrity: Become the trusted expert on data transformation and mapping between systems. Implement rigorous validation and reconciliation logic to guarantee that our enterprise data is flawless across all systems.</li>
<li>Collaborate &amp; Influence: Act as the key technical partner to our ERP, Manufacturing, Supply Chain, and Finance teams. Clearly articulate technical designs, trade-offs, and progress to both engineering peers and business stakeholders, guiding them toward best-practice integration patterns.</li>
<li>Leverage Modern Oracle Cloud Tools: Utilize Oracle Visual Builder Cloud Service (VBCS) where appropriate to build lightweight user interfaces that enhance integration workflows, data validation, or operational dashboards.</li>
</ul>
<p><strong>Requirements:</strong></p>
<ul>
<li>5+ years of hands-on experience developing complex integrations with deep specialization in Oracle Integration Cloud (OIC), specifically Oracle Integration 3.</li>
<li>Proven experience integrating Oracle Fusion Cloud ERP with upstream and downstream enterprise systems (e.g., PLM, MES, WMS, CRM, third-party applications), including deep familiarity with ERP data objects such as Items, BOMs, Suppliers, Purchase Orders, Work Orders, Inventory Transactions, and Financial data.</li>
<li>Expert-level proficiency in OIC 3 components: Application and Tech Adapters (REST, SOAP, File, FTP, Oracle SaaS, Database), Connections, Mappings, Lookups, Error Handling, and JavaScript.</li>
<li>Strong command of XSLT, XPath, and complex data mapping for transforming large and nested XML/JSON payloads.</li>
<li>Demonstrable experience building, securing, and consuming RESTful APIs and SOAP web services.</li>
<li>Excellent SQL skills and a solid understanding of relational database concepts.</li>
<li>Experience with Oracle Fusion ERP modules such as SCM (Supply Chain Management), Manufacturing, Procurement, or Financials.</li>
<li>A tenacious problem-solver with a track record of troubleshooting, debugging, and stabilizing complex, business-critical systems.</li>
</ul>
<p><strong>Preferred Qualifications:</strong></p>
<ul>
<li>Hands-on experience with Oracle Visual Builder Cloud Service (VBCS) for building user interfaces and extensions.</li>
<li>Experience with Oracle Business Intelligence Cloud Connector (BICC) for high-volume data extraction from Fusion ERP.</li>
<li>Experience with Oracle Cloud Infrastructure (OCI) services (e.g., Functions, API Gateway, Object Storage, Logging, Autonomous Database).</li>
<li>Experience integrating PLM systems (e.g., Teamcenter, Windchill, Arena) with Oracle Fusion ERP.</li>
<li>Familiarity with Git-based source control and CI/CD pipelines for integration deployments.</li>
<li>Experience in a discrete manufacturing environment.</li>
<li>Knowledge of other programming languages (e.g., Python, Groovy, Java).</li>
<li>Relevant Oracle Cloud Certifications (e.g., OIC 3 Application Integration Professional, Oracle Fusion Cloud certifications).</li>
</ul>
<p><strong>Salary Range:</strong> $129,000-$171,000 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$129,000-$171,000 USD</Salaryrange>
      <Skills>Oracle Integration Cloud, Oracle Fusion ERP, APIs, SQL, XSLT, XPath, RESTful APIs, SOAP web services, JavaScript, CI/CD pipelines, Git-based source control, Oracle Visual Builder Cloud Service, Oracle Business Intelligence Cloud Connector, Oracle Cloud Infrastructure, PLM systems, Python, Groovy, Java, Oracle Cloud Certifications</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anduril Industries</Employername>
      <Employerlogo>https://logos.yubhub.co/anduril.com.png</Employerlogo>
      <Employerdescription>Anduril Industries is a defence technology company that specialises in transforming U.S. and allied military capabilities with advanced technology.</Employerdescription>
      <Employerwebsite>https://www.anduril.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/andurilindustries/jobs/5061434007</Applyto>
      <Location>Boston, Massachusetts, United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>5bd8822f-5d3</externalid>
      <Title>Corporate Development Associate</Title>
      <Description><![CDATA[<p>We are seeking a high-performing and passionate Associate to join our corporate development team. The Associate will support the Corporate Development team, CDO, CSO and CFO in leading the corporate development activities of CoreWeave.</p>
<p>This includes managing M&amp;A processes, analysing industry trends, assessing competitive landscapes, identifying investment opportunities, and supporting marquee fundraising initiatives. This person will work cross-functionally with a variety of stakeholders at all levels of CoreWeave and have frequent opportunities to interact with and support key executive-level decision makers.</p>
<p>Ideally, this person will have previous experience with generative AI, technology, digital infrastructure, cloud infrastructure, data centres or similar verticals.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Identify and evaluate M&amp;A and direct investment opportunities based on industry knowledge, market situation, and financial analysis</li>
<li>Develop, maintain, and analyse complex financial models to support M&amp;A transactions, strategic investments, and other strategic initiatives</li>
<li>Assist in all phases of transaction execution, including due diligence, valuation, documentation, and integration planning</li>
<li>Collaborate with internal stakeholders, including finance, legal, operations, and senior executives, to ensure alignment and successful execution of deals</li>
<li>Support Strategic Finance, Investor Relations, Treasury, and FP&amp;A on cross-functional ad hoc finance projects</li>
<li>Assist in the preparation of company management presentations that deliver CoreWeave&#39;s investment thesis and growth strategy to external stakeholders</li>
<li>Conduct comprehensive due diligence on M&amp;A and investment targets, including financial and operational analysis</li>
<li>Monitor industry trends, competitive landscape, and market dynamics to identify opportunities and threats</li>
<li>Collaborate with the CDO, CSO, and CFO to assist with highly impactful, complex, and visible projects, including scaled, complex equity and debt fundraising initiatives</li>
<li>Complete special strategic projects and ad hoc modelling for senior executives as needed, such as projects regarding international expansion and JV partnerships</li>
</ul>
<p>Who You Are:</p>
<ul>
<li>A bachelor&#39;s degree in finance, accounting, applied mathematics, economics, engineering, or an equivalent combination of education and experience</li>
<li>2+ years of experience in investment banking, private equity, private credit, or similar roles</li>
<li>Advanced analytical skills with an ability to perform quantitative and qualitative analysis on new ideas and concepts</li>
</ul>
<p>Preferred:</p>
<ul>
<li>Excellent financial modelling and valuation skills, with a demonstrated track record of executing complicated financial analyses</li>
<li>Effective verbal and written communication skills, with a preference for candidates who have demonstrably interacted with management or other executive-level stakeholders</li>
<li>High level of self-sufficiency with proven success at self-teaching and a high intellectual motor</li>
<li>Strong analytical, quantitative, and problem-solving skills</li>
<li>Exceptional attention to detail, organisational skills, and ability to manage multiple competing priorities simultaneously</li>
<li>Advanced proficiency with Microsoft Office Suite, particularly Excel and PowerPoint</li>
<li>Understanding of M&amp;A processes for both public and private transactions, including deal sourcing/structuring, due diligence, and execution, with a proven track record of contributing to closed deals</li>
<li>Experience with modelling debt transactions (e.g., leveraged buyout models and private/public credit) preferred</li>
</ul>
<p>Wondering if you&#39;re a good fit? We believe in investing in our people, and value candidates who can bring their own diversified experiences to our teams – even if you aren&#39;t a 100% skill or experience match. Here are a few qualities we&#39;ve found compatible with our team. If some of this describes you, we&#39;d love to talk.</p>
<ul>
<li>You love digging into complex financial problems and solving them with precision.</li>
<li>You&#39;re curious about how AI, cloud, and digital infrastructure are reshaping the global economy.</li>
<li>You&#39;re an expert in financial modelling, valuation, and supporting high-impact transactions.</li>
</ul>
<p>Why CoreWeave?</p>
<p>At CoreWeave, we work hard, have fun, and move fast! We&#39;re in an exciting stage of hyper-growth that you will not want to miss out on. We&#39;re not afraid of a little chaos, and we&#39;re constantly learning. Our team cares deeply about how we build our product and how we work together, which is represented through our core values:</p>
<ul>
<li>Be Curious at Your Core</li>
<li>Act Like an Owner</li>
<li>Empower Employees</li>
<li>Deliver Best-in-Class Client Experiences</li>
<li>Achieve More Together</li>
</ul>
<p>We support and encourage an entrepreneurial outlook and independent thinking. We foster an environment that encourages collaboration and provides the opportunity to develop innovative solutions to complex problems. As we get set for takeoff, the growth opportunities within the organisation are constantly expanding. You will be surrounded by some of the best talent in the industry, who will want to learn from you, too. Come join us!</p>
<p>The base salary range for this role is $125,000 to $155,000. The starting salary will be determined based on job-related knowledge, skills, experience, and market location. We strive for both market alignment and internal equity when determining compensation. In addition to base salary, our total rewards package includes a discretionary bonus, equity awards, and a comprehensive benefits programme (all based on eligibility).</p>
<p>What We Offer</p>
<p>The range we&#39;ve posted represents the typical compensation range for this role. To determine actual compensation, we review the market rate for each candidate which can include a variety of factors. These include qualifications, experience, interview performance, and location. In addition to a competitive salary, we offer a variety of benefits to support your needs, including:</p>
<ul>
<li>Medical, dental, and vision insurance (100% paid for by CoreWeave)</li>
<li>Company-paid Life Insurance</li>
<li>Voluntary supplemental life insurance</li>
<li>Short and long-term disability insurance</li>
<li>Flexible Spending Account</li>
<li>Health Savings Account</li>
<li>Tuition Reimbursement</li>
<li>Ability to Participate in Employee Stock Purchase Programme (ESPP)</li>
<li>Mental Wellness Benefits through Spring Health</li>
<li>Family-Forming support provided by Carrot</li>
<li>Paid Parental Leave</li>
<li>Flexible, full-service childcare support with Kinside</li>
<li>401(k) with a generous employer match</li>
<li>Flexible PTO</li>
<li>Catered lunch each day in our office and data centre locations</li>
<li>A casual work environment</li>
<li>A work culture focused on innovative disruption</li>
</ul>
<p>Our Workplace</p>
<p>While we prioritise a hybrid work environment, remote work may be considered for candidates located more than 30 miles from an office, based on role requirements for specialised skill sets. New hires will be invited to attend onboarding at one of our hubs within their first month. Teams also gather quarterly to support collaboration.</p>
<p>California Consumer Privacy Act - California applicants only</p>
<p>CoreWeave is an equal opportunity employer, committed to fostering an inclusive and supportive workplace. All qualified applicants and candidates will receive consideration for employment without regard to race, colour, religion, sex, disability, age, sexual orientation, gender identity, national origin, veteran status, or genetic information.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$125,000 to $155,000</Salaryrange>
      <Skills>generative AI, technology, digital infrastructure, cloud infrastructure, data centres, M&amp;A processes, financial analysis, financial models, due diligence, valuation, documentation, integration planning, cross-functional Ad Hoc finance projects, company management presentations, investment thesis, growth strategy, comprehensive due diligence, operational analysis, industry trends, competitive landscape, market dynamics, highly impactful, complex, visible projects, scaled, equity and debt fundraising initiatives, special strategic projects, ad hoc modelling, international expansion, JV partnerships, investment banking, private equity, private credit, financial modelling, valuation skills, complicated financial analyses, verbal and written communication skills, self-teaching, intellectual motor, analytical skills, quantitative analysis, qualitative analysis, problem-solving skills, attention to detail, organisational skills, Microsoft Office Suite, Excel, PowerPoint, deal sourcing/structuring, execution, leveraged buyout models, private/public credit, high-impact transactions, AI, cloud, global economy</Skills>
      <Category>Finance</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave is a cloud infrastructure company that provides a platform for building and scaling AI. It was founded in 2017 and became a publicly traded company in March 2025.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4440958006</Applyto>
      <Location>New York, NY</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>1c7dc0cb-87c</externalid>
      <Title>Solutions Architect - Storage</Title>
      <Description><![CDATA[<p>As a Solutions Architect at CoreWeave, you will play a vital and dynamic role in helping customers succeed with our cloud infrastructure offerings. You will serve as the primary technical point of contact for customers, establishing strong technical relationships and ensuring their success with CoreWeave&#39;s cloud infrastructure offerings, focusing on storage technologies within high-performance compute (HPC) environments.</p>
<p>Collaborate closely with customers to understand their unique business needs and create, prototype, and deploy tailored solutions that align with their requirements. Lead proof of concept initiatives to showcase the value and viability of CoreWeave&#39;s solutions within specific environments.</p>
<p>Drive technical leadership and direction during customer meetings, presentations, and workshops, addressing any technical queries or concerns that arise. Act as a virtual member of CoreWeave&#39;s Storage product and engineering teams, identifying opportunities for product enhancement and collaborating with engineers to implement your suggestions.</p>
<p>Offer valuable insights on product features, functionality, and performance, contributing regularly to discussions about product strategy and architecture. Conduct periodic technical reviews and assessments of customer workloads, pinpointing opportunities for workload optimization and suggesting suitable solutions.</p>
<p>Stay informed of the latest developments and trends in Kubernetes, cloud computing and infrastructure, sharing your thought leadership with customers and internal stakeholders. Lead the prototyping and initiation of research and development efforts for emerging products and solutions, delivering prototypes and key insights for internal consumption.</p>
<p>Represent CoreWeave at conferences and industry events, with occasional travel as required.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$165,000 to $220,000</Salaryrange>
      <Skills>cloud computing concepts, architecture, technologies, storage solutions, Kubernetes, cloud infrastructure, high-performance compute (HPC), storage technologies, file system protocols, infrastructure systems, code contributions to open-source inference frameworks, scripting and automation related to storage technologies, building solutions across multi-cloud environments, client or customer-facing publications/talks on latency, optimization, or advanced model-server architectures</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave is a cloud infrastructure company that provides a platform for AI workloads.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4568531006</Applyto>
      <Location>Livingston, NJ / New York, NY / Sunnyvale, CA / Bellevue, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>be766cd7-8e2</externalid>
      <Title>Staff Software Engineer, Backend (Iasi)</Title>
      <Description><![CDATA[<p>We are excited to expand our operations to Romania and build a tech hub in the region. As a Staff full-stack engineer, with a backend focus, you will be at the forefront of shaping the future of customer engagement! You&#39;ll be instrumental in delivering timely, actionable insights that drive business growth from day one.</p>
<p>We&#39;re building a state-of-the-art Customer Data Platform, visualizing relevant insights for businesses post-onboarding and guiding customer engagement across all touch-points. Be part of the team that&#39;s redefining the way businesses connect with their customers!</p>
<p>Responsibilities:</p>
<ul>
<li>Design, implement, and maintain backend services and APIs to support applications.</li>
<li>Build and optimize data storage solutions using Postgres, ClickHouse and Elasticsearch to ensure high performance and scalability.</li>
<li>Collaborate with cross-functional teams, including frontend engineers, data scientists, and machine learning engineers, to deliver end-to-end solutions.</li>
<li>Monitor and troubleshoot performance issues in distributed systems and databases.</li>
<li>Write clean, maintainable, and efficient code following best practices for backend development.</li>
<li>Participate in code reviews, testing, and continuous integration efforts.</li>
<li>Ensure security, scalability, and reliability of backend services.</li>
<li>Analyze and improve system architecture, focusing on performance bottlenecks, scaling, and security.</li>
</ul>
<p>Qualifications We Value:</p>
<ul>
<li>Proven experience as a Backend Engineer with a focus on database design and system architecture.</li>
<li>Strong expertise in ClickHouse or similar columnar databases for managing large-scale, real-time analytical queries.</li>
<li>Hands-on experience with Elasticsearch for indexing and searching large datasets.</li>
<li>Proficient in backend programming languages such as Python and Go.</li>
<li>Experience with RESTful API design and development.</li>
<li>Solid understanding of distributed systems, microservices architecture, and cloud infrastructure.</li>
<li>Experience with performance tuning, data modeling, and query optimization.</li>
<li>Strong problem-solving skills and attention to detail.</li>
<li>Excellent communication and teamwork abilities.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Backend Engineer, Database design, System architecture, ClickHouse, Elasticsearch, Python, Go, RESTful API design, Distributed systems, Microservices architecture, Cloud infrastructure, Performance tuning, Data modeling, Query optimization</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Cresta</Employername>
      <Employerlogo>https://logos.yubhub.co/cresta.ai.png</Employerlogo>
      <Employerdescription>Cresta is a company that turns every customer conversation into a competitive advantage by unlocking the true potential of the contact center.</Employerdescription>
      <Employerwebsite>https://www.cresta.ai/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/cresta/jobs/5030292008</Applyto>
      <Location>Iasi, Romania (Hybrid)</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>2ff13306-80c</externalid>
      <Title>Staff+ Software Engineer, Backend</Title>
      <Description><![CDATA[<p><strong>About the Role</strong></p>
<p>Anthropic is looking for experienced, product-minded engineers to own the backend systems that power user experiences across our API, Claude Code, and Claude.ai.</p>
<p>You&#39;ll independently scope complex, multi-month projects through ambiguous problem spaces and lead peers through technical and product decisions; you&#39;ll drive alignment with product, peer engineering teams, and research to identify capability gaps and translate frontier model improvements into shipped products.</p>
<p>You&#39;ll make architectural decisions that affect the reliability and scalability of systems serving hundreds of thousands of global users (including internal teams), and design processes that help your team operate effectively and never fail the same way twice - all while staying hands-on with the code and our models.</p>
<p><strong>Teams</strong></p>
<p>We have multiple teams that are currently hiring. Team placement occurs after the interview process, taking into account your interests and experience alongside organisational needs.</p>
<ul>
<li>API Core: You&#39;ll build and scale the foundation of the Claude API: the systems that deliver Claude&#39;s intelligence to every developer, from startups to enterprise.</li>
<li>API Capabilities: You&#39;ll bring frontier model capabilities to developers through the Claude API, owning core features like vision, tool use, and computer use.</li>
<li>API Knowledge: You&#39;ll focus on transforming Claude into a true knowledge worker by ensuring the model has access to and understanding of the right knowledge at the right time.</li>
<li>Developer Experience: You’ll focus on building products and tools to enable developers to harness the full power of LLMs to create successful, reliable, and groundbreaking applications with ease.</li>
<li>API Agents: You&#39;ll focus on building the infrastructure and APIs that enable developers to create powerful agentic applications within the Claude API.</li>
<li>Enterprise Foundations: We&#39;re looking for a software engineer to join our Enterprise Foundations team: the team that makes Claude enterprise-ready at scale.</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>Have 8+ years of relevant experience as a backend or product engineer, with a track record of leading complex, multi-month projects or teams as a tech lead or equivalent.</li>
<li>Have strong coding fundamentals and are comfortable working across backend systems, APIs, and integrations, and can reach into the frontend when needed to ship an effective solution.</li>
<li>Have led the design and delivery of large-scale backend systems in production that power high-adoption B2B or consumer-facing products.</li>
<li>Are skilled at driving alignment across technical and non-technical teams; you communicate clearly, influence technical decisions beyond your immediate team, and help others ramp effectively on your systems.</li>
<li>Take a product-focused approach to your work and care about building solutions that are robust, scalable, and easy to use.</li>
<li>Care deeply about investing in the mentorship and growth of your peers.</li>
<li>Have experience with distributed systems, API design, and cloud infrastructure at scale.</li>
<li>Thrive in fast-paced environments and can navigate ambiguity to find the highest-leverage path forward.</li>
</ul>
<p><strong>Preferred Qualifications</strong></p>
<ul>
<li>Served as a technical lead or architect on a product or API platform, owning both the technical vision and execution end-to-end.</li>
<li>Experience designing and scaling APIs with a focus on developer experience, consistency, and reliability, including API design review processes.</li>
<li>Deep experience building enterprise SaaS platforms, including permissions infrastructure, billing and pricing systems, or compliance frameworks for regulated industries (SOC 2, HIPAA).</li>
<li>Background in a specific industry vertical, such as financial services, healthcare, or legal technology, with a track record of building products that handle sensitive, domain-specific data.</li>
<li>Experience partnering with ML/AI research teams to productize model capabilities or identify and address model failure modes in production.</li>
<li>Experience building agentic systems, orchestration frameworks, or developer tools, including CLI tools, IDE integrations, or AI-assisted coding environments.</li>
<li>Experience building products where adoption and activation are core challenges: instrumenting funnels, diagnosing drop-off, and shipping the product changes that close gaps.</li>
<li>Experience designing operational processes (incident response, on-call rotations, postmortem review) for production systems serving large-scale developer or enterprise audiences.</li>
</ul>
<p><strong>Compensation</strong></p>
<p>The annual compensation range for this role is $405,000-$485,000 USD.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$405,000-$485,000 USD</Salaryrange>
      <Skills>backend systems, APIs, cloud infrastructure, distributed systems, API design, product development, team leadership, communication, influence, mentorship, growth, API design review processes, enterprise SaaS platforms, permissions infrastructure, billing and pricing systems, compliance frameworks, regulated industries, ML/AI research teams, agentic systems, orchestration frameworks, developer tools, CLI tools, IDE integrations, AI-assisted coding environments</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic creates reliable, interpretable, and steerable AI systems. It is a quickly growing organisation with a team of researchers, engineers, policy experts, and business leaders.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5174755008</Applyto>
      <Location>San Francisco, CA | New York City, NY | Seattle, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>0c1f85bb-c33</externalid>
      <Title>Senior Product Manager, Compliance</Title>
      <Description><![CDATA[<p>CoreWeave is building the infrastructure that powers the next era of AI. As we scale towards and beyond public company readiness, the CIO organisation is responsible for owning the execution of IT General Controls (ITGCs) and IT application controls across our technology environment.</p>
<p>We are looking for a Senior Product Manager, IT SOX Compliance to join our team. This is not a traditional audit-support role. As the Product Manager, IT SOX Compliance, you will translate SOX compliance requirements into structured programs, drive accountability across IT process owners, and build the systems and workflows that make compliance scalable.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Owning the end-to-end IT SOX compliance program within the CIO organisation, maintaining the IT control inventory spanning ITGCs, IT-dependent controls, and automated application controls</li>
<li>Owning the control design and documentation, including narratives and risk and control matrices (RCMs), ensuring controls are clearly defined and audit-ready</li>
<li>Partnering with IT, Accounting (where needed), and the SOX team to ensure new systems and modules are implemented with appropriate SDLC controls in place prior to go-live; reviewing control designs to identify and mitigate SOX risks</li>
<li>On an ongoing basis, partnering with IT process owners and control operators to ensure controls are executed in a timely manner</li>
<li>Reviewing control evidence for quality and completeness before submission to auditors</li>
<li>Managing the full deficiency lifecycle, from root cause analysis through remediation planning, retesting, and escalation, and reporting control health to IT leadership and the SOX team</li>
<li>Leading root cause analysis for control failures and incidents, tracking and resolving systemic gaps, and implementing and validating remediation plans to prevent recurrence</li>
</ul>
<p>You will work closely with the SOX team and IT process owners to ensure controls are designed, reviewed, and evidenced effectively.</p>
<p>The ideal candidate will have 8+ years of experience in IT audit, IT risk, IT compliance, or a related field, with hands-on IT SOX experience in either a practitioner or oversight capacity. You will have deep familiarity with IT General Controls (ITGCs), including access management, change management, SDLC, and computer operations, and how they map to financial reporting risk.</p>
<p>In addition to a competitive salary, we offer a variety of benefits to support your needs, including medical, dental, and vision insurance, company-paid life insurance, voluntary supplemental life insurance, short- and long-term disability insurance, a flexible spending account, a health savings account, tuition reimbursement, the ability to participate in an employee stock purchase program (ESPP), mental wellness benefits through Spring Health, family-forming support provided by Carrot, paid parental leave, flexible, full-service childcare support with Kinside, a 401(k) with a generous employer match, flexible PTO, catered lunch each day in our office and data center locations, a casual work environment, and a work culture focused on innovative disruption.</p>
<p>Why CoreWeave?</p>
<p>At CoreWeave, we work hard, have fun, and move fast! We&#39;re in an exciting stage of hyper-growth that you will not want to miss out on. We&#39;re not afraid of a little chaos, and we&#39;re constantly learning. Our team cares deeply about how we build our product and how we work together, which is represented through our core values: Be Curious at Your Core, Act Like an Owner, Empower Employees, Deliver Best-in-Class Client Experiences, Achieve More Together.</p>
<p>We support and encourage an entrepreneurial outlook and independent thinking. We foster an environment that encourages collaboration and enables the development of innovative solutions to complex problems. As we get set for takeoff, the organisation&#39;s growth opportunities are constantly expanding. You will be surrounded by some of the best talent in the industry, who will want to learn from you, too.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$165,000 to $242,000</Salaryrange>
      <Skills>IT General Controls (ITGCs), SOX compliance, IT audit, IT risk, IT compliance, Access management, Change management, SDLC, Computer operations, Workday, Salesforce, NetSuite/SAP, Coupa, GRC platforms, AuditBoard, ServiceNow GRC, Workiva, CISA, CISSP, CISM, CPA, Hyperscaler, Cloud infrastructure, High-growth tech environment</Skills>
      <Category>IT</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave is a cloud infrastructure company that provides a platform for building and scaling AI applications.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4673532006</Applyto>
      <Location>Livingston, NJ / New York, NY / Sunnyvale, CA / San Francisco, CA / Bellevue, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>25f98af1-025</externalid>
      <Title>Senior Marketing Manager, Americas</Title>
<Description><![CDATA[<p>As a Senior Marketing Manager, Americas, you will be a core operator in CoreWeave&#39;s field marketing engine. You will own the day-to-day design and execution of regional go-to-market programs that create and accelerate pipeline and revenue across CoreWeave&#39;s priority markets.</p>
<p>You will translate global strategy into targeted, sales-aligned programs focused on strategic accounts and high-value opportunities. You will lead regional planning and execution across field events, executive and CXO programs, and high-touch account-based marketing, working in tight alignment with Sales to drive deal progression, deepen executive relationships, and advance late-stage pipeline.</p>
<p>Operating as a precision execution arm of Marketing, you will partner with Demand Generation, Product Marketing, Events, and Partner Marketing to convert scaled demand into tangible regional revenue outcomes.</p>
<p>The ideal candidate will have 5+ years of experience in B2B marketing at a high-growth technology or enterprise company, with experience in field marketing, executive programs, strategic events, or ABM supporting the Americas market.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Owning end-to-end execution of complex, multi-day regional programs from brief and agenda design through audience targeting, invitations, logistics, on-site delivery, and post-program follow-up.</li>
<li>Designing and running executive-level (VP/C-Suite) programs that can be tied to pipeline creation, opportunity progression, and deal velocity, with clear pre/post-program goals and reporting.</li>
<li>Translating global or corporate marketing strategy into regional go-to-market plans, including selecting the right mix of field events, executive experiences, and account-based programs to support territory and account goals.</li>
<li>Building and operationalizing target account and contact lists, including nomination criteria, coverage plans, and alignment with Sales territories, segments, and opportunity stages.</li>
<li>Setting and tracking program KPIs using CRM and marketing systems.</li>
<li>Working as a day-to-day marketing partner to Sales in-region, participating in pipeline reviews, account planning, and forecast discussions to prioritize programs that support late-stage pipeline and strategic accounts.</li>
</ul>
<p>Required skills include:</p>
<ul>
<li>5+ years of experience in B2B marketing at a high-growth technology or enterprise company.</li>
<li>Experience in field marketing, executive programs, strategic events, or ABM supporting the Americas market.</li>
<li>Strong project management and execution skills.</li>
<li>Excellent communication and interpersonal skills.</li>
<li>Ability to work in a fast-paced environment and prioritize multiple tasks and projects.</li>
</ul>
<p>Preferred skills include:</p>
<ul>
<li>Experience with AI, cloud infrastructure, data, or developer-centric products.</li>
<li>Familiarity with marketing automation and CRM systems.</li>
<li>Experience working with cross-functional teams.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
<Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$115,000 to $168,000</Salaryrange>
      <Skills>B2B marketing, Field marketing, Executive programs, Strategic events, Account-based marketing, Project management, Communication, Interpersonal skills, AI, Cloud infrastructure, Data, Developer-centric products, Marketing automation, CRM systems, Cross-functional teams</Skills>
      <Category>Marketing</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave is a cloud infrastructure company that provides a platform for building and scaling AI.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4663434006</Applyto>
      <Location>Livingston, NJ / New York, NY / Sunnyvale, CA / San Francisco, CA / Bellevue, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>4fde2d89-11c</externalid>
      <Title>Research Engineer, Economic Research</Title>
<Description><![CDATA[<p>As a Research Engineer on the Economic Research team, you will design, build, and maintain critical infrastructure that powers Anthropic&#39;s research on AI&#39;s economic impact. You will work with data systems from across Anthropic, including our research tools for privacy-preserving analysis.</p>
<p>The Economic Research team at Anthropic studies the economic implications of AI on individual, firm, and economy-wide outcomes. We build scalable systems to monitor AI usage patterns and directly measure the impact of AI adoption on real-world outcomes. We publish research and data that is clear-eyed about the economic effects of AI to help policymakers, businesses, and the public understand and navigate the transition to powerful AI.</p>
<p>In this role, you will work closely with teams across Anthropic, including Data Science and Analytics, Data Infrastructure, Societal Impacts, and Public Policy, to build scalable and robust data systems that support high-leverage, high-impact research. Strong candidates will have a track record of building data processing pipelines, architecting &amp; implementing high-quality internal infrastructure, working in a fast-paced startup environment, navigating ambiguity, and demonstrating an eagerness to develop their own research &amp; technical skills.</p>
<p>Responsibilities:</p>
<ul>
<li>Build and maintain data pipelines that process large-scale Claude usage logs into canonical, reusable datasets while maintaining user privacy.</li>
<li>Expand privacy-preserving tools to enable new analytic functionality to support research needs.</li>
<li>Design and implement novel data systems leveraging language models (e.g., CLIO) where traditional software engineering patterns don&#39;t yet exist.</li>
<li>Develop and maintain data pipelines that are interoperable across data sources (including ingesting external data) and are designed to support economic analysis.</li>
<li>Contribute to the strategic development of the economic research data foundations roadmap.</li>
<li>Ensure data reliability, integrity, and privacy compliance across all economic research data infrastructure.</li>
<li>Lead technical design discussions to ensure our infrastructure can support both current needs and future research directions.</li>
<li>Create documentation and best practices that enable self-serve data access for researchers while maintaining security and governance standards.</li>
<li>Partner closely with researchers, data scientists, policy experts, and other cross-functional partners to advance Anthropic&#39;s safety mission.</li>
</ul>
<p>You might be a good fit if you:</p>
<ul>
<li>Have experience working with Research Scientists and Economists on ambiguous AI and economic projects.</li>
<li>Have experience building and maintaining data infrastructure, large datasets, and internal tools in production environments.</li>
<li>Have experience with cloud infrastructure platforms such as AWS or GCP.</li>
<li>Take pride in writing clean, well-documented code in Python that others can build upon.</li>
<li>Are comfortable making technical decisions with incomplete information while maintaining high engineering standards.</li>
<li>Are comfortable getting up to speed quickly on unfamiliar codebases, and can work well with other engineers with different backgrounds across the organization.</li>
<li>Have a track record of using technical infrastructure to interface effectively with machine learning models.</li>
<li>Have experience deriving insights from imperfect data streams.</li>
<li>Have experience building systems and products on top of LLMs.</li>
<li>Have experience incubating and maturing tooling platforms used by a wide variety of stakeholders.</li>
<li>Have a passion for Anthropic&#39;s mission of building helpful, honest, and harmless AI and understanding its economic implications.</li>
<li>Have a &quot;full-stack mindset&quot;, not hesitating to do what it takes to solve a problem end-to-end, even if it requires going outside the original job description.</li>
<li>Have strong communication skills to collaborate effectively with economists, researchers, and cross-functional partners who may have varying levels of technical expertise.</li>
</ul>
<p>Strong candidates may have:</p>
<ul>
<li>Background in econometrics, statistics, or quantitative social science research</li>
<li>Experience building data infrastructure and data foundations for research</li>
<li>Familiarity with large language models, AI systems, or ML research workflows</li>
<li>Prior work on projects related to labor economics, technology adoption, or economic measurement</li>
</ul>
<p>Some examples of our recent work:</p>
<ul>
<li>Anthropic Economic Index Report: Economic Primitives</li>
<li>Anthropic Economic Index Report: Uneven Geographic and Enterprise AI Adoption</li>
<li>Estimating AI productivity gains from Claude conversations</li>
<li>The Anthropic Economic Index</li>
</ul>
<p>Deadline to apply: None. Applications are reviewed on a rolling basis.</p>
<p>The annual compensation range for this role is listed below. For sales roles, the range provided is the role&#39;s On Target Earnings (&quot;OTE&quot;) range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.</p>
<p>Annual Salary: $300,000-$405,000 USD</p>
<p>Logistics:</p>
<ul>
<li>Minimum education: Bachelor&#39;s degree or an equivalent combination of education, training, and/or experience</li>
<li>Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience</li>
<li>Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position</li>
<li>Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</li>
<li>Visa sponsorship: We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</li>
</ul>
<p>We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you&#39;re interested in this work. We think AI systems like the ones we&#39;re building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.</p>
<p>Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you&#39;re ever unsure about a communication, don&#39;t click any links; visit anthropic.com/careers directly for confirmed position openings.</p>
<p>How we&#39;re different</p>
<p>We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact, advancing our long-term goals of steerable, trustworthy AI, rather than work on small</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$300,000-$405,000 USD</Salaryrange>
      <Skills>Python, Cloud infrastructure platforms (AWS or GCP), Data infrastructure, Large datasets, Internal tools, Machine learning models, Econometrics, Statistics, Quantitative social science research, Large language models, AI systems, ML research workflows, Full-stack mindset, Strong communication skills, Ambiguity tolerance, Problem-solving skills, Collaboration skills</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a company that creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5071132008</Applyto>
      <Location>San Francisco, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>b0ef8d51-d38</externalid>
      <Title>Sr. Software Engineer</Title>
      <Description><![CDATA[<p>We are seeking a talented and experienced Senior Software Engineer who is passionate about building high-quality, scalable web &amp; desktop native applications using modern frontend and backend technologies.</p>
<p>As a Senior Software Engineer, you will own significant features end-to-end, tackle technical hurdles, and enrich the team through your engineering experience, including mentorship of junior engineers.</p>
<p>You will guide projects with multiple engineers collaborating to deliver major features. You will work jointly in a cross-functional team, including working closely with Product Managers to advocate for technical initiatives for the team.</p>
<p>This position reports to our Engineering Manager, who is based in London, and is looking for someone to join the team in our London office.</p>
<p>Please note, this is a hybrid position with an expectation to be in the office 2-3 times per week.</p>
<p>Responsibilities:</p>
<ul>
<li>Develop and maintain Dialpad&#39;s web &amp; desktop applications using modern technologies.</li>
<li>Write clear and complete architectural design documents that other team members can easily understand.</li>
<li>Provide estimates on technical resources and requirements necessary to plan and begin projects.</li>
<li>Develop and maintain the WFM web application and services using modern technologies.</li>
<li>Write clean, modular, and maintainable code using best practices along with unit tests.</li>
<li>Participate in code reviews to ensure code quality, maintainability, and scalability.</li>
<li>Ensure that features are shipped on time and with the highest quality.</li>
<li>Take on-call activities to support and resolve issues arising from QA and customers.</li>
<li>Be responsible for deploying new releases on a weekly release cadence.</li>
<li>Collaborate with cross-functional teams to build and use standard components and practices across Dialpad products.</li>
<li>Mentor junior engineers and help them grow their skills and expertise.</li>
</ul>
<p>Requirements:</p>
<ul>
<li>8+ years of experience in software engineering.</li>
<li>Strong experience with Python/TypeScript or other languages, Vue/React, Node.js, HTML, CSS, JavaScript, GraphQL, and cloud infrastructures (Google Cloud Platform is a plus).</li>
<li>Experience with performance and optimization problems and a demonstrated ability to both diagnose and prevent them.</li>
<li>Experience with databases, SQL/NoSQL.</li>
<li>Experience with building reusable and modular components, both frontend and backend.</li>
<li>Experience with mentoring junior engineers and helping them grow their skills.</li>
<li>Experience with highly agile and iterative development processes.</li>
<li>Strong debugging and troubleshooting skills.</li>
<li>Strong communication and collaboration skills.</li>
</ul>
<p>Why Join Dialpad:</p>
<ul>
<li>Work at the center of the AI transformation in business communications.</li>
<li>Build and ship agentic AI products that are redefining how companies operate.</li>
<li>Join a team where AI amplifies every employee’s impact.</li>
<li>Competitive salary, comprehensive benefits, and real opportunities for growth.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, TypeScript, Vue, React, Node.js, HTML, CSS, JavaScript, GraphQL, cloud infrastructures, performance and optimization, databases, SQL/NoSQL, modular components, agile and iterative development processes, debugging and troubleshooting, communication and collaboration</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Dialpad</Employername>
      <Employerlogo>https://logos.yubhub.co/dialpad.com.png</Employerlogo>
      <Employerdescription>Dialpad is an AI-native business communications platform that unifies calling, messaging, meetings, and contact center on a single platform.</Employerdescription>
      <Employerwebsite>https://dialpad.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/dialpad/jobs/8397034002</Applyto>
      <Location>London, UK</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>6a7d182d-c49</externalid>
      <Title>Solutions Architect - Kubernetes</Title>
      <Description><![CDATA[<p>As a Solutions Architect at CoreWeave, you will play a vital role in helping customers succeed with our cloud infrastructure offerings, focusing on Kubernetes solutions within high-performance compute (HPC) environments.</p>
<p>Your primary responsibility will be to serve as the primary technical point of contact for customers, establishing strong technical relationships and ensuring their success with CoreWeave&#39;s cloud infrastructure offerings.</p>
<p>You will collaborate closely with customers to understand their unique business needs and create, prototype, and deploy tailored solutions that align with their requirements.</p>
<p>You will lead proof of concept initiatives to showcase the value and viability of CoreWeave&#39;s solutions within specific environments.</p>
<p>You will drive technical leadership and direction during customer meetings, presentations, and workshops, addressing any technical queries or concerns that arise.</p>
<p>You will act as a virtual member of CoreWeave&#39;s Kubernetes product and engineering teams, identifying opportunities for product enhancement and collaborating with engineers to implement your suggestions.</p>
<p>You will offer valuable insights on product features, functionality, and performance, contributing regularly to discussions about product strategy and architecture.</p>
<p>You will conduct periodic technical reviews and assessments of customer workloads, pinpointing opportunities for workload optimization and suggesting suitable solutions.</p>
<p>You will stay informed of the latest developments and trends in Kubernetes, cloud computing and infrastructure, sharing your thought leadership with customers and internal stakeholders.</p>
<p>You will lead the prototyping and initiation of research and development efforts for emerging products and solutions, delivering prototypes and key insights for internal consumption.</p>
<p>You will represent CoreWeave at conferences and industry events, with occasional travel as required.</p>
<p>To be successful in this role, you will need to have a proven track record of working as a Solutions Architect, engineer, researcher, or technical account manager in cloud infrastructure, focusing on building distributed systems or HPC/cloud services, with an expertise focused on scalable Kubernetes solutions.</p>
<p>You will also need to have fluency in cloud computing concepts, architecture, and technologies with hands-on experience in designing and implementing cloud solutions.</p>
<p>In addition, you will need a proven track record of building customer relationships, communicating clearly, and breaking down complex technical concepts for both technical and non-technical audiences.</p>
<p>Preferred qualifications include code contributions to open-source inference frameworks, experience with scripting and automation related to Kubernetes clusters and workloads, experience with building solutions across multi-cloud environments, and client or customer-facing publications/talks on latency, optimization, or advanced model-server architectures.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$165,000 to $225,000 SGD</Salaryrange>
      <Skills>Cloud computing concepts, Kubernetes solutions, High-performance compute (HPC) environments, Distributed systems, Cloud infrastructure, Code contributions to open-source inference frameworks, Scripting and automation related to Kubernetes clusters and workloads, Building solutions across multi-cloud environments, Client or customer-facing publications/talks on latency, optimization, or advanced model-server architectures</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave is a cloud infrastructure provider that offers a platform for building and scaling AI workloads. It was founded in 2017 and became a publicly traded company in March 2025.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4649036006</Applyto>
      <Location>Singapore</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>489d4d8c-49e</externalid>
      <Title>Solutions Architect, AI/Cloudflare Developer Platform</Title>
      <Description><![CDATA[<p>As a Solutions Architect, Cloudflare AI / Developer Platform and a member of the sales team, you will help customers understand the value proposition of the Cloudflare Developer Platform and demonstrate how to effectively build applications with our products.</p>
<p>Every day as a Solution Architect is different. You will utilize both technical and business skills to advise customers and sales teams, support strategic opportunities, architect innovative solutions, and develop proofs of concept / demonstrations.</p>
<p>Your technical knowledge of Cloudflare&#39;s products and system design will be vital to designing solutions that meet our customers&#39; needs and expectations. Serving as a trusted technical advisor, Solution Architects guide and enable clients, partners, and teams within Cloudflare on product capabilities, positioning and competitive intelligence.</p>
<p>You will form a tight feedback loop with product, product marketing, and technical pre-sales to refine and evolve our products.</p>
<p>The ideal candidate possesses a consultative mindset, demonstrable success working with customers, and deep, practical knowledge of modern web technologies, cloud architecture, and experience building on a distributed serverless platform.</p>
<p>No matter your background, you have natural curiosity and desire to solve problems, achieve goals, and design the most elegant and efficient solutions to address client needs.</p>
<p>A successful Solution Architect at Cloudflare is able to act as a trusted advisor for our customers, while balancing the technical and business needs of the role – actively building and regularly presenting technical solutions to varied audiences.</p>
<p>Responsibilities:</p>
<ul>
<li>Partner with the sales organization to drive revenue, new customers and pipeline of AI and Developer Platform solutions.</li>
<li>Lead technical discovery with customers and jointly architect best practice solutions to meet customer needs.</li>
<li>Collaborate with cross-functional teams including product management, sales, and marketing to drive developer platform revenue and customer adoption.</li>
<li>Present to strategic customers as an expert of our Developer Platform solutions.</li>
<li>Align Director and C-Level perceived business and technical value with Cloudflare developer solutions.</li>
<li>Provide succinct feedback to cross-functional teams to deliver relevant Developer content, use cases, customer stories, and data-driven value propositions.</li>
</ul>
<p>Skill Requirements:</p>
<ul>
<li>5+ years of experience selling or supporting technical sales in the cloud computing industry.</li>
<li>Deep technical expertise across cloud infrastructure and AI/ML: you have built production systems that combine both as a solutions engineer, entrepreneur, or solution architect.</li>
<li>In-depth knowledge of at least one major public cloud provider (e.g., AWS, GCP, Azure).</li>
<li>Practical knowledge and experience designing systems. You have built and deployed a production web application either professionally or as a hobbyist and are able to clearly articulate the design and explain the considerations and trade-offs.</li>
<li>Software development experience delivering full-stack applications, preferably using modern JavaScript frameworks, a variety of databases, and serverless tooling.</li>
<li>Strong understanding of developer workflows (branching, versioning, CI/CD practices, system integrations).</li>
<li>Knowledge of key market players/competitors in the cloud computing, AI, and storage spaces.</li>
</ul>
<p>Other desirable skills areas include:</p>
<ul>
<li>You’ve built something on Cloudflare Workers.</li>
<li>AWS Solutions Architect or GCP Cloud Architect certifications.</li>
<li>Providing structured customer feedback to influence product direction.</li>
<li>Actively staying up-to-date with industry trends and advancements in cloud computing to inform product strategy and roadmap.</li>
</ul>
<p>Compensation:</p>
<p>This role is eligible to earn incentive compensation under Cloudflare’s Sales Compensation Plan. The estimated annual salary range includes the on-target incentive compensation that may be attained in this role under the Sales Compensation Plan.</p>
<p>For Bay Area based hires: Estimated annual salary of $212,000.00 - $292,000.00</p>
<p>Equity:</p>
<p>This role is eligible to participate in Cloudflare’s equity plan.</p>
<p>Benefits:</p>
<p>Cloudflare offers a complete package of benefits and programs to support you and your family. Our benefits programs can help you pay health care expenses, support caregiving, build capital for the future and make life a little easier and fun!</p>
<p>The below is a description of our benefits for employees in the United States, and benefits may vary for employees based outside the U.S.</p>
<p>Health &amp; Welfare Benefits:</p>
<ul>
<li>Medical/Rx Insurance</li>
<li>Dental Insurance</li>
<li>Vision Insurance</li>
<li>Flexible Spending Accounts</li>
<li>Commuter Spending Accounts</li>
<li>Fertility &amp; Family Forming Benefits</li>
<li>On-demand mental health support and Employee Assistance Program</li>
<li>Global Travel Medical Insurance</li>
</ul>
<p>Financial Benefits:</p>
<ul>
<li>Short and Long Term Disability Insurance</li>
<li>Life &amp; Accident Insurance</li>
<li>401(k) Retirement Savings Plan</li>
<li>Employee Stock Participation Plan</li>
</ul>
<p>Time Off:</p>
<ul>
<li>Flexible paid time off covering vacation and sick leave</li>
<li>Leave programs, including parental, pregnancy health, medical, and bereavement leave</li>
</ul>
<p>What Makes Cloudflare Special?</p>
<p>We’re not just a highly ambitious, large-scale technology company. We’re a highly ambitious, large-scale technology company with a soul. Fundamental to our mission to help build a better Internet is protecting the free and open Internet.</p>
<p>Project Galileo:</p>
<p>Since 2014, we&#39;ve equipped more than 2,400 journalism and civil society organizations in 111 countries with powerful tools to defend themselves against attacks that would otherwise censor their work (technology already used by Cloudflare’s enterprise customers), at no cost.</p>
<p>Athenian Project:</p>
<p>In 2017, we created the Athenian Project to ensure that state and local governments have the highest level of protection and reliability for free, so that their constituents have access to election information and voter registration.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>cloud infrastructure, AI/ML, public cloud provider, system design, modern web technologies, cloud architecture, distributed serverless platform, developer workflows, developer content, customer stories, data driven value propositions, Cloudflare Workers, AWS Solutions Architect, GCP Cloud Architect, structured customer feedback, industry trends, advancements in cloud computing</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Cloudflare</Employername>
      <Employerlogo>https://logos.yubhub.co/cloudflare.com.png</Employerlogo>
      <Employerdescription>Cloudflare is a technology company that helps build a better Internet by protecting and accelerating any Internet application online without adding hardware, installing software, or changing a line of code.</Employerdescription>
      <Employerwebsite>https://www.cloudflare.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/cloudflare/jobs/7505582</Applyto>
      <Location>Hybrid</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>e1c6866e-f9e</externalid>
      <Title>Staff Software Engineer, Backend (Cluj)</Title>
      <Description><![CDATA[<p>We are excited to expand our operations to Romania and build a tech hub in the region. As a Staff full-stack engineer, with a backend focus, you will be at the forefront of shaping the future of customer engagement! You&#39;ll be instrumental in delivering timely, actionable insights that drive business growth from day one. We&#39;re building a state-of-the-art Customer Data Platform, visualizing relevant insights for businesses post-onboarding and guiding customer engagement across all touch-points.</p>
<p>Responsibilities:</p>
<ul>
<li>Design, implement, and maintain backend services and APIs to support applications.</li>
<li>Build and optimize data storage solutions using Postgres, ClickHouse and Elasticsearch to ensure high performance and scalability.</li>
<li>Collaborate with cross-functional teams, including frontend engineers, data scientists, and machine learning engineers, to deliver end-to-end solutions.</li>
<li>Monitor and troubleshoot performance issues in distributed systems and databases.</li>
<li>Write clean, maintainable, and efficient code following best practices for backend development.</li>
<li>Participate in code reviews, testing, and continuous integration efforts.</li>
<li>Ensure security, scalability, and reliability of backend services.</li>
<li>Analyze and improve system architecture, focusing on performance bottlenecks, scaling, and security.</li>
</ul>
<p>Qualifications We Value:</p>
<ul>
<li>Proven experience as a Backend Engineer with a focus on database design and system architecture.</li>
<li>Strong expertise in ClickHouse or similar columnar databases for managing large-scale, real-time analytical queries.</li>
<li>Hands-on experience with Elasticsearch for indexing and searching large datasets.</li>
<li>Proficient in backend programming languages such as Python and Go.</li>
<li>Experience with RESTful API design and development.</li>
<li>Solid understanding of distributed systems, microservices architecture, and cloud infrastructure.</li>
<li>Experience with performance tuning, data modeling, and query optimization.</li>
<li>Strong problem-solving skills and attention to detail.</li>
<li>Excellent communication and teamwork abilities.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Postgres, ClickHouse, Elasticsearch, Python, Go, RESTful API design and development, Distributed systems, Microservices architecture, Cloud infrastructure, Performance tuning, Data modeling, Query optimization</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Cresta</Employername>
      <Employerlogo>https://logos.yubhub.co/cresta.ai.png</Employerlogo>
      <Employerdescription>Cresta is a private AI company that provides a customer data platform to help contact centers discover customer insights and behavioral best practices.</Employerdescription>
      <Employerwebsite>https://www.cresta.ai/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/cresta/jobs/5102480008</Applyto>
      <Location>Cluj, Romania (Hybrid)</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>dd44a200-1ac</externalid>
      <Title>Director of Engineering (Service Foundations)</Title>
      <Description><![CDATA[<p>Job Title: Director of Engineering (Service Foundations)</p>
<p>We are seeking a seasoned Director of Engineering to lead our Service Foundations team. As a key member of our executive engineering team, you will be responsible for building and operating distributed systems, driving company-wide efficiency, reliability, and automation.</p>
<p>In this role, you will work closely with leaders across the company, within engineering, as well as with product management, field engineering, recruiting, and HR. You will lead critical infrastructure initiatives that integrate AI-driven tooling directly into the infrastructure itself to make it more adaptive, scalable, and intelligent.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Solve real business needs at a large scale by applying your software engineering expertise</li>
<li>Ensure consistent delivery against milestones and strong alignment with the field working &#39;two-in-a-box&#39; with product leadership</li>
<li>Evolve organisational structure to align with long-term initiatives, build strong &#39;5 ingredient&#39; teams with good comms architecture</li>
<li>Manage technical debt, making long-term technical architecture decisions and balancing them against the product roadmap</li>
<li>Lead and participate in technical, product, and design discussions</li>
<li>Build, manage, and operate highly scalable services in the cloud</li>
<li>Grow leaders on the team by providing coaching, mentorship, and growth opportunities</li>
<li>Partner with other engineering and product leaders on planning, prioritisation, and staffing</li>
<li>Create a culture of excellence on the team while leading with empathy</li>
</ul>
<p>Requirements:</p>
<ul>
<li>20+ years of industry experience building and operating large-scale distributed systems</li>
<li>Proven ability to build, grow, and manage high-performing infrastructure teams, including developing managers and tech leads</li>
<li>Deep experience running large-scale cloud infrastructure systems (AWS, Azure, or GCP), ideally across multiple clouds or regions</li>
<li>Ability to translate requirements from internal engineering teams into clear priorities and execution plans</li>
<li>Fluent across the infrastructure stack (storage, orchestration, observability, and developer platforms), with intuition for how these layers interact</li>
<li>Ability to evaluate and evolve abstractions: knowing when to unify, when to localise, and how to reduce cognitive load for product teams</li>
<li>BS in Computer Science (Masters or PhD preferred)</li>
</ul>
<p>About Databricks</p>
<p>Databricks is the data and AI company. More than 10,000 organisations worldwide, including Comcast, Condé Nast, Grammarly, and over 50% of the Fortune 500, rely on the Databricks Data Intelligence Platform to unify and democratise data, analytics, and AI.</p>
<p>Benefits</p>
<p>At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of our employees. For specific details on the benefits offered in your region, click here.</p>
<p>Our Commitment to Diversity and Inclusion</p>
<p>At Databricks, we are committed to fostering a diverse and inclusive culture where everyone can excel. We take great care to ensure that our hiring practices are inclusive and meet equal employment opportunity standards.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>executive</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Cloud infrastructure systems, Distributed systems, Infrastructure as Code, Containerisation, Orchestration, Observability, Developer platforms</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data, analytics, and AI. It was founded by the original creators of Lakehouse, Apache Spark, Delta Lake, and MLflow.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8201768002</Applyto>
      <Location>Bengaluru, India</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>73b18ca5-83e</externalid>
      <Title>Customer Systems Administrator</Title>
      <Description><![CDATA[<p>As a Customer Systems Administrator on the Customer Systems team within Dropbox&#39;s Go-To-Market organization, you will play a crucial role in platform administration, access governance, and operational reliability across the systems that power Dropbox&#39;s customer-facing operations.</p>
<p>Day to day, you&#39;ll be configuring and maintaining platforms across the customer lifecycle, enforcing security and access control policies, and driving AI-powered tooling and automations. You&#39;ll gain direct ownership of production systems that the business relies on daily, deep cross-functional exposure to Engineering and Customer Experience and Success, access to industry-leading AI tooling, and the opportunity to shape how customer tooling scales at Dropbox.</p>
<p>Responsibilities:</p>
<ul>
<li>Manage and evolve the team&#39;s portfolio of support and post-sales platforms to meet the changing needs of the Customer Experience and Customer Success organizations.</li>
<li>Serve as a primary on-call responder, owning incident resolution and stakeholder communication during platform disruptions.</li>
<li>Drive the adoption and integration of AI-powered tools and automations across customer platforms to improve user efficiency and customer outcomes.</li>
<li>Partner with Engineering and Customer Experience and Success to translate business needs into platform solutions.</li>
<li>Identify and execute platform improvements that reduce operational friction, increase reliability, and scale with the business.</li>
<li>Build and maintain the team&#39;s automation tooling and scripting infrastructure.</li>
<li>Establish and maintain the documentation standards that keep the team operationally resilient, including SOPs, runbooks, and system guides.</li>
</ul>
<p>Requirements:</p>
<ul>
<li>4+ years of proven experience administering enterprise support or customer platforms in a production environment.</li>
<li>Proficiency with AI tools and scripting languages (e.g., Python, Bash, JavaScript), with a demonstrated comfort incorporating both into daily workflows to increase efficiency and output.</li>
<li>Demonstrated experience managing multi-platform environments.</li>
<li>Experience governing platform access: provisioning, role-based access control, and security policy enforcement.</li>
<li>Track record of shipping platform changes through structured processes: scoping, testing, communicating, and deploying without disruption.</li>
<li>Operational maturity, with demonstrated experience owning incident response, triaging escalations, and maintaining composure under pressure.</li>
<li>Ability to produce and maintain clear, structured documentation (SOPs, runbooks, and system guides) that is accessible to both technical and non-technical audiences.</li>
</ul>
<p>Preferred Qualifications:</p>
<ul>
<li>Experience administering support platforms such as Zendesk, Amazon Connect, or similar contact center and ticketing systems.</li>
<li>Proven experience deploying AI tooling or automation frameworks to improve team workflows and operational efficiency.</li>
<li>Working knowledge of AWS services, cloud infrastructure fundamentals, and familiarity with modern data platforms such as Databricks.</li>
<li>Experience with sales platforms supporting Sales &amp; Customer Success operations, such as Highspot, Planhat, and Outreach.</li>
<li>Platform admin certifications (e.g., Zendesk Admin, Salesforce Certified Admin).</li>
</ul>
<p>Compensation:</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>AI tools, scripting languages, Python, Bash, JavaScript, multi-platform environments, platform access, role-based access control, security policy enforcement, incident response, triaging escalations, composure under pressure, clear documentation, SOPs, runbooks, system guides, Zendesk, Amazon Connect, AWS services, cloud infrastructure fundamentals, Databricks, Highspot, Planhat, Outreach, platform admin certifications</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Dropbox</Employername>
      <Employerlogo>https://logos.yubhub.co/dropbox.com.png</Employerlogo>
      <Employerdescription>Dropbox is a technology company that provides a cloud-based file hosting service.</Employerdescription>
      <Employerwebsite>https://www.dropbox.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/dropbox/jobs/7768860</Applyto>
      <Location>Remote - Canada: Select locations</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>0396ac1c-dad</externalid>
      <Title>Senior Staff Engineer, Cloud Economics</Title>
      <Description><![CDATA[<p>Reddit is a community of communities. It&#39;s built on shared interests, passion, and trust, and is home to the most open and authentic conversations on the internet.</p>
<p>The Ads Foundations organization is responsible for the technical backbone powering Ads Monetization at scale. Within this ecosystem, efficient resource utilization is critical.</p>
<p>We are seeking a Senior Staff Engineer to serve as the Cloud Resources Technical Owner for the Ads Domain. You will be the primary engineering point of contact for the Senior Director in Ads and Cloud Operations/Resources (COR &amp; Opex) stakeholders.</p>
<p><strong>Responsibilities</strong></p>
<p>Technical Vision &amp; Strategy</p>
<ul>
<li>Define and drive the technical strategy for Cloud Resource management within Ads first, ensuring that cost accountability is built into the architecture of our systems.</li>
<li>High-Fidelity Investment Modeling: Elevate cloud estimation from guesswork to a rigorous engineering discipline. You will lead the high-quality forecasting of new cloud investments and efficiency projects, designing data-driven models to validate technical ROI before builds happen.</li>
<li>Design and implement a roadmap for Cost Observability 2.0, moving beyond simple reporting to real-time, service/team-level spend attribution and automated anomaly detection.</li>
</ul>
<p>Engineering &amp; Tooling Leadership</p>
<ul>
<li>Design and build internal platforms that programmatically enforce PnL accountability. You will engineer (or collaborate with Core Infrastructure partners) to deliver the dashboards, alerts, and governance tools that every Ads team relies on to manage their cloud footprint.</li>
<li>Architect automated frameworks for validating cost estimates and forecasting, replacing manual spreadsheets with data-driven software solutions.</li>
</ul>
<p>Scale &amp; Optimization</p>
<ul>
<li>Fight for observability by instrumenting deep telemetry into our cloud infrastructure. You will be hands-on in identifying inefficiencies (e.g., underutilized clusters, uncompressed data flows) and re-architecting critical paths for cost reduction.</li>
<li>Lead the technical validation of vendor and 3rd-party tool integration, ensuring we extract maximum engineering value from every dollar spent.</li>
</ul>
<p>Cultural &amp; Technical Stewardship</p>
<ul>
<li>Act as a role model for the Ads domain and the wider company. You will set the standard for how engineering teams think about Cost as a Non-Functional Requirement, eventually scaling these patterns to other domains.</li>
<li>Partner with Finance and Engineering leadership to translate Cloud Spend into actionable engineering tasks (e.g., refactor Service X to use Spot instances).</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>10+ years of software engineering experience, with a strong focus on public cloud infrastructure (AWS/GCP/Azure) and large-scale distributed systems.</li>
<li>Engineer-First Mindset: You are comfortable writing code (Go, Python, Java) to solve infrastructure problems. You don&#39;t just ask for a report; you build the API that generates it.</li>
<li>Deep Cloud Expertise: You have mastery over Kubernetes, container orchestration, and cloud-native storage, understanding exactly how architectural choices impact the bottom line.</li>
<li>Operational Excellence: Proven track record of building observability pipelines (Prometheus, Grafana, Datadog) that drive operational and financial alerts.</li>
<li>Influential Leader: Skilled at driving clarity in ambiguous spaces. You can convince a Principal Engineer to refactor their service for cost efficiency because you can prove the technical and business value.</li>
</ul>
<p><strong>Bonus Points</strong></p>
<ul>
<li>Experience building custom FinOps tooling or internal developer platforms.</li>
<li>Background in performance engineering or capacity planning for high-traffic ad tech environments.</li>
<li>Contributions to open-source projects related to cloud efficiency or observability.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$232,500-$325,500 USD</Salaryrange>
      <Skills>public cloud infrastructure, large-scale distributed systems, Kubernetes, container orchestration, cloud-native storage, observability pipelines, Prometheus, Grafana, Datadog</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Reddit Inc.</Employername>
      <Employerlogo>https://logos.yubhub.co/redditinc.com.png</Employerlogo>
      <Employerdescription>Reddit is a community-driven platform with over 121 million daily active unique visitors and 100,000+ active communities.</Employerdescription>
      <Employerwebsite>https://www.redditinc.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/reddit/jobs/7628291</Applyto>
      <Location>Remote - United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>755c5895-997</externalid>
      <Title>Manager, Product Engineering</Title>
      <Description><![CDATA[<p>At Instabase, we&#39;re committed to democratizing access to cutting-edge AI innovation. Our market opportunity is vast, with customers representing some of the largest and most complex organisations in the world. As an Manager, Product Engineering, you will lead a team responsible for the full-stack development of enterprise software, working closely with cross-functional teams to design and deliver high-impact solutions.</p>
<p>Responsibilities:</p>
<ul>
<li>Team Leadership – Build, manage, and develop a team of high-performing engineers, providing mentorship and career development while fostering a collaborative and inclusive culture.</li>
<li>Cross-Functional Collaboration – Partner with product, design, and technical writing teams to define the roadmap and drive execution.</li>
<li>End-to-End Execution – Oversee the entire software development lifecycle, from capacity planning and roadmapping to prototyping and production deployment.</li>
<li>Technical Leadership – Contribute to technical discussions and architectural decisions within your product area.</li>
<li>Quality &amp; Operational Excellence – Establish and uphold best practices to maintain a high-quality bar for all deliverables, ensuring reliability, scalability, and usability.</li>
<li>Innovation &amp; AI Integration – Leverage modern AI tools to improve team productivity and enhance product capabilities.</li>
</ul>
<p>About You:</p>
<ul>
<li>Experience – 5+ years of engineering management experience, with a track record of building and leading high-performing teams.</li>
<li>AI &amp; Data Expertise – Strong background in AI, ML, and data-driven products, with experience building and scaling intelligent applications.</li>
<li>Startup Mentality – Comfortable operating in a fast-paced startup environment, navigating ambiguity, and driving impactful results.</li>
<li>Technical Proficiency – Deep knowledge of modern technology stacks, including cloud infrastructure, container orchestration systems, TypeScript, React, and related tools.</li>
<li>SaaS &amp; Enterprise Experience – Proven ability to deliver SaaS-based enterprise software solutions at scale.</li>
<li>Process &amp; Productivity – Experience implementing SDLC, and leveraging modern productivity software (Jira, Confluence, Figma, etc.).</li>
<li>AI-Driven Development – Passion for integrating modern AI tools to optimise development workflows.</li>
</ul>
<p>Compensation: The base salary range for this role is $280,000 to $300,000 + bonus, equity, and benefits.</p>
<p>Benefits:</p>
<ul>
<li>Flexible PTO: Because life is better when you actually live it!</li>
<li>Comprehensive Coverage: Top-notch medical, dental, and vision insurance.</li>
<li>401(k) with Matching: We’ve got your back for a secure future.</li>
<li>Parental Leave &amp; Fertility Benefits: Supporting you in growing your family, your way.</li>
<li>Therapy Sessions Covered: Mental health matters, 10 free sessions through Samata Health.</li>
<li>Wellness Stipend: For gym memberships, fitness tech, or whatever keeps you thriving.</li>
<li>Lunch on Us: Enjoy a lunch credit when you’re in the office.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$280,000 to $300,000 + bonus, equity, and benefits</Salaryrange>
      <Skills>AI, ML, data-driven products, cloud infrastructure, container orchestration systems, TypeScript, React, SaaS-based enterprise software solutions, SDLC, productivity software</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Instabase</Employername>
      <Employerlogo>https://logos.yubhub.co/instabase.com.png</Employerlogo>
      <Employerdescription>Instabase is a global company with offices in San Francisco and Bengaluru, offering a consumption-based pricing model for customers to access its AI Hub platform features.</Employerdescription>
      <Employerwebsite>https://www.instabase.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/instabase/jobs/8419974002</Applyto>
      <Location>San Francisco, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>3439b4ff-d42</externalid>
      <Title>Engineering Manager, HADR</Title>
      <Description><![CDATA[<p>We&#39;re seeking an experienced Engineering Manager to join our High Availability and Disaster Recovery team. As a key member of our team, you will help develop our global architecture by combining less-available components and data centers into a highly available and resilient whole. You will work on latency-critical solutions where every millisecond matters and data redundancy is a hard requirement. Your work will enable Stripe to increase the GDP of the internet by providing uptime and data protection which have historically been impossible.</p>
<p>Responsibilities:</p>
<ul>
<li>Lead and manage a team of talented engineers on the team, providing mentorship, guidance, and support to ensure their success.</li>
<li>Drive the execution of projects, overseeing the entire development lifecycle from planning to delivery, while maintaining high standards of quality and timely completion.</li>
<li>Help influence peers / managers and build consensus while dealing with ambiguity</li>
<li>Build your team - formalizing role definitions, defining charter and ownership boundaries and taking a newly formed team into a high-functioning one</li>
</ul>
<p>Who you are: We&#39;re looking for someone who meets the minimum requirements to be considered for the role. If you meet these requirements, you are encouraged to apply. The preferred qualifications are a bonus, not a requirement.</p>
<p>Minimum requirements:</p>
<ul>
<li>4+ years of software development experience</li>
<li>2+ years of cloud development or management experience</li>
<li>Professional working proficiency in English</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>software development experience, cloud development or management experience, English language proficiency, distributed system concepts, high-availability systems, chaos engineering, disaster recovery design, cloud infrastructure, multi-region deployments, document databases, MongoDB</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Stripe</Employername>
      <Employerlogo>https://logos.yubhub.co/stripe.com.png</Employerlogo>
      <Employerdescription>Stripe is a financial infrastructure platform for businesses, used by millions of companies worldwide.</Employerdescription>
      <Employerwebsite>https://stripe.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/stripe/jobs/7657997</Applyto>
      <Location>US Remote</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>01102ded-ef1</externalid>
      <Title>Senior Manager, Technical Solutions Manager</Title>
      <Description><![CDATA[<p>The Customer Experience (CX) Organisation at CoreWeave is dedicated to ensuring every client running AI workloads at scale has a seamless, reliable, and high-performance experience.</p>
<p>We are on the search for a remarkable Senior Manager of Technical Solutions Management (TSM) who shares our passion and has an understanding of GPU infrastructure and AI Applications to join the team.</p>
<p>This critical leadership role will oversee the TSM function, which is responsible for managing technical relationships with strategic customers, defining and delivering on technical requirements, and driving the execution of complex programs from concept to successful completion.</p>
<p>In this role, you will:</p>
<ul>
<li>Lead the TSM function within CoreWeave by building and leading the team, hiring top talent, and fostering their growth to ensure they excel as the primary technical advocates for CoreWeave&#39;s most strategic customers</li>
</ul>
<ul>
<li>Collaborate across functions, working closely with leaders in Solutions Architecture, Support, Sales, and Product Engineering to elevate and enhance the CoreWeave customer experience</li>
</ul>
<ul>
<li>Directly engage and collaborate with key customers to understand their AI workloads, pain points, and future requirements to continuously improve our service offerings</li>
</ul>
<ul>
<li>Define and monitor key performance indicators (KPIs) to evaluate program success and effectiveness, leveraging multiple sources of insight</li>
</ul>
<p>You will identify and eliminate inefficiencies, accelerate operational speed, and deliver exceptional results that reinforce CoreWeave&#39;s position as a market leader.</p>
<p>The base salary range for this role is $207,000 to $275,000.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$207,000 to $275,000</Salaryrange>
      <Skills>GPU infrastructure, AI Applications, Cloud infrastructure, Kubernetes, High-performance computing, Leadership, Technical program management, Product management, Delivery management, Communication</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave is a cloud infrastructure provider that enables innovators to build and scale AI with confidence.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4646569006</Applyto>
      <Location>Sunnyvale, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>438a5373-c98</externalid>
      <Title>Senior Manager, Data Center Logistics &amp; Inventory</Title>
      <Description><![CDATA[<p>Job Title: Senior Manager, Data Center Logistics &amp; Inventory</p>
<p>CoreWeave is The Essential Cloud for AI. Built for pioneers by pioneers, CoreWeave delivers a platform of technology, tools, and teams that enables innovators to build and scale AI with confidence. Trusted by leading AI labs, startups, and global enterprises, CoreWeave combines superior infrastructure performance with deep technical expertise to accelerate breakthroughs and turn compute into capability.</p>
<p>As the Global Logistics Data Center Operations Lead – (Americas or EMEA), you will:</p>
<ul>
<li>Lead end-to-end data center logistics operations for your region, spanning ramp and sustaining sites: dock-to-cage, storerooms, in-hall material flows, spares, and RMAs.</li>
</ul>
<ul>
<li>Coordinate and develop Inventory Control Specialists (ICS) / Logistics Technicians, Regional Managers, and all DC logistics IC team members within your region.</li>
</ul>
<ul>
<li>Translate build and M&amp;O plans into DC logistics capacity, coverage, and staffing (by site and by mode: ramp vs sustain).</li>
</ul>
<ul>
<li>Own DC storeroom performance: inventory accuracy, pick accuracy, dock-to-stock, order-to-“ready for install”, chain-of-custody, and audit readiness.</li>
</ul>
<ul>
<li>Standardize and enforce SOPs, playbooks, and visual management for data center logistics across your region, aligned to global GLO standards.</li>
</ul>
<ul>
<li>Partner closely with the Global Logistics Warehouse &amp; FSL Operations Lead to ensure frictionless handoffs from warehouses/FSLs into DC logistics (crate handling, documentation, timing, and proof of delivery).</li>
</ul>
<ul>
<li>Drive 3PL/4PL and white-glove performance at the DC interface (dock-to-cage, JIT deliveries, crate handling, returns) for your region.</li>
</ul>
<ul>
<li>Lead regional cadence reviews with DC Ops, GLO, Security, and other stakeholders; publish KPI packs; own RCA/CAPA to closure for DC logistics issues.</li>
</ul>
<ul>
<li>Act as the primary DC logistics point of contact for your region with Data Center Operations leadership, aligning site-level needs with global logistics capabilities.</li>
</ul>
<ul>
<li>Mentor and grow frontline and regional DC logistics leaders and ICs, building a strong bench for future expansion.</li>
</ul>
<p>About the role:</p>
<p>We are looking for a hands-on, operations-minded Global Logistics Data Center Operations Lead to standardize, coordinate, and continuously improve data center logistics execution across your region.</p>
<p>In this role, you will own all in-data-center logistics functions in your region, including:</p>
<ul>
<li>ICS / Logistics Technicians and their managers (regional and site level).</li>
</ul>
<ul>
<li>Storeroom operations (receiving, tagging, binning, cycle counts, replenishment).</li>
</ul>
<ul>
<li>Data hall support logistics (kitting, pre-stage, JIT deliveries, crate handling, returns).</li>
</ul>
<ul>
<li>Spares and RMA flows in partnership with warehouses, FSLs, and 3PLs.</li>
</ul>
<p>You will work in tight partnership with:</p>
<ul>
<li>The Global Logistics Warehouse &amp; FSL Operations Lead, who owns the upstream warehouse/FSL network, WMS, and lease portfolio.</li>
</ul>
<ul>
<li>Data Center Operations, who own construction and run operations in the halls.</li>
</ul>
<ul>
<li>Security, Trade Compliance, IT/Systems, Finance, Procurement, and Real Estate/Legal to ensure DC logistics is safe, compliant, and scalable.</li>
</ul>
<p>This is a senior operations lead role with people leadership responsibilities at the regional level and strong cross-functional influence.</p>
<p>Who You Are:</p>
<ul>
<li>Bachelor’s degree in Supply Chain, Logistics, Operations Management, Industrial Engineering, Business, or a related field (or equivalent experience).</li>
</ul>
<ul>
<li>8+ years in data center logistics, warehouse/DC operations, or high-value hardware logistics, with at least 3–5 years leading multi-site teams.</li>
</ul>
<ul>
<li>Proven experience managing or leading ICS / logistics technicians / storeroom or DC logistics teams across multiple sites or a region.</li>
</ul>
<ul>
<li>Strong background in inventory control and serialized asset management (cycle counts, reconciliations, variance analysis, SOX/audit readiness).</li>
</ul>
<ul>
<li>Experience orchestrating ramp (build) and sustaining logistics modes at data centers or similarly complex environments.</li>
</ul>
<ul>
<li>Comfortable working with and improving WMS, ERP, and asset/CMDB tools; strong spreadsheet/BI skills.</li>
</ul>
<ul>
<li>Demonstrated ability to standardize SOPs and playbooks, stand up new sites, and drive behavioral adoption across teams.</li>
</ul>
<ul>
<li>Excellent cross-functional communicator, able to work with DC operations, engineering, security, finance, procurement, and vendors.</li>
</ul>
<ul>
<li>Willingness and ability to travel regionally (e.g., 25–40% depending on footprint and phase).</li>
</ul>
<p>Preferred:</p>
<ul>
<li>Experience in AI, cloud infrastructure, or hyperscaler data center environments.</li>
</ul>
<ul>
<li>Experience partnering with 3PL/4PL providers, white-glove carriers, and regional warehouse networks.</li>
</ul>
<ul>
<li>Familiarity with 5S/lean, continuous improvement, and structured RCA/CAPA.</li>
</ul>
<ul>
<li>Experience mentoring frontline managers and ICs; demonstrated team-building and talent development track record.</li>
</ul>
<p>Why This Role Matters:</p>
<p>Data center logistics is the backbone that keeps builds on schedule and live environments stable. As the Global Logistics Data Center Operations Lead, you:</p>
<ul>
<li>Ensure materials, spares, and RMAs flow cleanly into and out of data centers.</li>
</ul>
<ul>
<li>Tie day-to-day dock and storeroom work to measurable service outcomes for build and operations.</li>
</ul>
<ul>
<li>Act as the single-threaded owner for DC logistics performance in your region, enabling scale through repeatable standards.</li>
</ul>
<p>Why CoreWeave?</p>
<p>At CoreWeave, we work hard, have fun, and move fast! We’re in an exciting stage of hyper-growth that you will not want to miss out on. We’re not afraid of a little chaos, and we’re constantly learning. Our team cares deeply about how we build our product and how we work together, which is represented through our core values:</p>
<ul>
<li>Be Curious at Your Core</li>
</ul>
<ul>
<li>Act Like an Owner</li>
</ul>
<ul>
<li>Empower Employees</li>
</ul>
<ul>
<li>Deliver Best-in-Class Client Experiences</li>
</ul>
<ul>
<li>Achieve More Together</li>
</ul>
<p>We support and encourage an entrepreneurial outlook and independent thinking. We foster an environment that encourages collaboration and provides the opportunity to develop innovative solutions to complex problems. As we get set for take off, the growth opportunities within the organization are constantly expanding. You will be surrounded by some of the best talent in the industry, who will want to learn from you, too. Come join us!</p>
<p>The base salary range for this role is $161,000 to $237,000. The starting salary will be determined based on job-related knowledge, skills, experience, and market location. We strive for both market alignment and internal equity when determining compensation. In addition to base salary, our total rewards package includes a discretionary bonus, equity awards, and a comprehensive benefits program (all based on eligibility).</p>
<p>What We Offer</p>
<p>The range we’ve posted represents the typical compensation range for this role. To determine actual compensation, we review the market rate for each candidate which can include a variety of factors. These include qualifications, experience, interview performance, and location.</p>
<p>In addition to a competitive salary, we offer a variety of benefits to support your needs, including:</p>
<ul>
<li>Medical, dental, and vision insurance - 100% paid for by CoreWeave</li>
</ul>
<ul>
<li>Company-paid Life Insurance</li>
</ul>
<ul>
<li>Voluntary supplemental life insurance</li>
</ul>
<ul>
<li>Short and long-term disability insurance</li>
</ul>
<ul>
<li>Flexible Spending Account</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$161,000 to $237,000</Salaryrange>
      <Skills>Supply Chain, Logistics, Operations Management, Industrial Engineering, Business, Inventory Control, Serialized Asset Management, WMS, ERP, Asset/CMDB Tools, Spreadsheet/BI Skills, Standardization, Playbooks, Visual Management, RCA/CAPA, Cross-Functional Communication, Leadership, Team Development, AI, Cloud Infrastructure, Hyperscaler Data Center Environments, 3PL/4PL Providers, White-Glove Carriers, Regional Warehouse Networks, 5S/Lean, Continuous Improvement, Structured RCA/CAPA, Mentoring, Frontline Managers, ICs, Team-Building, Talent Development</Skills>
      <Category>Operations</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave is a cloud infrastructure provider that delivers a platform of technology, tools, and teams for building and scaling AI.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4652717006</Applyto>
      <Location>Livingston, NJ / New York, NY / Sunnyvale, CA / Bellevue, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>f296b6b0-e66</externalid>
      <Title>Senior Software Security Engineer</Title>
      <Description><![CDATA[<p>Job Title: Senior Software Security Engineer</p>
<p>About the Role: The Security Engineering team&#39;s mission is to safeguard our AI systems and maintain the trust of our users and society at large. Whether we&#39;re developing critical security infrastructure, building secure development practices, or partnering with our research and product teams, we are committed to operating as a world-class security organization and keeping the safety and trust of our users at the forefront of everything we do.</p>
<p>Responsibilities:</p>
<ul>
<li>Build security for large-scale AI clusters, implementing robust cloud security architecture including IAM, network segmentation, and encryption controls</li>
</ul>
<ul>
<li>Design secure-by-design workflows and secure CI/CD pipelines across our services, and help build secure cloud infrastructure, drawing on expertise in various cloud environments, Kubernetes security, container orchestration, and identity management</li>
</ul>
<ul>
<li>Ship and operate secure, high-reliability services using Infrastructure-as-Code (IaC) practices and GitOps workflows</li>
</ul>
<ul>
<li>Apply deep expertise in threat modeling and risk assessment to secure complex multi-cloud environments</li>
</ul>
<ul>
<li>Mentor engineers and contribute to hiring and growth of the Security team</li>
</ul>
<p>Requirements:</p>
<ul>
<li>5-15+ years of software engineering experience implementing and maintaining critical systems at scale</li>
</ul>
<ul>
<li>Bachelor&#39;s degree in Computer Science/Software Engineering or equivalent industry experience</li>
</ul>
<ul>
<li>Strong software engineering skills in Python or at least one systems language (Go, Rust, C/C++)</li>
</ul>
<ul>
<li>Experience managing infrastructure at scale with DevOps and cloud automation best practices</li>
</ul>
<ul>
<li>Track record of driving engineering excellence through high standards, constructive code reviews, and mentorship</li>
</ul>
<ul>
<li>Proven ability to lead cross-functional security initiatives and navigate complex organizational dynamics</li>
</ul>
<ul>
<li>Outstanding communication skills, translating technical concepts effectively across all organizational levels</li>
</ul>
<ul>
<li>Demonstrated success in bringing clarity and ownership to ambiguous technical problems</li>
</ul>
<ul>
<li>Strong systems thinking with ability to identify and mitigate risks in complex environments</li>
</ul>
<ul>
<li>Low ego, high empathy engineer who attracts talent and supports diverse, inclusive teams</li>
</ul>
<ul>
<li>Experience supporting fast-paced startup engineering teams</li>
</ul>
<ul>
<li>Passionate about AI safety and alignment, with keen interest in making AI systems more interpretable and aligned with human values</li>
</ul>
<p>Salary: The annual compensation range for this role is £240,000-£325,000 GBP.</p>
<p>Required Skills:</p>
<ul>
<li>Cloud security architecture</li>
<li>IAM</li>
<li>Network segmentation</li>
<li>Encryption controls</li>
<li>Kubernetes security</li>
<li>Container orchestration</li>
<li>Identity management</li>
<li>Infrastructure-as-Code (IaC)</li>
<li>GitOps</li>
<li>Threat modeling</li>
<li>Risk assessment</li>
<li>DevOps</li>
<li>Cloud automation</li>
<li>Python</li>
<li>Go</li>
<li>Rust</li>
<li>C/C++</li>
</ul>
<p>Preferred Skills:</p>
<ul>
<li>Secure-by-design workflows</li>
<li>CI/CD pipelines</li>
<li>Secure cloud infrastructure</li>
<li>Cloud environments</li>
<li>Containerization</li>
<li>Identity and access management</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>£240,000-£325,000 GBP</Salaryrange>
      <Skills>Cloud security architecture, IAM, Network segmentation, Encryption controls, Kubernetes security, Container orchestration, Identity management, Infrastructure-as-Code (IaC), GitOps, Threat modeling, Risk assessment, DevOps, Cloud automation, Python, Go, Rust, C/C++, Secure-by-design workflows, CI/CD pipelines, Secure cloud infrastructure, Cloud environments, Containerization, Identity and access management</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation that creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5022845008</Applyto>
      <Location>London, UK</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>32334977-1bd</externalid>
      <Title>Senior Infrastructure Engineer</Title>
      <Description><![CDATA[<p><strong>About Us</strong></p>
<p>Descript is on a mission to make audio and video content creation and editing fast, easy, and accessible to all. We are building a cutting-edge media editor incorporating real-time collaboration, ground-breaking UX, and state-of-the-art AI.</p>
<p><strong>Job Description</strong></p>
<p>As a Senior Infrastructure Engineer, you will drive projects that let engineers better understand and improve the performance, availability, and quality of what they ship. You will be owning and improving the core production infrastructure and building blocks upon which other engineers depend.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Develop technical and business solutions that enable engineers to improve the quality and reliability of product features and systems that they build.</li>
<li>Drive improvements to the reliability of our core infrastructure, such as production clusters, networking, databases, and observability systems.</li>
<li>Champion best practices during reviews of code, technical designs, and launch plans.</li>
<li>Own our incident management and fire drill processes.</li>
<li>Work with engineering leadership to set goals and prioritize production reliability.</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>5+ years experience in production/site-reliability engineering OR 5+ years of server-side software engineering with an interest in working on core infrastructure</li>
<li>A solid understanding of at least two of: public cloud infrastructure, Linux systems administration, and DevOps tooling.</li>
<li>Basic coding skills to work on automation and technical guardrails.</li>
<li>Strong written and verbal communication skills, and the ability to collaborate with other functions</li>
<li>Experience mentoring engineers through code reviews, architecture discussions, and technical leadership</li>
</ul>
<p><strong>Nice to Haves</strong></p>
<p>Experience with:</p>
<ul>
<li>TypeScript</li>
<li>Kubernetes</li>
<li>Google Cloud Platform</li>
<li>Terraform</li>
</ul>
<p>The base salary range for this role is $191K-$250K.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$191K-$250K</Salaryrange>
      <Skills>public cloud infrastructure, Linux systems administration, DevOps tooling, basic coding skills, strong written and verbal communication skills, TypeScript, Kubernetes, Google Cloud Platform, Terraform</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Descript</Employername>
      <Employerlogo>https://logos.yubhub.co/descript.com.png</Employerlogo>
      <Employerdescription>Descript is building a simple, intuitive, fully-powered editing tool for video and audio. It has around 150 employees.</Employerdescription>
      <Employerwebsite>https://descript.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/descript/jobs/7500000003</Applyto>
      <Location>Remote, San Francisco, California, United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>e394b0fa-2ba</externalid>
      <Title>Staff Software Engineer, Inference</Title>
      <Description><![CDATA[<p><strong>About the role</strong></p>
<p>Our Inference team is responsible for building and maintaining the critical systems that serve Claude to millions of users worldwide. We bring Claude to life by serving our models via the industry&#39;s largest compute-agnostic inference deployments. We are responsible for the entire stack from intelligent request routing to fleet-wide orchestration across diverse AI accelerators.</p>
<p>As a Staff Software Engineer on our Inference team, you will work end to end, identifying and addressing key infrastructure blockers to serve Claude to millions of users while enabling breakthrough AI research. Strong candidates should have familiarity with performance optimization, distributed systems, large-scale service orchestration, and intelligent request routing. Familiarity with LLM inference optimization, batching strategies, and multi-accelerator deployments is highly encouraged but not strictly necessary.</p>
<p><strong>Strong candidates may also have experience with</strong></p>
<ul>
<li>High-performance, large-scale distributed systems</li>
<li>Implementing and deploying machine learning systems at scale</li>
<li>Load balancing, request routing, or traffic management systems</li>
<li>LLM inference optimization, batching, and caching strategies</li>
<li>Kubernetes and cloud infrastructure (AWS, GCP)</li>
<li>Python or Rust</li>
</ul>
<p><strong>You may be a good fit if you</strong></p>
<ul>
<li>Have significant software engineering experience, particularly with distributed systems</li>
<li>Are results-oriented, with a bias towards flexibility and impact</li>
<li>Pick up slack, even if it goes outside your job description</li>
<li>Want to learn more about machine learning systems and infrastructure</li>
<li>Thrive in environments where technical excellence directly drives both business results and research breakthroughs</li>
<li>Care about the societal impacts of your work</li>
</ul>
<p><strong>Representative projects across the org</strong></p>
<ul>
<li>Designing intelligent routing algorithms that optimize request distribution across thousands of accelerators</li>
<li>Autoscaling our compute fleet to dynamically match supply with demand across production, research, and experimental workloads</li>
<li>Building production-grade deployment pipelines for releasing new models to millions of users</li>
<li>Integrating new AI accelerator platforms to maintain our hardware-agnostic competitive advantage</li>
<li>Contributing to new inference features (e.g., structured sampling, prompt caching)</li>
<li>Supporting inference for new model architectures</li>
<li>Analyzing observability data to tune performance based on real-world production workloads</li>
<li>Managing multi-region deployments and geographic routing for global customers</li>
</ul>
<p><strong>Deadline to apply</strong></p>
<p>None. Applications will be reviewed on a rolling basis.</p>
<p><strong>Annual compensation range</strong></p>
<p>The annual compensation range for this role is £325,000-£390,000 GBP.</p>
<p><strong>Logistics</strong></p>
<ul>
<li>Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience</li>
<li>Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience</li>
<li>Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position</li>
<li>Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</li>
<li>Visa sponsorship: We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</li>
</ul>
<p><strong>Why work with us?</strong></p>
<p>We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact, advancing our long-term goals of steerable, trustworthy AI, rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We&#39;re an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.</p>
<p>The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI &amp; Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.</p>
<p><strong>Come work with us!</strong></p>
<p>Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>£325,000-£390,000 GBP</Salaryrange>
      <Skills>performance optimization, distributed systems, large-scale service orchestration, intelligent request routing, LLM inference optimization, batching strategies, multi-accelerator deployments, Kubernetes, cloud infrastructure, Python, Rust, high-performance distributed systems, machine learning systems, load balancing, request routing, traffic management, caching strategies</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation that creates reliable, interpretable, and steerable AI systems. It has a team of researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5097742008</Applyto>
      <Location>London, UK</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>e5a3deb2-908</externalid>
      <Title>Senior Software Engineer, Inference</Title>
      <Description><![CDATA[<p>Job Title: Senior Software Engineer, Inference</p>
<p>About the Role:</p>
<p>Our Inference team is responsible for building and maintaining the critical systems that serve Claude to millions of users worldwide. We bring Claude to life by serving our models via the industry&#39;s largest compute-agnostic inference deployments. We are responsible for the entire stack from intelligent request routing to fleet-wide orchestration across diverse AI accelerators.</p>
<p>The team has a dual mandate: maximizing compute efficiency to serve our explosive customer growth, while enabling breakthrough research by giving our scientists the high-performance inference infrastructure they need to develop next-generation models. We tackle complex, distributed systems challenges across multiple accelerator families and emerging AI hardware running in multiple cloud platforms.</p>
<p>Responsibilities:</p>
<ul>
<li>Designing intelligent routing algorithms that optimize request distribution across thousands of accelerators</li>
<li>Autoscaling our compute fleet to dynamically match supply with demand across production, research, and experimental workloads</li>
<li>Building production-grade deployment pipelines for releasing new models to millions of users</li>
<li>Integrating new AI accelerator platforms to maintain our hardware-agnostic competitive advantage</li>
<li>Contributing to new inference features (e.g., structured sampling, prompt caching)</li>
<li>Supporting inference for new model architectures</li>
<li>Analyzing observability data to tune performance based on real-world production workloads</li>
<li>Managing multi-region deployments and geographic routing for global customers</li>
</ul>
<p>Requirements:</p>
<ul>
<li>Significant software engineering experience, particularly with distributed systems</li>
<li>A results-oriented mindset, with a bias towards flexibility and impact</li>
<li>Willingness to pick up slack, even when it goes outside your job description</li>
<li>Eagerness to learn more about machine learning systems and infrastructure</li>
<li>Ability to thrive in environments where technical excellence directly drives both business results and research breakthroughs</li>
<li>Care for the societal impacts of your work</li>
</ul>
<p>Benefits:</p>
<ul>
<li>Competitive compensation and benefits</li>
<li>Optional equity donation matching</li>
<li>Generous vacation and parental leave</li>
<li>Flexible working hours</li>
<li>Lovely office space in which to collaborate with colleagues</li>
</ul>
<p>Note: The salary range for this role is €235,000-€295,000 EUR per year.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>€235,000-€295,000 EUR per year</Salaryrange>
      <Skills>High-performance, large-scale distributed systems, Implementing and deploying machine learning systems at scale, Load balancing, request routing, or traffic management systems, LLM inference optimization, batching, and caching strategies, Kubernetes and cloud infrastructure (AWS, GCP), Python or Rust</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation that creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/4641822008</Applyto>
      <Location>Dublin, IE</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>372999e8-579</externalid>
      <Title>Senior Software Engineer II, AI Workload Orchestration</Title>
      <Description><![CDATA[<p>As a Senior Software Engineer II on the AI Workload Orchestration team, you will help build and operate CoreWeave&#39;s Kubernetes-native platform for admitting, scheduling, and operating AI workloads at scale.</p>
<p>This platform integrates multiple orchestration and scheduling frameworks such as Kueue, Volcano, and Ray to support modern AI training and inference workflows. It complements SUNK (Slurm on Kubernetes) by providing a Kubernetes-first, cloud-native orchestration layer with deep platform integration.</p>
<p>You will own meaningful components of the platform, drive reliability and performance improvements, and help scale the system as customer demand and workload complexity continue to grow.</p>
<p>Responsibilities:</p>
<ul>
<li>Design, build, and operate Kubernetes-native services for AI workload orchestration and scheduling</li>
<li>Own one or more platform components end-to-end, including design, implementation, testing, and on-call support</li>
<li>Improve scheduling latency, cluster utilization, and workload reliability through metrics-driven engineering</li>
<li>Contribute to architectural discussions across services and influence design decisions within the platform</li>
<li>Work closely with adjacent teams (CKS, infrastructure, managed inference) to ensure clean interfaces and integrations</li>
<li>Mentor junior engineers and raise the quality bar for code, design, and operations</li>
</ul>
<p>About the role:</p>
<ul>
<li>5–8 years of professional software engineering experience in distributed systems, cloud infrastructure, or platform engineering</li>
<li>Strong experience building production systems in Go (Python or C++ a plus)</li>
<li>Solid understanding of Kubernetes fundamentals, APIs, controllers, and operating services in production</li>
<li>Experience working with scheduling, resource management, or quota-based systems</li>
<li>Proven ability to improve system reliability and performance using data and operational metrics</li>
<li>Comfortable owning services in production and participating in on-call rotations</li>
</ul>
<p>Preferred:</p>
<ul>
<li>Experience with Kubernetes-native orchestration frameworks such as Kueue, Volcano, Ray, Kubeflow, or Argo Workflows</li>
<li>Familiarity with GPU-based workloads, ML training, or inference pipelines</li>
<li>Knowledge of scheduling concepts such as quota enforcement, pre-emption, and backfilling</li>
<li>Experience with reliability practices including SLOs, alerting, and incident response</li>
<li>Exposure to AI infrastructure, HPC, or large-scale distributed compute environments</li>
</ul>
<p>Why CoreWeave?</p>
<p>At CoreWeave, we work hard, have fun, and move fast! We’re in an exciting stage of hyper-growth that you will not want to miss out on. We’re not afraid of a little chaos, and we’re constantly learning. Our team cares deeply about how we build our product and how we work together, which is represented through our core values:</p>
<ul>
<li>Be Curious at Your Core</li>
<li>Act Like an Owner</li>
<li>Empower Employees</li>
<li>Deliver Best-in-Class Client Experiences</li>
<li>Achieve More Together</li>
</ul>
<p>The base salary range for this role is $165,000 to $242,000. The starting salary will be determined based on job-related knowledge, skills, experience, and market location. We strive for both market alignment and internal equity when determining compensation. In addition to base salary, our total rewards package includes a discretionary bonus, equity awards, and a comprehensive benefits program (all based on eligibility).</p>
<p>What We Offer</p>
<p>The range we’ve posted represents the typical compensation range for this role. To determine actual compensation, we review the market rate for each candidate which can include a variety of factors. These include qualifications, experience, interview performance, and location.</p>
<p>In addition to a competitive salary, we offer a variety of benefits to support your needs, including:</p>
<ul>
<li>Medical, dental, and vision insurance - 100% paid for by CoreWeave</li>
<li>Company-paid Life Insurance</li>
<li>Voluntary supplemental life insurance</li>
<li>Short and long-term disability insurance</li>
<li>Flexible Spending Account</li>
<li>Health Savings Account</li>
<li>Tuition Reimbursement</li>
<li>Ability to Participate in Employee Stock Purchase Program (ESPP)</li>
<li>Mental Wellness Benefits through Spring Health</li>
<li>Family-Forming support provided by Carrot</li>
<li>Paid Parental Leave</li>
<li>Flexible, full-service childcare support with Kinside</li>
<li>401(k) with a generous employer match</li>
<li>Flexible PTO</li>
<li>Catered lunch each day in our office and data center locations</li>
<li>A casual work environment</li>
<li>A work culture focused on innovative disruption</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$165,000 to $242,000</Salaryrange>
      <Skills>Kubernetes, Go, Distributed systems, Cloud infrastructure, Platform engineering, Scheduling, Resource management, Quota-based systems, Kueue, Volcano, Ray, Kubeflow, Argo Workflows, GPU-based workloads, ML training, Inference pipelines, SLOs, Alerting, Incident response, AI infrastructure, HPC, Large-scale distributed compute environments</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave is a technology company that delivers a platform for building and scaling AI with confidence.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4647595006</Applyto>
      <Location>Sunnyvale, CA / Bellevue, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>44adf646-ae7</externalid>
      <Title>OIC Developer</Title>
      <Description><![CDATA[<p>We are looking for an expert Oracle Integration Developer to join our Arsenal (Enterprise Systems) team. Your immediate mission: take ownership of our critical enterprise integrations connecting Oracle Fusion ERP with our upstream and downstream systems. These integrations, built on Oracle Integration Cloud, form the digital backbone that governs how we manage our business operations, from product data and procurement to manufacturing and financial processes. You will be tasked with stabilizing, optimizing, and making them exceptionally robust. Long-term, you will be the subject matter expert responsible for architecting and scaling our enterprise integration landscape. This is a high-impact role for someone who thrives on solving complex data challenges and wants to build the operational foundation that enables Anduril to scale its mission.</p>
<p>Stabilize &amp; Optimize: Dive deep into existing Oracle Fusion ERP integrations across manufacturing, supply chain, finance, and engineering systems. Diagnose root causes of instability, re-architect weak points, and implement robust error handling and monitoring to achieve mission-critical reliability.</p>
<p>Architect &amp; Build: Design and develop new, scalable enterprise integrations using Oracle Integration Cloud (OIC). Translate complex business requirements for product data, multi-level Bills of Material (BOMs), procurement, inventory, work orders, and financial transactions into resilient and efficient integration flows.</p>
<p>Own the Integration Lifecycle: Manage the end-to-end process from design and development through testing (unit, SIT, UAT) and deployment, utilizing CI/CD best practices. Proactively tune and maintain integrations to ensure peak performance as data volumes grow.</p>
<p>Ensure Data Integrity: Become the trusted expert on data transformation and mapping between systems. Implement rigorous validation and reconciliation logic to guarantee that our enterprise data is flawless across all systems.</p>
<p>Collaborate &amp; Influence: Act as the key technical partner to our ERP, Manufacturing, Supply Chain, and Finance teams. Clearly articulate technical designs, trade-offs, and progress to both engineering peers and business stakeholders, guiding them toward best-practice integration patterns.</p>
<p>Leverage Modern Oracle Cloud Tools: Utilize Oracle Visual Builder Cloud Service (VBCS) where appropriate to build lightweight user interfaces that enhance integration workflows, data validation, or operational dashboards.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$126,000-$167,000 USD</Salaryrange>
      <Skills>Oracle Integration Cloud (OIC), Oracle Fusion ERP, RESTful APIs, SOAP web services, XSLT, XPath, complex data mapping, SQL, relational database concepts, Oracle Visual Builder Cloud Service (VBCS), Oracle Business Intelligence Cloud Connector (BICC), Oracle Cloud Infrastructure (OCI) services, PLM systems, Git-based source control, CI/CD pipelines</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anduril Industries</Employername>
      <Employerlogo>https://logos.yubhub.co/anduril.com.png</Employerlogo>
      <Employerdescription>Anduril Industries is a defence technology company that transforms U.S. and allied military capabilities with advanced technology.</Employerdescription>
      <Employerwebsite>https://www.anduril.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/andurilindustries/jobs/5061445007</Applyto>
      <Location>Atlanta, Georgia, United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>00fdb2d7-cbe</externalid>
      <Title>Senior Product Designer</Title>
      <Description><![CDATA[<p>As a Senior Product Designer at CoreWeave, you will independently lead end-to-end design across CoreWeave Cloud Console, translating complex infrastructure and platform needs into intuitive, high-impact experiences for developers, engineers, and enterprise teams.</p>
<p>You&#39;ll bring strong design craft and sharp product judgment to a broad surface area, defining strategy, solving ambiguous problems, and consistently raising the quality bar across the product. This is a generalist role with real scope. You&#39;ll work across Cloud Console, touching everything from account management and billing to compute orchestration and observability, partnering closely with Product and Engineering to shape direction, not just execute it.</p>
<p>You&#39;ll articulate clear design rationale, advocate for user needs, and contribute meaningfully to a growing design team and culture. We&#39;re looking for a designer who is actively using AI tools in their design workflow today, not as a novelty, but as a core part of how they move faster, prototype smarter, and solve harder problems. Comfort with technical complexity and a bias toward experimentation are essential here.</p>
<p>You&#39;ll be joining a small, high-trust design team. The work is broad, the impact is real, and there&#39;s room to shape how design operates at one of the fastest-growing infrastructure companies in the world.</p>
<p>Who You Are:</p>
<ul>
<li>5+ years of experience as a versatile senior designer who can own all facets of design, including user research, UX, interaction design, prototyping, and UI design.</li>
<li>An excellent portfolio showcasing user-centered problem solving through interaction design, visual design, and systems thinking.</li>
<li>Deep proficiency with Figma and a solid understanding of building usable, accessible, and modular designs that scale.</li>
<li>Experience partnering with Product and Engineering to define strategy and solve complex problems for technical users in B2B, infrastructure, developer tools, or platform contexts.</li>
<li>Actively incorporates AI tools into your design workflow, using them to move faster, prototype smarter, and tackle harder problems.</li>
<li>Comfortable working with technical complexity and ambiguity; you help teams frame problems, not just resolve them.</li>
</ul>
<p>Preferred:</p>
<ul>
<li>Experience with front-end technologies such as HTML, CSS, or modern frameworks for prototyping or implementation.</li>
<li>Familiarity with cloud infrastructure, developer tools, or enterprise platform products.</li>
</ul>
<p>Wondering if you’re a good fit? We believe in investing in our people, and value candidates who can bring their own diversified experiences to our teams – even if you aren&#39;t a 100% skill or experience match. Here are a few qualities we’ve found compatible with our team. If some of this describes you, we’d love to talk.</p>
<ul>
<li>Care deeply about users: You love connecting with real users daily, helping them solve issues and understand good patterns for using our tools. You approach questions and requests with a kind, thoughtful tone that makes users feel appreciated and connected to our team.</li>
<li>Autonomous: You work well in a self-directed environment, proactively finding ways to improve processes and collaborate with team members or engaged users.</li>
<li>Curious and driven: You are eager to explore machine learning and learn more about the engineering stack and common ML workflows, solving problems in both fast-paced, short-term sprints and in larger, long-term projects.</li>
</ul>
<p>Why Us?</p>
<p>We work hard, have fun, and move fast! We’re in an exciting stage of hyper-growth that you will not want to miss out on. We’re not afraid of a little chaos, and we’re constantly learning. Our team cares deeply about how we build our product and how we work together, which is represented through our core values:</p>
<ul>
<li>Be Curious at Your Core</li>
<li>Act Like an Owner</li>
<li>Empower Employees</li>
<li>Deliver Best-in-Class Client Experiences</li>
<li>Achieve More Together</li>
</ul>
<p>We support and encourage an entrepreneurial outlook and independent thinking. We foster an environment that encourages collaboration and provides the opportunity to develop innovative solutions to complex problems. As we get set for takeoff, the growth opportunities within the organization are constantly expanding. You will be surrounded by some of the best talent in the industry, who will want to learn from you, too. Come join us!</p>
<p>The base salary range for this role is $165,000 to $242,000. The starting salary will be determined based on job-related knowledge, skills, experience, and market location. We strive for both market alignment and internal equity when determining compensation. In addition to base salary, our total rewards package includes a discretionary bonus, equity awards, and a comprehensive benefits program (all based on eligibility).</p>
<p>What We Offer</p>
<p>The range we’ve posted represents the typical compensation range for this role. To determine actual compensation, we review the market rate for each candidate which can include a variety of factors. These include qualifications, experience, interview performance, and location. In addition to a competitive salary, we offer a variety of benefits to support your needs, including:</p>
<ul>
<li>Medical, dental, and vision insurance, 100% paid for by CoreWeave</li>
<li>Company-paid Life Insurance</li>
<li>Voluntary supplemental life insurance</li>
<li>Short and long-term disability insurance</li>
<li>Flexible Spending Account</li>
<li>Health Savings Account</li>
<li>Tuition Reimbursement</li>
<li>Ability to Participate in Employee Stock Purchase Program (ESPP)</li>
<li>Mental Wellness Benefits through Spring Health</li>
<li>Family-Forming support provided by Carrot</li>
<li>Paid Parental Leave</li>
<li>Flexible, full-service childcare support with Kinside</li>
<li>401(k) with a generous employer match</li>
<li>Flexible PTO</li>
<li>Catered lunch each day in our office and data center locations</li>
<li>A casual work environment</li>
<li>A work culture focused on innovative disruption</li>
</ul>
<p>Our Workplace</p>
<p>While we prioritize a hybrid work environment, remote work may be considered for candidates located more than 30 miles from an office, based on role requirements for specialized skill sets. New hires will be invited to attend onboarding at one of our hubs within their first month. Teams also gather quarterly to support collaboration.</p>
<p>California Consumer Privacy Act - California applicants only</p>
<p>CoreWeave is an equal opportunity employer, committed to fostering an inclusive and supportive workplace. All qualified applicants and candidates will receive consideration for employment without regard to race, color, religion, sex, disability, age, sexual orientation, gender identity, national origin, veteran status, or genetic information. As part of this commitment and consistent with the Americans with Disabilities Act (ADA), CoreWeave will ensure that qualified applicants and candidates with disabilities are provided reasonable accommodations for the hiring process, unless such accommodation would cause an undue hardship. If reasonable accommodation is needed, please contact: careers@coreweave.com.</p>
<p>Export Control Compliance</p>
<p>This position requires access to export controlled information. To conform to U.S. Government export regulations applicable to that information, applicant must either be (A) a U.S. person, defined as a (i) U.S. citizen or national, (ii) U.S. lawful permanent resident (green card holder), (iii) refugee under the Immigration and Nationality Act, or (iv) a protected individual under the Immigration and Nationality Act, or (B) a foreign person exempt from the requirements of the Export Administration Regulations (EAR) or the International Traffic in Arms Regulations (ITAR).</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$165,000 to $242,000</Salaryrange>
      <Skills>Python, Flask, FastAPI, Django, Java, Spring Boot, AWS, Aurora, Redshift, Athena, S3, CI/CD, TeamCity, Jenkins, Octopus Deploy, ArgoCD, Terraform, CloudFormation, containerization, Kubernetes, Datadog, ELK, Splunk, Loki, Grafana</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave is a cloud infrastructure company that provides a platform for AI development.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4669436006</Applyto>
      <Location>Livingston, NJ / New York, NY / San Francisco, CA / Sunnyvale, CA / Bellevue, WA / Remote, US</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>cef75c41-8c3</externalid>
      <Title>Sr. Manager, Supply Chain Risk, Resilience &amp; Compliance</Title>
      <Description><![CDATA[<p>Job Title: Sr. Manager, Supply Chain Risk, Resilience &amp; Compliance</p>
<p><strong>About the Role:</strong></p>
<p>As the Senior Manager, Supply Chain Risk, Resilience &amp; Compliance, you will lead the strategy, governance, and roadmap across key programs that strengthen supply chain resilience, improve the internal control environment, and build scalable governance for business continuity and circularity.</p>
<p><strong>Key Responsibilities:</strong></p>
<ul>
<li>Own the strategy, governance, and roadmap for supply chain risk management and resilience.</li>
<li>Design and mature the supply chain risk monitoring framework and control tower.</li>
<li>Establish risk indicators, escalation paths, reporting cadences, and mitigation governance across supplier, site, country, tariff, and other supply chain risk domains.</li>
<li>Partner with Supply Chain, Procurement, and Market Intelligence to translate risk insights into action and prioritization.</li>
<li>Define and track metrics to measure risk exposure, mitigation progress &amp; resilience maturity.</li>
</ul>
<p><strong>Controls, Compliance &amp; Business Continuity:</strong></p>
<ul>
<li>Own the governance model for supply chain related SOX controls, audit readiness, and process compliance.</li>
<li>Lead process and tooling improvements that strengthen control effectiveness and scalability.</li>
<li>Serve as the supply chain lead for business continuity and ISO-related coordination, including alignment on risks, dependencies, recovery requirements, and continuity planning.</li>
<li>Partner with Finance, Internal Audit, IT, and process owners to drive remediation, standardization, and ongoing compliance.</li>
<li>Develop dashboards and executive reporting for controls health, remediation status, and compliance performance.</li>
</ul>
<p><strong>Circularity &amp; Decommission Governance:</strong></p>
<ul>
<li>Own the strategy and governance for circularity &amp; decommission processes across sites.</li>
<li>Establish standardized internal processes from identification of waste through pickup, disposition, auditability, and reporting.</li>
<li>Drive cross-functional coordination across operations, sustainability, IT, finance, and external partners.</li>
<li>Oversee process adherence, audit mechanisms, and performance reporting for end-of-life asset management.</li>
<li>Define the reporting methodology and metrics for quarterly and annual sustainability outcomes, including processed, reused, recycled, and landfilled materials.</li>
</ul>
<p><strong>Requirements:</strong></p>
<ul>
<li>Bachelor’s degree in Supply Chain Management, Business, Engineering, Finance, Accounting, Information Systems, Operations, or a related field.</li>
<li>10+ years of experience across supply chain, procurement, operations, risk mgmt., compliance, internal controls, business continuity, sustainability, or related functions.</li>
<li>Experience building or scaling cross-functional programs, governance frameworks, and operating processes.</li>
<li>Strong experience with supply chain risk management, supplier risk, resilience, or operational risk programs.</li>
<li>Working knowledge of internal controls, audit readiness, remediation management, and process compliance.</li>
<li>Experience developing KPIs, dashboards, executive reporting, SOPs, and process documentation.</li>
<li>Strong cross-functional leadership and stakeholder management skills.</li>
</ul>
<p><strong>Preferred:</strong></p>
<ul>
<li>Team management and leadership experience</li>
<li>Experience in cloud infrastructure, data centers, semiconductors, hardware, manufacturing, or other capital-intensive operational environments.</li>
<li>Familiarity with supply chain risk monitoring platforms, control tower tools, or similar intelligence solutions.</li>
<li>Familiarity with business continuity frameworks and standards, including ISO 22301 concepts.</li>
<li>Experience with circularity, reverse logistics, decommission, IT asset disposition, or sustainability reporting.</li>
<li>Professional certifications in supply chain, audit, risk, compliance, or business continuity are a plus.</li>
</ul>
<p><strong>Why CoreWeave?</strong></p>
<p>At CoreWeave, we work hard, have fun, and move fast! We’re in an exciting stage of hyper-growth that you will not want to miss out on. We’re not afraid of a little chaos, and we’re constantly learning. Our team cares deeply about how we build our product and how we work together, which is represented through our core values:</p>
<ul>
<li>Be Curious at Your Core</li>
<li>Act Like an Owner</li>
<li>Empower Employees</li>
<li>Deliver Best-in-Class Client Experiences</li>
<li>Achieve More Together</li>
</ul>
<p>We support and encourage an entrepreneurial outlook and independent thinking. We foster an environment that encourages collaboration and enables the development of innovative solutions to complex problems. As we get set for takeoff, the organization&#39;s growth opportunities are constantly expanding. You will be surrounded by some of the best talent in the industry, who will want to learn from you, too. Come join us!</p>
<p><strong>Salary Range:</strong></p>
<p>The base salary range for this role is $161,000 to $237,000. The starting salary will be determined based on job-related knowledge, skills, experience, and market location. We strive for both market alignment and internal equity when determining compensation. In addition to base salary, our total rewards package includes a discretionary bonus, equity awards, and a comprehensive benefits program (all based on eligibility).</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$161,000 to $237,000</Salaryrange>
      <Skills>supply chain risk management, supplier risk, resilience, operational risk programs, internal controls, audit readiness, remediation management, process compliance, KPIs, dashboards, executive reporting, SOPs, process documentation, cross-functional leadership, stakeholder management, team management, leadership experience, cloud infrastructure, data centers, semiconductors, hardware, manufacturing, supply chain risk monitoring platforms, control tower tools, business continuity frameworks, ISO 22301 concepts, circularity, reverse logistics, decommission, IT asset disposition, sustainability reporting, professional certifications in supply chain, audit, risk, compliance, business continuity</Skills>
      <Category>Operations</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave is a technology company that provides a platform for building and scaling AI with confidence.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4664241006</Applyto>
      <Location>Livingston, NJ / New York, NY / Sunnyvale, CA / San Francisco, CA / Bellevue, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>09068d1f-b15</externalid>
      <Title>Territory Account Executive, iG&amp;E (Taiwan)</Title>
      <Description><![CDATA[<p>About Us</p>
<p>At Cloudflare, we are on a mission to help build a better Internet. Today the company runs one of the world’s largest networks that powers millions of websites and other Internet properties for customers ranging from individual bloggers to SMBs to Fortune 500 companies.</p>
<p>This role within the mid-market segment focuses on both the acquisition of prospective customers and the expansion of existing customer accounts within the iGaming &amp; Entertainment industry. Within this segment, you will work a set of target accounts in the Digital Natives and/or Commercial sub-segments. This position targets companies with up to 2,500 employees or $1 billion in revenue.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Develop and execute a comprehensive account/territory plan to achieve quarterly sales and annual revenue targets for the iGaming &amp; Entertainment territory in Taiwan or other assigned ASEAN countries.</li>
<li>Drive new business acquisition (new customer logos), customer expansion (upsell and cross sell Cloudflare solutions), and renewal within your territory.</li>
<li>Build a robust sales pipeline through continual engagement and nurturing of key prospect accounts.</li>
<li>Understand iGaming &amp; Entertainment customer use-cases and how they pair with Cloudflare’s portfolio solutions in order to identify new sales opportunities.</li>
<li>Craft and communicate compelling value propositions for Cloudflare services. Drive awareness through regular outbound campaigns on product and feature roadmap updates.</li>
<li>Effectively scale the territory with partners.</li>
<li>Accurately forecast commercial outcomes by running a consistent sales process, including driving next-step expectations and contract negotiations.</li>
<li>As a trusted advisor, build long-term strategic relationships with key accounts, to ensure customer adoption, retention and expansion. Regularly evaluate usage trends and articulate value to show Cloudflare impact and provide strategic recommendations during business reviews.</li>
<li>Network across different business units within each of your accounts, and multi-thread to identify and engage new divisional buyers. Position Cloudflare&#39;s platform in each of your target customers, including Cloudflare One and the Connectivity Cloud, to realize our full potential in every customer.</li>
</ul>
<p>Requirements:</p>
<ul>
<li>Experience handling Digital Natives and Commercial accounts in the iGaming &amp; Entertainment sector in Taiwan or any other ASEAN countries assigned.</li>
<li>Ability to speak Mandarin, as you will be working with Taiwanese customers.</li>
<li>Direct B2B sales experience, adept at new business acquisition and account management.</li>
<li>Possess experience selling technical, cloud-based products or services to iGaming &amp; Entertainment clients.</li>
<li>Working knowledge of the cloud infrastructure and security space.</li>
<li>Solid understanding of computer networking and how the Internet functions.</li>
<li>Strong interpersonal communication skills (both verbal and written) and organizational skills.</li>
<li>Self-motivated with an entrepreneurial spirit.</li>
<li>Comfortable working in a fast-paced dynamic environment.</li>
<li>Willingness to travel frequently to visit customers and prospects.</li>
<li>Bachelor&#39;s degree or equivalent professional experience. Technical background in engineering, computer science, or MIS is advantageous.</li>
<li>Singaporean citizenship or Singapore PR status is highly preferred.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>B2B sales experience, Digital Natives and Commercial accounts in the iGaming &amp; Entertainment sector, Mandarin language skills, Cloud-based products or services, Cloud infrastructure and security space, Computer networking and Internet functioning</Skills>
      <Category>Sales</Category>
      <Industry>Technology</Industry>
      <Employername>Cloudflare</Employername>
      <Employerlogo>https://logos.yubhub.co/cloudflare.com.png</Employerlogo>
      <Employerdescription>Cloudflare is a technology company that provides internet infrastructure and security services to protect and accelerate any internet application online.</Employerdescription>
      <Employerwebsite>https://www.cloudflare.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/cloudflare/jobs/7789535</Applyto>
      <Location>Hybrid</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>248927c8-76d</externalid>
      <Title>Software Engineer, Platform</Title>
      <Description><![CDATA[<p><strong>About the role</strong></p>
<p>We are looking for software engineers to join our Platform organisation. We build the foundational primitives that accelerate product development across Anthropic, and own infrastructure and systems that teams depend on to ship reliably and at scale.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Architect and optimise the critical development infrastructure that powers our AI product development, including dev environments, observability, and CI/CD pipelines.</li>
<li>Partner closely with product teams to understand their development workflow and eliminate friction points.</li>
<li>Work on problems where reliability and enterprise trust are the bar: token refresh at scale, admin controls that let IT govern what agents can do, proxy infrastructure that stays up when partner servers don&#39;t.</li>
</ul>
<p><strong>Platform Acceleration</strong></p>
<p>We work on maximising the developer productivity of product engineers at Anthropic. You&#39;ll help define the performance quality standard for the company, power the next generation of LLM-first products, and redefine best-in-class developer experience.</p>
<p><strong>Service Infra</strong></p>
<p>We build and maintain the core infrastructure that powers Anthropic&#39;s engineering organisation, from service mesh and observability systems to deployment pipelines and shared libraries.</p>
<p><strong>Multicloud</strong></p>
<p>We build and maintain the infrastructure that enables Anthropic to operate across multiple cloud providers. We focus on cloud-agnostic tooling, cross-cloud networking, and multi-region deployments.</p>
<p><strong>Auth &amp; Identity</strong></p>
<p>We build and maintain the critical infrastructure that powers identity and authentication across Anthropic&#39;s product suite. We work closely with product teams, security, support, and trust &amp; safety as customers.</p>
<p><strong>Connectivity</strong></p>
<p>Our mission is to make Claude the most connected AI. We own the MCP proxy that routes every tool call and the OAuth and token management that keeps connections authenticated.</p>
<p><strong>API Distributability</strong></p>
<p>The Claude API today is a rapidly growing platform serving developers and enterprises at scale, but reaching the next tier of enterprise customers requires transforming how and where we deploy it.</p>
<p><strong>Platform Intelligence</strong></p>
<p>We build the training systems that adapt Claude to specific customer workloads. The core problem is task-specific adaptation: getting the right intelligence, cost, and latency profile for a particular use case, and building toward systems where that adaptation can deepen as the customer&#39;s usage grows.</p>
<p><strong>Requirements</strong></p>
<ul>
<li>A minimum of 5 years of practical experience building backend product or platform systems, distributed systems, cloud-native products, developer tools, or external developer-facing products.</li>
<li>Strong fundamentals in service-oriented architectures, networking, and systems design.</li>
<li>Proficiency in Python, Go, Rust, or similar systems languages.</li>
<li>Experience with cloud infrastructure (GCP, AWS, or Azure), container orchestration (Kubernetes), and/or multi-cloud networking.</li>
<li>Take full ownership of your work, from design through deployment and operations.</li>
<li>Navigate ambiguity and make sound technical decisions independently.</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Annual compensation range: $320,000-$320,000 USD</li>
<li>Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience</li>
<li>Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience</li>
<li>Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position</li>
<li>Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time.</li>
<li>Visa sponsorship: We do sponsor visas!</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$320,000-$320,000 USD</Salaryrange>
      <Skills>Python, Go, Rust, cloud infrastructure, container orchestration, multi-cloud networking, service-oriented architectures, networking, systems design</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a company that creates reliable, interpretable, and steerable AI systems. It has a team of researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5157844008</Applyto>
      <Location>San Francisco, CA | New York City, NY | Seattle, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>3c6419c4-a9b</externalid>
      <Title>Software Engineer, Compute Efficiency</Title>
      <Description><![CDATA[<p>As a Software Engineer for Compute Efficiency on the Capacity team, you will play a central role in making our systems more performant, cost-effective, and sustainable,without compromising reliability or latency.</p>
<p>You will work across the full infrastructure stack, from cloud platforms and networking to application-level performance, and will bridge the gap between high-level research needs and low-level hardware constraints to build the most efficient AI infrastructure in the world. You will help with building the telemetry, cost attribution, and optimization frameworks that ensure every dollar of our infrastructure investment delivers maximum value.</p>
<p>Responsibilities:</p>
<ul>
<li>Build and evolve telemetry and monitoring systems to provide deep visibility into infrastructure performance, utilization, and costs across our cloud and datacenter fleets.</li>
<li>Design and implement cost attribution frameworks for our multi-tenant infrastructure, enabling teams to understand and optimize their resource consumption.</li>
<li>Identify and resolve performance bottlenecks and capacity hotspots through deep analysis of distributed systems at scale.</li>
<li>Partner closely with cloud service providers and internal stakeholders to optimize cluster configurations, workload placement, and resource utilization across AI training and inference workloads, including large-scale clusters spanning thousands to hundreds of thousands of machines.</li>
<li>Develop and champion engineering practices around efficiency, driving a culture of performance awareness and cost-conscious design across Anthropic.</li>
<li>Collaborate with research and product teams to deeply understand their infrastructure needs, and design solutions that balance performance with cost efficiency.</li>
<li>Drive architectural improvements and code-level optimizations across multiple services and platforms to deliver measurable utilization and performance gains.</li>
</ul>
<p>You may be a good fit if you:</p>
<ul>
<li>Have 6+ years of relevant industry experience, including 1+ year leading large-scale, complex projects or teams as a software engineer or tech lead</li>
<li>Deep expertise in distributed systems at scale, with a strong focus on infrastructure reliability, scalability, and continuous improvement.</li>
<li>Strong proficiency in at least one programming language (e.g., Python, Rust, Go, Java)</li>
<li>Hands-on experience with cloud infrastructure, including Kubernetes, Infrastructure as Code, and major cloud providers such as AWS or GCP.</li>
<li>Experience optimizing end-to-end performance of distributed systems, including workload right-sizing and resource utilization tuning.</li>
<li>Possess a deep curiosity for how things work under the hood and have a proven ability to work independently to solve opaque performance issues.</li>
<li>Experience designing or working with performance and utilization monitoring tools in large-scale, distributed environments.</li>
<li>Strong problem-solving skills with the ability to work independently and navigate ambiguity.</li>
<li>Excellent communication and collaboration skills; you will work closely with internal and external stakeholders to build consensus and drive projects forward.</li>
</ul>
<p>Strong candidates may have:</p>
<ul>
<li>Experience with machine learning infrastructure workloads as well as associated networking technologies like NCCL.</li>
<li>Low-level systems experience, for example Linux kernel tuning and eBPF.</li>
<li>The ability to quickly understand systems design tradeoffs and keep track of rapidly evolving software systems.</li>
<li>Published work in performance optimization and scaling distributed systems.</li>
</ul>
<p>The annual compensation range for this role is $320,000-$405,000 USD.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$320,000-$405,000 USD</Salaryrange>
      <Skills>distributed systems, cloud infrastructure, Kubernetes, Infrastructure as Code, AWS, GCP, Python, Rust, Go, Java, machine learning infrastructure workloads, NCCL, linux kernel tuning, eBPF, performance optimization, scaling distributed systems</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation that creates reliable, interpretable, and steerable AI systems. It has a team of researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5108982008</Applyto>
      <Location>San Francisco, CA | New York City, NY</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>95061695-858</externalid>
      <Title>Director of Engineering, Media &amp; Entertainment (M&amp;E)</Title>
      <Description><![CDATA[<p>CoreWeave is seeking a Director of Engineering, Media &amp; Entertainment (M&amp;E) to lead the development of next-generation cloud platforms and tools that power modern content creation workflows. This role will drive the engineering strategy and execution for solutions that support visual effects (VFX), animation, rendering, and post-production pipelines used by studios, artists, and creative teams worldwide.</p>
<p>As a senior engineering leader, you will build and lead high-performing engineering teams responsible for designing scalable infrastructure, developer tools, and user-facing systems that enable creative professionals to run complex production workloads in the cloud. You will collaborate closely with product, design, infrastructure, and customer teams to translate real-world production workflows into reliable, high-performance software platforms.</p>
<p>This role combines deep engineering leadership with domain expertise in M&amp;E workflows, ensuring that the platform delivers exceptional performance, reliability, and usability for demanding creative workloads.</p>
<p><strong>Leadership &amp; Strategy</strong></p>
<ul>
<li>Build and scale high-performing engineering teams focused on cloud platforms for media production workloads including rendering, simulation, and content processing.</li>
<li>Recruit, mentor, and develop engineering managers and senior engineers while fostering a culture of innovation, accountability, and collaboration.</li>
<li>Define and execute the long-term engineering strategy for Media &amp; Entertainment products and services.</li>
<li>Partner with Product and Design leaders to translate industry workflows and customer needs into scalable platform capabilities.</li>
<li>Establish engineering best practices for reliability, security, observability, and operational excellence.</li>
<li>Drive roadmap alignment between engineering initiatives and strategic business objectives.</li>
</ul>
<p><strong>Technical Leadership</strong></p>
<ul>
<li>Lead the design and development of scalable backend services, APIs, and developer interfaces that power M&amp;E cloud workflows.</li>
<li>Build platforms that support demanding workloads such as rendering, asset processing, and distributed compute pipelines.</li>
<li>Drive architecture decisions for cloud-native systems leveraging technologies such as Kubernetes, distributed services, and infrastructure-as-code.</li>
<li>Ensure the platform enables self-service provisioning, automation, and repeatable workflows for production pipelines.</li>
<li>Establish engineering standards around performance, scalability, and security for enterprise-grade SaaS/PaaS systems.</li>
<li>Oversee system reliability and operational readiness through clear SLOs, monitoring, and runbook-driven on-call practices.</li>
</ul>
<p><strong>Product &amp; Workflow Collaboration</strong></p>
<ul>
<li>Work closely with product leadership to define technical requirements aligned with real customer workflows in animation, VFX, and media production.</li>
<li>Engage directly with studios, artists, and technical directors to understand pipeline challenges and incorporate feedback into product development.</li>
<li>Translate industry needs into clear engineering priorities and technical roadmaps.</li>
<li>Guide development teams through product milestones including specification, development, testing, and release.</li>
<li>Ensure engineering efforts balance customer requirements, technical feasibility, and business goals.</li>
</ul>
<p>Customer and industry collaboration is critical in identifying workflow needs and transforming them into actionable development plans for engineering teams.</p>
<p><strong>Operational Excellence</strong></p>
<ul>
<li>Implement engineering processes that support scalable development, including CI/CD pipelines, testing strategies, and code review standards.</li>
<li>Manage development timelines and resource allocation across multiple engineering teams.</li>
<li>Track key operational and customer metrics including performance, reliability, and cost efficiency.</li>
<li>Drive continuous improvement in engineering productivity and system performance.</li>
<li>Partner with QA, support, and customer success teams to ensure high-quality releases and strong user satisfaction.</li>
</ul>
<p><strong>Who You Are:</strong></p>
<p><strong>Required Qualifications</strong></p>
<ul>
<li>10+ years of software engineering experience, including leadership of engineering teams and managers.</li>
<li>Proven experience building and scaling cloud-based platforms or distributed systems.</li>
<li>Strong understanding of cloud infrastructure, microservices architecture, and automation technologies.</li>
<li>Experience delivering enterprise SaaS or PaaS products used by external customers.</li>
<li>Excellent leadership, communication, and cross-functional collaboration skills.</li>
<li>Ability to operate strategically while remaining deeply technical and hands-on with architecture decisions.</li>
</ul>
<p><strong>Preferred Qualifications</strong></p>
<ul>
<li>Experience building platforms or tools for Media &amp; Entertainment workflows such as VFX, animation, rendering, or post-production pipelines.</li>
<li>Familiarity with industry tools such as Maya, Houdini, Katana, Cinema 4D, V-Ray, Arnold, or RenderMan.</li>
<li>Experience designing APIs, developer platforms, or automation frameworks used by technical users.</li>
<li>Knowledge of GPU-accelerated compute workloads and distributed rendering systems.</li>
<li>Experience working with Kubernetes, infrastructure-as-code, and large-scale cloud environments.</li>
</ul>
<p><strong>What Success Looks Like</strong></p>
<ul>
<li>Engineering teams delivering reliable, scalable platforms used by media studios and creative teams globally.</li>
<li>Clear alignment between product vision, customer workflows, and engineering execution.</li>
<li>Platforms capable of supporting large-scale production workloads with high performance and reliability.</li>
<li>Strong engineering culture focused on innovation, collaboration, and operational excellence.</li>
</ul>
<p>Wondering if you’re a good fit? We believe in investing in our people, and value candidates who can bring their own diversified experiences to our teams – even if you aren&#39;t a 100% skill or experience match.</p>
<p><strong>Why CoreWeave?</strong></p>
<p>At CoreWeave, we work hard, have fun, and move fast! We’re in an exciting stage of hyper-growth that you will not want to miss out on. We’re not afraid of a little chaos, and we’re constantly learning. Our team cares deeply about how we build our product and how we work together, which is represented through our core values:</p>
<ul>
<li>Be Curious at Your Core</li>
<li>Act Like an Owner</li>
<li>Empower Employees</li>
<li>Deliver Best-in-Class Client Experiences</li>
<li>Achieve More Together</li>
</ul>
<p>We support and encourage an entrepreneurial outlook and independent thinking. We foster an environment that encourages collaboration and provides the opportunity to develop innovative solutions to complex problems. As we get set for take off, the growth opportunities within the organization are constantly expanding. You will be surrounded by some of the best talent in the industry, who will want to learn from you, too. Come join us!</p>
<p>The base salary range for this role is $206,000 to $303,000. The starting salary will be determined based on job-related knowledge, skills, experience, and market location. We strive for both market alignment and internal equity when determining compensation.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$206,000 - $303,000</Salaryrange>
      <Skills>Cloud infrastructure, Microservices architecture, Automation technologies, Enterprise SaaS or PaaS products, Leadership, Communication, Cross-functional collaboration, Strategic decision-making, Media &amp; Entertainment workflows, VFX, animation, rendering, or post-production pipelines, Industry tools such as Maya, Houdini, Katana, Cinema 4D, V-Ray, Arnold, or RenderMan, APIs, developer platforms, or automation frameworks, GPU-accelerated compute workloads and distributed rendering systems, Kubernetes, infrastructure-as-code, and large-scale cloud environments</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave is a cloud computing company that provides a platform for artificial intelligence (AI) and machine learning (ML) workloads.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4666156006</Applyto>
      <Location>Livingston, NJ / New York, NY / San Francisco, CA / Sunnyvale, CA / Bellevue, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>0fef4970-adb</externalid>
      <Title>Security Software Engineer - Crypto Services</Title>
      <Description><![CDATA[<p><strong>About the Job</strong></p>
<p>We&#39;re seeking a Security Software Engineer with a specialization in crypto services and key management to develop novel security tooling for securing our suite of products. The ideal candidate can develop, test, and debug embedded software with mission-critical security responsibilities.</p>
<p><strong>What You&#39;ll Do</strong></p>
<ul>
<li>Design and develop cybersecurity tools for real-time embedded, embedded Linux, and Android systems.</li>
<li>Design and develop resilient software supporting all phases of key handling on embedded systems - from key load through sanitization.</li>
<li>Develop thorough testing and qualification procedures for security-critical components.</li>
<li>Collaborate with cross-functional teams to identify specific security needs and implement solutions.</li>
<li>Conduct code reviews and ensure adherence to security best practices.</li>
<li>Stay updated on the latest security threats and technologies.</li>
</ul>
<p><strong>Required Qualifications</strong></p>
<ul>
<li>2+ years of software development experience in some combination of Golang, Rust, or C/C++.</li>
<li>Experience selecting and utilizing embedded HSMs and Secure Elements.</li>
<li>Experience with CI/CD and test automation, including for mobile and embedded devices.</li>
<li>Experience debugging embedded systems using common test equipment - logic analyzers, oscilloscopes, etc.</li>
<li>Solid understanding of cybersecurity principles and practices.</li>
<li>Ability to obtain and hold a U.S. Secret security clearance.</li>
</ul>
<p><strong>Preferred Qualifications</strong></p>
<ul>
<li>Knowledge of security frameworks and compliance standards.</li>
<li>Experience in mobile development, specifically on Android platforms.</li>
<li>Familiarity with cloud infrastructure management (Terraform and/or AWS CDK).</li>
<li>Experience implementing solutions compliant with US Government key handling requirements.</li>
<li>Strong problem-solving and analytical skills.</li>
<li>Excellent communication and teamwork abilities.</li>
</ul>
<p><strong>Compensation and Benefits</strong></p>
<p>The salary range for this role is $126,000-$191,000 USD. Anduril offers top-tier benefits for full-time employees, including comprehensive medical, dental, and vision plans, income protection, generous time off, family planning and parenting support, mental health resources, professional development, commuter benefits, relocation assistance, and a retirement savings plan.</p>
<p><strong>Protecting Yourself from Recruitment Scams</strong></p>
<p>Anduril is committed to maintaining the integrity of our Talent acquisition process and the security of our candidates. We&#39;ve observed a rise in sophisticated phishing and fraudulent schemes where individuals impersonate Anduril representatives, luring job seekers with false interviews or job offers. These scammers often attempt to extract payment or sensitive personal information.</p>
<p>To ensure your safety and help you navigate your job search with confidence, please keep the following critical points in mind:</p>
<ul>
<li>No Financial Requests: Anduril will never solicit payment or demand personal financial details (such as banking information, credit card numbers, or social security numbers) at any stage of our hiring process. Our legitimate recruitment is entirely free for candidates.</li>
<li>Please always verify communications:</li>
<li>Direct from Anduril: If you receive an email from one of our recruiters, it will only come from an @anduril.com address.</li>
<li>Via Agency Partner: If contacted by a recruiting agency for an Anduril role, their email will clearly identify their agency. If you suspect any suspicious activity, please verify the agency&#39;s authenticity by reaching out to contact@anduril.com.</li>
<li>Exercise Caution with Unsolicited Outreach: If you receive any communication that appears suspicious, contains grammatical errors, or makes unusual requests, do not engage. Always confirm the sender&#39;s email domain is @anduril.com before providing any personal information or clicking on links.</li>
<li>What to Do If You Suspect Fraud: Should you encounter any questionable or fraudulent outreach claiming to be from Anduril, please report it immediately to contact@anduril.com. Your proactive caution is invaluable in protecting your personal information and upholding the security and trustworthiness of our recruitment efforts.</li>
</ul>
<p><strong>Data Privacy</strong></p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$126,000-$191,000 USD</Salaryrange>
      <Skills>Golang, Rust, C/C++, Embedded HSMs, Secure Elements, CI/CD, Test Automation, Mobile Development, Android Platforms, Cloud Infrastructure Management, Terraform, AWS CDK, US Government Key Handling Requirements, Cybersecurity Principles, Security Best Practices, Security Frameworks, Compliance Standards, Problem-Solving, Analytical Skills, Communication, Teamwork</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anduril</Employername>
      <Employerlogo>https://logos.yubhub.co/anduril.com.png</Employerlogo>
      <Employerdescription>Anduril is a technology company that develops novel security tooling for securing its suite of products.</Employerdescription>
      <Employerwebsite>https://www.anduril.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/andurilindustries/jobs/5086919007</Applyto>
      <Location>Atlanta, Georgia, United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>753e9465-6a0</externalid>
      <Title>Senior Security Software Engineer, eBPF &amp; Security Sensors</Title>
      <Description><![CDATA[<p>We&#39;re seeking an exceptional engineer to join our Detection Platform team to build and scale our next-generation security analytics infrastructure. In this role, you&#39;ll architect and implement data pipelines that process massive amounts of security telemetry, develop ML-powered detection systems, and create innovative solutions that leverage Claude to transform security operations.</p>
<p>Responsibilities:</p>
<ul>
<li>Build an AI-powered platform responsible for all aspects of detection and response capabilities, from detection development to incident response</li>
<li>Design and implement scalable data pipelines for ingesting and processing security telemetry across our rapidly growing infrastructure</li>
<li>Architect solutions for storing and efficiently querying large volumes of security-relevant data</li>
<li>Create rapid prototypes and proof-of-concepts for new security tooling and analytics capabilities</li>
<li>Work closely with security and infrastructure teams to understand requirements and deliver solutions</li>
<li>Mentor engineers and contribute to hiring and growth of the Security team</li>
<li>Participate in on-call rotations</li>
</ul>
<p>You may be a good fit if you</p>
<ul>
<li>Have 7+ years of experience in software engineering with a focus on security, infrastructure, or data pipelines</li>
<li>Have a track record of building and maintaining internal developer tools or security platforms</li>
<li>Have a strong understanding of data processing pipelines and experience working with large-scale logging systems</li>
<li>Have experience with test-driven software development or CI/CD (a plus for direct experience with detection-as-code workflows)</li>
<li>Have experience with infrastructure-as-code (Terraform, CloudFormation)</li>
<li>Have experience with query optimization for large datasets</li>
<li>Have experience building stable and scalable services on cloud infrastructure and serverless architectures</li>
<li>Can write maintainable and secure code in Python</li>
<li>Have experience working with security teams and translating requirements into technical solutions</li>
<li>Can lead technical projects with minimal guidance</li>
<li>Have a track record of driving engineering excellence through high standards, constructive code reviews, and mentorship</li>
<li>Can lead cross-functional security initiatives and navigate complex organizational dynamics</li>
<li>Have strong communication skills with the ability to translate technical concepts effectively across all organizational levels</li>
<li>Have demonstrated success in bringing clarity and ownership to ambiguous technical problems</li>
<li>Have strong systems thinking with the ability to identify and mitigate risks in complex environments</li>
</ul>
<p>Strong candidates may also have experience with</p>
<ul>
<li>Building security tooling from the ground up</li>
<li>Implementing security monitoring solutions (SIEM, log aggregation, EDR)</li>
<li>Detection engineering or security operations</li>
<li>SOAR platform or automation development</li>
<li>Data lake or database architecture</li>
<li>API design and internal platform creation</li>
<li>Applying ML/AI to security problems</li>
<li>Scaling security operations in a high-growth environment</li>
</ul>
<p>Logistics</p>
<ul>
<li>Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience</li>
<li>Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience</li>
<li>Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position</li>
<li>Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</li>
<li>Visa sponsorship: We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>software engineering, security, infrastructure, data pipelines, ML-powered detection systems, Claude, Python, test-driven software development, CI/CD, infrastructure-as-code, query optimization, cloud infrastructure, serverless architectures, building security tooling, implementing security monitoring solutions, detection engineering, SOAR platform, automation development, data lake, database architecture, API design, internal platform creation, applying ML/AI to security problems, scaling security operations</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation that creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5108521008</Applyto>
      <Location>Zürich, CH</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>a5ab59a7-dfa</externalid>
      <Title>Customer Experience Operations Analyst</Title>
      <Description><![CDATA[<p>We are seeking a Customer Experience Operations Analyst to join our growing Revenue Operations team. This role is crucial for developing the operational foundation of our Customer Experience organization, acting as the connective tissue between technical Solutions Architects, the Sales organization, and the operational teams essential for smooth business function.</p>
<p>This role will act as an operational backbone for our Customer Experience function, bringing structure and clarity to a complex, fast-moving environment. You will manage the systems, processes, and workflows that support customer delivery, onboarding, and overall operational excellence. This role requires close collaboration across Revenue Operations, Accounting, Engineering, and Customer Experience to deliver best-in-class customer experiences.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Managing workflows and data across systems such as Salesforce, JIRA, or similar platforms</li>
<li>Bridging technical and operational conversations, working effectively with engineers and business stakeholders</li>
<li>Analyzing data to turn insights into actionable recommendations</li>
<li>Developing and implementing process improvements to increase efficiency and effectiveness</li>
<li>Collaborating with cross-functional teams to ensure seamless customer experiences</li>
</ul>
<p>Requirements:</p>
<ul>
<li>5-7 years of experience in customer experience/solutions operations, business operations, or a similar role, ideally in cloud infrastructure, SaaS, or AI/ML environments</li>
<li>Proven experience managing workflows and data across systems such as Salesforce, JIRA, or similar platforms</li>
<li>Strong ability to bridge technical and operational conversations, working effectively with engineers and business stakeholders</li>
<li>Analytical and systems-minded thinker, capable of turning data into actionable insight</li>
<li>Excellent written and verbal communication skills</li>
</ul>
<p>Preferred qualifications include:</p>
<ul>
<li>Previous experience with cloud infrastructure concepts (e.g., Kubernetes, container orchestration, and AI/ML infrastructure)</li>
<li>Familiarity with DevOps or AI/ML tooling (e.g., SUNK, Terraform, Helm, or related platforms)</li>
<li>Willingness to work in a hybrid environment (3 days per week in-office)</li>
</ul>
<p>If you&#39;re a motivated and detail-oriented individual who thrives in fast-paced environments, we&#39;d love to hear from you!</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$75,000 to $110,000</Salaryrange>
      <Skills>customer experience, operations, workflow management, data analysis, process improvement, collaboration, communication, cloud infrastructure, SaaS, AI/ML, DevOps, tooling</Skills>
      <Category>Operations</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave is a cloud infrastructure company that enables innovators to build and scale AI with confidence.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4618254006</Applyto>
      <Location>Livingston, NJ / New York, NY / Philadelphia, PA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>7ff84e75-3c7</externalid>
      <Title>SDR</Title>
<Description><![CDATA[
<p>CoreWeave is The Essential Cloud for AI. Built for pioneers by pioneers, CoreWeave delivers a platform of technology, tools, and teams that enables innovators to build and scale AI with confidence.</p>
<p>As a Sales Development Representative (SDR), you will play a critical role in generating pipeline by identifying and qualifying new business opportunities within target accounts. You will partner closely with marketing and account executives to execute outbound campaigns and convert inbound interest into qualified meetings.</p>
<p>Responsibilities:</p>
<ul>
<li>Generate pipeline by identifying and qualifying new business opportunities within target accounts</li>
<li>Partner closely with marketing and account executives to execute outbound campaigns and convert inbound interest into qualified meetings</li>
<li>Conduct outbound outreach across email, phone, and LinkedIn with a high level of personalization</li>
<li>Meet or exceed qualified meeting quotas tied to pipeline and revenue goals</li>
</ul>
<p>Requirements:</p>
<ul>
<li>Proven understanding of SDR and lead development best practices and outbound prospecting strategies</li>
<li>Experience conducting outbound outreach across email, phone, and LinkedIn with a high level of personalization</li>
<li>Ability to meet or exceed qualified meeting quotas tied to pipeline and revenue goals</li>
<li>Experience using CRM systems (e.g., Salesforce) to track and manage lead activity and pipeline development</li>
</ul>
<p>Preferred Qualifications:</p>
<ul>
<li>Experience in a business development, SDR, or sales role within a technology or SaaS organization</li>
<li>Familiarity with sales engagement platforms (e.g., Salesloft, Outreach)</li>
<li>Experience working in or selling to the AI, machine learning, or cloud infrastructure space</li>
</ul>
<p>Why CoreWeave?</p>
<p>At CoreWeave, we work hard, have fun, and move fast! We&#39;re in an exciting stage of hyper-growth that you will not want to miss out on. We&#39;re not afraid of a little chaos, and we&#39;re constantly learning. Our team cares deeply about how we build our product and how we work together, which is represented through our core values:</p>
<ul>
<li>Be Curious at Your Core</li>
<li>Act Like an Owner</li>
<li>Empower Employees</li>
<li>Deliver Best-in-Class Client Experiences</li>
<li>Achieve More Together</li>
</ul>
<p>Our total rewards package includes a discretionary bonus, equity awards, and a comprehensive benefits program (all based on eligibility).</p>
<p>The base salary range for this role is $60,000 to $65,000. The starting salary will be determined by job-related knowledge, skills, experience, and the market location.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>entry</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$60,000 to $65,000</Salaryrange>
      <Skills>Outbound prospecting, CRM systems, Sales engagement platforms, Business development, Lead development, Salesloft, Outreach, AI, Machine learning, Cloud infrastructure</Skills>
      <Category>Sales</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave is a cloud infrastructure company that provides a platform for AI development and deployment.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4649884006</Applyto>
      <Location>San Francisco, CA / Sunnyvale, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>0fceb87c-57b</externalid>
      <Title>PLM Development Manager</Title>
      <Description><![CDATA[<p>We are seeking a PLM Development Manager to lead our team developing Teamcenter customizations and integrations. As the PLM Development Manager, you will define the vision and strategy for PLM across the enterprise, set goals, policies, and processes that guide how PLM customization and integration work gets done company-wide.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Defining the vision and strategy for PLM across the enterprise</li>
<li>Setting goals, policies, and processes that guide how PLM customization and integration work gets done company-wide</li>
<li>Managing a department of developers, solution engineers, and technical leads</li>
<li>Building and scaling high-performing teams</li>
<li>Overseeing workforce planning and budget allocation across multiple programs</li>
<li>Creating career frameworks, technical standards, and engineering practices</li>
<li>Driving architectural decisions that impact PLM infrastructure, integrations, and scalability for the entire organization</li>
<li>Partnering with senior leadership across Engineering, Operations, IT, and Product to align PLM strategy with company priorities</li>
<li>Communicating technical strategies to C-suite audiences</li>
<li>Overseeing multi-million dollar budgets for PLM licensing, infrastructure, and team operations</li>
<li>Ensuring business continuity and operational excellence</li>
</ul>
<p>The ideal candidate will have 8+ years of engineering experience, with at least 4+ years in people management, including 2+ years managing managers or leading multiple teams. They will also have expertise in Enterprise PLMs, including architecture, customization, and integrations, as well as broad understanding of enterprise systems landscape.</p>
<p>In addition to the required qualifications, preferred qualifications include experience in defense, aerospace, or highly regulated manufacturing environments at scale; a background implementing Teamcenter at a company during hypergrowth; prior experience in startup or high-growth technology companies; and a track record of successful M&amp;A integrations or large-scale system migrations.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$166,000-$220,000 USD</Salaryrange>
      <Skills>Enterprise PLMs, Teamcenter, Architecture, Customization, Integrations, Cloud infrastructure, High-availability systems, Enterprise-scale deployments, People management, Leadership, Communication, Budget management, Defense, Aerospace, Regulated manufacturing, Teamcenter implementation, Startup growth, M&amp;A integrations, System migration</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anduril Industries</Employername>
      <Employerlogo>https://logos.yubhub.co/anduril.com.png</Employerlogo>
      <Employerdescription>Anduril Industries is a defense technology company that transforms U.S. and allied military capabilities with advanced technology.</Employerdescription>
      <Employerwebsite>https://www.anduril.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/andurilindustries/jobs/5067990007</Applyto>
      <Location>Costa Mesa, California, United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>692cbf8b-44c</externalid>
      <Title>Director of Engineering (Service Foundations)</Title>
      <Description><![CDATA[<p>As a Director of Engineering (Service Foundations) at Databricks, you will lead critical infrastructure initiatives that build and operate distributed systems, driving company-wide efficiency, reliability, and automation. You will work closely with leaders across the company, within engineering, as well as with product management, field engineering, recruiting, and HR.</p>
<p>Your responsibilities will include:</p>
<ul>
<li>Solving real business needs at a large scale by applying your software engineering skills</li>
<li>Ensuring consistent delivery against milestones and strong alignment with the field working &quot;two-in-a-box&quot; with product leadership</li>
<li>Evolving organisational structure to align with long-term initiatives, building strong &quot;5 ingredient&quot; teams with good comms architecture</li>
<li>Managing technical debt, including long-term technical architecture decisions and balancing product roadmap</li>
<li>Leading and participating in technical, product, and design discussions</li>
<li>Building, managing, and operating a highly scalable service in the cloud</li>
<li>Growing leaders on the team by providing coaching, mentorship, and growth opportunities</li>
<li>Partnering with other engineering and product leaders on planning, prioritisation, and staffing</li>
<li>Creating a culture of excellence on the team while leading with empathy</li>
</ul>
<p>We are looking for someone with 20+ years of industry experience building and operating large-scale distributed systems. You should have proven ability to build, grow, and manage high-performing infrastructure teams, including developing managers and tech leads. Deep experience running large-scale cloud infrastructure systems (AWS, Azure, or GCP), ideally across multiple clouds or regions, is also required.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>executive</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>software engineering, distributed systems, cloud infrastructure, technical debt management, team leadership, coaching and mentorship, product management, field engineering, recruiting, HR</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data, analytics, and AI. It was founded by the original creators of Apache Spark, Delta Lake, and MLflow.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8290839002</Applyto>
      <Location>Bengaluru, India</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>b36d00b1-459</externalid>
      <Title>Staff Database Reliability Engineer (DBRE), MySQL, Federal</Title>
      <Description><![CDATA[<p>We are seeking a Staff Database Reliability Engineer (DBRE) to join our team. As a DBRE, you will have ownership of all technical aspects of our data services tier from the ground up. You will partner with our core product engineers, performance engineers, site reliability engineers, and growing DBRE team, working on scaling, securing, and tuning our infrastructure, be it self-managed MySQL, RDS Aurora MySQL/PostgreSQL, or CloudSQL MySQL/PostgreSQL. Our team is committed to two Okta Engineering mantras: &quot;Always On&quot; and &quot;No Mysteries&quot;. You will ensure effective performance and 24x7 availability of the production database tier; design, implement, and document operational processes, tasks, and configuration management; and coordinate efforts towards performance tuning, scaling, and benchmarking the data services infrastructure. You will contribute to configuration management using Chef and infrastructure as code using Terraform. You will conduct thorough performance analysis and tuning to meet application SLAs, optimizing database schemas, indexes, and SQL queries, and quickly troubleshoot and resolve database performance issues.</p>
<p>Required Skills:</p>
<ul>
<li>Proven experience as a MySQL DBRE</li>
<li>In-depth knowledge of MySQL internals, performance tuning, and query optimization</li>
<li>Experience in database design, implementation, and maintenance in a high-availability environment</li>
<li>Strong proficiency in SQL and familiarity with scripting</li>
<li>Familiarity with database monitoring tools (e.g., Grafana)</li>
<li>Solid understanding of database security practices and compliance requirements</li>
<li>Ability to troubleshoot and resolve database performance issues and outages promptly</li>
<li>Excellent communication skills and ability to work effectively in a team environment</li>
<li>Bachelor&#39;s degree in Computer Science, Engineering, or a related field (or equivalent work experience)</li>
</ul>
<p>Preferred Skills:</p>
<ul>
<li>AWS Certified Database - Specialty or related certifications demonstrating proficiency in AWS database services and cloud infrastructure management</li>
<li>Familiarity or hands-on experience with PostgreSQL or other relational database management systems (RDBMS), understanding their differences and implications for database management</li>
<li>Understanding of containerization technologies such as Docker and Kubernetes and their impact on database deployments and scalability</li>
<li>Proficient in a Linux environment, including Linux internals and tuning</li>
<li>Proven track record of applying innovative solutions to complex database challenges and a strong problem-solving mindset in a dynamic operational environment</li>
</ul>
<p>This position requires the ability to access federal environments and/or have access to protected federal data. As a condition of employment for this position, the successful candidate must be able to submit documentation establishing U.S. Person status (e.g., a U.S. Citizen, National, Lawful Permanent Resident, Refugee, or Asylee; 22 CFR 120.15) upon hire. Requires in-person onboarding and travel to our San Francisco, CA HQ office or our Chicago office during the first week of employment.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$162,000-$244,000 USD</Salaryrange>
      <Skills>MySQL, MySQL internals, performance tuning, query optimization, database design, high-availability environments, SQL, scripting, database monitoring tools (e.g., Grafana), database security, compliance, troubleshooting, communication, AWS Certified Database - Specialty, PostgreSQL, RDBMS, Docker, Kubernetes, Linux internals and tuning, problem solving</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Okta</Employername>
      <Employerlogo>https://logos.yubhub.co/okta.com.png</Employerlogo>
      <Employerdescription>Okta provides identity and access management solutions to businesses.</Employerdescription>
      <Employerwebsite>https://www.okta.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/okta/jobs/7670281</Applyto>
      <Location>Bellevue, Washington; New York, New York; San Francisco, California; Washington, DC</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>dc0c258f-1f6</externalid>
      <Title>Engineering Manager II, Enterprise AI Solutions</Title>
      <Description><![CDATA[<p>We are seeking a business-savvy Engineering Manager to help define Corporate IT&#39;s AI-based future at Pinterest. Working closely with cross-functional engineering teams and business leaders, you will lead a nimble team playing a pivotal role in scaling Corporate IT&#39;s engineering department.</p>
<p>As an Engineering Manager, you will guide your team in designing and building the solutions that make our business partners&#39; jobs easier, faster, and more capable. You will grow and empower engineers while shaping how we build Pinterest&#39;s AI future.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Lead a team of employees and contractors focused on solving business problems using AI tools.</li>
<li>Work closely with the existing software engineering teams to develop a seamless and low-friction client experience.</li>
<li>Mentor junior engineers to help them grow and develop into the best that they can be.</li>
<li>Motivate and lead your team to show up every day and do their best work.</li>
<li>Collaborate with stakeholders and partner teams across the organization to architect data lake storage and metadata management technologies to unlock big data and ML/AI innovations.</li>
</ul>
<p>Requirements:</p>
<ul>
<li>2+ years of experience leading and growing engineering teams, with a strong hands-on background in Python.</li>
<li>7+ years of industry experience designing, building, and operating scalable, highly available backend systems, including owning production-grade infrastructure at scale.</li>
<li>Proficiency in designing and delivering AI-based solutions that solve real-world business problems.</li>
<li>Understanding of business unit challenges, focused on Finance, Accounting, Legal, Sales, and Marketing.</li>
<li>Experience with cloud infrastructure on AWS and containerized services using Docker and Kubernetes.</li>
<li>Demonstrated technical leadership and people management experience, including setting team vision and long-term roadmap, mentoring and growing engineers across all levels, driving day-to-day execution and engineering alignment, and partnering cross-functionally to deliver complex, high-impact platform investments.</li>
<li>Demonstrated ability to use AI to accelerate team execution, system design, and decision-making, paired with sound judgment in validating outputs, maintaining quality, and taking ownership of final outcomes.</li>
<li>Experience building storage capabilities that efficiently support large-scale ML/AI workloads, including high-throughput data access, schema evolution, and large-scale column backfills.</li>
<li>Demonstrated ability to use AI to improve speed and quality in your day-to-day workflow for relevant outputs.</li>
<li>High integrity and ownership: you protect sensitive data, avoid over-reliance on AI, and remain accountable for final decisions and deliverables.</li>
</ul>
<p>In-Office Requirement Statement:</p>
<ul>
<li>We let the type of work you do guide the collaboration style. That means we&#39;re not always working in an office, but we continue to gather for key moments of collaboration and connection.</li>
<li>This role will need to be in the office for in-person collaboration 1-2 times/quarter, and therefore can be situated anywhere in the country.</li>
</ul>
<p>Relocation Statement:</p>
<ul>
<li>This position is not eligible for relocation assistance.</li>
</ul>
<p>At Pinterest, we believe the workplace should be equitable, inclusive, and inspiring for every employee. In an effort to provide greater transparency, we are sharing the base salary range for this position. The position is also eligible for equity. Final salary is based on a number of factors including location, travel, relevant prior experience, or particular skills and expertise.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$177,185-$364,795 USD</Salaryrange>
      <Skills>Python, AI, Cloud infrastructure, Containerized services, Docker, Kubernetes, Data lake storage, Metadata management, Big data, ML/AI innovations</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Pinterest</Employername>
      <Employerlogo>https://logos.yubhub.co/pinterest.com.png</Employerlogo>
      <Employerdescription>Pinterest is a social media platform that allows users to discover and save ideas for future reference.</Employerdescription>
      <Employerwebsite>https://www.pinterest.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/pinterest/jobs/7494960</Applyto>
      <Location>San Francisco, CA, US; Remote, US</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>b71a8e89-5f0</externalid>
      <Title>Multinational Digital Infrastructure - Senior Cloud Engineer</Title>
      <Description><![CDATA[<p>Anduril Industries is seeking a Senior Cloud Engineer to join its Multinational Digital Infrastructure team. As a Senior Cloud Engineer, you will design and implement cloud environments that enable Anduril to effectively operate sovereign programmes in the U.K. and Australia, as well as expanding to other nations as Anduril&#39;s global presence increases.</p>
<p>You will work across engineering, security, and product teams to ensure our digital infrastructure is secure, scalable, and ready to support emerging mission demands.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Design, deploy, and maintain enterprise cloud landing zones, security and infrastructure tooling.</li>
<li>Collaborate with teams across the U.S. and Australia to enable secure connectivity between sovereign cloud environments.</li>
<li>Partner with government customers, authorizing officials (AOs), cybersecurity teams, and policy shops to accelerate accreditation, break through legacy barriers, and unlock access for cross-nation engineering teams.</li>
<li>Implement infrastructure automation (IaC), observability tooling, and secure configuration baselines to support scalable, repeatable environment builds.</li>
<li>Work closely with product, autonomy, Lattice, and Maritime engineering teams to integrate infrastructure capabilities with platform development, testing, and deployment workflows.</li>
<li>Act as a technical leader during environment standup, troubleshooting, and validation events; ensure classified systems perform reliably in support of mission-critical needs.</li>
<li>Support development of next-generation secure architectures for multinational development, data sharing, and mission system integration across Maritime platforms.</li>
<li>Serve as a technical representative during customer events, exercises, and operational demonstrations to ensure infrastructure readiness and mission success.</li>
</ul>
<p>Required qualifications include:</p>
<ul>
<li>Ability to obtain and maintain a UK security clearance to SC level.</li>
<li>Bachelor&#39;s degree in a STEM field or equivalent engineering experience.</li>
<li>Technical depth in one or more areas, including cloud infrastructure, secure networking, systems engineering, DevSecOps, platform architecture, cybersecurity, identity &amp; access management.</li>
<li>Specific technologies include: cloud (AWS, Azure); infrastructure as code (Terraform, CloudFormation); SCM (GitHub Enterprise); CI/CD (CircleCI, GitLab); IDAM and SSO (Okta, AWS Identity Center).</li>
<li>8+ years of relevant engineering, infrastructure, or technical program execution experience.</li>
<li>Willingness to travel domestically and internationally as required.</li>
</ul>
<p>Preferred qualifications include:</p>
<ul>
<li>Experience with secure systems engineering, ideally within UK Government or Defence.</li>
<li>Experience provisioning large enterprise cloud platforms for hundreds or thousands of users.</li>
<li>Experience designing or maintaining distributed systems, secure networks, or infrastructure supporting autonomy, AI/ML, or big data workloads.</li>
<li>Demonstrated ability to work across technical disciplines, influence without authority, and operate in ambiguous and fast-paced environments.</li>
<li>Experience working with international partners or navigating multi-nation technical or policy workflows.</li>
</ul>
<p>The salary range for this role is competitive and includes highly competitive equity grants as part of Anduril&#39;s total compensation package.</p>
<p>Additional benefits include:</p>
<ul>
<li>Comprehensive medical, dental, and vision plans at little to no cost to you.</li>
<li>Generous time off, including a holiday hiatus in December.</li>
<li>Family planning &amp; parenting support, including coverage for fertility treatments and adoption.</li>
<li>Mental health resources, including access to free therapy and life coaching.</li>
<li>Professional development opportunities, including annual reimbursement for professional development.</li>
<li>Commuter benefits and relocation assistance.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Cloud infrastructure, Secure networking, Systems engineering, DevSecOps, Platform architecture, Cybersecurity, Identity &amp; access management, AWS, Azure, Terraform, CloudFormation, GitHub Enterprise, CircleCI, Gitlab, Okta, AWS Identity Center, Secure systems engineering, Provisioning large enterprise cloud platforms, Designing or maintaining distributed systems, Infrastructure supporting autonomy, AI/ML, or big data workloads</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anduril Industries</Employername>
      <Employerlogo>https://logos.yubhub.co/anduril.com.png</Employerlogo>
      <Employerdescription>Anduril Industries is a defence technology company that designs, builds and sells advanced military systems.</Employerdescription>
      <Employerwebsite>https://www.anduril.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/andurilindustries/jobs/5039728007</Applyto>
      <Location>London, England, United Kingdom</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>a14533c3-732</externalid>
      <Title>Senior Engineer, Cilium CNI &amp; Cloud Networking</Title>
      <Description><![CDATA[<p>Network Services Team</p>
<p>The Network Services team builds and operates the foundational networking that powers CoreWeave&#39;s Kubernetes platforms at cloud scale. The team is responsible for container networking, connectivity, and network services that support large-scale, GPU-driven workloads across regions and environments. They focus on scalability, reliability, security, and performance while delivering intuitive platforms for internal teams and customers.</p>
<p>About the Role</p>
<p>As a Senior Engineer focused on our Cilium-based CNI, you will design, build, and operate the container networking layer that underpins CoreWeave&#39;s Kubernetes platforms. Day to day, you will work on evolving our CNI stack to support large, high-density GPU clusters with demanding throughput and latency requirements. You will partner closely with Kubernetes, Infrastructure, and Network Services engineers to ensure the platform is highly available, observable, and secure. This role spans architecture, implementation, and operations, with ownership from prototype through production. You will also help shape how our networking platform scales for future growth.</p>
<p>Who You Are</p>
<ul>
<li>5+ years of experience as a Software Engineer or Systems Engineer working on cloud infrastructure or large-scale distributed systems.</li>
<li>Hands-on production experience with Cilium CNI (or equivalent advanced CNIs), including cluster configuration and lifecycle management.</li>
<li>Strong understanding of Cilium&#39;s eBPF datapath, policy model, and load-balancing mechanisms.</li>
<li>Deep knowledge of cloud networking concepts, including VPCs, subnets, routing, security groups/ACLs, NAT, and ingress/egress architectures.</li>
<li>Experience designing multi-tenant network architectures with strong isolation and security.</li>
<li>Solid grounding in TCP/IP, dynamic routing (e.g., BGP), ECMP, MTU/fragmentation, and overlay/underlay networking (VXLAN, Geneve, encapsulation).</li>
<li>Experience with network observability and troubleshooting across L3–L7.</li>
<li>Proficiency in at least one systems language such as Golang or C/C++.</li>
<li>Experience working in modern CI/CD environments.</li>
<li>Experience operating Kubernetes at scale, including cluster lifecycle management and debugging networking issues across pods, nodes, and external services.</li>
<li>Demonstrated ownership of complex systems end-to-end.</li>
</ul>
<p>Preferred</p>
<ul>
<li>Experience operating cloud-scale network services across tens of thousands of nodes and multiple regions.</li>
<li>Contributions to Cilium, Kubernetes, or related open-source networking projects.</li>
<li>Experience with eBPF development and performance tuning.</li>
<li>Experience building Kubernetes operators or controllers.</li>
<li>Familiarity with service meshes, multi-cluster networking, or cluster mesh solutions.</li>
<li>Experience in GPU-heavy, HPC, or other performance-sensitive environments.</li>
</ul>
<p>Wondering if you’re a good fit?</p>
<p>We believe in investing in our people and value candidates who bring diverse experiences, even if you&#39;re not a 100% match on paper. If some of this sounds like you, we&#39;d love to talk.</p>
<ul>
<li>You love solving complex distributed systems and networking challenges at scale.</li>
<li>You’re curious about cloud-native networking, eBPF, and Kubernetes internals.</li>
<li>You’re an expert in building reliable, scalable infrastructure that runs in production.</li>
</ul>
<p>Why CoreWeave?</p>
<p>At CoreWeave, we work hard, have fun, and move fast! We’re in an exciting stage of hyper-growth that you will not want to miss out on. We’re not afraid of a little chaos, and we’re constantly learning. Our team cares deeply about how we build our product and how we work together, which is represented through our core values:</p>
<ul>
<li>Be Curious at Your Core</li>
<li>Act Like an Owner</li>
<li>Empower Employees</li>
<li>Deliver Best-in-Class Client Experiences</li>
<li>Achieve More Together</li>
</ul>
<p>The base salary range for this role is $165,000 to $242,000. The starting salary will be determined based on job-related knowledge, skills, experience, and market location. We strive for both market alignment and internal equity when determining compensation. In addition to base salary, our total rewards package includes a discretionary bonus, equity awards, and a comprehensive benefits program (all based on eligibility).</p>
<p>What We Offer</p>
<p>The range we’ve posted represents the typical compensation range for this role. To determine actual compensation, we review the market rate for each candidate which can include a variety of factors. These include qualifications, experience, interview performance, and location. In addition to a competitive salary, we offer a variety of benefits to support your needs, including:</p>
<ul>
<li>Medical, dental, and vision insurance, 100% paid for by CoreWeave</li>
<li>Company-paid Life Insurance</li>
<li>Voluntary supplemental life insurance</li>
<li>Short and long-term disability insurance</li>
<li>Flexible Spending Account</li>
<li>Health Savings Account</li>
<li>Tuition Reimbursement</li>
<li>Ability to Participate in Employee Stock Purchase Program (ESPP)</li>
<li>Mental Wellness Benefits through Spring Health</li>
<li>Family-Forming support provided by Carrot</li>
<li>Paid Parental Leave</li>
<li>Flexible, full-service childcare support with Kinside</li>
<li>401(k) with a generous employer match</li>
<li>Flexible PTO</li>
<li>Catered lunch each day in our office and data center locations</li>
<li>A casual work environment</li>
<li>A work culture focused on innovative disruption</li>
</ul>
<p>Our Workplace</p>
<p>While we prioritize a hybrid work environment, remote work may be considered for candidates located more than 30 miles from an office, based on role requirements for specialized skill sets. New hires will be invited to attend onboarding at one of our hubs within their first month. Teams also gather quarterly to support collaboration.</p>
<p>California Consumer Privacy Act - California applicants only</p>
<p>CoreWeave is an equal opportunity employer, committed to fostering an inclusive and supportive workplace. All qualified applicants and candidates will receive consideration for employment without regard to race, color, religion, sex, disability, age, sexual orientation, gender identity, national origin, veteran status, or genetic information. As part of this commitment and consistent with the Americans with Disabilities Act (ADA), CoreWeave will ensure that qualified applicants and candidates with disabilities are provided reasonable accommodations for the hiring process, unless such accommodation would cause an undue hardship. If reasonable accommodation is needed, please contact: careers@coreweave.com.</p>
<p>Export Control Compliance</p>
<p>This position requires access to export controlled information. To conform to U.S. Government export regulations applicable to that information, applicant must either be (A) a U.S. person, defined as a (i) U.S. citizen or national, (ii) U.S. lawful permanent resident (green card holder), (iii) refugee under 8 U.S.C. § 1157, or (iv) asylee under 8 U.S.C. § 1158, (B) eligible to access the export controlled information without a required export authorization, or (C) eligible and reasonably likely to obtain the required export authorization from the applicable U.S. government agency. CoreWeave may, for legitimate business reasons, decline to pursue any export licensing process.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$165,000-$242,000 USD</Salaryrange>
      <Skills>Cilium CNI, cloud infrastructure, large-scale distributed systems, container networking, connectivity, network services, Kubernetes, eBPF datapath, policy model, load-balancing mechanisms, cloud networking concepts, VPCs, subnets, routing, security groups/ACLs, NAT, ingress/egress architectures, TCP/IP, dynamic routing, ECMP, MTU/fragmentation, overlay/underlay networking, Golang, C/C++, CI/CD environments, Kubernetes at scale, cluster lifecycle management, debugging networking issues, cloud-scale network services, Cilium, eBPF development, performance tuning, Kubernetes operators, controllers, service meshes, multi-cluster networking, cluster mesh solutions, GPU-heavy, HPC, performance-sensitive environments</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave is a cloud computing company that provides a platform for building and scaling AI applications.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4653971006</Applyto>
      <Location>Livingston, NJ / New York, NY / Sunnyvale, CA / Bellevue, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>49c3bdc7-2c4</externalid>
      <Title>Oracle Fusion Software Developer</Title>
      <Description><![CDATA[<p>We are looking for an expert Oracle Integration Developer to join our Arsenal (Enterprise Systems) team. Your immediate mission: take ownership of our critical enterprise integrations connecting Oracle Fusion ERP with our upstream and downstream systems. These integrations, built on Oracle Integration Cloud, form the digital backbone that governs how we manage our business operations, from product data and procurement to manufacturing and financial processes.</p>
<p>You will be tasked with stabilizing, optimizing, and making them exceptionally robust. Long-term, you will be the subject matter expert responsible for architecting and scaling our enterprise integration landscape. This is a high-impact role for someone who thrives on solving complex data challenges and wants to build the operational foundation that enables Anduril to scale its mission.</p>
<p>Stabilize &amp; Optimize: Dive deep into existing Oracle Fusion ERP integrations across manufacturing, supply chain, finance, and engineering systems. Diagnose root causes of instability, re-architect weak points, and implement robust error handling and monitoring to achieve mission-critical reliability.</p>
<p>Architect &amp; Build: Design and develop new, scalable enterprise integrations using Oracle Integration Cloud (OIC). Translate complex business requirements for product data, multi-level Bills of Material (BOMs), procurement, inventory, work orders, and financial transactions into resilient and efficient integration flows.</p>
<p>Own the Integration Lifecycle: Manage the end-to-end process from design and development through testing (unit, SIT, UAT) and deployment, utilizing CI/CD best practices. Proactively tune and maintain integrations to ensure peak performance as data volumes grow.</p>
<p>Ensure Data Integrity: Become the trusted expert on data transformation and mapping between systems. Implement rigorous validation and reconciliation logic to guarantee that our enterprise data is flawless across all systems.</p>
<p>Collaborate &amp; Influence: Act as the key technical partner to our ERP, Manufacturing, Supply Chain, and Finance teams. Clearly articulate technical designs, trade-offs, and progress to both engineering peers and business stakeholders, guiding them toward best-practice integration patterns.</p>
<p>Leverage Modern Oracle Cloud Tools: Utilize Oracle Visual Builder Cloud Service (VBCS) where appropriate to build lightweight user interfaces that enhance integration workflows, data validation, or operational dashboards.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$129,000-$171,000 USD</Salaryrange>
      <Skills>Oracle Integration Cloud (OIC), Oracle Fusion ERP, Application and Tech Adapters (REST, SOAP, File, FTP, Oracle SaaS, Database), Connections, Mappings, Lookups, Error Handling, JavaScript, XSLT, XPath, SQL, Relational database concepts, Oracle Visual Builder Cloud Service (VBCS), Oracle Business Intelligence Cloud Connector (BICC), Oracle Cloud Infrastructure (OCI) services, PLM systems (e.g., Teamcenter, Windchill, Arena), Git-based source control, CI/CD pipelines, Discrete manufacturing environment, Python, Groovy, Java</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anduril Industries</Employername>
      <Employerlogo>https://logos.yubhub.co/anduril.com.png</Employerlogo>
      <Employerdescription>Anduril Industries is a defence technology company that transforms U.S. and allied military capabilities with advanced technology.</Employerdescription>
      <Employerwebsite>https://www.anduril.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/andurilindustries/jobs/5061437007</Applyto>
      <Location>Boston, Massachusetts, United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>3d3e5c3d-569</externalid>
      <Title>Senior Engineer, Datacenter Server Lifecycle</Title>
      <Description><![CDATA[<p>As a Senior Engineer on the Datacenter Machine Lifecycle team, you will own the end-to-end operational journey of every machine in our facility, from initial provisioning and deployment, across its working life, through maintenance and refresh, and all the way to decommissioning.</p>
<p>This is greenfield work: you will help define the processes, tooling, and operational standards that govern how we run and retire hardware at scale.</p>
<p>A distinguishing aspect of this role is its deep intersection with security. The machines in our datacenter handle some of the most sensitive workloads in AI: training frontier models and serving millions of users interacting with Claude.</p>
<p>Ensuring that every machine in the fleet is trusted, attested, and operating with a verified chain of integrity from the hardware up is a core part of the job, not an afterthought.</p>
<p>You will partner closely with our Infrastructure Security team to define and enforce trusted compute standards across the lifecycle, from secure provisioning through end-of-life handling.</p>
<p>Responsibilities:</p>
<ul>
<li>Lead the build-out of automation to support datacenters containing tens of thousands of servers.</li>
<li>Own and define the end-to-end machine lifecycle strategy, from provisioning and deployment through operation, maintenance, refresh, and decommissioning, and maintain automation and operational procedures for common lifecycle events (e.g. hardware failures, firmware upgrades, fleet rotations).</li>
<li>Partner closely with Infrastructure Security to design and enforce trusted compute standards across the machine lifecycle.</li>
<li>Work closely with our Networking team to ensure end-to-end connectivity across all sites.</li>
<li>Build and maintain tooling to track machine health, configuration, and operational status across the full datacenter fleet.</li>
</ul>
<p>You May Be a Good Fit If You:</p>
<ul>
<li>Have 5+ years of experience in datacenter operations, hardware infrastructure management, or a closely related discipline.</li>
<li>Have deep, hands-on experience with server hardware, including rack deployment, cabling, troubleshooting, and understanding failure modes at scale.</li>
<li>Understand hardware lifecycle management end-to-end: asset tracking, provisioning workflows, maintenance scheduling, and decommissioning practices.</li>
<li>Have strong proficiency in at least one programming language (e.g., Python, Rust, Go, or Java).</li>
<li>Are comfortable navigating ambiguity and working independently to drive progress on complex, cross-functional problems.</li>
<li>Communicate clearly and can build consensus with a wide range of stakeholders.</li>
<li>Have working knowledge of modern cloud infrastructure, including Kubernetes, Infrastructure as Code, AWS, and GCP.</li>
<li>Are comfortable with occasional travel to datacenter sites across North America.</li>
</ul>
<p>Strong Candidates May Also Have:</p>
<ul>
<li>Hands-on experience with GPU or AI accelerator hardware (e.g. NVIDIA A100/H100, AMD MI300, Google TPUs, or AWS Trainium) and an understanding of their operational demands.</li>
<li>Familiarity with modern provisioning tooling such as coreboot, LinuxBoot, or u-root.</li>
<li>Experience building or contributing to datacenter automation or fleet management platforms.</li>
<li>Experience building and deploying server operating system distributions across thousands of hosts.</li>
<li>A background in large-scale capacity planning and hardware refresh strategy, ideally at a hyperscaler or large cloud provider.</li>
<li>Experience with trusted compute and hardware security concepts such as secure boot, TPM, hardware attestation, and firmware verification, or a strong desire to develop deep expertise in this area.</li>
</ul>
<p>The annual compensation range for this role is £255,000-£325,000 GBP.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>£255,000-£325,000 GBP</Salaryrange>
      <Skills>datacenter operations, hardware infrastructure management, server hardware, programming language, cloud infrastructure, Kubernetes, Infrastructure as Code, AWS, GCP, GPU or AI accelerator hardware, modern provisioning tooling, datacenter automation, fleet management platforms, trusted compute and hardware security concepts</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation that creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5131038008</Applyto>
      <Location>London, UK</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>9cfd7135-0bb</externalid>
      <Title>OIC Developer</Title>
      <Description><![CDATA[<p>We are looking for an expert Oracle Integration Developer to join our Arsenal (Enterprise Systems) team. Your immediate mission: take ownership of our critical enterprise integrations connecting Oracle Fusion ERP with our upstream and downstream systems. These integrations, built on Oracle Integration Cloud, form the digital backbone that governs how we manage our business operations, from product data and procurement to manufacturing and financial processes.</p>
<p>Stabilize &amp; Optimize: Dive deep into existing Oracle Fusion ERP integrations across manufacturing, supply chain, finance, and engineering systems. Diagnose root causes of instability, re-architect weak points, and implement robust error handling and monitoring to achieve mission-critical reliability.</p>
<p>Architect &amp; Build: Design and develop new, scalable enterprise integrations using Oracle Integration Cloud (OIC). Translate complex business requirements for product data, multi-level Bills of Material (BOMs), procurement, inventory, work orders, and financial transactions into resilient and efficient integration flows.</p>
<p>Own the Integration Lifecycle: Manage the end-to-end process from design and development through testing (unit, SIT, UAT) and deployment, utilizing CI/CD best practices. Proactively tune and maintain integrations to ensure peak performance as data volumes grow.</p>
<p>Ensure Data Integrity: Become the trusted expert on data transformation and mapping between systems. Implement rigorous validation and reconciliation logic to guarantee that our enterprise data is flawless across all systems.</p>
<p>Collaborate &amp; Influence: Act as the key technical partner to our ERP, Manufacturing, Supply Chain, and Finance teams. Clearly articulate technical designs, trade-offs, and progress to both engineering peers and business stakeholders, guiding them toward best-practice integration patterns.</p>
<p>Leverage Modern Oracle Cloud Tools: Utilize Oracle Visual Builder Cloud Service (VBCS) where appropriate to build lightweight user interfaces that enhance integration workflows, data validation, or operational dashboards.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$129,000-$171,000 USD</Salaryrange>
      <Skills>Oracle Integration Cloud (OIC), Oracle Fusion ERP, Application and Tech Adapters (REST, SOAP, File, FTP, Oracle SaaS, Database), Connections, Mappings, Lookups, Error Handling, JavaScript, XSLT, XPath, SQL, Oracle Visual Builder Cloud Service (VBCS), Oracle Business Intelligence Cloud Connector (BICC), Oracle Cloud Infrastructure (OCI) services, PLM systems (e.g., Teamcenter, Windchill, Arena)</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anduril Industries</Employername>
      <Employerlogo>https://logos.yubhub.co/anduril.com.png</Employerlogo>
      <Employerdescription>Anduril Industries is a defence technology company that transforms U.S. and allied military capabilities with advanced technology.</Employerdescription>
      <Employerwebsite>https://www.anduril.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/andurilindustries/jobs/5058273007</Applyto>
      <Location>Seattle, Washington, United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>e1d2b108-713</externalid>
      <Title>Oracle Fusion Software Developer</Title>
      <Description><![CDATA[<p>We are looking for an expert Oracle Integration Developer to join our Arsenal (Enterprise Systems) team. Your immediate mission: take ownership of our critical enterprise integrations connecting Oracle Fusion ERP with our upstream and downstream systems. These integrations, built on Oracle Integration Cloud, form the digital backbone that governs how we manage our business operations, from product data and procurement to manufacturing and financial processes.</p>
<p>You will be tasked with stabilizing and optimizing these integrations and making them exceptionally robust. Long-term, you will be the subject matter expert responsible for architecting and scaling our enterprise integration landscape. This is a high-impact role for someone who thrives on solving complex data challenges and wants to build the operational foundation that enables Anduril to scale its mission.</p>
<p>The successful candidate will have 5+ years of hands-on experience developing complex integrations with deep specialization in Oracle Integration Cloud (OIC), specifically Oracle Integration 3. They will have proven experience integrating Oracle Fusion Cloud ERP with upstream and downstream enterprise systems, including deep familiarity with ERP data objects such as Items, BOMs, Suppliers, Purchase Orders, Work Orders, Inventory Transactions, and Financial data.</p>
<p>Key responsibilities will include stabilizing and optimizing existing Oracle Fusion ERP integrations, architecting and building new enterprise integrations using Oracle Integration Cloud, owning the integration lifecycle, ensuring data integrity, collaborating and influencing with cross-functional teams, and leveraging modern Oracle Cloud tools.</p>
<p>The ideal candidate will have excellent SQL skills, a strong command of XSLT, XPath, and complex data mapping, demonstrable experience building, securing, and consuming RESTful APIs and SOAP web services, and experience with Oracle Fusion ERP modules such as SCM (Supply Chain Management), Manufacturing, Procurement, or Financials.</p>
<p>A tenacious problem-solver with a track record of troubleshooting, debugging, and stabilizing complex, business-critical systems, the successful candidate will be highly motivated, with a passion for delivering high-quality solutions and a commitment to continuous learning and improvement.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$129,000-$171,000 USD</Salaryrange>
      <Skills>Oracle Integration Cloud, Oracle Fusion Cloud ERP, XSLT, XPath, RESTful APIs, SOAP web services, SQL, Oracle Fusion ERP modules (SCM, Manufacturing, Procurement, or Financials), Oracle Visual Builder Cloud Service, Oracle Business Intelligence Cloud Connector, Oracle Cloud Infrastructure services (Functions, API Gateway, Object Storage, Logging, Autonomous Database), PLM systems (Teamcenter, Windchill, Arena), Git-based source control and CI/CD pipelines, Discrete manufacturing environment, Other programming languages (Python, Groovy, Java)</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anduril Industries</Employername>
      <Employerlogo>https://logos.yubhub.co/anduril.com.png</Employerlogo>
      <Employerdescription>Anduril Industries is a defence technology company that transforms U.S. and allied military capabilities with advanced technology. It brings the expertise, technology, and business model of the 21st century&apos;s most innovative companies to the defence industry.</Employerdescription>
      <Employerwebsite>https://www.anduril.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/andurilindustries/jobs/5058269007</Applyto>
      <Location>Seattle, Washington, United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>67759024-e54</externalid>
      <Title>Technical Solutions Manager</Title>
      <Description><![CDATA[<p>The Customer Experience (CX) Organisation at CoreWeave is dedicated to ensuring every client running AI workloads at scale has a seamless, reliable, and high-performance experience.</p>
<p>This team supports the infrastructure that powers the AI revolution, working across data centres, hardware systems, and customer workloads to maintain the integrity of our cloud platform. The CX organisation aligns closely with internal and customer engineering teams, offering valuable insights from the field and contributing to the CoreWeave product roadmap and development.</p>
<p>We are seeking a remarkable Technical Solutions Manager who shares our passion and has a deep understanding of GPU infrastructure &amp; AI applications to join our CX Organisation. The team is responsible for educating prospective customers on the technical value of CoreWeave, designing and defining customer deliverables and integration points, onboarding and enabling customers, and ensuring the successful ongoing operations of CoreWeave within customer environments.</p>
<p>In this role, you will:</p>
<ul>
<li>Lead Strategic Customer Relationships: Own technical customer relationships to ensure successful adoption and customer satisfaction.</li>
<li>Define Customer Requirements: Collaborate with customers and partners to define technical requirements that meet the customer&#39;s needs for AI/ML.</li>
<li>Drive End-to-End Program Execution: Oversee the execution of complex programs, including planning, resource management, risk assessment, and internal/external stakeholder engagement to ensure successful outcomes.</li>
<li>Engage Stakeholders and Influence Product Strategy: Gather, document, and communicate program requirements to ensure clarity, feasibility, and alignment with critical objectives. Share customer feedback with Product Management and Engineering, influencing product direction.</li>
<li>Foster Collaboration: Facilitate effective communication among various teams, including engineering, product management, operations, support, and sales.</li>
<li>Build Strong Relationships: Establish and maintain strong relationships with stakeholders to align program objectives and secure necessary resources and support.</li>
<li>Proactively Manage Risks: Identify potential risks and issues throughout the program and proactively communicate to relevant stakeholders to drive resolutions and minimise impact.</li>
<li>Measure Success: Define and track key performance indicators (KPIs) and metrics to measure program success and effectiveness.</li>
<li>Drive Improvements: Identify and address inefficiencies to enhance operational speed and quality outcomes.</li>
</ul>
<p>Who You Are:</p>
<ul>
<li>B.S. in Computer Science or a related technical discipline, or equivalent experience</li>
<li>5+ years of experience in technical program management, customer success management, or professional services delivery management, with a focus on cloud infrastructure and AI/ML applications</li>
<li>Strong communication skills through both long-form documents and short-form/asynchronous communications with internal and external stakeholders</li>
<li>Proven track record of successfully organising and coordinating the efforts of multiple teams to deliver long-running, complex projects with visibility to senior stakeholders.</li>
<li>Experience with multiple staples of leadership, with the ability to work in a bottom-up leadership-style organisation that focuses on enablement, communication, organisation, and inspiration over task management.</li>
<li>Demonstrated experience with proactive self-management, including recognising when to seek help and a willingness to ask for it in a timely manner within a safe environment.</li>
<li>Experience with client management within a cloud infrastructure landscape, ideally with an understanding of the fundamentals of Kubernetes, GPU compute, AI/ML, and high-performance computing</li>
</ul>
<p>Why CoreWeave? At CoreWeave, we work hard, have fun, and move fast! We&#39;re in an exciting stage of hyper-growth that you will not want to miss out on. We&#39;re not afraid of a little chaos, and we&#39;re constantly learning. Our team cares deeply about how we build our product and how we work together, which is represented through our core values:</p>
<ul>
<li>Be Curious at Your Core</li>
<li>Act Like an Owner</li>
<li>Empower Employees</li>
<li>Deliver Best-in-Class Client Experiences</li>
<li>Achieve More Together</li>
</ul>
<p>We support and encourage an entrepreneurial outlook and independent thinking. We foster an environment that encourages collaboration and provides the opportunity to develop innovative solutions to complex problems. As we get set for takeoff, the growth opportunities within the organisation are constantly expanding. You will be surrounded by some of the best talent in the industry, who will want to learn from you, too. Come join us!</p>
<p>The base salary range for this role is $185,000 to $215,000. The starting salary will be determined based on job-related knowledge, skills, experience, and market location. We strive for both market alignment and internal equity when determining compensation. In addition to base salary, our total rewards package includes a discretionary bonus, equity awards, and a comprehensive benefits programme (all based on eligibility).</p>
<p>What We Offer</p>
<p>The range we&#39;ve posted represents the typical compensation range for this role. To determine actual compensation, we review the market rate for each candidate, which can include a variety of factors. These include qualifications, experience, interview performance, and location. In addition to a competitive salary, we offer a variety of benefits to support your needs, including:</p>
<ul>
<li>Medical, dental, and vision insurance, 100% paid for by CoreWeave</li>
<li>Company-paid Life Insurance</li>
<li>Voluntary supplemental life insurance</li>
<li>Short and long-term disability insurance</li>
<li>Flexible Spending Account</li>
<li>Health Savings Account</li>
<li>Tuition Reimbursement</li>
<li>Ability to Participate in Employee Stock Purchase Programme (ESPP)</li>
<li>Mental Wellness Benefits through Spring Health</li>
<li>Family-Forming support provided by Carrot</li>
<li>Paid Parental Leave</li>
<li>Flexible, full-service childcare support with Kinside</li>
<li>401(k) with a generous employer match</li>
<li>Flexible PTO</li>
<li>Catered lunch each day in our office and data centre locations</li>
<li>A casual work environment</li>
<li>A work culture focused on innovative disruption</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$185,000 to $215,000</Salaryrange>
      <Skills>Cloud infrastructure, AI and machine learning, GPU infrastructure, Kubernetes, GPU compute, High-performance computing</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave is a cloud infrastructure company that provides a platform for AI and machine learning workloads.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4380852006</Applyto>
      <Location>Sunnyvale, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>782a1c68-325</externalid>
      <Title>Senior DevOps Engineer</Title>
      <Description><![CDATA[<p>At ZoomInfo, we&#39;re looking for a Senior DevOps Engineer to join our Infrastructure Engineering group. As a Senior DevOps Engineer, you will be responsible for innovation in infrastructure and automation for ZoomInfo Engineering. You will have a strong background in modern infrastructure, with a thorough understanding of industry best practices. You will have a high level of comfort participating in challenging technical discussions and advocating for best practices in a high-paced environment.</p>
<p>Responsibilities:</p>
<ul>
<li>Thorough, clear, concise documentation of new and existing standards, procedures, and automated workflows</li>
<li>Championing of best practices and standards around infrastructure configuration and management</li>
<li>Experience in creating internal products and managing their software development lifecycle</li>
<li>Deployment, configuration, and management of infrastructure via infrastructure as code</li>
<li>Working hands on with cloud infrastructure (AWS, Azure, and GCP)</li>
<li>Working hands on with container infrastructure (Docker, Kubernetes, ECS, EKS, GKE, GAE, etc.)</li>
<li>Configuration and management of Linux based tools and third-party cloud services</li>
<li>Continuous improvement of our infrastructure, ensuring that it is highly available and observable</li>
</ul>
<p>Minimum Requirements:</p>
<ul>
<li>Solid foundation of experience managing Linux systems in virtual environments (6+ years)</li>
<li>Deploying and maintaining highly available infrastructure in one or more Cloud providers (5+ years, AWS or GCP preferred)</li>
<li>Infrastructure as code using Terraform (4+ years)</li>
<li>Creating, deploying, maintaining, and troubleshooting Docker images (4+ years)</li>
<li>Scoping, deploying, maintaining, and troubleshooting Kubernetes clusters (4+ years)</li>
<li>Developing and maintaining an active codebase, preferably in Go or Python (3+ years)</li>
<li>Experience with PaaS technologies (5+ years, EKS and GKE preferred)</li>
<li>Maintaining monitoring and observability tools (Datadog, Prometheus preferred)</li>
<li>Thorough understanding of network infrastructure and concepts (VPNs, routers and routing protocols, TCP/IP, IPv4 and v6, UDP, OSI layers, etc.)</li>
<li>Experience with load balancing and proxy technologies (Istio, Nginx, HAProxy, Apache, Cloud load balancers, etc.)</li>
<li>Debugging and troubleshooting complex problems in cloud-native infrastructure.</li>
<li>Slack native mentality.</li>
<li>Bachelor’s Degree in Computer Science or a related technical discipline, or the equivalent combination of education, technical certifications, training, or work experience.</li>
</ul>
<p>Abilities Required:</p>
<ul>
<li>Demonstrated ability to learn new technologies quickly and independently</li>
<li>Strong technical, organizational and interpersonal skills</li>
<li>Strong written and verbal communication skills</li>
<li>Must be able to read, understand, and communicate complex problems and solutions in English over a textual medium (such as Slack)</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Linux, Cloud infrastructure (AWS, Azure, GCP), Container infrastructure (Docker, Kubernetes, ECS, EKS, GKE, GAE), Infrastructure as code (Terraform), Go, Python, PaaS technologies (EKS, GKE), Monitoring and observability tools (Datadog, Prometheus)</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>ZoomInfo</Employername>
      <Employerlogo>https://logos.yubhub.co/zoominfo.com.png</Employerlogo>
      <Employerdescription>ZoomInfo is a technology company that provides a go-to-market intelligence platform for businesses.</Employerdescription>
      <Employerwebsite>https://www.zoominfo.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/zoominfo/jobs/8287254002</Applyto>
      <Location>Ra&apos;anana, Israel</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>83fb6b32-83e</externalid>
      <Title>Senior OCI and Fusion Administrator</Title>
      <Description><![CDATA[<p>About Us</p>
<p>At Cloudflare, we are on a mission to help build a better Internet. Today the company runs one of the world’s largest networks that powers millions of websites and other Internet properties for customers ranging from individual bloggers to SMBs to Fortune 500 companies.</p>
<p>This role is responsible for the technical administration, environment management, and ongoing platform integrity of the Oracle Fusion ERP Cloud environment, operating as a pure technical administrator for Oracle Fusion Applications and the underlying Cloud Infrastructure.</p>
<p>Key Responsibilities</p>
<ul>
<li>Environment Management &amp; Maintenance: Own the technical management of all Fusion environments, including executing scheduled environment refreshes, cloning instances, managing environment usage, and ensuring system configuration baselines.</li>
<li>Cloud Update Execution: Participate in the technical preparation and execution of Oracle’s mandatory quarterly cloud updates, including performing pre-update checks and technical smoke testing post-update.</li>
<li>Platform Stability &amp; Governance: Own the non-functional requirements for the Oracle Cloud environment, including security architecture, role design governance, and performance benchmarking. Enforce technical configuration control standards.</li>
<li>Security Administration: Provide security administration and support for all Oracle Fusion Applications, PaaS, and DBaaS platforms, focusing on security key management, monitoring dashboards, and assisting with artifact deployment.</li>
<li>Risk Management Cloud: Serve as the technical owner of the Oracle Fusion Risk Management Cloud service and provide support to the Compliance business teams.</li>
<li>Technical Support &amp; Troubleshooting: Provide Level 2/3 technical support for environment-related issues, access problems, and deployment failures. Serve as an escalation point to conduct root cause analysis for platform-level incidents.</li>
</ul>
<p>Required Qualifications</p>
<ul>
<li>5+ years focusing on technical administration/support within an Oracle Fusion environment.</li>
<li>Expert-level knowledge of managing Oracle Fusion Cloud environments, including environment refresh and cloning processes.</li>
<li>Deep technical familiarity with Oracle Cloud Infrastructure administration and monitoring.</li>
<li>Strong understanding of security architecture, Oracle Fusion Risk Management Cloud, role design governance, and performance management within a cloud ERP.</li>
</ul>
<p>Preferred Qualifications</p>
<ul>
<li>Experience in an organization transitioning to the Oracle Cloud ecosystem.</li>
<li>Hands-on experience with various OCI PaaS toolsets.</li>
<li>Exposure to the data center infrastructure industry.</li>
<li>Relevant professional product/functional certifications (e.g., Oracle Cloud Infrastructure and Security certifications).</li>
<li>Skilled in administering DevOps tools such as Flexagon FlexDeploy and the Opal IGA tool.</li>
</ul>
<p>What Makes Cloudflare Special?</p>
<p>We’re not just a highly ambitious, large-scale technology company. We’re a highly ambitious, large-scale technology company with a soul. Fundamental to our mission to help build a better Internet is protecting the free and open Internet.</p>
<p>Project Galileo: Since 2014, we&#39;ve equipped more than 2,400 journalism and civil society organizations in 111 countries with powerful tools to defend themselves against attacks that would otherwise censor their work, at no cost, using technology already deployed by Cloudflare’s enterprise customers.</p>
<p>Athenian Project: In 2017, we created the Athenian Project to ensure that state and local governments have the highest level of protection and reliability for free, so that their constituents have access to election information and voter registration. Since the project began, we&#39;ve provided services to more than 425 local government election websites in 33 states.</p>
<p>1.1.1.1: We released 1.1.1.1 to help fix the foundation of the Internet by building a faster, more secure and privacy-centric public DNS resolver. This is available publicly for everyone to use - it is the first consumer-focused service Cloudflare has ever released.</p>
<p>Here’s the deal - we don’t store client IP addresses. Never, ever. We will continue to abide by our privacy commitment and ensure that no user data is sold to advertisers or used to target consumers.</p>
<p>Sound like something you’d like to be a part of? We’d love to hear from you!</p>
<p>This position may require access to information protected under U.S. export control laws, including the U.S. Export Administration Regulations. Please note that any offer of employment may be conditioned on your authorization to receive software or technology controlled under these U.S. export laws without sponsorship for an export license.</p>
<p>Cloudflare is proud to be an equal opportunity employer. We are committed to providing equal employment opportunity for all people and place great value in both diversity and inclusiveness. All qualified applicants will be considered for employment without regard to their, or any other person&#39;s, perceived or actual race, color, religion, sex, gender, gender identity, gender expression, sexual orientation, national origin, ancestry, citizenship, age, physical or mental disability, medical condition, family care status, or any other basis protected by law.</p>
<p>We are an AA/Veterans/Disabled Employer. Cloudflare provides reasonable accommodations to qualified individuals with disabilities. Please tell us if you require a reasonable accommodation to apply for a job. Examples of reasonable accommodations include, but are not limited to, changing the application process, providing documents in an alternate format, using a sign language interpreter, or using specialized equipment. If you require a reasonable accommodation to apply for a job, please contact us via e-mail at hr@cloudflare.com or via mail at 101 Townsend St. San Francisco, CA 94107.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Oracle Fusion Cloud environments, environment refresh and cloning processes, Oracle Cloud Infrastructure administration, security architecture, role design governance, performance management within a cloud ERP, Flexagon FlexDeploy, Opal IGA tool, DevOps tools, OCI PaaS Toolsets, Data Center Infrastructure industry</Skills>
      <Category>IT</Category>
      <Industry>Technology</Industry>
      <Employername>Cloudflare</Employername>
      <Employerlogo>https://logos.yubhub.co/cloudflare.com.png</Employerlogo>
      <Employerdescription>Cloudflare is a technology company that helps build a better Internet by protecting and accelerating any Internet application online.</Employerdescription>
      <Employerwebsite>https://www.cloudflare.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/cloudflare/jobs/7609741</Applyto>
      <Location>In-Office</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>2075095a-d93</externalid>
      <Title>Senior Software Engineer, BizTech (AI Products)</Title>
      <Description><![CDATA[<p><strong>Job Title</strong></p>
<p>Senior Software Engineer, AI Products (India)</p>
<p><strong>Company Overview</strong></p>
<p>Airbnb is a global online marketplace for booking accommodations, with over 5 million hosts and 2 billion guest arrivals.</p>
<p><strong>The Community You Will Join</strong></p>
<p>The Airfam Products team exists to make every Airbnb employee more productive through a unified digital headquarters experience. As part of a 13-person cross-functional team of engineers, designers, researchers, and product managers, you&#39;ll work on platforms that serve Airbnb&#39;s entire global workforce. Our portfolio includes One Airbnb (the company&#39;s internal cultural hub with enterprise search, people profiles, and AI-powered chat), OneChat (Airbnb&#39;s enterprise AI assistant enabling secure LLM interactions), and a suite of tools that power how employees discover information, connect with colleagues, and get work done. You&#39;ll be joining the AI for Non-Developers workstream, focused on expanding AI productivity tools to all Airbnb employees: building OneChat Agents, deep research capabilities, artifact creation tools, and task automation that make AI accessible to everyone, regardless of technical background.</p>
<p><strong>The Difference You Will Make</strong></p>
<p>As a Senior Software Engineer on the Airfam Products team, you&#39;ll be instrumental in building Airbnb&#39;s next generation of AI-powered employee experience platforms. Your work will be a force multiplier for the entire company: every AI feature you ship, every system you architect, and every engineer you mentor will amplify productivity across Airbnb&#39;s global workforce. You will:</p>
<ul>
<li>Democratize AI by building tools that empower non-technical employees to leverage the power of LLMs</li>
<li>Drive innovation by taking AI prototypes from concept to production at scale</li>
<li>Shape the future of how Airbnb employees work, collaborate, and discover information</li>
</ul>
<p><strong>A Typical Day</strong></p>
<ul>
<li>Lead the technical design and implementation of LLM-powered features for OneChat and enterprise AI tools, including RAG pipelines, AI agents, and prompt optimization</li>
<li>Partner with product managers, designers, and cross-functional teams to translate user problems into AI-powered solutions that serve Airbnb&#39;s global workforce</li>
<li>Develop and iterate on agentic AI capabilities, including multi-step reasoning, tool use, and context-aware decision-making</li>
<li>Implement evaluation pipelines and quality systems to measure model performance, detect hallucinations, and ensure responsible AI practices</li>
<li>Own production AI systems end-to-end, including deployment strategies, monitoring, alerting, and incident response</li>
<li>Collaborate with the DevAI team on AirChat SDK integrations, MCP (Model Context Protocol) implementations, and Glean Action Packs</li>
<li>Mentor engineers (L6-L8) through design reviews, architecture discussions, and pair programming sessions</li>
<li>Stay current with the rapidly evolving GenAI landscape, evaluating new models and techniques for potential application</li>
<li>Balance hands-on technical contributions with technical leadership activities</li>
</ul>
<p><strong>Your Expertise</strong></p>
<ul>
<li>8+ years of software engineering experience, with significant focus on building production AI/ML systems</li>
<li>2+ years of hands-on experience with Large Language Models (LLMs), including fine-tuning, prompt engineering, embeddings, and retrieval-augmented generation (RAG)</li>
<li>Strong proficiency in backend technologies (TypeScript, Go, or Java)</li>
<li>Strong backend and distributed systems expertise, including API design (REST, GraphQL) and cloud infrastructure (AWS, GCP, or Azure)</li>
<li>Track record of shipping AI-powered products from prototype to production</li>
<li>Proven ability to collaborate cross-functionally and influence without authority</li>
<li>Excellent communication skills with ability to distill complex technical concepts for diverse audiences</li>
<li>Bachelor&#39;s degree in Computer Science, Engineering, or equivalent practical experience</li>
</ul>
<p><strong>Preferred</strong></p>
<ul>
<li>Master&#39;s or PhD in Computer Science, Machine Learning, or related field</li>
<li>Experience building AI agents and multi-agent systems, preferably using Claude</li>
<li>Experience building integrations using MCP</li>
<li>Experience with containerization and orchestration (Docker, Kubernetes)</li>
<li>Background in building enterprise-grade internal tools and developer productivity platforms</li>
<li>Experience with frontend technologies (React, Next.js) for full-stack AI product development</li>
<li>Contributions to open-source Gen AI/ML projects or publications at top venues</li>
</ul>
<p><strong>Your Location</strong></p>
<p>This position is based in Bangalore, India with a hybrid work arrangement. You&#39;ll collaborate with teammates across global time zones, with primary alignment to Pacific Time for key meetings.</p>
<p><strong>Our Commitment to Inclusion &amp; Belonging</strong></p>
<p>Airbnb is committed to working with the broadest talent pool possible. We believe diverse ideas foster innovation and engagement, and allow us to attract creatively-led people, and to develop the best products, services and solutions. All qualified individuals are encouraged to apply.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>software engineering, production AI/ML systems, Large Language Models (LLMs), backend technologies (TypeScript, Go, or Java), API design (REST, GraphQL), cloud infrastructure (AWS, GCP, or Azure), master&apos;s or PhD in Computer Science, Machine Learning, or related field, experience building AI agents and multi-agent systems, experience building integrations using MCP, experience with containerization and orchestration (Docker, Kubernetes), background in building enterprise-grade internal tools and developer productivity platforms, experience with frontend technologies (React, Next.js) for full-stack AI product development</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Airbnb</Employername>
      <Employerlogo>https://logos.yubhub.co/airbnb.com.png</Employerlogo>
      <Employerdescription>Airbnb is a global online marketplace for booking accommodations, with over 5 million hosts and 2 billion guest arrivals.</Employerdescription>
      <Employerwebsite>https://www.airbnb.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/airbnb/jobs/7730723</Applyto>
      <Location>Bangalore, India</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>de8c923e-0d8</externalid>
      <Title>Security Software Engineer - Crypto Services</Title>
      <Description><![CDATA[<p>We&#39;re seeking a Security Software Engineer with a specialization in crypto services and key management to develop novel security tooling for securing our suite of products. The ideal candidate can develop, test, and debug embedded software with mission-critical security responsibilities.</p>
<p>Responsibilities:</p>
<ul>
<li>Design and develop cybersecurity tools for real-time embedded, embedded Linux, and Android systems.</li>
<li>Design and develop resilient software supporting all phases of key handling on embedded systems, from key load through sanitization.</li>
<li>Develop thorough testing and qualification procedures for security-critical components.</li>
<li>Collaborate with cross-functional teams to identify specific security needs and implement solutions.</li>
<li>Conduct code reviews and ensure adherence to security best practices.</li>
<li>Stay updated on the latest security threats and technologies.</li>
</ul>
<p>US Salary Range: $166,000 - $253,000 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$166,000-$253,000 USD</Salaryrange>
      <Skills>Golang, Rust, C/C++, Embedded HSMs and Secure Elements, CI/CD and test automation, Mobile and embedded devices, Cybersecurity principles and practices, U.S. Secret security clearance, Security frameworks and compliance standards, Mobile development, Cloud infrastructure management, US Government key handling requirements</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anduril</Employername>
      <Employerlogo>https://logos.yubhub.co/anduril.com.png</Employerlogo>
      <Employerdescription>Anduril designs and manufactures advanced technologies for defence and security applications.</Employerdescription>
      <Employerwebsite>https://www.anduril.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/andurilindustries/jobs/5002794007</Applyto>
      <Location>Costa Mesa, California, United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>871e8461-cb8</externalid>
      <Title>AI Native Account Executive</Title>
      <Description><![CDATA[<p>Job Title: AI Native Account Executive</p>
<p>At CoreWeave, we&#39;re building the next-generation public cloud for accelerated workloads, supporting cutting-edge Machine Learning and Batch Processing use cases. As an Account Executive, you will own the full sales cycle from prospecting through close and expansion.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Engage with both technical and business stakeholders to communicate CoreWeave&#39;s value proposition, tailoring solutions to customer needs</li>
<li>Collaborate cross-functionally to ensure customer success and identify growth opportunities across accounts</li>
<li>Manage a pipeline of opportunities, forecast revenue accurately, and consistently meet or exceed quota targets</li>
</ul>
<p>Requirements:</p>
<ul>
<li>7+ years of experience in B2B sales and/or account management</li>
<li>Proven track record of consistently exceeding quota targets</li>
<li>Experience managing and forecasting a sales pipeline using Salesforce.com</li>
<li>Ability to communicate complex technical concepts (e.g., cloud infrastructure, ML workloads) to both technical and business audiences</li>
</ul>
<p>Preferred Qualifications:</p>
<ul>
<li>Experience selling cloud, infrastructure, or AI/ML-related solutions</li>
<li>Familiarity with competitive cloud environments and positioning differentiated offerings</li>
</ul>
<p>Why CoreWeave?</p>
<ul>
<li>We work hard, have fun, and move fast!</li>
<li>We&#39;re in an exciting stage of hyper-growth that you will not want to miss out on</li>
<li>We&#39;re not afraid of a little chaos, and we&#39;re constantly learning</li>
</ul>
<p>Total Rewards Package:</p>
<ul>
<li>Base salary range: $165,000 to $200,000</li>
<li>Uncapped commissions and On Target Earnings (OTE) of $330,000–$400,000</li>
<li>Comprehensive benefits program, including medical, dental, and vision insurance, 401(k) with a generous employer match, and flexible PTO</li>
</ul>
<p>What We Offer:</p>
<ul>
<li>A competitive salary and benefits package</li>
<li>Opportunities for professional growth and development</li>
<li>A dynamic and supportive work environment</li>
</ul>
<p>If you&#39;re a motivated and results-driven individual who is passionate about sales and customer success, we encourage you to apply for this exciting opportunity!</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$165,000 to $200,000</Salaryrange>
      <Skills>sales, account management, cloud infrastructure, machine learning, batch processing, customer success, pipeline management, forecasting, quota targets, cloud sales, infrastructure sales, AI/ML sales, competitive cloud environments, differentiated offerings</Skills>
      <Category>Sales</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave is a cloud computing company that provides a platform for building and scaling AI with confidence.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4647796006</Applyto>
      <Location>San Francisco, CA / Sunnyvale, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>adc2d7da-df2</externalid>
      <Title>Software Engineer III - Python</Title>
      <Description><![CDATA[<p>We are looking for an experienced Software Engineer with a strong focus on backend development to join the Chorus Platform team.</p>
<p>As a Software Engineer III - Python, you will design and develop complex, large-scale distributed systems that handle millions of customer requests daily. You will take ownership of customer-facing features and continuously deliver improvements that empower users with actionable insights from our platform.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Design and Develop: Build and deploy complex, large-scale distributed systems that handle millions of customer requests daily.</li>
<li>Customer-Facing Innovation: Take ownership of customer-facing features and continuously deliver improvements that empower users with actionable insights from our platform.</li>
<li>3rd Party Integrations: Develop and integrate with external conferencing and communication platforms using various SDKs and APIs.</li>
<li>Collaboration: Work closely with cross-functional teams including product managers, data scientists, and front-end engineers to deliver a seamless user experience.</li>
</ul>
<p>Requirements:</p>
<ul>
<li>Backend Development: 5+ years of experience in software development, with expertise in Python (preferred), NodeJS, Java, or Scala.</li>
<li>Distributed Systems: Proven experience in designing and developing distributed microservices and large-scale systems.</li>
<li>API Development: Strong understanding and hands-on experience with RESTful API standards.</li>
<li>Cloud Infrastructure: Experience working with cloud-based platforms, ensuring performance, scalability, and security of services.</li>
<li>Database Management: Familiarity with both NoSQL and SQL databases, optimizing for performance and scalability.</li>
<li>Communication &amp; Leadership: High interpersonal skills, with the ability to communicate technical ideas clearly and mentor junior engineers.</li>
<li>Adaptability: A willingness to work with a variety of technologies and to take ownership of different parts of the product.</li>
</ul>
<p>Education: BSc in Computer Science, Mathematics, Software Engineering, or equivalent professional experience.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, Backend Development, Distributed Systems, API Development, Cloud Infrastructure, Database Management, NodeJS, Java, Scala, RESTful API standards, NoSQL databases, SQL databases</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>ZoomInfo</Employername>
      <Employerlogo>https://logos.yubhub.co/zoominfo.com.png</Employerlogo>
      <Employerdescription>ZoomInfo is a Go-To-Market Intelligence Platform that provides AI-ready insights, trusted data, and advanced automation to businesses.</Employerdescription>
      <Employerwebsite>https://www.zoominfo.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/zoominfo/jobs/8477031002</Applyto>
      <Location>Bengaluru, Karnataka, India</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>8ecd11de-36b</externalid>
      <Title>Senior Territory Account Executive, AI / Developer</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Senior Territory Account Executive to join our team. As a Senior Territory Account Executive, you will be responsible for developing and executing a comprehensive account/territory plan to achieve quarterly sales and annual revenue targets in a defined territory and/or account list.</p>
<p>This role targets companies with up to 2,500 employees or $1 billion in revenue. You will work a set of target accounts in Cloudflare&#39;s Developer Platform and/or the Commercial sub-segments.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Developing and executing a comprehensive account/territory plan to achieve quarterly sales and annual revenue targets in a defined territory and/or account list.</li>
<li>Driving new business acquisition (new customer logos), customer expansion (upsell and cross sell Cloudflare solutions), and renewal within your territory.</li>
<li>Building a robust sales pipeline through continual engagement and nurturing of key prospect accounts.</li>
<li>Understanding customer use-cases and how they pair with Cloudflare&#39;s portfolio solutions in order to identify new sales opportunities.</li>
<li>Crafting and communicating compelling value propositions for Cloudflare Developer Platform services (e.g., performance, scalability, cost efficiency, and developer productivity).</li>
<li>Driving awareness through regular outbound campaigns on product and feature roadmap updates.</li>
<li>Effectively scaling the territory with partners.</li>
</ul>
<p>As a trusted advisor, you will build long-term strategic relationships with key accounts to ensure customer adoption, retention, and expansion. You will regularly evaluate usage trends, articulate value to demonstrate Cloudflare&#39;s impact, and provide strategic recommendations during business reviews.</p>
<p>Key requirements include:</p>
<ul>
<li>Direct B2B sales experience, adept at new business acquisition and account management.</li>
<li>Experience selling a technical, cloud-based product or service.</li>
<li>Working knowledge of the cloud infrastructure, application development and security space.</li>
<li>Solid understanding of computer networking and how the Internet functions.</li>
<li>Eagerness to learn technical concepts and terminology.</li>
<li>Technical background in engineering, computer science, or MIS is advantageous.</li>
<li>Strong interpersonal communication skills (both verbal and written) and organizational skills.</li>
<li>Self-motivated with an entrepreneurial spirit.</li>
<li>Comfortable working in a fast-paced dynamic environment.</li>
<li>Willingness to travel frequently to visit customers and prospects.</li>
</ul>
<p>We&#39;re an equal opportunity employer and welcome applications from diverse candidates.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Direct B2B sales experience, New business acquisition and account management, Technical, cloud-based product or service, Cloud infrastructure, application development and security space, Computer networking and Internet functioning, Engineering, computer science, or MIS, Strong interpersonal communication skills, Organizational skills, Self-motivated with an entrepreneurial spirit</Skills>
      <Category>Sales</Category>
      <Industry>Technology</Industry>
      <Employername>Cloudflare</Employername>
      <Employerlogo>https://logos.yubhub.co/cloudflare.com.png</Employerlogo>
      <Employerdescription>Cloudflare is a technology company that runs one of the world&apos;s largest networks, powering millions of websites and Internet properties.</Employerdescription>
      <Employerwebsite>https://www.cloudflare.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/cloudflare/jobs/7405387</Applyto>
      <Location>Hybrid</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>5f7c499a-533</externalid>
      <Title>Senior Software Engineer, Security</Title>
      <Description><![CDATA[<p>As a Senior Software Engineer in the Security organization at CoreWeave, you will design, build and deploy services, platforms and tools that help provide common foundational capabilities that various security programs and initiatives rely on to keep CoreWeave secure.</p>
<p>Your charter is automation: eliminating the manual steps involved in understanding, remediating, and preventing security risks. The work sits at the intersection of engineering systems and regulatory requirements, translating those requirements into scalable, reliable, production-grade infrastructure. This often means building production infrastructure from scratch, with end-to-end ownership of systems across design, development, testing, and deployment, including effective integration pipelines (CI/CD) and a reliable production service that is highly available and functions at scale.</p>
<p>You will partner closely with various security teams, including GRC, platform engineering, and security domain teams, to translate business needs into durable technical requirements, while retaining full engineering ownership of how those systems are designed, built, and operated.</p>
<p>In this role, you will:</p>
<ul>
<li>Design and build scalable systems.</li>
<li>Develop control integrations and data pipelines to normalize security telemetry across IAM, logs, scanners, and CCM/GRC tools.</li>
<li>Build metrics engines, dashboards, and insights pipelines that provide real-time visibility into compliance health and emerging risks.</li>
</ul>
<p>On this team, you will:</p>
<ul>
<li>Tackle security &amp; compliance puzzles at cutting-edge scale and complexity.</li>
<li>Collaborate with brilliant engineers who are redefining compliance adherence for cloud infrastructure.</li>
<li>Have the freedom and responsibility to innovate, experiment, and influence how we establish assurance pipelines.</li>
</ul>
<p>Investing in our people is one of our top priorities, and we value candidates who can bring their diversified experiences to our teams. Here are some qualities we’ve found compatible with our team. We&#39;d love to talk about whether this aligns with your experience and interests and what you’re excited to work on next.</p>
<p>Who You Are:</p>
<p>Minimum Qualifications</p>
<ul>
<li>A Bachelor’s degree in Information Security, Computer Science, or a related field or equivalent job experience.</li>
<li>7+ years of hands-on experience in programming languages such as Go.</li>
<li>3+ years of hands-on experience deploying and managing Kubernetes clusters in a production environment.</li>
<li>Experience building high-QPS, critical distributed systems.</li>
<li>Familiarity with modern CI/CD practices and Infrastructure-as-Code tooling.</li>
<li>Proven experience building and deploying containerized applications.</li>
<li>Strong experience with technical architectures involving data flows, event-driven architecture, access controls, retention, and third-party integrations.</li>
<li>Strong hands-on experience with cloud infrastructure (AWS, GCP).</li>
</ul>
<p>Preferred:</p>
<ul>
<li>Information Security Engineering experience.</li>
<li>Expertise in major compliance and security frameworks (SOC 2, ISO 27001, PCI DSS, HIPAA, FedRAMP, NIST CSF).</li>
<li>Background in building automation for distributed cloud environments at scale.</li>
<li>Experience with remote-access solutions like Teleport (real bonus points if you’ve submitted PRs on their product).</li>
<li>Understanding of SSO protocols, specifically OIDC and SAML.</li>
<li>Hands-on experience with PKI and mTLS.</li>
</ul>
<p>If you&#39;re eager to elevate compliance into a creative, strategic force within a fast-paced, forward-thinking company, we&#39;d love to hear from you!</p>
<p>The base salary range for this role is $165,000 to $242,000. The starting salary will be determined based on job-related knowledge, skills, experience, and market location. We strive for both market alignment and internal equity when determining compensation. In addition to base salary, our total rewards package includes a discretionary bonus, equity awards, and a comprehensive benefits program (all based on eligibility).</p>
<p>What We Offer</p>
<p>The range we’ve posted represents the typical compensation range for this role. To determine actual compensation, we review the market rate for each candidate which can include a variety of factors. These include qualifications, experience, interview performance, and location. In addition to a competitive salary, we offer a variety of benefits to support your needs, including:</p>
<ul>
<li>Medical, dental, and vision insurance, 100% paid for by CoreWeave</li>
<li>Company-paid Life Insurance</li>
<li>Voluntary supplemental life insurance</li>
<li>Short and long-term disability insurance</li>
<li>Flexible Spending Account</li>
<li>Health Savings Account</li>
<li>Tuition Reimbursement</li>
<li>Ability to Participate in Employee Stock Purchase Program (ESPP)</li>
<li>Mental Wellness Benefits through Spring Health</li>
<li>Family-Forming support provided by Carrot</li>
<li>Paid Parental Leave</li>
<li>Flexible, full-service childcare support with Kinside</li>
<li>401(k) with a generous employer match</li>
<li>Flexible PTO</li>
<li>Catered lunch each day in our office and data center locations</li>
<li>A casual work environment</li>
<li>A work culture focused on innovative disruption</li>
</ul>
<p>Our Workplace</p>
<p>While we prioritize a hybrid work environment, remote work may be considered for candidates located more than 30 miles from an office, based on role requirements for specialized skill sets. New hires will be invited to attend onboarding at one of our hubs within their first month. Teams also gather quarterly to support collaboration.</p>
<p>California Consumer Privacy Act - California applicants only</p>
<p>CoreWeave is an equal opportunity employer, committed to fostering an inclusive and supportive workplace. All qualified applicants and candidates will receive consideration for employment without regard to race, color, religion, sex, disability, age, sexual orientation, gender identity, national origin, veteran status, or genetic information. As part of this commitment and consistent with the Americans with Disabilities Act (ADA), CoreWeave will ensure that qualified applicants and candidates with disabilities are provided reasonable accommodations for the hiring process, unless such accommodation would cause an undue hardship. If reasonable accommodation is needed, please contact: careers@coreweave.com.</p>
<p>Export Control Compliance</p>
<p>This position requires access to export controlled information. To conform to U.S. Government export regulations applicable to that information, applicant must either be (A) a U.S. person, defined as a (i) U.S. citizen or national, (ii) U.S. lawful permanent resident (green card holder), (iii) refugee under 8 U.S.C. § 1157, or (iv) asylee under 8 U.S.C. § 1158, (B) eligible to access the export controlled information without a required export authorization, or (C) eligible and reasonably likely to obtain the required export authorization from the applicable U.S. government agency. CoreWeave may, for legitimate business reasons, decline to pursue any export licensing process.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$165,000 to $242,000</Salaryrange>
      <Skills>Go, Kubernetes, Cloud infrastructure, CI/CD practices, Infrastructure-as-Code tooling, Containerized applications, Technical architectures, Data flows, Event driven architecture, Access controls, Retention, Third-party integrations, Information Security Engineering, Compliance and security frameworks, Automation for distributed cloud environments, Remote-access solutions, SSO protocols, PKI and mTLS</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave is a cloud computing company that provides a platform for building and scaling AI applications.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4651859006</Applyto>
      <Location>Sunnyvale, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>cf3da788-36c</externalid>
      <Title>Senior Territory Account Executive, Poland</Title>
      <Description><![CDATA[<p>About Us</p>
<p>At Cloudflare, we are on a mission to help build a better Internet. Today the company runs one of the world’s largest networks that powers millions of websites and other Internet properties for customers ranging from individual bloggers to SMBs to Fortune 500 companies.</p>
<p>Cloudflare protects and accelerates any Internet application online without adding hardware, installing software, or changing a line of code. Internet properties powered by Cloudflare all have web traffic routed through its intelligent global network, which gets smarter with every request. As a result, they see significant improvement in performance and a decrease in spam and other attacks.</p>
<p>Cloudflare was named to Entrepreneur Magazine’s Top Company Cultures list and ranked among the World’s Most Innovative Companies by Fast Company.</p>
<p>About this Role</p>
<p>The Senior Territory Account Executive owns the full sales cycle, from prospecting to negotiating and closing sales with new &amp; existing customers, in line with business plans. You will identify and progress cross-sell opportunities to maximise revenue goals, selling new products and generating additional sales revenue through effective sales outreach activity.</p>
<p>Main Responsibilities:</p>
<ul>
<li>Develop and execute a comprehensive account/territory plan to achieve quarterly sales and annual revenue targets in a defined territory and/or account list.</li>
<li>Drive new business acquisition (new customer logos), customer expansion (upsell and cross sell Cloudflare solutions), and renewal within your territory.</li>
<li>Build a robust sales pipeline through continual engagement and nurturing of key prospect accounts.</li>
<li>Understand customer use-cases and how they pair with Cloudflare’s portfolio solutions in order to identify new sales opportunities.</li>
<li>Craft and communicate compelling value propositions for Cloudflare services. Drive awareness through regular outbound campaigns on product and feature roadmap updates.</li>
<li>Effectively scale the territory with partners.</li>
<li>Accurately forecast commercial outcomes by running a consistent sales process, including driving next step expectations and contract negotiations.</li>
<li>As a trusted advisor, build long-term strategic relationships with key accounts, to ensure customer adoption, retention and expansion. Regularly evaluate usage trends and articulate value to show Cloudflare impact and provide strategic recommendations during business reviews.</li>
<li>Network across different business units with each of your accounts, and multi-thread to identify and engage new divisional buyers.</li>
<li>Position Cloudflare&#39;s platform in each of your target customers, including Cloudflare One and the Connectivity Cloud to realise our full potential in every customer.</li>
<li>Operate internally as a liaison with cross-functional teams to share key customer feedback and insights to improve customer experience and further investments with Cloudflare.</li>
</ul>
<p>What Makes Cloudflare Special?</p>
<p>We’re not just a highly ambitious, large-scale technology company. We’re a highly ambitious, large-scale technology company with a soul. Fundamental to our mission to help build a better Internet is protecting the free and open Internet.</p>
<p>Project Galileo: Since 2014, we&#39;ve equipped more than 2,400 journalism and civil society organisations in 111 countries with powerful tools, technology already used by Cloudflare’s enterprise customers, to defend themselves against attacks that would otherwise censor their work, at no cost.</p>
<p>Athenian Project: In 2017, we created the Athenian Project to ensure that state and local governments have the highest level of protection and reliability for free, so that their constituents have access to election information and voter registration. Since the project launched, we&#39;ve provided services to more than 425 local government election websites in 33 states.</p>
<p>1.1.1.1: We released 1.1.1.1 to help fix the foundation of the Internet by building a faster, more secure and privacy-centric public DNS resolver. This is available publicly for everyone to use - it is the first consumer-focused service Cloudflare has ever released.</p>
<p>Here’s the deal: we never store client IP addresses. Ever. We will continue to abide by our privacy commitment and ensure that no user data is sold to advertisers or used to target consumers.</p>
<p>Sound like something you’d like to be a part of? We’d love to hear from you!</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>B2B sales experience, Direct experience selling Enterprise Software or SaaS, Knowledge of cloud infrastructure and security space, Understanding of computer networking and Internet functioning, Keenness for learning technical concepts/terms, Strong interpersonal communication skills, Organisational skills, Self-motivation with an entrepreneurial spirit, Comfortable working in a fast-paced dynamic environment, Willingness to travel frequently to visit customers and prospects, Fluency in Polish</Skills>
      <Category>Sales</Category>
      <Industry>Technology</Industry>
      <Employername>Cloudflare</Employername>
      <Employerlogo>https://logos.yubhub.co/cloudflare.com.png</Employerlogo>
      <Employerdescription>Cloudflare is a technology company that provides internet infrastructure and security services to customers. It operates one of the world&apos;s largest networks, powering millions of websites and other internet properties.</Employerdescription>
      <Employerwebsite>https://www.cloudflare.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/cloudflare/jobs/6417720</Applyto>
      <Location>Hybrid</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>bdf949b3-c66</externalid>
      <Title>Databricks Enterprise Lead Security Architect - Principal IT Software Engineer</Title>
      <Description><![CDATA[<p>We are seeking a highly skilled Lead Security Architect to join our team within Databricks IT. As a Lead Security Architect, you will be responsible for designing and implementing a secure and scalable architecture to protect our corporate assets. You will focus on key areas of IT security, including Identity and Access Management, Zero Trust architecture, and endpoint security, while also working to secure critical business applications and sensitive data.</p>
<p>Your expertise will be crucial in building proactive security strategies that align with our business goals and protect the company from an ever-evolving threat landscape. This position demands deep expertise in security principles and a comprehensive understanding of the entire infrastructure stack and IAM systems to design robust, future-ready security solutions.</p>
<p>You will be instrumental in safeguarding our systems&#39; resilience and integrity against ever-evolving cyber threats. You will play a critical role in shaping our security strategy for modern platforms across AWS, Azure, GCP, network infrastructure, storage, and SaaS solutions, help establish a strong least privilege (PoLP) model, providing specialized IAM expertise, and securely supporting SaaS with sensitive information (NHI).</p>
<p>You will also be a key contributor in building our internal strategy for secure AI development. Additionally, you will support the secure integration of SaaS platforms such as Google Workspace, collaboration tools, and GTM systems, maintaining alignment with enterprise security standards.</p>
<p>Close collaboration with cross-functional teams is essential to embed security throughout the technology stack.</p>
<p>The impact you will have:</p>
<ul>
<li>Design and implement secure, scalable reference architectures for the Databricks IT across Cloud Infra (Compute, DBs, Network, Storage), SaaS, Custom Built Applications, Data &amp; AI systems.</li>
<li>Establish and enforce security controls for core security areas: Databricks Workspace Management (workspace isolation, Unity Catalog for data governance).</li>
<li>Secure Networking: VPC configs, PrivateLink, IP Allow Lists.</li>
<li>Identity and Access Management (IAM): SSO, SCIM user provisioning, RBAC via Un, Strong MFA best practices for enterprise identities and customers.</li>
<li>Data Encryption: At rest and in transit, customer-managed keys for critical assets.</li>
<li>Data Exfiltration Prevention: Admin console settings, VPC endpoint controls.</li>
<li>Cluster Security: User isolation, compliance with enhanced security monitoring/Compliance Security Profiles (HIPAA, PCI-DSS, FedRAMP).</li>
<li>Offensive Security: Test and challenge the effectiveness of the organization’s security defenses by mimicking the tactics, techniques, and procedures used by actual attackers.</li>
<li>Specialized Security Functions: Non-human Identity Management: Design and implement secure authentication and authorization for automated systems (service accounts, API keys, machine identities), focusing on automation and integration with existing identity management systems.</li>
<li>IAM Best Practices: Develop and document comprehensive Identity and Access Management policies, including user provisioning, de-provisioning, access reviews, privileged access management, and multi-factor authentication, ensuring security and compliance.</li>
<li>Data Loss Prevention (DLP): Implement DLP solutions to identify, monitor, and protect sensitive data across endpoints, networks, and cloud environments, preventing unauthorized access, use, or transmission.</li>
<li>SaaS Proxy Design and Implementation: Design and implement cloud-based proxies for SaaS applications (SASE solutions) to provide secure access, enforce security policies, monitor user activity, and protect against threats.</li>
<li>Cloud Infrastructure Best Practices: Establish and document best practices for VPC configurations, cloud networking, and infrastructure as code using Terraform, ensuring secure network segmentation, routing, firewalls, and VPNs for consistent, automated, and secure deployments.</li>
<li>Least Privilege Access for Data Security: Design and implement data security controls based on the principle of least privilege, ensuring users and systems have only the minimum necessary access through fine-grained controls, data classification, and regular access reviews.</li>
<li>Guide internal IT on Databricks’ security and compliance certifications (SOC 2, ISO 27001/27017/27018, HIPAA, PCI-DSS, FedRAMP), and support security reviews/audits.</li>
<li>Support incident response, vulnerability management, threat modeling, and red teaming using audit logs, cluster policies, and enhanced monitoring.</li>
<li>Stay current on industry trends and emerging threats in GenAI, AI Agentic flow, MCPs to enhance security posture.</li>
<li>Advise executive leadership on security architecture, risks, and mitigation.</li>
<li>Mentor security engineers and developers on secure design and best practices.</li>
</ul>
<p>What we look for:</p>
<ul>
<li>Bachelor’s degree in Computer Science, Information Security, Engineering, or a related field</li>
<li>Master’s degree in Computer Science specifically in Information Security or a related discipline is strongly preferred</li>
<li>Minimum 12 years in cybersecurity, with 5+ in security architecture or senior technical roles.</li>
<li>Experience in FedRAMP High systems/GovCloud preferred.</li>
<li>Must have direct experience designing and securing enterprise platforms in complex multi-cloud environments, deep knowledge of enterprise architecture and security features (control plane/data plane separation, network infra, workspace hardening, network segmentation/ isolation), and hands-on experience automating security controls with Terraform and scripting.</li>
<li>Proven expertise securing data analytics pipelines, SaaS integrations, and workload isolation in enterprise ecosystems.</li>
<li>Experience with Enterprise Security Analysis Tools and monitoring/security policy optimization.</li>
<li>Deep experience in threat modeling, design, PoC, and implementing large-scale enterprise solutions.</li>
<li>Extensive hands-on experience in AWS cloud security, network security, with knowledge of Zero Trust, Data Protection, and Appsec.</li>
<li>Strong understanding of enterprise IAM systems (Okta, SailPoint, VDI, Entra ID) and Data Protection.</li>
<li>Expert experience with SIEM platforms, XDR, and cloud-native threat detection tools.</li>
<li>Expert in web application security, OWASP, API security, and secure design and testing.</li>
<li>Hands-on experience with security automation is required, with proficiency in AI-assisted development, Python, Cursor, Lambda, Terraform, or comparable scripting/IaC tools for operational efficiency.</li>
<li>Industry certifications like CISSP, CCSP, CEH, AWS Certified Security – Specialty, AWS Certified Solutions Architect – Professional, or AWS Certified Advanced Networking – Specialty (or equivalent) are preferred.</li>
<li>Ability to influence stakeholders and drive alignment.</li>
<li>Strategic thinker with a passion for security innovation, continuous improvement, and building scalable defenses.</li>
</ul>
<p>Pay Range Transparency</p>
<p>Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role is listed below and represents the expected salary range for non-commissionable roles or on-target earnings for commissionable roles. Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location. Based on the factors above, Databricks anticipates utilizing the full width of the range. The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Security Architecture, Identity and Access Management, Zero Trust, Endpoint Security, Data Encryption, Data Exfiltration Prevention, Cluster Security, Offensive Security, Non-human Identity Management, IAM Best Practices, Data Loss Prevention, SaaS Proxy Design and Implementation, Cloud Infrastructure Best Practices, Least Privilege Access for Data Security, Guide internal IT on Databricks’ security and compliance certifications, Support incident response, vulnerability management, threat modeling, and red teaming, Stay current on industry trends and emerging threats in GenAI, AI Agentic flow, MCPs, Advise executive leadership on security architecture, risks, and mitigation, Mentor security engineers and developers on secure design and best practices, Terraform, Python, Cursor, Lambda, AWS cloud security, Network security, Data Protection, Appsec, SIEM platforms, XDR, cloud-native threat detection tools, Web application security, OWASP, API security, Secure design and testing, AI-assisted development, Security automation, Scripting/IaC tools, CISSP, CCSP, CEH, AWS Certified Security – Specialty, AWS Certified Solutions Architect – Professional, AWS Certified Advanced Networking – Specialty</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a technology company that provides a cloud-based platform for data analytics and artificial intelligence.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8207910002</Applyto>
      <Location>Mountain View, California; San Francisco, California</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>6e48ec86-b97</externalid>
      <Title>Security Labs Engineer</Title>
      <Description><![CDATA[<p><strong>About the Role</strong></p>
<p>Security at Anthropic is not a compliance exercise. It is a core part of how we stay safe as we build increasingly capable systems. Our Responsible Scaling Policy commits us to launching structured security R&amp;D projects: ambitious, time-boxed experiments designed to resolve high-uncertainty questions about our long-term security posture.</p>
<p>Each project runs for roughly 6 months with defined exit criteria. Some will succeed and move toward production. Others will fail, and we&#39;ll treat that as a useful signal. The questions these projects are designed to answer include:</p>
<ul>
<li>Can our core research workflows survive extreme isolation?</li>
<li>Can we get cryptographic guarantees where we currently rely on trust?</li>
<li>Can AI become our most effective security control?</li>
</ul>
<p>As a Security Labs Engineer, you own one or more projects end-to-end: scoping the experiment, building the infrastructure, coordinating across teams, running the pilot, documenting results, and where the experiment succeeds, helping scale it into production. This is 0-to-1 and 1-to-10 work.</p>
<p><strong>Current Project Areas</strong></p>
<p>The portfolio evolves based on what we learn. Current areas include:</p>
<ul>
<li>Designing and operating a mock high-assurance research environment: simulating what our infrastructure would look like under extreme isolation and physical security controls, with real measurement of productivity impact</li>
<li>Exploring cryptographic verification of model integrity using techniques like zero-knowledge proofs to provide mathematical guarantees about what is running in production</li>
<li>Assessing the feasibility of confidential computing across the full model lifecycle (note: this is an open question, not a committed roadmap item)</li>
<li>Piloting AI-assisted security tooling including vulnerability discovery, automated patching, anomaly detection, and adaptive behavioral monitoring</li>
<li>Prototyping API-only access regimes where even internal research workflows never touch raw model weights</li>
</ul>
<p>Part of your job is helping shape what comes next based on gaps uncovered in the current round.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Own the end-to-end execution of a Security Labs project: refine the hypothesis, design the experiment, build the prototype, run the pilot, and write up the results</li>
<li>Build novel security infrastructure under real time pressure: isolated clusters, hardened access controls, cryptographic verification layers, with a bias toward learning fast</li>
<li>Where experiments succeed, drive them toward production scale. An experiment that works on one cluster but not a hundred is not a finished result.</li>
<li>Work embedded with research teams (Pretraining, RL, Inference) to stress-test whether their core workflows can function under extreme security controls, and document precisely where they break</li>
<li>Evaluate and integrate emerging security technologies through coordination with external vendors and research groups</li>
<li>Turn experimental results into clear, decision-ready writeups that inform Anthropic&#39;s long-term security architecture and RSP commitments</li>
<li>Maintain a pain-point registry and feasibility assessment for each project, feeding directly into the design of production high-assurance environments</li>
<li>Help scope and prioritize the next wave of Labs projects based on what the current round uncovers</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>7+ years of software or security engineering experience, with a solid foundation in production systems</li>
<li>Some of that time spent on pilots, prototypes, or applied research work where shipping a working answer to a hard question was the explicit goal</li>
<li>Strong programming skills in Python and at least one systems language (Go, Rust, or C/C++)</li>
<li>Hands-on experience with cloud infrastructure (AWS, GCP, or Azure), Kubernetes, and networking fundamentals sufficient to stand up and tear down isolated environments quickly</li>
<li>A track record of cross-functional execution: you can walk into a room with ML researchers, infrastructure engineers, and vendors and leave with a shared plan</li>
<li>Clear written communication: you know how to turn six weeks of experimentation into a two-page memo someone can act on</li>
<li>Comfort with ambiguity and iteration, having run experiments that failed, extracted the lesson, and moved forward</li>
<li>Genuine curiosity about what it would actually take to defend against a nation-state-level adversary</li>
<li>Passion for AI safety and a real understanding of the role security plays in making frontier AI development go well</li>
<li>Bachelor&#39;s degree in Computer Science, a related field, or equivalent industry experience required.</li>
</ul>
<p><strong>Nice to Have</strong></p>
<ul>
<li>Prior experience in offensive security, red teaming, or security research, having thought adversarially about systems and knowing which threats actually matter</li>
<li>Familiarity with airgapped or high-side environments (classified networks, ICS/SCADA, financial trading infrastructure, or similar) and the operational realities of working inside them</li>
<li>Knowledge of applied cryptography: zero-knowledge proofs, attestation protocols, secure enclaves, TPMs, or confidential computing primitives</li>
<li>Experience with ML infrastructure (training pipelines, inference serving, model packaging) sufficient for grounded conversations with researchers about what their workflows actually need</li>
<li>Background building or operating security systems in environments that demand rapid iteration rather than rigid change control</li>
<li>Prior work at a startup, on an innovation team, or in an applied research group where shipping a working v0 to answer a real question was explicitly the goal</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$405,000-$485,000 USD</Salaryrange>
      <Skills>Python, Go, Rust, C/C++, Cloud infrastructure, Kubernetes, Networking fundamentals, Cross-functional execution, Clear written communication, Ambiguity and iteration, Genuine curiosity, Passion for AI safety, Offensive security, Red teaming, Security research, Applied cryptography, ML infrastructure, Secure enclaves, TPMs, Confidential computing primitives</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a technology company that aims to create reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5153564008</Applyto>
      <Location>San Francisco, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>b9b6c8a6-992</externalid>
      <Title>Senior Territory Account Executive (Oklahoma, Louisiana, Missouri or Kansas)</Title>
      <Description><![CDATA[<p>About Us</p>
<p>At Cloudflare, we&#39;re on a mission to help build a better Internet. We protect and accelerate any Internet application online without adding hardware, installing software, or changing a line of code.</p>
<p>As a Senior Territory Account Executive, you will work in the mid-market segment, focusing on both the acquisition of new Territory accounts and the expansion of existing customer accounts. You will develop and execute a comprehensive account/territory plan to achieve quarterly sales and annual revenue targets in a defined territory and/or account list.</p>
<p>Key Responsibilities</p>
<ul>
<li>Develop and execute a comprehensive account/territory plan to achieve quarterly sales and annual revenue targets in a defined territory and/or account list.</li>
<li>Drive new business acquisition (new customer logos), customer expansion (upsell and cross sell Cloudflare solutions), and renewal within your territory.</li>
<li>Engage in account mapping sessions with partners, including training new partners on our technology and GTM strategies.</li>
<li>Develop scalable relationships with target partners to expand the partner ecosystem in a specific region.</li>
<li>Build a robust sales pipeline through continual engagement and nurturing of key prospect accounts.</li>
<li>Understand customer use-cases and how they pair with Cloudflare&#39;s portfolio solutions in order to identify new sales opportunities.</li>
<li>Craft and communicate compelling value propositions for Cloudflare services. Drive awareness through regular outbound campaigns on product and feature roadmap updates.</li>
<li>Accurately forecast commercial outcomes by running a consistent sales process, including driving next step expectations and contract negotiations.</li>
<li>As a trusted advisor, build long-term strategic relationships with key accounts, to ensure customer adoption, retention and expansion. Regularly evaluate usage trends and articulate value to show Cloudflare impact and provide strategic recommendations during business reviews.</li>
<li>Network across different business units with each of your accounts, and multi-thread to identify and engage new divisional buyers.</li>
<li>Position Cloudflare&#39;s platform in each of your target customers, including Cloudflare One and the Connectivity Cloud to realize our full potential in every customer.</li>
<li>Operate internally as a liaison with cross-functional teams to share key customer feedback and insights to improve customer experience and further investments with Cloudflare.</li>
</ul>
<p>Requirements</p>
<ul>
<li>3+ years of direct B2B selling experience</li>
<li>Strong interpersonal communication (verbal and written) and organizational skills</li>
<li>Self-motivated; entrepreneurial spirit</li>
<li>Comfortable working in a fast-paced dynamic environment</li>
<li>Bachelor&#39;s degree required</li>
<li>Demonstrated analytical and quantitative abilities</li>
<li>Software and system skills are a must (SFDC, Tableau, G-suite, MSFT suite)</li>
</ul>
<p>Desirable Skills, Knowledge and Experience</p>
<ul>
<li>5+ years in Software/SaaS/Security Sales &amp; Channel management.</li>
<li>Existing relationships and/or strong familiarity with the partner ecosystem in the region that they cover.</li>
<li>Understanding of cloud infrastructure ecosystem and cloud security is highly preferred.</li>
<li>Experience working in a start-up environment.</li>
<li>Ability to travel up to 25% of the time.</li>
<li>Technical competence strongly preferred.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>direct B2B selling experience, interpersonal communication, organizational skills, self-motivation, entrepreneurial spirit, comfortable working in a fast-paced dynamic environment, Bachelor&apos;s degree, analytical and quantitative abilities, software and system skills (SFDC, Tableau, G-suite, MSFT suite), Software/SaaS/Security Sales &amp; Channel management, partner ecosystem, cloud infrastructure ecosystem, cloud security, start-up environment, technical competence</Skills>
      <Category>Sales</Category>
      <Industry>Technology</Industry>
      <Employername>Cloudflare</Employername>
      <Employerlogo>https://logos.yubhub.co/cloudflare.com.png</Employerlogo>
      <Employerdescription>Cloudflare is a technology company that runs one of the world&apos;s largest networks, powering millions of websites and other Internet properties.</Employerdescription>
      <Employerwebsite>https://www.cloudflare.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/cloudflare/jobs/7645010</Applyto>
      <Location>Hybrid</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>70830759-03f</externalid>
      <Title>Senior Solutions Engineer (based in Sydney)</Title>
      <Description><![CDATA[<p>We&#39;re seeking an experienced Senior Solutions Engineer to work with our top-tier customers and become their Trusted Advisor for their security and business goals.</p>
<p>As a Senior Solutions Engineer, you will have a track record of working with large enterprise organisations in driving business and technical outcomes through technology solutions, with experience in engaging at the C-level with Business and Technology stakeholders.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Landing an end-to-end value proposition for web security &amp; performance that spans the breadth of Cloudflare Product offerings.</li>
<li>Empowering customers in their security adoption journey, helping them to define a secure strategy, and architecture of necessary security controls aligned with Cloudflare Security and Performance product suites.</li>
<li>Working closely with Account Executives to educate prospective customers on how they can succeed on the Cloudflare platform.</li>
<li>Applying technical knowledge to architect security solutions that meet business, IT, Regulation and Compliance needs, infusing key security technologies where appropriate.</li>
<li>Being a Voice of the Customer to share insights and best practices, connect with Global Engineering and Product teams at Cloudflare to remove blockers and influence the roadmap.</li>
</ul>
<p>The ideal candidate will have 7+ years of experience in a pre-sales SE role, with a proven track record of building deep technical relationships with senior security executives in large or highly strategic accounts.</p>
<p>Key skills include:</p>
<ul>
<li>Relationship Building</li>
<li>Problem Solving</li>
<li>Customer Focus</li>
<li>Value Realisation</li>
<li>Trusted Technical Advisor</li>
</ul>
<p>Technical skills include:</p>
<ul>
<li>Understanding of &#39;how the internet works&#39;</li>
<li>Experience in cloud computing, preferably with a strong background in architecting applications on AWS or GCP</li>
<li>Technical expertise in cloud infrastructure, integrating front-end web technologies such as Next.js/React with compute and databases</li>
<li>Demonstrated experience with a scripting language (e.g. Python, JavaScript, Bash) and a desire to expand those skills</li>
<li>You&#39;ve built a web application before, or contributed to an existing application in a meaningful way</li>
<li>You can describe the differences between CSRF, XSS and SQLi in detail and Cloudflare&#39;s role in defending against them</li>
<li>Understanding of, or experience with, regulatory requirements such as PCI DSS, HIPAA, and SOC-2</li>
</ul>
<p>If you&#39;re the type of person who values curiosity over bureaucracy and believes AI is a partner in solving tough problems to keep the Internet moving forward, you&#39;ll fit right in.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Relationship Building, Problem Solving, Customer Focus, Value Realisation, Trusted Technical Advisor, Understanding of &apos;how the internet works&apos;, Experience in cloud computing, Technical expertise in cloud infrastructure, Demonstrated experience with a scripting language, Built a web application before</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Cloudflare</Employername>
      <Employerlogo>https://logos.yubhub.co/cloudflare.com.png</Employerlogo>
      <Employerdescription>Cloudflare helps build a better Internet by protecting and accelerating any Internet application online.</Employerdescription>
      <Employerwebsite>https://www.cloudflare.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/cloudflare/jobs/7794078</Applyto>
      <Location>Hybrid</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
  </jobs>
</source>