{"version":"0.1","company":{"name":"YubHub","url":"https://yubhub.co","jobsUrl":"https://yubhub.co/jobs/skill/terraform"},"x-facet":{"type":"skill","slug":"terraform","display":"Terraform","count":100},"x-feed-size-limit":100,"x-feed-sort":"enriched_at desc","x-feed-notice":"This feed contains at most 100 jobs (the most recently enriched). For the full corpus, use the paginated /stats/by-facet endpoint or /search.","x-generator":"yubhub-xml-generator","x-rights":"Free to redistribute with attribution: \"Data by YubHub (https://yubhub.co)\"","x-schema":"Each entry in `jobs` follows https://schema.org/JobPosting. YubHub-native raw fields carry `x-` prefix.","jobs":[{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_61234903-9fa"},"title":"Engineering Manager (Java or Typescript) - Guest Experience (all genders)","description":"<p>Join our Guest Experience department as an Engineering Manager, leading a dynamic team focused on enhancing the search experience of our users.</p>\n<p>As an Engineering Manager, you will be part of the Discovery team in the Guest Experience department. The team is responsible for designing and maintaining the list page of our website, ensuring users can easily find the best vacation rental from our search results.</p>\n<p>Your contributions will help create a seamless and joyful journey for travellers, which will result in increasing conversion rates and customer satisfaction.</p>\n<p>Your team will consist of frontend &amp; backend engineers (direct reports), a project manager and a QA engineer.</p>\n<p>You&#39;ll work closely with the Ranking, Conqueror, and Marketing teams, which manage the machine learning models for property ranking on the list page, booking systems, and Holidu&#39;s marketing efforts. 
Together, you&#39;ll ensure a seamless and cohesive user experience.</p>\n<p><strong>Our Tech Stack</strong></p>\n<ul>\n<li>Frontend: Typescript and NodeJS processes in Kubernetes. We use ReactJS, Zustand and TailwindCSS on the client and Express on the server.</li>\n<li>Backend: Java 17/21, Kotlin (Spring Boot).</li>\n<li>Infrastructure: Microservices architecture deployed on AWS Kubernetes (EKS).</li>\n<li>Data Management: PostgreSQL, Redis, Elasticsearch 7, Redshift (part of a data lake structure).</li>\n<li>DevOps Tools: AWS, Docker, Jenkins, Git, Terraform.</li>\n<li>Monitoring &amp; Analytics: ELK, Grafana, Looker, Opsgenie, and in-house solutions.</li>\n</ul>\n<p><strong>Your role in this journey</strong></p>\n<ul>\n<li>Lead a high-performing cross-functional team, focusing on product innovation, infrastructure reliability, delivery speed, quality, engineering culture, and team growth.</li>\n<li>Ensure your team delivers applications that are highly scalable, highly available, and capable of handling high traffic of up to 1 million unique users per day.</li>\n<li>Support team growth through regular feedback, mentorship, and by recruiting exceptional engineers.</li>\n<li>Work closely with product management, product design, and stakeholders to define the team&#39;s goals (OKRs) and roadmap.</li>\n<li>Collaborate with peers, staff engineers, and other stakeholders to drive strategic technology decisions.</li>\n<li>Lead strategic team-driven projects, identify opportunities, define and uphold quality standards.</li>\n<li>Foster a great team culture aligned with the company values, ownership, autonomy, and inclusivity within your team and the entire department.</li>\n<li>Take full responsibility for delivering impactful features to millions of users annually.</li>\n</ul>\n<p>The role includes dedicating 
approximately 40-50% of the time as an individual contributor focused on feature implementation.</p>\n<p><strong>Your backpack is filled with</strong></p>\n<ul>\n<li>A bachelor&#39;s degree in Computer Science, a related technical field or equivalent practical experience.</li>\n<li>Experience building and implementing backend services and/or frontend applications.</li>\n<li>Experience providing technical leadership (e.g., setting goals and priorities, architecture design, task planning and code reviews).</li>\n<li>Experience as a people manager with the ability to build an excellent team culture based on mutual respect, empathy, learning and support for each other.</li>\n<li>Love for building world-class products with a great user experience.</li>\n</ul>\n<p><strong>Our adventure includes</strong></p>\n<ul>\n<li>Impact: Shape the future of travel with products used by millions of guests and thousands of hosts. At Holidu, ideas become products, data drives decisions, and iteration fuels fast learning. Your work matters, and you’ll see the impact.</li>\n<li>Learning: Grow professionally in a culture that thrives on curiosity and feedback. You’ll learn from outstanding colleagues, collaborate across disciplines, and benefit from mentorship and personal learning budgets, with a strong focus on AI.</li>\n<li>Great People: Join a team of smart, motivated and international colleagues who challenge and support each other. We celebrate wins and keep our culture fun, ambitious and human. Our customers are guests and hosts, people we can all relate to, making work meaningful and energizing.</li>\n<li>Technology: Work in a modern tech environment. 
You’ll experience the pace of a scale-up combined with the stability of a proven business model, enabling you to build, test, and improve continuously.</li>\n<li>Flexibility: Work in a hybrid setup with 50% in-office time for collaboration, and spend up to 8 weeks a year from other inspiring locations. You’ll stay connected through regular events and meet-ups across our almost 30 offices.</li>\n<li>Competitive Package: 95.000-125.000€ + VSOPs based on relevant experience and seniority; learn more about our approach to compensation here.</li>\n<li>Perks on Top: Of course, we also offer travel benefits, gym discounts, and other perks to keep you energized, but what truly sets us apart is the chance to grow in a dynamic industry, alongside amazing people, while having fun along the way.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_61234903-9fa","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Holidu Hosts GmbH","sameAs":"https://holidu.jobs.personio.com","logo":"https://logos.yubhub.co/holidu.jobs.personio.com.png"},"x-apply-url":"https://holidu.jobs.personio.com/job/1558189","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"Full-time","x-salary-range":"95.000-125.000€ + VSOPs based on relevant experience and seniority","x-skills-required":["Typescript","NodeJS","ReactJS","Zustand","TailwindCSS","Express","Java","Kotlin","Spring Boot","AWS","Docker","Jenkins","Git","Terraform","PostgreSQL","Redis","Elasticsearch","Redshift","ELK","Grafana","Looker","Opsgenie"],"x-skills-preferred":[],"datePosted":"2026-04-18T22:14:57.912Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Munich, Germany"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Typescript, NodeJS, ReactJS, 
Zustand, TailwindCSS, Express, Java, Kotlin, Spring Boot, AWS, Docker, Jenkins, Git, Terraform, PostgreSQL, Redis, Elasticsearch, Redshift, ELK, Grafana, Looker, Opsgenie"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_af8ed06d-a9a"},"title":"Forward Deployed Software Engineer - Equities Technology","description":"<p>We are seeking a hands-on, business-facing engineer to join our team. In this role, you will partner directly with some of the most sophisticated quantitative researchers, developers, and portfolio managers in the industry.</p>\n<p>Our team is a specialized group of engineers operating at the intersection of technology and quantitative finance. We function as an internal centre of excellence, providing expert-level solutions, architecture, and hands-on development in AI, Cloud (AWS/GCP), DevOps, and high-performance computing.</p>\n<p>As a forward deployed software engineer, you will be responsible for translating complex research requirements into robust, scalable, and secure technical architectures across on-prem, hybrid, and cloud environments. You will write high-quality, production-ready code across the full stack, including Python libraries, infrastructure-as-code (Terraform), CI/CD pipelines, automation scripts, and ML/AI proof-of-concepts.</p>\n<p>You will also develop and maintain our suite of managed products, reusable patterns, and best practice guides to provide self-service options and accelerate onboarding for new and existing teams. Additionally, you will act as the primary technical point of contact for embedded engagements, owning projects from discovery and planning through to implementation, knowledge transfer, and support.</p>\n<p>To succeed in this role, you will need to have a deep understanding of computer science principles, including data structures, algorithms, and system design. 
You will also need to have experience working with cloud providers, such as AWS or GCP, and be familiar with infrastructure-as-code concepts. Excellent verbal and written communication skills are also essential, as you will need to build strong relationships with stakeholders and articulate complex ideas to diverse audiences.</p>\n<p>Innovative thinking and a passion for AI/ML and its practical applications are highly desirable. Experience designing systems and architectures from ambiguous business needs, as well as experience with scheduling or asynchronous workflow frameworks/services, is also preferred.</p>","url":"https://yubhub.co/jobs/job_af8ed06d-a9a","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Equity IT","sameAs":"https://mlp.eightfold.ai","logo":"https://logos.yubhub.co/mlp.eightfold.ai.png"},"x-apply-url":"https://mlp.eightfold.ai/careers/job/755953439247","x-work-arrangement":"onsite","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Python","Cloud computing (AWS/GCP)","DevOps","Infrastructure-as-code (Terraform)","CI/CD pipelines","Automation scripts","ML/AI proof-of-concepts","Data structures","Algorithms","System design"],"x-skills-preferred":["Experience in the financial services or fintech space","Experience building applications on top of LLMs using frameworks like LangChain or LlamaIndex","Experience with MLOps tooling and concepts","Cloud certifications (AWS or GCP)"],"datePosted":"2026-04-18T22:14:13.794Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Miami, Florida, United States of America"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Finance","skills":"Python, Cloud computing (AWS/GCP), DevOps, Infrastructure-as-code (Terraform), CI/CD pipelines, Automation scripts, 
ML/AI proof-of-concepts, Data structures, Algorithms, System design, Experience in the financial services or fintech space, Experience building applications on top of LLMs using frameworks like LangChain or LlamaIndex, Experience with MLOps tooling and concepts, Cloud certifications (AWS or GCP)"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_6a75ea8b-5b4"},"title":"Application Security Engineer","description":"<p>We are seeking an experienced Application Security Engineer to join our team. As a subject matter expert with direct experience in a wide range of security technologies, tools, and methodologies, you will play a key role in building toolsets and processes to drive adoption of secure practices across the enterprise.</p>\n<p>The successful candidate will have a proven understanding in enterprise security and AI security and will focus on defining and implementing security guardrails for Generative AI, LLMs, and Agentic frameworks, ensuring safe enterprise adoption.</p>\n<p>Key responsibilities include:</p>\n<ul>\n<li>Defining and implementing security guardrails for Generative AI, LLMs, and Agentic frameworks</li>\n<li>Conducting specialized threat modeling, red teaming, and risk assessments for AI/ML models</li>\n<li>Leading risk management activities, including application risk assessments, design reviews, and mitigation strategies for IT projects</li>\n<li>Engaging throughout the SDLC to identify vulnerabilities, conduct code reviews/penetration testing, and enforce secure coding standards</li>\n<li>Evangelizing AppSec and AI security best practices through developer education, training materials, and outreach</li>\n</ul>\n<p>Qualifications include:</p>\n<ul>\n<li>Bachelor&#39;s degree or higher in Computer Science, Computer Engineering, IT Security or related field</li>\n<li>5+ years&#39; experience working as an Application Security Engineer, Software Engineer, or similar 
role</li>\n<li>Deep understanding of AI-specific risks (OWASP Top 10 for LLMs) and experience securing applications utilizing LLMs</li>\n<li>Experience working with AI models, Agentic frameworks and security risks associated with AI</li>\n<li>Experience in working with global teams, collaborating on code and presentations</li>\n</ul>\n<p>Preferred qualifications include:</p>\n<ul>\n<li>Demonstrated work experience in hybrid on-premise and Public Cloud environments (AWS/GCP/Azure)</li>\n<li>Strong understanding of security architectures, secure configuration principles/coding practices, cryptography fundamentals and encryption protocols</li>\n<li>Experience with common SCM &amp; CI/CD technologies like GitHub, Jenkins, Artifactory, etc. and integrating Security Scanning and Vulnerability Management into the CI/CD Pipelines</li>\n<li>Familiarity with static and dynamic security analysis tools, and SCA/SBOM solutions</li>\n<li>Hands on experience with Secrets Management &amp; Password Vault technologies such as Delinea Secret Server and/or Hashicorp Vault, etc.</li>\n<li>Strong experience in secure programming in languages such as Python, Java, C++, C#, or similar</li>\n<li>Familiarity with Infrastructure as Code tools (CloudFormation, Terraform, Ansible, etc.)</li>\n<li>Familiarity with web application security testing tools and methodologies</li>\n<li>Knowledge of various security frameworks and standards such as ISO 27001, NIST, OWASP, etc.</li>\n<li>Knowledge of Linux, OS internals and containers is a plus</li>\n<li>Certifications like CISSP, CISM, CompTIA Security+, or CEH are advantageous</li>\n</ul>\n<p>We offer a competitive salary and benefits package, as well as opportunities for professional growth and development.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_6a75ea8b-5b4","directApply":true,"hiringOrganization":{"@type":"Organization","name":"IT Infrastructure","sameAs":"https://mlp.eightfold.ai","logo":"https://logos.yubhub.co/mlp.eightfold.ai.png"},"x-apply-url":"https://mlp.eightfold.ai/careers/job/755955629908","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["AI-specific risks","Generative AI","LLMs","Agentic frameworks","Security guardrails","Threat modeling","Red teaming","Risk assessments","Application risk assessments","Design reviews","Mitigation strategies","Secure coding standards","Developer education","Training materials","Outreach","Common SCM & CI/CD technologies","GitHub","Jenkins","Artifactory","Security Scanning","Vulnerability Management","Static and dynamic security analysis tools","SCA/SBOM solutions","Secrets Management & Password Vault technologies","Delinea Secret Server","Hashicorp Vault","Secure programming","Python","Java","C++","C#","Infrastructure as Code tools","CloudFormation","Terraform","Ansible","Web application security testing tools","Methodologies","Security frameworks","Standards","ISO 27001","NIST","OWASP","Linux","OS internals","Containers"],"x-skills-preferred":[],"datePosted":"2026-04-18T22:14:06.620Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"London, United Kingdom"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"AI-specific risks, Generative AI, LLMs, Agentic frameworks, Security guardrails, Threat modeling, Red teaming, Risk assessments, Application risk assessments, Design reviews, Mitigation strategies, Secure coding standards, Developer education, Training materials, Outreach, Common SCM & CI/CD technologies, GitHub, Jenkins, Artifactory, Security Scanning, Vulnerability Management, Static and dynamic security analysis 
tools, SCA/SBOM solutions, Secrets Management & Password Vault technologies, Delinea Secret Server, Hashicorp Vault, Secure programming, Python, Java, C++, C#, Infrastructure as Code tools, CloudFormation, Terraform, Ansible, Web application security testing tools, Methodologies, Security frameworks, Standards, ISO 27001, NIST, OWASP, Linux, OS internals, Containers"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_5c70414d-4e6"},"title":"Full-Stack Data Engineer","description":"<p>We are seeking a highly self-sufficient, motivated engineer with strong full-stack data engineering skills to join our team. This is a remote/offshore role that requires autonomy, excellent communication, and the ability to deliver high-quality work with limited supervision while collaborating with a predominantly US-based team.</p>\n<p>You will build reliable, scalable data products and user experiences that power AI/ML modeling, agentic workflows, and reporting, working end-to-end from data ingestion and transformation through to UI. Our Python-based data platform is undergoing a major evolution toward a modern, cloud-native ELT architecture. We are standardizing on Snowflake as our central data platform and dbt as our core transformation framework, implementing scalable, maintainable ELT practices that simplify ingestion, modeling, and deployment.</p>\n<p>This role will be pivotal in independently designing and building robust data pipelines and semantic layers that directly power our AI and machine learning initiatives, delivering clean, reliable, and well-modeled data assets to our data science team for feature engineering, model training, and production inference. 
You will collaborate closely (primarily via remote channels) with data scientists and ML engineers to ensure our data ecosystem is optimized for experimentation speed, model performance, and seamless integration into downstream products and services.</p>\n<p>Key Responsibilities</p>\n<ul>\n<li>Remote collaboration &amp; communication: Operate effectively as an offshore member of a distributed team, proactively communicating status, risks, and blockers across time zones and coordinating overlap with US working hours as needed.</li>\n<li>Full-stack data engineering: Build across the entire stack, including data ingestion/acquisition and transformation, APIs, front-end components, and automated test suites, delivering production-grade solutions with minimal hand-holding.</li>\n<li>Autonomous delivery &amp; ownership: Take end-to-end ownership of features and projects, clarifying requirements, breaking work into milestones, estimating timelines, and delivering high-quality, well-documented solutions.</li>\n<li>Specification and design: Translate short- and long-term business requirements, architectural considerations, and competing timelines into clear, actionable technical specifications and design documents.</li>\n<li>Code quality: Write clean, maintainable, efficient code that adheres to evolving standards and quality processes, including unit tests and isolated integration tests in containerized environments.</li>\n<li>Continuous improvement: Contribute to agile practices and provide input on technical strategy, architectural decisions, and process improvements, continuously suggesting better tools, patterns, and automation.</li>\n</ul>\n<p>Required Skills &amp; Experience</p>\n<ul>\n<li>Professional experience: 5+ years in software engineering, with a full-stack background building complex, scalable data-engineering pipelines using data warehouse technology, SQL with dbt, Python, AWS with Terraform, 
and modern UI technologies.</li>\n<li>Modern data engineering: Strong experience with medallion data architecture patterns using data warehouse technologies (e.g., Snowflake), data transformation tooling (e.g., dbt), BI tooling, and NoSQL data marts (e.g., Elasticsearch/OpenSearch).</li>\n<li>Testing and QA: Solid understanding of unit testing, CI/CD automation, and quality assurance processes for both data pipeline testing and operational data quality tests.</li>\n<li>Remote work &amp; autonomy: Proven track record working in a remote or distributed environment, demonstrating self-motivation, reliable execution, and the ability to make sound technical decisions independently.</li>\n<li>Agile methodology: Working knowledge of Agile development practices and workflows (e.g., sprint planning, stand-ups, retrospectives) in a distributed team setting.</li>\n<li>Education: Bachelor’s or Master’s degree in Computer Science, Statistics, Informatics, Information Systems, or a related quantitative field.</li>\n</ul>\n<p>Preferred Skills &amp; Experience</p>\n<ul>\n<li>Machine learning and AI: Hands-on experience with large language models (LLMs) and agentic frameworks/workflows.</li>\n<li>Search and analytics: Familiarity with the ELK stack (Elasticsearch, Logstash, Kibana) for search and analytics solutions.</li>\n<li>Cloud expertise: Experience with AWS cloud services; familiarity with SageMaker; and CI/CD tooling such as GitHub Actions or Jenkins.</li>\n<li>Front-end expertise: Experience building user interfaces with Angular or a modern UI stack.</li>\n<li>Financial domain knowledge: Broad understanding of equities, fixed income, derivatives, futures, FX, and other financial instruments.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_5c70414d-4e6","directApply":true,"hiringOrganization":{"@type":"Organization","name":"FIC & Risk Technology","sameAs":"https://mlp.eightfold.ai","logo":"https://logos.yubhub.co/mlp.eightfold.ai.png"},"x-apply-url":"https://mlp.eightfold.ai/careers/job/755955321460","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Python","Snowflake","dbt","AWS","Terraform","modern UI technologies","data warehouse technology","SQL","unit testing","CI/CD automation","quality assurance processes"],"x-skills-preferred":["machine learning","AI","large language models","agentic frameworks","ELK stack","search and analytics solutions","cloud expertise","AWS cloud services","SageMaker","CI/CD tooling","front-end expertise","Angular","financial domain knowledge"],"datePosted":"2026-04-18T22:13:54.584Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Bangalore, Karnataka, India"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, Snowflake, dbt, AWS, Terraform, modern UI technologies, data warehouse technology, SQL, unit testing, CI/CD automation, quality assurance processes, machine learning, AI, large language models, agentic frameworks, ELK stack, search and analytics solutions, cloud expertise, AWS cloud services, SageMaker, CI/CD tooling, front-end expertise, Angular, financial domain knowledge"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_80dbb0f6-e54"},"title":"Senior Security Engineer","description":"<p>We are seeking a subject matter expert with direct experience in a wide range of security technologies, tools, and methodologies. 
This role is suited for an experienced Windows Engineer with proven understanding in enterprise security and will focus on building toolsets and processes to support the Information Security Program (ISP).</p>\n<p>The team fosters a collaborative environment and is building a best-in-class program to partner with the business to protect the Firm&#39;s information and computer systems.</p>\n<p>Principal Responsibilities:</p>\n<ul>\n<li>Provide a high level of security consultancy and engineering support for Windows/Active Directory/Azure security solutions including analysis and development of Windows security solutions.</li>\n<li>Strong understanding of modern authentication protocols, e.g., OIDC / OAUTH 2.</li>\n<li>Contribute to the vision, strategy, and drive design and implementation for authentication platforms both on premises and in the cloud.</li>\n<li>Provide security consultancy and engineering support for SAML, OIDC and Kerberos authentication across different Identity providers, including analysis and development of SSO, PKI, and other authentication solutions.</li>\n<li>Able to demonstrate clear understanding of current risks and threats related to Identity Management at technical and managerial levels.</li>\n<li>Actively monitor new and emerging security and privacy related technologies, trends, issues, and solutions and assess their applicability to key business initiatives and strategies.</li>\n<li>Participate in Information Security Incident Response activities for the Firm&#39;s environment.</li>\n<li>Liaison with key stakeholders to create and enforce policy including Technology organization, Trading units, Legal, Internal Audit, and Compliance.</li>\n<li>Provide support to Security and other technical operations staff to ensure smooth turnover from Engineering to Production - and provide mentoring to junior level security professionals.</li>\n<li>Develop and maintain documentation of all Security products including specific tools, technologies, 
and processes.</li>\n</ul>\n<p>Qualifications/Skills Required:</p>\n<ul>\n<li>Bachelor&#39;s degree in computer science or engineering preferred.</li>\n<li>7 + years&#39; experience working in a technical role with a minimum of 2 + years&#39; experience focusing on information security in the financial industry (preferred).</li>\n<li>Excellent understanding and experience of engineering Microsoft security solutions – including desktop and server operating systems, EntraID, Active Directory, Group Policy, Desired Configuration State, DNS, Messaging.</li>\n<li>Ability to understand code in C#/.NET and / or Python and strong scripting experience in PowerShell.</li>\n<li>Experience managing IaaS, SaaS solutions and services using CI/CD pipelines. Jenkins, Terraform experience is a strong plus.</li>\n<li>Solid understanding of SAML, OIDC and Kerberos authentication and related technology controls and best practices.</li>\n<li>Experience with Office 365 security controls including usage of Azure Active Directory, Conditional Access, o365 logging APIs, Microsoft CAS, and Microsoft Authenticator.</li>\n<li>Understanding and experience with implementing Data Loss Prevention (DLP) solutions, policies, and technologies.</li>\n<li>Understanding of Azure Information Protection (AIP) and its components, including labeling, classification, and encryption.</li>\n<li>Ability to develop and implement strategies to ensure compliance with data protection regulations, such as GDPR or HIPAA, utilizing DLP and AIP solutions.</li>\n<li>Strong knowledge and experience in a variety of security technologies including: EDR, SIEM, Vulnerability Management is a plus.</li>\n<li>Relevant security certification (CISSP, GCIA, CISM, etc.) and/or product certifications (PingFederate, Azure, Windows, AD etc.) a plus.</li>\n</ul>\n<p>The estimated base salary range for this position is $175,000 to $250,000, which is specific to New York and may change in the future. 
Millennium pays a total compensation package which includes a base salary, discretionary performance bonus, and a comprehensive benefits package.</p>","url":"https://yubhub.co/jobs/job_80dbb0f6-e54","directApply":true,"hiringOrganization":{"@type":"Organization","name":"IT Infrastructure","sameAs":"https://mlp.eightfold.ai","logo":"https://logos.yubhub.co/mlp.eightfold.ai.png"},"x-apply-url":"https://mlp.eightfold.ai/careers/job/755944784476","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$175,000 to $250,000","x-skills-required":["security technologies","tools","methodologies","Windows security solutions","OIDC / OAUTH 2","SAML","Kerberos authentication","Identity providers","SSO","PKI","EDR","SIEM","Vulnerability Management"],"x-skills-preferred":["C#/.NET","Python","PowerShell","Jenkins","Terraform","Azure Active Directory","Conditional Access","o365 logging APIs","Microsoft CAS","Microsoft Authenticator","Data Loss Prevention (DLP)","Azure Information Protection (AIP)"],"datePosted":"2026-04-18T22:12:55.408Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"New York, New York, United States of America"}},"employmentType":"FULL_TIME","occupationalCategory":"IT","industry":"Finance","skills":"security technologies, tools, methodologies, Windows security solutions, OIDC / OAUTH 2, SAML, Kerberos authentication, Identity providers, SSO, PKI, EDR, SIEM, Vulnerability Management, C#/.NET, Python, PowerShell, Jenkins, Terraform, Azure Active Directory, Conditional Access, o365 logging APIs, Microsoft CAS, Microsoft Authenticator, Data Loss Prevention (DLP), Azure Information Protection 
(AIP)","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":175000,"maxValue":250000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_1963e2d1-add"},"title":"Cloud DevOps Engineer","description":"<p>We are seeking a skilled Cloud DevOps Engineer to join our Commodities Technology team. As a Cloud DevOps Engineer, you will work closely with quants, portfolio managers, risk managers, and other engineers to develop data-intensive and multi-asset analytics for our Commodities platform.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Collaborate with cross-functional teams to gather requirements and user feedback</li>\n<li>Design, build, and refactor robust software applications with clean and concise code following Agile and continuous delivery practices</li>\n<li>Automate system maintenance tasks, end-of-day processing jobs, data integrity checks, and bulk data loads/extracts</li>\n<li>Stay up-to-date with industry trends, new platforms, and tools, and develop a business case to adopt new technologies</li>\n<li>Develop new tools and infrastructure using Python (Flask/Fast API) or Java (Spring Boot) and relational data backend (AWS – Aurora/Redshift/Athena/S3)</li>\n<li>Support users and operational flows for quantitative risk, senior management, and portfolio management teams using the tools developed</li>\n</ul>\n<p>Qualifications:</p>\n<ul>\n<li>Advanced degree in computer science or any other scientific field</li>\n<li>3+ years of experience in CI/CD tools like TeamCity, Jenkins, Octopus Deploy, and ArgoCD</li>\n<li>AWS Cloud infrastructure design, implementation, and support</li>\n<li>Experience with multiple AWS services</li>\n<li>Infrastructure as Code deploying cloud infrastructure using Terraform or CloudFormation</li>\n<li>Knowledge of Python (Flask/FastAPI/Django)</li>\n<li>Demonstrated expertise in the process of 
containerization for applications and their subsequent orchestration within Kubernetes environments</li>\n<li>Experience working on at least one monitoring/observability stack (Datadog, ELK, Splunk, Loki, Grafana)</li>\n<li>Strong knowledge of Unix or Linux</li>\n<li>Strong communication skills to collaborate with various stakeholders</li>\n<li>Able to work independently in a fast-paced environment</li>\n<li>Detail-oriented, organized, demonstrating thoroughness and strong ownership of work</li>\n<li>Experience working in a production environment</li>\n<li>Some experience with relational and non-relational databases</li>\n</ul>\n<p>Nice to have:</p>\n<ul>\n<li>Experience with a messaging middleware platform like Solace, Kafka, or RabbitMQ</li>\n<li>Experience with Snowflake and distributed processing technologies (e.g., Hadoop, Flink, Spark)</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_1963e2d1-add","directApply":true,"hiringOrganization":{"@type":"Organization","name":"FIC & Risk Technology","sameAs":"https://mlp.eightfold.ai","logo":"https://logos.yubhub.co/mlp.eightfold.ai.png"},"x-apply-url":"https://mlp.eightfold.ai/careers/job/755955154859","x-work-arrangement":"onsite","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["CI/CD tools like TeamCity, Jenkins, Octopus Deploy, and ArgoCD","AWS Cloud infrastructure design, implementation, and support","Infrastructure as Code deploying cloud infrastructure using Terraform or CloudFormation","Python (Flask/FastAPI/Django)","Containerization for applications and their subsequent orchestration within Kubernetes environments"],"x-skills-preferred":[],"datePosted":"2026-04-18T22:12:31.979Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Miami, Florida, United States of 
America"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Finance","skills":"CI/CD tools like TeamCity, Jenkins, Octopus Deploy, and ArgoCD, AWS Cloud infrastructure design, implementation, and support, Infrastructure as Code deploying cloud infrastructure using Terraform or CloudFormation, Python (Flask/FastAPI/Django), Containerization for applications and their subsequent orchestration within Kubernetes environments"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_21f5f6c3-734"},"title":"Data Engineer","description":"<p><strong>About the Role</strong></p>\n<p>We are at a pivotal scaling point where our data ambitions have outpaced our current setup, and we need a Data Engineer to architect the professional-grade foundations of our platform.</p>\n<p>This role exists to bridge the gap between &quot;getting data&quot; and &quot;engineering data,&quot; moving us from manual syncs to a fully automated ecosystem. By building custom pipelines and implementing a robust orchestration layer, you will directly enable our Operations teams and leadership to transition from basic reporting to sophisticated, AI-ready data products.</p>\n<p>Your primary focus will be on Infrastructure-as-Code, orchestration, and building a resilient &quot;plumbing&quot; system that serves as the backbone for our entire Product and GTM strategy.</p>\n<p><strong>Your 12-Month Journey</strong></p>\n<p>During the first 3 months: you will learn about our existing stack (GCP, BigQuery, Airbyte, dbt) and understand the current pain points in our data flow. You will identify and execute &quot;low-hanging fruit&quot; improvements to our product usage analytics, providing immediate value to the Product and GTM teams.
You’ll begin designing the blueprint for our custom data pipelines and the migration strategy for moving our infrastructure into Terraform.</p>\n<p>Within 6 months: you will have deployed our new orchestration layer (e.g., Airflow or Dagster) and successfully transitioned our first set of custom pipelines to production. Collaborating with the Analytics Engineer, you will enable a unified view of our customer journey by successfully merging product usage data with CRM and billing data. At this point, a significant portion of our data infrastructure will be defined as code, reducing manual overhead and increasing deployment reliability.</p>\n<p>After 1 year: you will take full strategic ownership of the data platform and its long-term architecture. You will act as the go-to technical expert for the leadership team, advising on the scalability of new data-driven features. You will lay the groundwork for AI and Machine Learning initiatives by ensuring our data warehouse has the right quality controls, governance, and low-latency access patterns in place.</p>\n<p><strong>What You’ll Be Doing</strong></p>\n<p>Architect Scalable Infrastructure-as-Code: Take our existing foundations to the next level by migrating all GCP and BigQuery resources into Terraform. You will establish automated CI/CD patterns to ensure our entire data environment is reproducible, version-controlled, and enterprise-ready.</p>\n<p>Deploy State-of-the-Art Pipelines: Design, deploy, and operate high-quality production ELT pipelines. You will implement a modern orchestration layer (e.g., Airflow or Dagster) to build custom Python-based integrations while maintaining and optimizing our existing syncs.</p>\n<p>Champion Data Quality &amp; Performance: Act as the guardian of our data platform. You will implement rigorous testing and monitoring protocols to ensure data is accurate and timely.
You will proactively identify BigQuery bottlenecks, optimizing query performance and resource utilization.</p>\n<p>Technical Roadmap &amp; Ownership: scope and architect end-to-end data flows from production source to warehouse. Manage your own technical backlog, prioritizing infrastructure stability over technical debt. You will ensure platform security and SOC2 compliance through PII masking, data contracts, and robust access controls.</p>\n<p>Collaboration: You will work in a tight loop with the Analytics Engineer to turn raw data into actionable products. You will partner daily with DataOps and RevOps to understand business requirements, with occasional strategic syncs with DevOps and R&amp;D to align on production schema changes and global infrastructure standards.</p>\n<p><strong>What You Bring</strong></p>\n<ul>\n<li>Solid experience in Data Engineering, with a track record of building and evolving data ingestion infrastructure in cloud environments.</li>\n<li>The Modern Data Stack: familiarity with dbt and Airbyte/Fivetran. You understand how these tools fit into a broader ecosystem.</li>\n<li>Expertise in BigQuery (partitioning, clustering, IAM) and the broader GCP ecosystem; Infrastructure-as-Code (Terraform).</li>\n<li>Hands-on experience with Airflow, Dagster, or similar orchestration tools. You know how to design DAGs that are resilient and easy to debug.</li>\n<li>DevOps practices in the data context: familiarity with CI/CD best practices as they apply to data (data testing, automated deployments).</li>\n<li>Programming: expert-level Python and advanced SQL. You are comfortable writing clean, testable, and modular code.</li>\n<li>Comfortable in a fast-paced environment.</li>\n<li>Project management skills: capable of managing stakeholders, explaining complicated technical trade-offs to non-technical users, and taking care of your own project scoping and backlog management.</li>\n<li>Fluency in English, both written and spoken, at a minimum C1 level.</li>\n</ul>\n<p><strong>What We Offer</strong></p>\n<ul>\n<li>Flexibility to work from home in the Netherlands and from our beautiful canal-side office in Amsterdam.</li>\n<li>A chance to be part of and shape one of the most ambitious scale-ups in Europe.</li>\n<li>Work in a diverse and multicultural team.</li>\n<li>€1,500 annual training budget plus internal training.</li>\n<li>Pension plan, travel reimbursement, and wellness perks.</li>\n<li>28 paid holiday days + 2 additional days to relax in 2026.</li>\n<li>Work from anywhere for 4 weeks/year.</li>\n<li>An inclusive and international work environment with a whole lot of fun thrown in!</li>\n<li>Apple MacBook and tools.</li>\n<li>€200 Home Office budget.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_21f5f6c3-734","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Tellent","sameAs":"https://careers.tellent.com","logo":"https://logos.yubhub.co/careers.tellent.com.png"},"x-apply-url":"https://careers.tellent.com/o/data-engineer","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":"EUR 70000–90000 / year","x-skills-required":["Data Engineering","Cloud environments","dbt","Airbyte/Fivetran","BigQuery","GCP ecosystem","Infrastructure-as-Code","Terraform","Airflow","Dagster","Python","SQL","CI/CD best practices","DevOps practices"],"x-skills-preferred":[],"datePosted":"2026-04-18T22:12:06.548Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Amsterdam"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Data Engineering, Cloud environments, dbt, Airbyte/Fivetran, BigQuery, GCP ecosystem, Infrastructure-as-Code, Terraform, Airflow, Dagster, Python, SQL, CI/CD best practices, DevOps
practices","baseSalary":{"@type":"MonetaryAmount","currency":"EUR","value":{"@type":"QuantitativeValue","minValue":70000,"maxValue":90000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_62c461dc-a98"},"title":"Lead Cloud Engineer","description":"<p>For Digital Hub Warsaw, we&#39;re looking for a Lead Cloud Engineer to join our team. As a visionary company, we&#39;re driven to solve the world&#39;s toughest challenges and strive for a world where &#39;Health for all, Hunger for none&#39; is no longer a dream, but a real possibility.</p>\n<p>We&#39;re building an enterprise-grade Infrastructure Operations Platform named VOPs to support the most complex IT infrastructure operations for all IT teams at Bayer globally. Your responsibilities will include:</p>\n<ul>\n<li>Planning and Design: Join the team responsible for planning and running our VOPs platform.</li>\n<li>Leadership: Mentor a team of engineers, providing guidance and support in the implementation of cloud solutions.</li>\n<li>Collaboration with Stakeholders: Work closely with Squad Leads and other stakeholders to understand requirements and align integration strategies with business goals.</li>\n<li>Technical Oversight: Ensure that solutions are scalable, reliable, maintainable, and secure, adhering to best practices in IT architecture and in line with Bayer&#39;s strategy.</li>\n<li>Documentation and Standards: Create, maintain, and review comprehensive documentation for processes, standards, and best practices.</li>\n<li>Intercultural Communication: Foster an environment of open communication and collaboration among diverse teams across different geographical locations.</li>\n</ul>\n<p>Our requirements include:</p>\n<ul>\n<li>Degree in Computer Science, Information Technology, or related field, or equivalent practical experience as an IT engineer.</li>\n<li>At least 6 years of experience in Azure (other clouds will be a plus).</li>\n<li>Proficiency in IT Architecture &amp; design, specifically in infrastructure automation, provisioning, and maintenance.</li>\n<li>Strong analytical skills with the ability to troubleshoot and resolve technical issues effectively, even under pressure.</li>\n<li>Familiarity with IaC (e.g., Terraform) and strong proficiency in Python.</li>\n<li>Linux command line tools and shell scripting.</li>\n<li>Experience with building IT systems in regulated environments.</li>\n<li>Integration and Automation Expertise: Knowledge of CI/CD processes and experience in building and deploying integration solutions (Azure DevOps, GitHub Repos, and GitHub Actions).</li>\n<li>Excellent verbal and written communication skills, with the ability to present complex technical information to non-technical stakeholders.</li>\n<li>Experience with API management and/or design will be appreciated.</li>\n<li>Intercultural Competence: Ability to work collaboratively in a multicultural environment, respecting diverse perspectives and fostering teamwork, establishing and maintaining a robust professional network.</li>\n<li>Language Proficiency: Fluent in English, both spoken and written.</li>\n</ul>\n<p>What we offer includes:</p>\n<ul>\n<li>A flexible, hybrid work model.</li>\n<li>Great workplace in a new modern office in Warsaw.</li>\n<li>Career development, 360° Feedback &amp; Mentoring programme.</li>\n<li>Wide access to professional development tools, trainings, &amp; conferences.</li>\n<li>Company Bonus &amp; Reward Structure.</li>\n<li>VIP Medical Care Package (including Dental &amp; Mental health).</li>\n<li>Holiday allowance (&#39;Wczasy pod gruszą&#39;).</li>\n<li>Life &amp; Travel Insurance.</li>\n<li>Pension plan.</li>\n<li>Co-financed sport card.</li>\n<li>FitProfit.</li>\n<li>Meals Subsidy in Office.</li>\n<li>Additional days off.</li>\n<li>Budget for Home Office Setup &amp; Maintenance.</li>\n<li>Access to Company Game Room equipped with table tennis, soccer table, Sony PlayStation 5, and Xbox Series X consoles set up with premium game passes, and massage chairs.</li>\n<li>Tailor-made support in relocation to Warsaw when needed.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_62c461dc-a98","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Bayer","sameAs":"https://talent.bayer.com","logo":"https://logos.yubhub.co/talent.bayer.com.png"},"x-apply-url":"https://talent.bayer.com/careers/job/562949973780545","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Azure","IT Architecture & design","Infrastructure automation","Provisioning","Maintenance","IaC (Terraform)","Python","Linux command line tools","Shell scripting","CI/CD processes","Azure DevOps","GitHub Repos","GitHub Actions","API management","API design"],"x-skills-preferred":[],"datePosted":"2026-04-18T22:11:27.474Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Warsaw"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Azure, IT Architecture & design, Infrastructure automation, Provisioning, Maintenance, IaC (Terraform), Python, Linux command line tools, Shell scripting, CI/CD processes, Azure DevOps, GitHub Repos, GitHub Actions, API management, API design"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_8b447835-74a"},"title":"Senior DataOps Engineer - Revenue Management (all genders)","description":"<p><strong>Your future team</strong></p>\n<p>You&#39;ll be part of our new Dynamic Pricing &amp; Revenue Management team, working alongside a Data Scientist and a Data Analyst.
Together, you will work towards one core goal: helping hosts improve occupancy and earnings through a smart, dynamic, and data-driven pricing strategy.</p>\n<p><strong>Our Tech Stack</strong></p>\n<ul>\n<li>Data Storage &amp; Querying: S3, Redshift (with decentralized data sharing), Athena, and DuckDB.</li>\n<li>ML &amp; Model Serving: MLflow, SageMaker, and deployment APIs for model lifecycle management.</li>\n<li>Cloud &amp; DevOps: Terraform, Docker, Jenkins, and AWS EKS (Kubernetes) for scalable, resilient systems.</li>\n<li>Monitoring: ELK, Grafana, Looker, OpsGenie, and in-house tools for full visibility.</li>\n<li>Ingestion: Kafka-based event systems and tools like Airbyte and Fivetran for smooth third-party integrations.</li>\n<li>Automation &amp; AI: Extensive use of AI tools like Claude, Copilot, and Codex.</li>\n</ul>\n<p><strong>Your role in this journey</strong></p>\n<p>As a Data Ops Engineer – Revenue Management, you&#39;ll be the engineering backbone that enables our Data Scientists to move from experimentation to production. 
You bridge the gap between data science models and reliable, scalable production systems.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Support model deployment and serving: help deploy pricing and demand models into production, building and maintaining APIs and serving infrastructure.</li>\n<li>Build and operate production pipelines: ensure data flows reliably from source to model to output, with proper monitoring and alerting.</li>\n<li>Collaborate cross-functionally: work closely with Data Scientists, Analysts, and Engineering teams to turn prototypes into production-ready solutions.</li>\n<li>Own infrastructure and tooling: set up and maintain the environments, CI/CD pipelines, and infrastructure that the team depends on.</li>\n<li>Ensure operational excellence by implementing monitoring, automated testing, and observability across the team&#39;s production systems.</li>\n<li>Migrate and productionize POC: turn experimental code into robust, maintainable Python applications.</li>\n<li>Ensure data quality, consistency, and documentation across revenue management metrics and datasets.</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<ul>\n<li>Impact: Shape the future of travel with products used by millions of guests and thousands of hosts.</li>\n<li>Learning: Grow professionally in a culture that thrives on curiosity and feedback.</li>\n<li>Great People: Join a team of smart, motivated, and international colleagues who challenge and support each other.</li>\n<li>Technology: Work in a modern tech environment.</li>\n<li>Flexibility: Work a hybrid setup with 50% in-office time for collaboration, and spend up to 8 weeks a year from other inspiring locations.</li>\n<li>Perks on Top: Of course, we also offer travel benefits, gym discounts, and other perks to keep you energized.</li>\n</ul>\n<p><strong>Experience</strong></p>\n<ul>\n<li>4+ years of experience in Software Engineering, Data Engineering, DevOps, or MLOps.</li>\n<li>Strong hands-on skills in 
Python: you write clean, production-quality code.</li>\n<li>Experience with CI/CD, Docker, and infrastructure-as-code (e.g., Terraform).</li>\n<li>Familiarity with cloud platforms (AWS preferred) and deploying services in production.</li>\n<li>Exposure to or interest in ML model deployment (MLflow, SageMaker, or similar) is a strong plus.</li>\n<li>Desire to learn and use cutting-edge LLM tools and agents to improve your and the entire team&#39;s productivity.</li>\n<li>A proactive, hands-on mindset: you take ownership, spot problems, and drive solutions forward.</li>\n</ul>\n<p><strong>How to apply</strong></p>\n<p>If you&#39;re excited about this opportunity, please submit your application on our careers page!</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_8b447835-74a","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Holidu Hosts GmbH","sameAs":"https://holidu.jobs.personio.com","logo":"https://logos.yubhub.co/holidu.jobs.personio.com.png"},"x-apply-url":"https://holidu.jobs.personio.com/job/2597559","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"Full-time","x-salary-range":null,"x-skills-required":["Python","CI/CD","Docker","Terraform","Cloud platforms (AWS preferred)","ML model deployment (MLflow, SageMaker, or similar)"],"x-skills-preferred":["AI tools like Claude, Copilot, and Codex","Data Storage & Querying (S3, Redshift, Athena, DuckDB)","ML & Model Serving (MLflow, SageMaker, deployment APIs)","Cloud & DevOps (Terraform, Docker, Jenkins, AWS EKS)","Monitoring (ELK, Grafana, Looker, OpsGenie, in-house tools)","Ingestion (Kafka-based event systems, Airbyte, Fivetran)"],"datePosted":"2026-04-18T22:09:42.352Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Munich,
Germany"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, CI/CD, Docker, Terraform, Cloud platforms (AWS preferred), ML model deployment (MLflow, SageMaker, or similar), AI tools like Claude, Copilot, and Codex, Data Storage & Querying (S3, Redshift, Athena, DuckDB), ML & Model Serving (MLflow, SageMaker, deployment APIs), Cloud & DevOps (Terraform, Docker, Jenkins, AWS EKS), Monitoring (ELK, Grafana, Looker, OpsGenie, in-house tools), Ingestion (Kafka-based event systems, Airbyte, Fivetran)"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_901202b0-bfa"},"title":"Product Security Engineer - Public Sector","description":"<p>We are seeking a highly technical Security Engineer to join our Product Security team. This role is integral to ensuring the security and integrity of our products and services.</p>\n<p>You will conduct in-depth code reviews, implement security best practices, and influence the overall security strategy. 
Your expertise in TypeScript, Python, Kubernetes, CI/CD, SAST, DAST, and Terraform orchestration will be crucial in identifying and mitigating potential security vulnerabilities.</p>\n<p>You will:</p>\n<ul>\n<li>Conduct in-depth code reviews to identify and remediate security vulnerabilities.</li>\n<li>Evaluate and enhance the security of our product offerings through RFC and service review.</li>\n<li>Implement and maintain CI/CD pipelines with a strong focus on security.</li>\n<li>Perform Static Application Security Testing (SAST) and Dynamic Application Security Testing (DAST) to identify vulnerabilities in production code.</li>\n<li>Utilize Terraform orchestration to ensure secure and efficient infrastructure management.</li>\n<li>Guide engineering teams to build robust long-term solutions that consider security and privacy.</li>\n<li>Clearly explain the mechanics and significance of security vulnerabilities, including their exploitability and potential impact.</li>\n<li>Influence the security strategy and direction of the team, advocating for best practices and continuous improvement.</li>\n</ul>\n<p>Ideally, you’d have:</p>\n<ul>\n<li>Proven experience as a Security Engineer with a focus on product security.</li>\n<li>Proficiency in NodeJS, TypeScript, Python, and/or Kubernetes.</li>\n<li>Strong understanding of modern JavaScript application design.</li>\n<li>Production experience with Kubernetes-backed services.</li>\n<li>Hands-on experience with SAST and DAST tools and methodologies.</li>\n<li>Familiarity with Terraform orchestration for infrastructure management.</li>\n<li>You can structure complex problems and diagnose root causes independently, providing actionable insights without requiring manager input.</li>\n<li>Excellent communication skills, with the ability to clearly present technical concepts and their implications to both technical and non-technical stakeholders.</li>\n<li>Demonstrated ability to influence security strategies and drive
improvements within a team.</li>\n<li>Relevant security certifications (e.g., CISSP, CEH, OSCP) are a plus.</li>\n</ul>\n<p>Compensation packages at Scale for eligible roles include base salary, equity, and benefits. The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position, determined by work location and additional factors, including job-related skills, experience, interview performance, and relevant education or training.</p>\n<p>You’ll also receive benefits including, but not limited to: Comprehensive health, dental and vision coverage, retirement benefits, a learning and development stipend, and generous PTO. Additionally, this role may be eligible for additional benefits such as a commuter stipend.</p>\n<p>The base salary range for this full-time position in the location of Washington DC/Hawaii is: $205,700-$257,400 USD</p>\n<p>The base salary range for this full-time position in the location of St. Louis/Suffolk is: $171,600-$214,500 USD</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_901202b0-bfa","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Scale","sameAs":"https://www.scale.com/","logo":"https://logos.yubhub.co/scale.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/scaleai/jobs/4651559005","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$205,700-$257,400 USD (Washington DC/Hawaii), $171,600-$214,500 USD (St. 
Louis/Suffolk)","x-skills-required":["TypeScript","Python","Kubernetes","CI/CD","SAST","DAST","terraform orchestration"],"x-skills-preferred":["NodeJS","modern Javascript application design","Kubernetes backed services","SAST and DAST tools and methodologies","terraform orchestration for infrastructure management"],"datePosted":"2026-04-18T15:59:56.896Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"St. Louis, MO; Washington, DC"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"TypeScript, Python, Kubernetes, CI/CD, SAST, DAST, terraform orchestration, NodeJS, modern Javascript application design, Kubernetes backed services, SAST and DAST tools and methodologies, terraform orchestration for infrastructure management","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":171600,"maxValue":257400,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_13667989-d19"},"title":"Staff Software Engineer, AI Developer Tooling","description":"<p>We&#39;re looking for a Staff Software Engineer to join our Platform Engineering team. 
As a key member of our team, you will redefine how engineers develop, build, test, and deploy software at Scale using AI development tools in addition to traditional practices.</p>\n<p>In this role, you will:</p>\n<ul>\n<li>Define next-generation AI development tooling and frameworks using products like Cursor, Claude Code, OpenAI Codex, and MS Copilot, as well as in-house custom-built solutions.</li>\n<li>Drive the architecture, design, and implementation of our local development process, build, test, continuous integration, and continuous delivery systems, working closely with stakeholders and internal customers to understand and refine requirements.</li>\n<li>Directly mentor software engineers ranging from new grads to experienced engineers.</li>\n<li>Proactively identify opportunities and drive improvements to software development practices, processes, tools, and languages.</li>\n<li>Present technical information to teams and stakeholders, providing guidance and insight on development processes and technologies.</li>\n</ul>\n<p>Ideally, you&#39;d have:</p>\n<ul>\n<li>8+ years of full-time engineering experience, post-graduation, with experience in build, test, or CI/CD systems.</li>\n<li>Extensive experience defining and evangelizing best practices for AI development tools, including cost guardrails, security frameworks, and hosting knowledge-sharing sessions, among others.</li>\n<li>Extensive experience in software development and a deep understanding of distributed systems and public cloud platforms (AWS preferred).</li>\n<li>Experience configuring, testing, and enabling MCP servers, AI agents, and other associated systems.</li>\n<li>A track record of independent ownership of successful engineering projects.</li>\n<li>Excellent communication and collaboration skills, and the ability to translate complex technical concepts to non-technical stakeholders.</li>\n<li>Experience working fluently with standard infrastructure, containerization, and
deployment technologies like Terraform, Docker, Kubernetes, etc.</li>\n<li>Experience with modern web frameworks like NodeJS, NextJS, etc.</li>\n<li>Strong knowledge of software engineering best practices and CI/CD tooling (CircleCI, Helm, ArgoCD).</li>\n</ul>\n<p>Compensation packages at Scale for eligible roles include base salary, equity, and benefits. The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position, determined by work location and additional factors, including job-related skills, experience, interview performance, and relevant education or training.</p>\n<p>You&#39;ll also receive benefits including, but not limited to: Comprehensive health, dental and vision coverage, retirement benefits, a learning and development stipend, and generous PTO. Additionally, this role may be eligible for additional benefits such as a commuter stipend.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_13667989-d19","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Scale","sameAs":"https://scale.com","logo":"https://logos.yubhub.co/scale.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/scaleai/jobs/4518088005","x-work-arrangement":"remote","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"$252,000-$315,000 USD","x-skills-required":["Cursor","Claude Code","OpenAI Codex","MS Copilot","Terraform","Docker","Kubernetes","NodeJS","NextJS","CircleCI","Helm","ArgoCD"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:59:55.277Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA; Seattle, WA; New York, NY"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Cursor, Claude Code, OpenAI Codex, MS Copilot, Terraform, 
Docker, Kubernetes, NodeJS, NextJS, CircleCI, Helm, ArgoCD","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":252000,"maxValue":315000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_14499a71-fa9"},"title":"Software Engineer, Enterprise","description":"<p>At Scale AI, we&#39;re pioneering the next era of enterprise AI. As businesses race to harness the power of Generative AI, Scale is at the forefront, delivering cutting-edge solutions that transform workflows, automate complex processes, and drive unparalleled efficiency for the largest enterprises.</p>\n<p>We&#39;re looking for a Backend Engineer to help bring large-scale GenAI systems to production. In this role, you&#39;ll build the core infrastructure that powers AI products for some of the world&#39;s largest enterprises, designing scalable APIs, distributed data systems, and robust deployment pipelines that enable production-grade reliability and performance.</p>\n<p>This is a rare opportunity to be at the center of the GenAI revolution, solving hard backend and infrastructure challenges that make AI truly work at enterprise scale. If you&#39;re excited about shaping how AI systems are deployed and scaled in the real world, we want to hear from you.</p>\n<p>At Scale, we don&#39;t just follow AI advancements, we lead them. Backed by deep expertise in data, infrastructure, and model deployment, we are uniquely positioned to solve the hardest problems in AI adoption.
Join us in shaping the future of enterprise AI, where your work will directly impact how businesses operate, innovate, and grow in the age of GenAI.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Design, build, and scale backend systems that power enterprise GenAI products, focusing on reliability, performance, and deployment across both Scale&#39;s and customers&#39; infrastructure.</li>\n<li>Develop core services and APIs that integrate AI models and enterprise data sources securely and efficiently, enabling production-scale AI adoption.</li>\n<li>Architect scalable distributed systems for data processing, inference, and orchestration of large-scale GenAI workloads.</li>\n<li>Optimize backend performance for latency, throughput, and cost, ensuring AI applications can operate at enterprise scale across hybrid and multi-cloud environments.</li>\n<li>Manage and evolve cloud infrastructure (AWS, Azure, or GCP), driving automation, observability, and security for large-scale AI deployments.</li>\n<li>Collaborate with ML and product teams to bring cutting-edge GenAI models into production through efficient APIs, model serving systems, and evaluation frameworks.</li>\n<li>Continuously improve reliability and scalability, applying strong engineering practices to make AI systems robust, maintainable, and enterprise-ready.</li>\n</ul>\n<p><strong>Ideal Candidate</strong></p>\n<ul>\n<li>4+ years of experience developing large-scale backend or infrastructure systems, with a strong emphasis on distributed services, reliability, and scalability.</li>\n<li>Proficiency in Python or TypeScript, with experience designing high-performance APIs and backend architectures using frameworks such as FastAPI, Flask, Express, or NestJS.</li>\n<li>Deep familiarity with cloud infrastructure (AWS and Azure preferred), including container orchestration (Kubernetes, Docker) and
Infrastructure-as-Code tools like Terraform.</li>\n</ul>\n<ul>\n<li>Experience managing data systems such as relational and NoSQL databases (PostgreSQL, DynamoDB, etc.) and building pipelines for data-intensive applications.</li>\n</ul>\n<ul>\n<li>Hands-on experience with GenAI applications, model integration, or AI agent systems, understanding how to deploy, evaluate, and scale AI workloads in production.</li>\n</ul>\n<ul>\n<li>Strong understanding of observability, CI/CD, and security best practices for running services in enterprise or multi-tenant environments.</li>\n</ul>\n<ul>\n<li>Ability to balance rapid iteration with production-grade quality, shipping reliable backend systems in fast-paced environments.</li>\n</ul>\n<p>Collaborative mindset, working closely with ML, infra, and product teams to bring complex GenAI systems into production at enterprise scale.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_14499a71-fa9","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Scale AI","sameAs":"https://scale.com/","logo":"https://logos.yubhub.co/scale.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/scaleai/jobs/4536653005","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Python","TypeScript","FastAPI","Flask","Express","NestJS","AWS","Azure","Kubernetes","Docker","Terraform","PostgreSQL","DynamoDB","GenAI","Model Integration","AI Agent Systems"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:59:48.948Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"London, UK"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, TypeScript, FastAPI, Flask, Express, NestJS, AWS, Azure, Kubernetes, Docker, Terraform, PostgreSQL, DynamoDB, GenAI, Model 
Integration, AI Agent Systems"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_43952002-812"},"title":"Software Engineer, AI Developer Tooling","description":"<p>We&#39;re looking for a Software Engineer to join our Platform Engineering team. As a Software Engineer, you will redefine how engineers develop, build, test, and deploy software at Scale using AI development tools in addition to traditional practices. You&#39;ll also get widespread exposure to the forefront of the AI race as Scale sees it in enterprises, startups, governments, and large tech companies.</p>\n<p>Your responsibilities will include:</p>\n<ul>\n<li>Defining next-generation AI development tooling and frameworks using products like Cursor, Claude Code, OpenAI Codex, and MS Copilot, as well as in-house custom-built solutions.</li>\n<li>Driving the architecture, design, and implementation of our local development process, build, test, continuous integration, and continuous delivery systems, working closely with stakeholders and internal customers to understand and refine requirements.</li>\n<li>Directly mentoring software engineers ranging from new grads to experienced engineers.</li>\n<li>Proactively identifying opportunities and driving improvements to software development practices, processes, tools, and languages.</li>\n<li>Presenting technical information to teams and stakeholders, providing guidance and insight on development processes and technologies.</li>\n</ul>\n<p>Ideally, you&#39;d have:</p>\n<ul>\n<li>4+ years of full-time engineering experience, post-graduation, with experience in build, test, or CI/CD systems.</li>\n<li>Extensive experience defining and evangelizing best-practices for AI development tools, including cost guardrails, security frameworks, and hosting knowledge-sharing sessions, among others.</li>\n<li>Extensive experience in software development and a deep understanding of distributed systems 
and public cloud platforms (AWS preferred).</li>\n<li>Experience configuring, testing, and enabling MCP servers, AI agents, and other associated systems.</li>\n<li>A track record of independent ownership of successful engineering projects.</li>\n<li>Excellent communication and collaboration skills, and the ability to translate complex technical concepts to non-technical stakeholders.</li>\n<li>Experience working fluently with standard infrastructure, containerization, and deployment technologies like Terraform, Docker, Kubernetes, etc.</li>\n<li>Experience with modern web frameworks like NodeJS, NextJS, etc.</li>\n<li>Strong knowledge of software engineering best practices and CI/CD tooling (CircleCI, Helm, ArgoCD).</li>\n</ul>\n<p>Compensation packages at Scale for eligible roles include base salary, equity, and benefits. The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position, determined by work location and additional factors, including job-related skills, experience, interview performance, and relevant education or training.</p>\n<p>This role may be eligible for additional benefits such as a commuter stipend.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_43952002-812","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Scale","sameAs":"https://scale.com/","logo":"https://logos.yubhub.co/scale.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/scaleai/jobs/4676936005","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$180,000-$225,000 USD","x-skills-required":["software development","distributed systems","public cloud platforms","MCP servers","AI agents","standard infrastructure","containerization","deployment technologies","modern web frameworks","software engineering best practices","CI/CD 
tooling"],"x-skills-preferred":["Cursor","Claude Code","OpenAI Codex","MS Copilot","Terraform","Docker","Kubernetes","NodeJS","NextJS"],"datePosted":"2026-04-18T15:59:27.391Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA; Seattle, WA; New York, NY"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"software development, distributed systems, public cloud platforms, MCP servers, AI agents, standard infrastructure, containerization, deployment technologies, modern web frameworks, software engineering best practices, CI/CD tooling, Cursor, Claude Code, OpenAI Codex, MS Copilot, Terraform, Docker, Kubernetes, NodeJS, NextJS","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":180000,"maxValue":225000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_859cb1cf-b9c"},"title":"Senior AI Infrastructure Engineer, Model Serving Platform","description":"<p>As a Senior AI Infrastructure Engineer on the Model Serving Platform team, you will design and build platforms for scalable, reliable, and efficient serving of Large Language Models (LLMs). Our platform powers cutting-edge research and production systems, supporting both internal and external use cases across various environments.</p>\n<p>The ideal candidate combines strong ML fundamentals with deep expertise in backend system design. 
You’ll work in a highly collaborative environment, bridging research and engineering to deliver seamless experiences to our customers and accelerate innovation across the company.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Build and maintain fault-tolerant, high-performance systems for serving LLM workloads at scale.</li>\n<li>Build an internal platform to empower LLM capability discovery.</li>\n<li>Collaborate with researchers and engineers to integrate and optimize models for production and research use cases.</li>\n<li>Conduct architecture and design reviews to uphold best practices in system design and scalability.</li>\n<li>Develop monitoring and observability solutions to ensure system health and performance.</li>\n<li>Lead projects end-to-end, from requirements gathering to implementation, in a cross-functional environment.</li>\n</ul>\n<p>Ideally you’d have:</p>\n<ul>\n<li>5+ years of experience building large-scale, high-performance backend systems.</li>\n<li>Strong programming skills in one or more languages (e.g., Python, Go, Rust, C++).</li>\n<li>Experience with LLM serving and routing fundamentals (e.g. rate limiting, token streaming, load balancing, budgets, etc.).</li>\n<li>Experience with LLM capabilities and concepts such as reasoning, tool calling, prompt templates, etc.</li>\n<li>Experience with containers and orchestration tools (e.g., Docker, Kubernetes).</li>\n<li>Familiarity with cloud infrastructure (AWS, GCP) and infrastructure as code (e.g., Terraform).</li>\n<li>Proven ability to solve complex problems and work independently in fast-moving environments.</li>\n</ul>\n<p>Nice to haves:</p>\n<ul>\n<li>Experience with modern LLM serving frameworks such as vLLM, SGLang, TensorRT-LLM, or text-generation-inference.</li>\n</ul>\n<p>Compensation packages at Scale for eligible roles include base salary, equity, and benefits. 
The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position, determined by work location and additional factors, including job-related skills, experience, interview performance, and relevant education or training. Scale employees in eligible roles are also granted equity based compensation, subject to Board of Director approval. Your recruiter can share more about the specific salary range for your preferred location during the hiring process, and confirm whether the hired role will be eligible for equity grant. You’ll also receive benefits including, but not limited to: Comprehensive health, dental and vision coverage, retirement benefits, a learning and development stipend, and generous PTO. Additionally, this role may be eligible for additional benefits such as a commuter stipend.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_859cb1cf-b9c","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Scale","sameAs":"https://scale.com/","logo":"https://logos.yubhub.co/scale.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/scaleai/jobs/4520320005","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$216,000-$270,000 USD","x-skills-required":["Python","Go","Rust","C++","Docker","Kubernetes","AWS","GCP","Terraform"],"x-skills-preferred":["vLLM","SGLang","TensorRT-LLM","text-generation-inference"],"datePosted":"2026-04-18T15:58:51.977Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA; New York, NY"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, Go, Rust, C++, Docker, Kubernetes, AWS, GCP, Terraform, vLLM, SGLang, TensorRT-LLM, 
text-generation-inference","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":216000,"maxValue":270000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_e0058690-78c"},"title":"Senior Software Engineer, GenAI Platform","description":"<p>As a Senior Software Engineer, you will lead the development of a large-scale GenAI Platform at Reddit.</p>\n<p>The Machine Learning Platform team at Reddit is a high-impact team that owns the infrastructure that powers recommendations, content discovery, user and content quantification, while directly impacting other teams such as Growth, Ads, Feeds, and Core Machine Learning teams.</p>\n<p>Key responsibilities include:</p>\n<ul>\n<li>Contributing to the design, implementation, and maintenance of the LLM Gateway, focusing on features like unified API endpoints for internal/externally hosted LLM, rate/token limit management, and intelligent failover mechanisms to boost uptime and reliability.</li>\n<li>Designing and developing ML and Generative AI systems in cloud-based production environments at scale.</li>\n<li>Building and managing enterprise-grade RAG applications using embeddings, vector search, and retrieval pipelines.</li>\n<li>Implementing and operationalizing agentic AI workflows with tool use using frameworks such as LangChain and LangGraph.</li>\n<li>Driving adoption of MLOps / LLMOps practices, including CI/CD automation, versioning, testing, and lifecycle management.</li>\n<li>Establishing best practices for observability, monitoring, evaluation, and governance of GenAI pipelines in production.</li>\n</ul>\n<p>The ideal candidate will have:</p>\n<ul>\n<li>5+ years of experience in ML Engineering, AI Platform Engineering, or Cloud AI Deployment roles.</li>\n<li>Experience operating orchestration systems such as Kubernetes at scale.</li>\n<li>Deep experience with cloud-based 
technologies for supporting an ML platform, including tools like AWS, Google Cloud Storage, infrastructure-as-code (Terraform), and more.</li>\n<li>Proficiency with the common programming languages and frameworks of ML, such as Go, Python, etc.</li>\n<li>Excellent communication skills with the ability to articulate technical AI concepts to non-technical stakeholders.</li>\n<li>Strong focus on scalability, reliability, performance, and ease of use.</li>\n</ul>\n<p>Benefits include comprehensive healthcare benefits, income replacement programs, 401k with employer match, global benefit programs, family planning support, gender-affirming care, mental health &amp; coaching benefits, flexible vacation &amp; paid volunteer time off, and generous paid parental leave.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_e0058690-78c","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Reddit","sameAs":"https://www.redditinc.com","logo":"https://logos.yubhub.co/redditinc.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/reddit/jobs/7753480","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$190,800-$267,100 USD","x-skills-required":["ML Engineering","AI Platform Engineering","Cloud AI Deployment","Kubernetes","AWS","Google Cloud Storage","Terraform","Go","Python","LangChain","LangGraph"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:57:46.916Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Remote - United States"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"ML Engineering, AI Platform Engineering, Cloud AI Deployment, Kubernetes, AWS, Google Cloud Storage, Terraform, Go, Python, LangChain, 
LangGraph","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":190800,"maxValue":267100,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_5196c4ac-d97"},"title":"Senior Software Engineer - Infrastructure and Tools","description":"<p>We are seeking a Senior Software Engineer to join our Infrastructure teams. As a key member of our team, you will build scalable systems to power the Databricks platform, making it the de-facto platform for running Big Data and AI workloads.</p>\n<p>Your responsibilities will include building and extending components of the core Databricks infrastructure, architecting multi-cloud systems and abstractions to allow the Databricks product to run on top of existing Cloud providers, improving software development workflows for engineering and operational efficiency, using our own data and AI platform to analyze build and test logs and metrics to identify areas for improvement, developing automated build, test, and release infrastructures, and setting and upholding the standard for engineering processes to support high-quality engineering.</p>\n<p>To succeed in this role, you will need a BS (or higher) in Computer Science, or a related field, and 5+ years of experience writing production code in one of Java, Scala, Go, C++, or Python. 
You should also have passion for building highly scalable and reliable infrastructure, experience architecting, developing, and deploying large-scale distributed systems at scale, and experience with cloud APIs and cloud technologies such as AWS, Azure, GCP, Docker, Kubernetes, or Terraform.</p>\n<p>In addition to a competitive salary, we offer comprehensive health coverage, 401(k) plan, equity awards, flexible time off, paid parental leave, family planning, gym reimbursement, annual personal development fund, work headphones reimbursement, employee assistance program, and business travel accident insurance.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_5196c4ac-d97","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/6318503002","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$166,000-$225,000 USD","x-skills-required":["Java","Scala","Go","C++","Python","Cloud APIs","Cloud technologies","AWS","Azure","GCP","Docker","Kubernetes","Terraform"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:57:44.136Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, California"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Java, Scala, Go, C++, Python, Cloud APIs, Cloud technologies, AWS, Azure, GCP, Docker, Kubernetes, 
Terraform","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":166000,"maxValue":225000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_acef3d4c-b32"},"title":"Security Engineer, Product Security","description":"<p>We are seeking a highly technical Security Engineer to join our Product Security team. This role is integral to ensuring the security and integrity of our products and services.</p>\n<p>You will conduct in-depth code reviews, implement security best practices, and influence the overall security strategy. Your expertise in TypeScript, Python, AWS, CI/CD, SAST, DAST, and Terraform orchestration will be crucial in identifying and mitigating potential security vulnerabilities.</p>\n<p>You will:</p>\n<ul>\n<li>Leverage broad product security expertise to build and maintain software tooling that secures every layer of the modern AI/ML software ecosystem.</li>\n</ul>\n<ul>\n<li>Conduct in-depth code reviews to identify and remediate security vulnerabilities.</li>\n</ul>\n<ul>\n<li>Evaluate and enhance the security of our product offerings, through RFC and service review.</li>\n</ul>\n<ul>\n<li>Implement and maintain CI/CD pipelines with a strong focus on security.</li>\n</ul>\n<ul>\n<li>Perform Static Application Security Testing (SAST) and Dynamic Application Security Testing (DAST) to identify vulnerabilities in production code.</li>\n</ul>\n<ul>\n<li>Utilize Terraform orchestration to ensure secure and efficient infrastructure management.</li>\n</ul>\n<ul>\n<li>Guide engineering teams to build robust long-term solutions that consider security and privacy.</li>\n</ul>\n<ul>\n<li>Clearly explain the mechanics and significance of security vulnerabilities, including their exploitability and potential impact.</li>\n</ul>\n<ul>\n<li>Influence the security strategy and direction of the team, advocating for best practices 
and continuous improvement.</li>\n</ul>\n<p>Ideally, you’d have:</p>\n<ul>\n<li>Demonstrated ability to drive multi-month security initiatives independently, from problem definition through execution, without requiring significant direction.</li>\n</ul>\n<ul>\n<li>Proven experience as a Security Engineer with a focus on product security.</li>\n</ul>\n<ul>\n<li>Proficiency in NodeJS, TypeScript, Python, and/or Kubernetes.</li>\n</ul>\n<ul>\n<li>Strong understanding of modern Javascript application design.</li>\n</ul>\n<ul>\n<li>Production experience operating and securing AWS infrastructure at scale.</li>\n</ul>\n<ul>\n<li>Hands-on experience with SAST and DAST tools and methodologies.</li>\n</ul>\n<ul>\n<li>Familiarity with Terraform orchestration for infrastructure management.</li>\n</ul>\n<ul>\n<li>You can structure complex problems and diagnose root causes independently, providing actionable insights without requiring manager input.</li>\n</ul>\n<ul>\n<li>Excellent communication skills, with the ability to clearly present technical concepts and their implications to both technical and non-technical stakeholders.</li>\n</ul>\n<ul>\n<li>Demonstrated ability to influence security strategies and drive improvements within a team.</li>\n</ul>\n<ul>\n<li>Relevant security certifications (e.g., CISSP, CEH, OSCP) are a plus.</li>\n</ul>\n<p>Compensation packages at Scale for eligible roles include base salary, equity, and benefits. The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position, determined by work location and additional factors, including job-related skills, experience, interview performance, and relevant education or training. Scale employees in eligible roles are also granted equity based compensation, subject to Board of Director approval. 
Your recruiter can share more about the specific salary range for your preferred location during the hiring process, and confirm whether the hired role will be eligible for equity grant. You’ll also receive benefits including, but not limited to: Comprehensive health, dental and vision coverage, retirement benefits, a learning and development stipend, and generous PTO. Additionally, this role may be eligible for additional benefits such as a commuter stipend.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_acef3d4c-b32","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Scale","sameAs":"https://www.scale.com/","logo":"https://logos.yubhub.co/scale.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/scaleai/jobs/4643029005","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$237,600-$297,000 USD","x-skills-required":["TypeScript","Python","AWS","CI/CD","SAST","DAST","Terraform"],"x-skills-preferred":["NodeJS","Kubernetes","Modern Javascript application design"],"datePosted":"2026-04-18T15:57:42.582Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"New York, NY; San Francisco, CA; Seattle, WA; Washington, DC"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"TypeScript, Python, AWS, CI/CD, SAST, DAST, Terraform, NodeJS, Kubernetes, Modern Javascript application design","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":237600,"maxValue":297000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_230b25df-0f4"},"title":"Senior Software Engineer- Database Infrastructure","description":"<p>We are seeking a senior software engineer to join our 
Database Infrastructure team. As a member of this team, you will build and operate large-scale, reliable, and performant data systems using ScyllaDB, PostgreSQL, ElasticSearch, Linux, and Rust.</p>\n<p>You will collaborate with product and infrastructure teams to develop storage primitives enabling all of Discord. You will exercise &#39;First Principles Thinking&#39; to always deliver what matters most to our users.</p>\n<p>You will work with a talented team of engineers who have built one of the largest communication platforms in the world.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Build and operate large-scale, reliable, and performant data systems with ScyllaDB, PostgreSQL, ElasticSearch, Linux, and Rust.</li>\n<li>Collaborate with product and infrastructure teams to develop storage primitives enabling all of Discord.</li>\n<li>Exercise &#39;First Principles Thinking&#39; to always deliver what matters most to our users.</li>\n<li>Work with a talented team of engineers who have built one of the largest communication platforms in the world.</li>\n</ul>\n<p>Requirements:</p>\n<ul>\n<li>4+ years of experience with building distributed systems and datastore infrastructure.</li>\n<li>Experience with highly-available and distributed databases: e.g. ScyllaDB, Cassandra, BigTable, DynamoDB, CockroachDB, Postgres w/HA, etc.</li>\n<li>Proficiency with at least one statically-typed programming language: e.g. Rust, Go, Java, C, C++</li>\n<li>Strong operating systems, distributed systems, and concurrency control fundamentals.</li>\n<li>Familiarity with Linux internals.</li>\n<li>Comfortable working in fast-paced environments.</li>\n</ul>\n<p>Bonus Points:</p>\n<ul>\n<li>Experience with Cassandra or Scylla.</li>\n<li>Experience with Rust.</li>\n<li>Knowledge of DevOps tools like Salt, Terraform, or Kubernetes.</li>\n</ul>\n<p>Why Discord?</p>\n<p>Discord plays a uniquely important role in the future of gaming. 
We&#39;re a multi-platform, multi-generational, and multiplayer platform that helps people deepen their friendships around games and shared interests.</p>\n<p>We believe games give us a way to have fun with our favorite people, whether listening to music together or grinding in competitive matches for diamond rank.</p>\n<p>Join us in our mission!</p>\n<p>Your future is just a click away!</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_230b25df-0f4","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Discord","sameAs":"https://discord.com/","logo":"https://logos.yubhub.co/discord.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/discord/jobs/8200328002","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$196,000 to $220,500 + equity + benefits","x-skills-required":["ScyllaDB","PostgreSQL","ElasticSearch","Linux","Rust","Distributed systems","Datastore infrastructure","Highly-available and distributed databases","Operating systems","Concurrency control fundamentals","Linux internals"],"x-skills-preferred":["Cassandra","Go","Java","C","C++","DevOps tools","Salt","Terraform","Kubernetes"],"datePosted":"2026-04-18T15:57:32.475Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco Bay Area"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"ScyllaDB, PostgreSQL, ElasticSearch, Linux, Rust, Distributed systems, Datastore infrastructure, Highly-available and distributed databases, Operating systems, Concurrency control fundamentals, Linux internals, Cassandra, Go, Java, C, C++, DevOps tools, Salt, Terraform, 
Kubernetes","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":196000,"maxValue":220500,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_8482d0fc-285"},"title":"Senior Backend Engineer, Gitlab Delivery: Upgrades","description":"<p>As a Senior Backend Engineer on the GitLab Upgrades team, you&#39;ll help self-managed customers run GitLab reliably by building and maintaining the infrastructure, tooling, and automation behind our deployment options.</p>\n<p>You&#39;ll work across Omnibus GitLab, GitLab Helm Charts, the GitLab Environment Toolkit (Get), and the GitLab Operator to make GitLab easier to deploy, more secure by default, and scalable across major cloud providers and a wide range of customer environments.</p>\n<p>In this role, you&#39;ll partner closely with engineering teams and act as a bridge to customer needs, improving installation, upgrade, and day-to-day operations for production-grade GitLab deployments.</p>\n<p>Some examples of our projects:</p>\n<ul>\n<li>Evolving Omnibus GitLab, Helm Charts, GET, and the GitLab Operator to support validated reference architectures for enterprise-scale deployments</li>\n</ul>\n<ul>\n<li>Building automation pipelines and observability into deployment tooling to validate, test, and operate GitLab across Kubernetes and other self-managed environments</li>\n</ul>\n<p>You&#39;ll maintain and evolve the Omnibus GitLab package to support reliable, production-ready self-managed deployments, improving deployment stability, increasing upgrade success rates, and reducing escalation rates.</p>\n<p>You&#39;ll develop and improve GitLab Helm Charts so core components integrate cleanly and scale across supported environments, reducing deployment friction, shortening time to deploy, and improving operational consistency at scale.</p>\n<p>You&#39;ll enhance the GitLab Environment Toolkit 
(Get), validated reference architectures, and the GitLab Operator for secure, Kubernetes-native lifecycle management, improving reliability, strengthening security baselines, and accelerating adoption in customer environments.</p>\n<p>You&#39;ll improve installation, upgrade, and operational workflows across deployment methods to create a consistent experience for self-managed customers, reducing operational overhead, lowering failure rates, and increasing consistency across deployment methods.</p>\n<p>You&#39;ll partner with Security to address vulnerabilities and deliver secure defaults and configurations in the deployment stack, reducing exposure to vulnerabilities and improving baseline security across self-managed deployments.</p>\n<p>You&#39;ll build and maintain automation and continuous integration and continuous delivery pipelines that validate and test Omnibus, Charts, Get, and the Operator, increasing release confidence, improving test coverage, and reducing regressions across deployment tooling.</p>\n<p>You&#39;ll work closely with Distribution Engineers, Site Reliability Engineers, Release Managers, and Development teams to integrate new features into deployment methods and keep them reliable, scalable, and aligned with customer needs, improving delivery readiness and reducing operational issues after release.</p>\n<p>You&#39;ll guide architectural direction, mentor backend engineers, and contribute to the roadmap for self-managed delivery, improving technical quality, accelerating delivery effectiveness, and strengthening team execution.</p>\n<p>You&#39;ll have experience operating backend services in production, including deployment, monitoring, and maintenance in Kubernetes- and Helm-based environments.</p>\n<p>You&#39;ll have proficiency in Go for building observable and resilient services, with working knowledge of Ruby as a useful addition.</p>\n<p>You&#39;ll have hands-on practice with infrastructure as code, including tools such as Terraform, 
and with managing infrastructure across cloud providers such as Google Cloud Platform, Amazon Web Services, or Microsoft Azure.</p>\n<p>You&#39;ll have knowledge of database design, operations, and troubleshooting, especially for PostgreSQL in secure and scalable setups.</p>\n<p>You&#39;ll have knowledge of secure, scalable, and reliable deployment practices, including service scaling and rollout strategies.</p>\n<p>You&#39;ll have familiarity with observability tools and patterns such as Prometheus and Grafana to monitor system health and performance.</p>\n<p>You&#39;ll have the ability to work effectively in large codebases and coordinate across distributed, cross-functional teams using clear written communication.</p>\n<p>You&#39;ll have openness to transferable experience from related backend or infrastructure roles, along with the ability to write user-focused documentation and implementation guides.</p>\n<p>The Upgrades team is part of GitLab Delivery and focuses on helping self-managed customers run GitLab successfully in their own environments, from smaller deployments to large enterprise footprints.</p>\n<p>We own deployment and operational tooling across our work on Omnibus GitLab, Helm Charts, GET, and the GitLab Operator, and we are a globally distributed, all-remote group that works asynchronously with Site Reliability Engineering, Release, Security, and Development teams across regions.</p>\n<p>We are focused on making self-managed GitLab easier to deploy, upgrade, secure, and operate at scale.</p>\n<p>For more on how we work, see Team Handbook Page.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_8482d0fc-285","directApply":true,"hiringOrganization":{"@type":"Organization","name":"GitLab","sameAs":"https://about.gitlab.com/","logo":"https://logos.yubhub.co/about.gitlab.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/gitlab/jobs/8463933002","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Go","Ruby","Terraform","Google Cloud Platform","Amazon Web Services","Microsoft Azure","PostgreSQL","Prometheus","Grafana"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:57:31.988Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Remote, India"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Go, Ruby, Terraform, Google Cloud Platform, Amazon Web Services, Microsoft Azure, PostgreSQL, Prometheus, Grafana"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_ae6df2c2-eb1"},"title":"DevOps Engineer, Infrastructure & Security","description":"<p>As a DevOps Engineer, Infrastructure &amp; Security at Scale, you will play a crucial role in building out and enhancing our CI/CD pipelines. Our product portfolio and customer base are expanding, and we need skilled engineers to streamline our Software Development Life Cycle (SDLC) through collaborative efforts.</p>\n<p>You will design, develop, and maintain robust CI/CD pipelines to automate the deployment of our lowside and highside products. You will collaborate closely with product and engineering teams to enhance existing application code for improved compatibility and streamlined integration within automated pipelines.</p>\n<p>Contribute to the overall architecture and design of our deployment systems, bringing new ideas to life for increased efficiency and reliability. 
Troubleshoot and resolve complex deployment issues, ensuring minimal disruption to development cycles.</p>\n<p>Develop a deep understanding of our product and ML architectures to facilitate seamless integration and deployment. Document pipeline processes and configurations to ensure maintainability and knowledge transfer.</p>\n<p>Proactively incorporate security best practices into all stages of the CI/CD pipeline, building security into our development processes. Drive standardization and foster collaboration across different product teams to achieve a unified and efficient SDLC.</p>\n<p>We are looking for experienced DevOps Engineers, DevSecOps Engineers, Software Engineers with a strong focus on CI/CD, or a similar role. You should have a proven track record of building or significantly enhancing CI/CD pipelines.</p>\n<p>Experience configuring and adapting application code to integrate seamlessly with evolving CI/CD environments is a plus. Familiarity with standard containerization &amp; deployment technologies like Kubernetes, Terraform, Docker, etc. is also required.</p>\n<p>We offer a competitive salary range of $245,600-$307,000 USD, comprehensive health, dental and vision coverage, retirement benefits, a learning and development stipend, and generous PTO. 
This role may be eligible for additional benefits such as a commuter stipend.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_ae6df2c2-eb1","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Scale","sameAs":"https://www.scale.com/","logo":"https://logos.yubhub.co/scale.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/scaleai/jobs/4674863005","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$245,600-$307,000 USD","x-skills-required":["CI/CD","Kubernetes","Terraform","Docker","Python","Bash","PowerShell","Jenkins","GitLab CI","GitHub Actions","Azure DevOps","AWS","Azure","GCP","Security best practices"],"x-skills-preferred":["Containerization technologies","Machine learning lifecycles","MLOps concepts","Prior experience in classified environments"],"datePosted":"2026-04-18T15:57:24.917Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Washington, DC"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"CI/CD, Kubernetes, Terraform, Docker, Python, Bash, PowerShell, Jenkins, GitLab CI, GitHub Actions, Azure DevOps, AWS, Azure, GCP, Security best practices, Containerization technologies, Machine learning lifecycles, MLOps concepts, Prior experience in classified environments","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":245600,"maxValue":307000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_8f706224-663"},"title":"Specialist Solutions Architect - Cloud Infrastructure & Security","description":"<p>As a Specialist Solutions Architect (SSA) - Cloud Infrastructure &amp; Security, you will guide customers in the administration and 
security of their Databricks deployments.</p>\n<p>You will be in a customer-facing role, working with and supporting Solution Architects, which requires hands-on production experience with public cloud - AWS, Azure, and GCP.</p>\n<p>SSAs help customers with the design and successful implementation of essential workloads while aligning their technical roadmap to expand the use of the Databricks Platform.</p>\n<p>As a deep go-to-expert reporting to the Specialist Field Engineering Manager, you will continue to strengthen your technical skills through mentorship, learning, and internal training programs and establish yourself in an area of specialty - whether that be cloud deployments, security, networking, or more.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Provide technical leadership to guide strategic customers to the successful administration of Databricks, ranging from design to deployment</li>\n</ul>\n<ul>\n<li>Architect production-level deployments, including meeting necessary security and networking requirements</li>\n</ul>\n<ul>\n<li>Become a technical expert in an area such as cloud platforms, automation, security, networking, or identity management</li>\n</ul>\n<ul>\n<li>Assist Solution Architects with more advanced aspects of the technical sale including custom proof of concept content and custom architectures</li>\n</ul>\n<ul>\n<li>Provide tutorials and training to improve community adoption (including hackathons and conference presentations)</li>\n</ul>\n<ul>\n<li>Contribute to the Databricks Community</li>\n</ul>\n<p>Requirements:</p>\n<ul>\n<li>5+ years of experience in a technical role with expertise in at least one of the following:</li>\n</ul>\n<ul>\n<li>Cloud Platforms &amp; Architecture: Cloud Native Architecture in CSPs such as AWS, Azure, and GCP, Serverless Architecture</li>\n</ul>\n<ul>\n<li>Security: Platform security, Network security, Data Security, Gen AI &amp; Model Security, Encryption, Vulnerability Management, 
Compliance</li>\n</ul>\n<ul>\n<li>Networking: Architecture design, implementation, and performance</li>\n</ul>\n<ul>\n<li>Identity management: Provisioning, SCIM, OAuth, SAML, Federation</li>\n</ul>\n<ul>\n<li>Platform Administration: High availability and disaster recovery, cluster management, observability, logging, monitoring, audit, cost management</li>\n</ul>\n<ul>\n<li>Infrastructure Automation and InfraOps with IaC tools like Terraform</li>\n</ul>\n<ul>\n<li>Maintain and extend the Databricks environment to adapt to evolving complex needs.</li>\n</ul>\n<ul>\n<li>Deep Specialty Expertise in at least one of the following areas:</li>\n</ul>\n<ul>\n<li>Security - understanding how to secure data platforms and manage identities</li>\n</ul>\n<ul>\n<li>Complex deployments</li>\n</ul>\n<ul>\n<li>Public Cloud experience - experience designing data platforms on cloud infrastructure and services, such as AWS, Azure, or GCP, using best practices in cloud security and networking.</li>\n</ul>\n<ul>\n<li>Bachelor&#39;s degree in Computer Science, Information Systems, Engineering, or equivalent practical work experience.</li>\n</ul>\n<ul>\n<li>Hands-on experience with Python, Java, or Scala, proficiency in SQL, and Terraform experience are desirable.</li>\n</ul>\n<ul>\n<li>2 years of professional experience with Big Data technologies (Ex: Spark, Hadoop, Kafka) and architectures</li>\n</ul>\n<ul>\n<li>2 years of customer-facing experience in a pre-sales or post-sales role</li>\n</ul>\n<ul>\n<li>Can meet expectations for technical training and role-specific outcomes within 6 months of hire</li>\n</ul>\n<ul>\n<li>This role can be remote, but we prefer that you be located in the job listing area and can travel up to 30% when needed.</li>\n</ul>\n<p>Pay Range Transparency:</p>\n<p>Databricks is committed to fair and equitable compensation practices. 
The pay range(s) for this role is listed below and represents the expected salary range for non-commissionable roles or on-target earnings for commissionable roles. Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location. Based on the factors above, Databricks anticipates utilizing the full width of the range. The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above.</p>\n<p>Zone 2 Pay Range $264,000-$363,000 USD</p>\n<p>Zone 3 Pay Range $264,000-$363,000 USD</p>\n<p>Zone 4 Pay Range $264,000-$363,000 USD</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_8f706224-663","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8477197002","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$264,000-$363,000 USD","x-skills-required":["Cloud Platforms & Architecture","Security","Networking","Platform Administration","Infrastructure Automation and InfraOps","Big Data technologies","Cloud Native Architecture","Serverless Architecture","Gen AI & Model Security","Encryption","Vulnerability Management","Compliance","SCIM","OAuth","SAML","Federation","High availability and disaster recovery","Cluster management","Observability","Logging","Monitoring","Audit","Cost management","Terraform"],"x-skills-preferred":["Python","Java","Scala","SQL","Terraform 
experience"],"datePosted":"2026-04-18T15:56:46.870Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Central - United States"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Cloud Platforms & Architecture, Security, Networking, Platform Administration, Infrastructure Automation and InfraOps, Big Data technologies, Cloud Native Architecture, Serverless Architecture, Gen AI & Model Security, Encryption, Vulnerability Management, Compliance, SCIM, OAuth, SAML, Federation, High availability and disaster recovery, Cluster management, Observability, Logging, Monitoring, Audit, Cost management, Terraform, Python, Java, Scala, SQL, Terraform experience","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":264000,"maxValue":363000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_0ed46937-df6"},"title":"Staff Developer Success Engineer - West","description":"<p>We&#39;re looking for a Staff Developer Success Engineer to join our team. As a frontline technical expert for our developer community, you will help users deploy and scale Temporal in cloud-native environments. You will also troubleshoot complex infrastructure issues, optimize performance, and develop automation solutions.</p>\n<p>At Temporal, you&#39;ll work with cloud-native, highly scalable infrastructure spanning AWS, GCP, Kubernetes, and microservices. You&#39;ll gain deep expertise in container orchestration, networking, and observability while learning from complex, real-world customer use cases.</p>\n<p>As a Staff Developer Success Engineer, you&#39;ll work directly with developers to debug complex infrastructure issues, optimize cloud performance, and enhance reliability for Temporal users. 
You&#39;ll develop observability solutions (Grafana, Prometheus), improve networking (load balancing, DNS, ingress/egress), and automate infrastructure operations (Terraform, IaC) to help customers run Temporal efficiently at scale.</p>\n<p>Once ramped up, we expect you to independently drive technical solutions, whether debugging complex production issues or designing infrastructure best practices. Don&#39;t worry, we have seasoned engineers and mentors to support you along the way!</p>\n<p>As a Staff Developer Success Engineer you will engage directly with developers, engineering teams, and product teams to understand infrastructure challenges and provide solutions that enhance scalability, performance, and reliability.</p>\n<p>Your insights will influence platform improvements, from enhancing observability tooling to developing self-service infrastructure solutions that simplify troubleshooting (e.g., building diagnostic tools similar to Twilio’s Network Test).</p>\n<p>You’ll serve as a bridge between developers and infrastructure, ensuring that reliability, performance, and developer experience remain top priorities as Temporal scales.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_0ed46937-df6","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Temporal","sameAs":"https://temporal.io/","logo":"https://logos.yubhub.co/temporal.io.png"},"x-apply-url":"https://job-boards.greenhouse.io/temporaltechnologies/jobs/5076742007","x-work-arrangement":"remote","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"$170,000 - $215,000","x-skills-required":["cloud-native infrastructure","container orchestration","networking","observability","infrastructure automation","Terraform","IaC","Kubernetes","AWS","GCP","Python","Java","Go","Grafana","Prometheus"],"x-skills-preferred":["security certificate 
management","security implementation","use case analysis","Temporal design decisions","architecture best practices","EKS","GKE","OpenTracing","Ansible","CDK"],"datePosted":"2026-04-18T15:56:34.606Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"United States - Remote Opportunity"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"cloud-native infrastructure, container orchestration, networking, observability, infrastructure automation, Terraform, IaC, Kubernetes, AWS, GCP, Python, Java, Go, Grafana, Prometheus, security certificate management, security implementation, use case analysis, Temporal design decisions, architecture best practices, EKS, GKE, OpenTracing, Ansible, CDK","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":170000,"maxValue":215000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_f24aa64a-8e9"},"title":"DevOps Engineer, GPS","description":"<p>As a DevOps Engineer, you will design and develop core platforms and software systems, while supporting orchestration, data abstraction, data pipelines, identity &amp; access management, security tools, and underlying cloud infrastructure.</p>\n<p>You will:</p>\n<ul>\n<li>Backend Development and System Ownership: Design and implement secure, scalable backend systems for customers using modern, cloud-native AI infrastructure. Own services or systems, define long-term health goals, and improve the health of surrounding components.</li>\n</ul>\n<ul>\n<li>Collaboration and Standards: Collaborate with cross-functional teams to define and execute backend and infrastructure solutions tailored for secure environments. 
Enhance engineering standards, tooling, and processes to maintain high-quality outputs.</li>\n</ul>\n<ul>\n<li>Infrastructure Automation and Management: Write, maintain, and enhance Infrastructure as Code templates (e.g., Terraform, CloudFormation) for automated provisioning and management. Manage networking architecture, including secure VPCs, VPNs, load balancers, and firewalls, in cloud environments.</li>\n</ul>\n<ul>\n<li>Deployment and Scalability: Design and optimize CI/CD pipelines for efficient testing, building, and deployment processes. Scale and optimize containerized applications using orchestration platforms like Kubernetes to ensure high availability and reliability.</li>\n</ul>\n<ul>\n<li>Disaster Recovery and Hybrid Strategies: Develop and test disaster recovery plans with robust backups and failover mechanisms. Design and implement hybrid and multi-cloud strategies to support workloads across on-premises and multiple cloud providers.</li>\n</ul>\n<p>Our ideal candidate has a strong engineering background, with a Bachelor’s degree in Computer Science, Mathematics, or a related quantitative field (or equivalent practical experience), and 5+ years of post-graduation engineering experience, with a focus on back-end systems and proficiency in at least one of Python, Typescript, Javascript, or C++.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_f24aa64a-8e9","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Scale","sameAs":"https://www.scale.com/","logo":"https://logos.yubhub.co/scale.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/scaleai/jobs/4613839005","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Backend Development","System Ownership","Infrastructure Automation","Deployment and Scalability","Disaster Recovery and 
Hybrid Strategies","Cloud-Native AI Infrastructure","Terraform","CloudFormation","Kubernetes","Python","Typescript","Javascript","C++"],"x-skills-preferred":["Collaboration and Standards","Networking Architecture","CI/CD Pipelines","Containerized Applications","Orchestration Platforms","Data Abstraction","Data Pipelines","Identity & Access Management","Security Tools"],"datePosted":"2026-04-18T15:56:30.346Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Doha, Qatar"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Backend Development, System Ownership, Infrastructure Automation, Deployment and Scalability, Disaster Recovery and Hybrid Strategies, Cloud-Native AI Infrastructure, Terraform, CloudFormation, Kubernetes, Python, Typescript, Javascript, C++, Collaboration and Standards, Networking Architecture, CI/CD Pipelines, Containerized Applications, Orchestration Platforms, Data Abstraction, Data Pipelines, Identity & Access Management, Security Tools"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_8447826b-717"},"title":"Senior Systems Integration Engineer","description":"<p>EarnIn is scaling its systems, automations, and data capabilities to power its people and protect its information. 
As a Senior Systems Integration Engineer, you will be a hands-on technical lead focused on Python-driven automation, building systems integrations between HRIS, Identity Provider, SaaS, and Finance Platform, and transforming operational data into actionable insights and dashboards.</p>\n<p>Key Responsibilities:</p>\n<ul>\n<li>Design, build, and maintain production-grade automations and internal tools in Python to eliminate manual work across identity, endpoint, and SaaS operations.</li>\n<li>Develop resilient API integrations and event-driven workflows (webhooks, queues) with robust error handling, retries, and observability; package reusable libraries and CLIs that standardize how IT automates.</li>\n<li>Codify repeatable infrastructure with Terraform; manage changes via Git and CI/CD (e.g., GitHub Actions).</li>\n</ul>\n<ul>\n<li>Build and operate integrations between HRIS/IdP/SaaS and financial platforms (e.g., NetSuite, Carta, Expensify), ensuring data quality, lineage, and reconciliation across systems.</li>\n<li>Create and maintain lightweight services that normalize and enrich data flows to power business intelligence and compliance reporting (Tableau/Power BI/Looker Studio).</li>\n</ul>\n<ul>\n<li>Define KPIs/SLIs/SLOs for core IT services (availability, compliance, MTTR, deflection, time-to-productive-employee) and implement monitoring/alerting.</li>\n<li>Build data warehouses (e.g., Databricks) and write SQL against them (e.g., BigQuery) and build self-serve dashboards for IT, Security, Finance, People Ops, and Engineering; instrument pipelines for accuracy and freshness.</li>\n</ul>\n<ul>\n<li>Deliver repeatable, audit-ready evidence for controls via dashboards and scheduled reports.</li>\n</ul>\n<ul>\n<li>Evaluate and deploy AI tools with guardrails to boost IT productivity; automate helpdesk workflows (triage, summarization, routing, knowledge search).</li>\n<li>Define and track value metrics (adoption, deflection, CSAT, MTTR, time saved); iterate 
based on experiments and user feedback.</li>\n</ul>\n<ul>\n<li>Implement and sustain controls mapped to SOC 2 and PCI (as applicable) with repeatable evidence collection.</li>\n<li>Define and review SLIs/SLOs; add monitoring/alerting, config drift detection, and incident runbooks.</li>\n</ul>\n<ul>\n<li>Lead cross-functional projects with Security, People Ops, Finance, and Engineering, from design through steady state.</li>\n<li>Mentor junior engineers through design and code reviews; publish clear documentation that makes the reliable path the easy path.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_8447826b-717","directApply":true,"hiringOrganization":{"@type":"Organization","name":"EarnIn","sameAs":"https://www.earnin.com/","logo":"https://logos.yubhub.co/earnin.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/earnin/jobs/7703637","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Python","API/OpenAPI","event-driven workflows","SQL","Infrastructure as Code (Terraform)","Git-based change management","security mindset"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:56:30.147Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Remote, Mexico"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, API/OpenAPI, event-driven workflows, SQL, Infrastructure as Code (Terraform), Git-based change management, security mindset"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_c7de81b4-bec"},"title":"Security Engineer, Infrastructure","description":"<p>We are seeking a highly skilled Infrastructure Security Engineer to join our team. 
This role is integral to ensuring the security and integrity of our platform.</p>\n<p>You will be responsible for securing large cloud environments, orchestrating and securing various compute clusters, and reviewing infrastructure as code. Your expertise in cloud security, infrastructure automation, and advanced security practices will be essential in maintaining and enhancing our security posture.</p>\n<p>Key responsibilities include:</p>\n<ul>\n<li>Securing infrastructure across large cloud hosting providers (e.g., AWS, Azure, GCP).</li>\n<li>Implementing and maintaining robust security configurations and policies for cloud environments.</li>\n<li>Conducting regular security assessments and audits of infrastructure to identify vulnerabilities and areas for improvement.</li>\n<li>Developing and enforcing security best practices for infrastructure automation and orchestration.</li>\n<li>Collaborating with Developer Experience, IT, and product teams to integrate security into all stages of the infrastructure lifecycle.</li>\n<li>Reviewing and securing infrastructure as code (e.g., Terraform, CloudFormation).</li>\n<li>Educating and mentoring team members on infrastructure security best practices and emerging threats.</li>\n</ul>\n<p>Ideally, you&#39;d have:</p>\n<ul>\n<li>Proven experience as a Security Engineer with a focus on product security.</li>\n<li>Proficiency in NodeJS, TypeScript, and Kubernetes.</li>\n<li>Experience with orchestrating and securing GPU clusters.</li>\n<li>Proficiency in infrastructure as code tools such as Terraform and CloudFormation.</li>\n<li>Excellent communication skills, with the ability to clearly explain technical concepts and their implications to both technical and non-technical stakeholders.</li>\n<li>Demonstrated ability to influence security strategies and drive improvements within an organisation.</li>\n<li>Relevant security certifications (e.g., AWS Certified Security Specialty, Certified Cloud Security Professional) are a 
plus.</li>\n<li>Experience in a senior or lead security role is preferred.</li>\n</ul>\n<p>Compensation packages at Scale for eligible roles include base salary, equity, and benefits. The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position, determined by work location and additional factors, including job-related skills, experience, interview performance, and relevant education or training.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_c7de81b4-bec","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Scale","sameAs":"https://www.scale.com/","logo":"https://logos.yubhub.co/scale.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/scaleai/jobs/4646888005","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$237,600-$297,000 USD","x-skills-required":["cloud security","infrastructure automation","advanced security practices","NodeJS","TypeScript","Kubernetes","Terraform","CloudFormation"],"x-skills-preferred":["orchestrating and securing GPU clusters","relevant security certifications"],"datePosted":"2026-04-18T15:56:27.426Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"New York, NY; San Francisco, CA; Seattle, WA; Washington, DC"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"cloud security, infrastructure automation, advanced security practices, NodeJS, TypeScript, Kubernetes, Terraform, CloudFormation, orchestrating and securing GPU clusters, relevant security 
certifications","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":237600,"maxValue":297000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_3e231b3e-949"},"title":"Forward Deployed AI Engineering Manager, Enterprise","description":"<p>As a Forward Deployed AI Engineering Manager on our Enterprise team, you&#39;ll be the technical bridge between Scale AI&#39;s cutting-edge AI capabilities and our most strategic customers.</p>\n<p>You&#39;ll work with enterprise clients to understand their unique challenges, lead a team that architects specific AI solutions, and ensure successful deployment and adoption of AI systems in production environments.</p>\n<p>This is a Management role that combines deep engineering and AI expertise, leading a team, and working on customer-facing problems. You&#39;ll work directly with customer engineering teams to integrate AI into their critical workflows.</p>\n<p><strong>Customer Integration &amp; Deployment</strong></p>\n<p>Partner directly with enterprise customers to understand their technical infrastructure, data pipelines, and business requirements.</p>\n<p>Design and implement custom integrations between Scale AI&#39;s platform and customer data environments (cloud platforms, data warehouses, internal APIs).</p>\n<p>Build robust data connectors and ETL pipelines to ingest, process, and prepare customer data for AI workflows.</p>\n<p>Deploy and configure AI models and agents within customer security and compliance boundaries.</p>\n<p><strong>AI Agent Development</strong></p>\n<p>Develop production-grade AI agents tailored to customer use cases across domains like customer support, data analysis, content generation, and workflow automation.</p>\n<p>Architect multi-agent systems that orchestrate between different models, tools, and data sources.</p>\n<p>Implement evaluation frameworks to 
measure agent performance and iterate toward business objectives.</p>\n<p>Design human-in-the-loop workflows and feedback mechanisms for continuous agent improvement.</p>\n<p><strong>Prompt Engineering &amp; Optimization</strong></p>\n<p>Create sophisticated prompt engineering strategies optimized for customer-specific domains and data.</p>\n<p>Build and maintain prompt libraries, templates, and best practices for customer use cases.</p>\n<p>Conduct systematic prompt experimentation and A/B testing to improve model outputs.</p>\n<p>Implement RAG (Retrieval Augmented Generation) systems and fine-tuning pipelines where appropriate.</p>\n<p><strong>Leadership &amp; Collaboration</strong></p>\n<p>Serve as the Engineering Manager and technical point of contact for strategic enterprise accounts.</p>\n<p>Lead a team that is collaborating with customer data scientists, ML engineers, and software developers to ensure smooth integration.</p>\n<p>Work closely with Scale&#39;s product and engineering teams to translate customer needs into product improvements.</p>\n<p>Document technical architectures, integration patterns, and best practices.</p>\n<p><strong>Problem Solving &amp; Innovation</strong></p>\n<p>Debug complex technical issues across the entire stack, from data pipelines to model outputs.</p>\n<p>Rapidly prototype solutions to unblock customers and prove out new use cases.</p>\n<p>Stay current on the latest AI/ML research and tools, bringing innovative approaches to customer problems.</p>\n<p>Identify opportunities for productization based on common customer patterns.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_3e231b3e-949","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Scale 
AI","sameAs":"https://scale.com/","logo":"https://logos.yubhub.co/scale.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/scaleai/jobs/4602177005","x-work-arrangement":"hybrid","x-experience-level":"executive","x-job-type":"full-time","x-salary-range":"$216,000-$270,000 USD","x-skills-required":["Python","Production","Data Structures","Algorithms","System Design","Cloud Platforms","Modern Data Infrastructure","Problem-Solving","Communication"],"x-skills-preferred":["LLMs","Prompting Techniques","Embeddings","RAG Architectures","Vector Databases","Semantic Search Systems","Containerization","CI/CD Pipelines","Terraform","Bicep","Infrastructure as Code"],"datePosted":"2026-04-18T15:56:13.908Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA; New York, NY"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, Production, Data Structures, Algorithms, System Design, Cloud Platforms, Modern Data Infrastructure, Problem-Solving, Communication, LLMs, Prompting Techniques, Embeddings, RAG Architectures, Vector Databases, Semantic Search Systems, Containerization, CI/CD Pipelines, Terraform, Bicep, Infrastructure as Code","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":216000,"maxValue":270000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_16599c27-a87"},"title":"Senior Infrastructure Engineer/SRE","description":"<p>We&#39;re on a mission to revolutionize the workforce with AI. As a member of the infrastructure team, you&#39;ll design, build, and advance our core infrastructure that allows the engineering team to execute quickly, productively, and securely.</p>\n<p>You&#39;ll partner with engineers to build dev tools that empower developer workflows and deployment infrastructure. 
Ensure reliability of multi-cloud Kubernetes clusters and pipelines. Implement metrics, logging, analytics, and alerting for performance and security across all endpoints and applications. Automate operations and engineering so we can spend energy where it matters.</p>\n<p>You&#39;ll also build machine learning infrastructure that enables AI teams to train, test, and deploy on large-scale datasets.</p>\n<p>We&#39;re looking for someone with 5+ years of experience in DevOps, Site Reliability Engineering, Production Engineering, or an equivalent field. You should have deep proficiency with coding languages such as Golang or Python, and deep familiarity with container-related security best practices. You should also have production experience with Kubernetes and a deep understanding of the Kubernetes ecosystem, including popular open-source tooling such as cert-manager or external-dns. Experience with GPU-enabled clusters is a bonus.</p>\n<p>Perks &amp; Benefits:</p>\n<ul>\n<li>Comprehensive medical, dental, and vision coverage with plans to fit you and your family</li>\n<li>Flexible PTO to take the time you need, when you need it</li>\n<li>Paid parental leave for all new parents welcoming a new child</li>\n<li>Retirement savings plan to help you plan for the future</li>\n<li>Remote work setup budget to help you create a productive home office</li>\n<li>Monthly wellness and communication stipend to keep you connected and balanced</li>\n<li>In-office meal program and commuter benefits provided for onsite employees</li>\n</ul>\n<p>Compensation at Cresta:</p>\n<p>Cresta&#39;s approach to compensation is simple: recognize impact, reward excellence, and invest in our people. We offer competitive, location-based pay that reflects the market and what each individual brings to the table. The posted base salary range represents what we expect to pay for this role in a given location. Final offers are shaped by factors like experience, skills, education, and geography.
In addition to base pay, total compensation includes equity and a comprehensive benefits package for you and your family.</p>\n<p>OTE Range: $205,000–$270,000 + Offers Equity</p>","url":"https://yubhub.co/jobs/job_16599c27-a87","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Cresta","sameAs":"https://www.cresta.ai/","logo":"https://logos.yubhub.co/cresta.ai.png"},"x-apply-url":"https://job-boards.greenhouse.io/cresta/jobs/5137153008","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$205,000–$270,000","x-skills-required":["Golang","Python","Kubernetes","cert-manager","external-dns","GPU-enabled clusters","Terraform","CloudFormation","AWS","IAM","S3","EC2","EKS","PostgreSQL","GitOps","Flux","Argo","CI/CD","GitHub Actions"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:55:52.459Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"United States (Remote)"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Golang, Python, Kubernetes, cert-manager, external-dns, GPU-enabled clusters, Terraform, CloudFormation, AWS, IAM, S3, EC2, EKS, PostgreSQL, GitOps, Flux, Argo, CI/CD, GitHub Actions","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":205000,"maxValue":270000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_ccb5daf2-354"},"title":"Sr. ML Ops Engineer, tvScientific","description":"<p>We&#39;re looking for a Senior MLOps Engineer to join our distributed engineering team on our Connected TV ad-buying platform.
As a Senior MLOps Engineer, you will be responsible for scaling the decision-making process for tools for the tvScientific AI team, improving the developer experience for the data science team, upgrading our observability tooling, serving as a technical lead and mentor to the team, and making every deployment smooth as our infrastructure evolves.</p>\n<p>Key responsibilities include:</p>\n<ul>\n<li>Scaling the decision-making process for tools for the tvScientific AI team, from our workflows to our training infrastructure to our Kubernetes deployments</li>\n<li>Improving the developer experience for the data science team</li>\n<li>Upgrading our observability tooling</li>\n<li>Serving as a technical lead and mentor to the team</li>\n<li>Making every deployment smooth as our infrastructure evolves</li>\n</ul>\n<p>Requirements include:</p>\n<ul>\n<li>Deep understanding of Linux</li>\n<li>Excellent writing skills</li>\n<li>A systems-oriented mindset</li>\n<li>Experience in high-performance software (RTB, HFT, etc.)</li>\n<li>Software engineering experience + reliability (e.g. 
CI/CD) expertise</li>\n<li>Strong observability instincts</li>\n<li>Demonstrated ability to use AI to improve speed and quality in your day-to-day workflow for relevant outputs</li>\n<li>Strong track record of critical evaluation and verification of AI-assisted work (e.g., testing, source-checking, data validation, peer review)</li>\n<li>High integrity and ownership: you protect sensitive data, avoid over-reliance on AI, and remain accountable for final decisions and deliverables</li>\n</ul>\n<p>Nice-to-haves include:</p>\n<ul>\n<li>Reverse-engineering experience</li>\n<li>Terraform, EKS, or MLOps experience</li>\n<li>Python, Scala, or Zig experience</li>\n<li>NixOS experience</li>\n<li>Adtech or CTV experience</li>\n<li>Experience deploying a distributed system across multiple clouds</li>\n<li>Experience in hard real-time low-latency</li>\n</ul>","url":"https://yubhub.co/jobs/job_ccb5daf2-354","directApply":true,"hiringOrganization":{"@type":"Organization","name":"tvScientific","sameAs":"https://www.tvscientific.com/","logo":"https://logos.yubhub.co/tvscientific.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/pinterest/jobs/7642249","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$155,584-$320,320 USD","x-skills-required":["Linux","writing skills","systems-oriented mindset","high-performance software","software engineering","reliability","observability","AI","critical evaluation","verification","data protection","data validation","peer review"],"x-skills-preferred":["reverse-engineering","Terraform","EKS","MLOps","Python","Scala","Zig","NixOS","adtech","CTV","distributed system","hard real-time low-latency"],"datePosted":"2026-04-18T15:55:03.102Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA, US; Remote,
US"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Linux, writing skills, systems-oriented mindset, high-performance software, software engineering, reliability, observability, AI, critical evaluation, verification, data protection, data validation, peer review, reverse-engineering, Terraform, EKS, MLOps, Python, Scala, Zig, NixOS, adtech, CTV, distributed system, hard real-time low-latency","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":155584,"maxValue":320320,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_86dc459d-a0f"},"title":"Senior Software Engineer, Platform as a Service","description":"<p>We are seeking a technical, hands-on, empathetic senior software engineer to help define and deliver our Platform as a Service (PAAS) mission. As a senior engineer on the PAAS team, you will collaborate with the team to deliver forward-looking, customer-centric tooling. 
Your expertise in building and using best-in-class infrastructure tools will equip our engineering organisation with tools to move quickly and deliver features that bring millions of people together.</p>\n<p>Key responsibilities include:</p>\n<ul>\n<li>Working with customer engineering teams to ensure we’re building solutions that developers love using day-in and day-out</li>\n<li>Collaborating with the Internal Development Experience (IDX) team to ensure an easy path to go from development through staging into production</li>\n<li>Working with the Platform Security team in order to secure every path to production</li>\n<li>Shipping Rust code to YAY, our in-house deployment tooling built around Google Kubernetes Engine and Temporal</li>\n<li>Exposing the full flexibility of Kubernetes for users while abstracting the complexities away</li>\n<li>Building tools to manage the configuration, observability, and scaling characteristics of our infrastructure</li>\n</ul>\n<p>Requirements include:</p>\n<ul>\n<li>5+ years of experience in software development with a focus on tooling, infrastructure, and automation</li>\n<li>Experience working in multi-milestone and even multi-quarter projects</li>\n<li>Expertise and empathy when troubleshooting issues with customer engineering teams</li>\n<li>Expertise using and building upon the primitives of standard cloud infrastructure tooling like Kubernetes, Docker</li>\n<li>Experience developing in cloud-based environments (we use Google Cloud; knowledge of Amazon Web Services and/or Azure also great!)</li>\n<li>Experience with infrastructure-as-code tooling (we use Terraform)</li>\n</ul>\n<p>Bonus points for experience with CI, build, and deployment technologies like Buildkite, Bazel, and Terraform, as well as cloud networking tools like istio, envoy, etc. 
and application observability tools like Datadog and/or Sentry.</p>","url":"https://yubhub.co/jobs/job_86dc459d-a0f","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Discord","sameAs":"https://discord.com","logo":"https://logos.yubhub.co/discord.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/discord/jobs/8409021002","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$196,000 to $220,500 + equity + benefits","x-skills-required":["Rust","Kubernetes","Docker","Terraform","Google Cloud","Amazon Web Services","Azure","CI/CD","infrastructure-as-code"],"x-skills-preferred":["Buildkite","Bazel","istio","envoy","Datadog","Sentry"],"datePosted":"2026-04-18T15:54:51.444Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco Bay Area"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Rust, Kubernetes, Docker, Terraform, Google Cloud, Amazon Web Services, Azure, CI/CD, infrastructure-as-code, Buildkite, Bazel, istio, envoy, Datadog, Sentry","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":196000,"maxValue":220500,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_ae849446-fe5"},"title":"Site Reliability Engineer - Cybersecurity","description":"<p><strong>About the Role</strong></p>\n<p>The Cybersecurity / SRE team at xAI is focused on ensuring the security and reliability of X Money. This role will primarily focus on the X Money platform but will also cross over with the X Social platform.</p>\n<p>You&#39;ll be responsible for securing and maintaining the reliability of X Money&#39;s infrastructure.
You&#39;ll work closely with cross-functional teams to enhance security measures, improve system resilience, and implement best practices.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Build and secure mission-critical applications in a hybrid cloud environment.</li>\n<li>Manage identities and roles effectively.</li>\n<li>Monitor and remediate infrastructure to comply with regulations and best practices (e.g., PCI, NIST CSF).</li>\n<li>Maintain a SIEM and all data pipelines needed for reliable alerting.</li>\n<li>Design and implement secure container standards and automation to enable frictionless developer workflows.</li>\n<li>Maintain Kubernetes security aligned with current best practices.</li>\n<li>Build, deploy, and maintain security operations infrastructure using Python, Terraform, and Puppet.</li>\n<li>Secure and enhance CI/CD pipelines.</li>\n<li>Integrate and maintain code scanning platforms.</li>\n<li>Develop dashboards and alerts from security metrics.</li>\n<li>Own security projects: identify issues and implement solutions.</li>\n<li>Apply critical analysis and problem-solving skills.</li>\n</ul>\n<p><strong>Basic Qualifications</strong></p>\n<ul>\n<li>Proven experience securing hybrid AWS/on-premises environments, including IAM and overall security posture.</li>\n<li>Strong proficiency in Python, Terraform, and Puppet.</li>\n<li>Certifications like CISA, CRISC, CGEIT, Security+, CASP+, or similar preferred.</li>\n<li>Deep expertise in Kubernetes and container security.</li>\n<li>Hands-on expertise building GitHub Actions and workflows.</li>\n<li>Extensive experience with Prometheus, Grafana, CloudWatch, and Karma.</li>\n<li>Well versed in management and integrations of Wazuh</li>\n<li>Hands-on experience with security scanning tools (Semgrep, Trivy, Falco).</li>\n<li>Proactive mindset with strong ownership and problem-solving skills.</li>\n<li>Excellent critical thinking and analytical abilities.</li>\n</ul>\n<p><strong>Compensation and 
Benefits</strong></p>\n<p>$180,000 - $440,000 USD</p>\n<p>Base salary is just one part of our total rewards package at xAI, which also includes equity, comprehensive medical, vision, and dental coverage, access to a 401(k) retirement plan, short &amp; long-term disability insurance, life insurance, and various other discounts and perks.</p>","url":"https://yubhub.co/jobs/job_ae849446-fe5","directApply":true,"hiringOrganization":{"@type":"Organization","name":"xAI","sameAs":"https://www.xai.com/","logo":"https://logos.yubhub.co/xai.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/xai/jobs/4803447007","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$180,000 - $440,000 USD","x-skills-required":["Python","Terraform","Puppet","Kubernetes","container security","GitHub Actions","Prometheus","Grafana","CloudWatch","Karma","Wazuh","security scanning tools","critical analysis","problem-solving skills"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:54:39.097Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Palo Alto, CA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, Terraform, Puppet, Kubernetes, container security, GitHub Actions, Prometheus, Grafana, CloudWatch, Karma, Wazuh, security scanning tools, critical analysis, problem-solving skills","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":180000,"maxValue":440000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_c0df50e1-9cd"},"title":"Consultant, Developer Platform","description":"<p>About Us</p>\n<p>At Cloudflare, we are on a mission to help build a better Internet.
Today the company runs one of the world’s largest networks that powers millions of websites and other Internet properties for customers ranging from individual bloggers to SMBs to Fortune 500 companies.</p>\n<p>As a Cloud Engineer for Developer Platform, you are an individual contributor working in the post-sales landscape, responsible for the technical execution of solutions and guidance to our customers, following a consultative approach, to get the most value possible from their Cloudflare investment.</p>\n<p>Key Responsibilities:</p>\n<ul>\n<li>Plan and deliver timely and organized services for customers, ensure customers see the full value in Cloudflare’s products, and advise on product best practices.</li>\n</ul>\n<ul>\n<li>Gather business and technical requirements, use cases and any other information required to build, migrate and deliver a solution on behalf of the customer and transition the Cloudflare working environment to the customer.</li>\n</ul>\n<ul>\n<li>Produce a Solution Design, HLD, LLD, databuilds, procedures, scripts, test plans, drawings, deployment plan, migration plan, as-builts, and any other artifacts necessary to deliver the solution and transition smoothly into the customer’s technical teams.</li>\n</ul>\n<ul>\n<li>Implement changes on behalf of the customer in the Cloudflare environment following the customer’s change management process.</li>\n</ul>\n<ul>\n<li>Troubleshoot implementation issues and collaborate with Customer Support, Engineering and other teams to assist technical escalations.</li>\n</ul>\n<ul>\n<li>Contribute towards the success of the organization through knowledge sharing activities such as contributing to internal and external documentation, answering technical Q&amp;A, and helping to iterate on best practices.</li>\n</ul>\n<p>Support building operational assets like templates, automation scripts, procedures, workflows, etc.</p>\n<p>Requirements:</p>\n<ul>\n<li>3+ years of experience in a customer facing position as
a Consultant delivering services.</li>\n</ul>\n<ul>\n<li>Demonstrated experience with:</li>\n</ul>\n<ul>\n<li>Developing serverless code in a CI/CD pipeline using an Agile methodology.</li>\n</ul>\n<ul>\n<li>Layers and protocols of the OSI model, such as TCP/IP, TLS, DNS, HTTP.</li>\n</ul>\n<ul>\n<li>A scripting language (e.g. Python, JavaScript, Bash) and a desire to expand those skills.</li>\n</ul>\n<ul>\n<li>Infrastructure as code tools like Terraform.</li>\n</ul>\n<ul>\n<li>Strong experience with APIs.</li>\n</ul>\n<ul>\n<li>CI/CD pipelines using Azure DevOps or Git.</li>\n</ul>\n<ul>\n<li>Implementation and troubleshooting experience, knowledge of tools to troubleshoot, observability, logs, etc.</li>\n</ul>\n<ul>\n<li>Good understanding and knowledge of:</li>\n</ul>\n<ul>\n<li>Internet and Security technologies such as DDoS, Web Application Firewall, Certificates, DNS, CDN, Analytics and Logs.</li>\n</ul>\n<ul>\n<li>Security aspects of an internet property, such as DNS, WAFs, Bot Management, Rate Limiting, (M)TLS, certificates, OWASP.</li>\n</ul>\n<ul>\n<li>Performance aspects of an internet property, such as Speed, Latency, Caching, HTTP/3, TLSv1.3.</li>\n</ul>\n<p>Preferred Qualifications:</p>\n<ul>\n<li>You have worked with a Cybersecurity company or products and have performed migrations using migration tools.</li>\n</ul>\n<ul>\n<li>You have developed application security and performance capabilities.</li>\n</ul>\n<ul>\n<li>Ability to manage a project, work to deadlines, prioritize between competing demands and manage uncertainty.</li>\n</ul>\n<ul>\n<li>The work will be performed in English. Fluency in a second regional European language is a strong advantage.</li>\n</ul>\n<p>What Makes Cloudflare Special?</p>\n<p>We’re not just a highly ambitious, large-scale technology company. We’re a highly ambitious, large-scale technology company with a soul.
Fundamental to our mission to help build a better Internet is protecting the free and open Internet.</p>\n<p>Project Galileo: Since 2014, we&#39;ve equipped more than 2,400 journalism and civil society organizations in 111 countries with powerful tools to defend themselves against attacks that would otherwise censor their work, technology already used by Cloudflare’s enterprise customers--at no cost.</p>\n<p>Athenian Project: In 2017, we created the Athenian Project to ensure that state and local governments have the highest level of protection and reliability for free, so that their constituents have access to election information and voter registration. Since the project launched, we&#39;ve provided services to more than 425 local government election websites in 33 states.</p>\n<p>1.1.1.1: We released 1.1.1.1 to help fix the foundation of the Internet by building a faster, more secure and privacy-centric public DNS resolver. This is available publicly for everyone to use - it is the first consumer-focused service Cloudflare has ever released.</p>\n<p>Here’s the deal - we never, ever store client IP addresses. We will continue to abide by our privacy commitment and ensure that no user data is sold to advertisers or used to target consumers.</p>\n<p>Sound like something you’d like to be a part of? We’d love to hear from you!</p>\n<p>This position may require access to information protected under U.S. export control laws, including the U.S. Export Administration Regulations. Please note that any offer of employment may be conditioned on your authorization to receive software or technology controlled under these U.S. export laws without sponsorship for an export license.</p>\n<p>Cloudflare is proud to be an equal opportunity employer. We are committed to providing equal employment opportunity for all people and place great value in both diversity and inclusiveness.
All qualified applicants will be considered for employment without regard to their, or any other person&#39;s, perceived or actual race, color, religion, sex, gender, gender identity, gender expression, sexual orientation, national origin, ancestry, citizenship, age, physical or mental disability, medical condition, family care status, or any other basis protected by law. We are an AA/Veterans/Disabled Employer. Cloudflare provides reasonable accommodations to qualified individuals</p>","url":"https://yubhub.co/jobs/job_c0df50e1-9cd","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Cloudflare","sameAs":"https://www.cloudflare.com/","logo":"https://logos.yubhub.co/cloudflare.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/cloudflare/jobs/7383015","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Developing serverless code in a CI/CD pipeline using an Agile methodology","Layers and protocols of the OSI model, such as TCP/IP, TLS, DNS, HTTP","Scripting languages","Infrastructure as code tools like Terraform","Strong experience with APIs","CI/CD pipelines using Azure DevOps or Git","Implementation and troubleshooting experience, knowledge of tools to troubleshoot, observability, logs, etc","Good understanding and knowledge of Internet and Security technologies such as DDoS, Web Application Firewall, Certificates, DNS, CDN, Analytics and Logs","Security aspects of an internet property, such as DNS, WAFs, Bot Management, Rate Limiting, (M)TLS, certificates, OWASP","Performance aspects of an internet property, such as Speed, Latency, Caching, HTTP/3, TLSv1.3"],"x-skills-preferred":["You have worked with a Cybersecurity company or products and have performed migrations using migration tools","You have developed application security and
performance capabilities","Ability to manage a project, work to deadlines, prioritize between competing demands and manage uncertainty","The work will be performed in English. Fluency in a second regional European language is a strong advantage"],"datePosted":"2026-04-18T15:54:26.532Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Hybrid"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Developing serverless code in a CI/CD pipeline using an Agile methodology, Layers and protocols of the OSI model, such as TCP/IP, TLS, DNS, HTTP, Scripting languages, Infrastructure as code tools like Terraform, Strong experience with APIs, CI/CD pipelines using Azure DevOps or Git, Implementation and troubleshooting experience, knowledge of tools to troubleshoot, observability, logs, etc, Good understanding and knowledge of Internet and Security technologies such as DDoS, Web Application Firewall, Certificates, DNS, CDN, Analytics and Logs, Security aspects of an internet property, such as DNS, WAFs, Bot Management, Rate Limiting, (M)TLS, certificates, OWASP, Performance aspects of an internet property, such as Speed, Latency, Caching, HTTP/3, TLSv1.3, You have worked with a Cybersecurity company or products and have performed migrations using migration tools, You have developed application security and performance capabilities, Ability to manage a project, work to deadlines, prioritize between competing demands and manage uncertainty, The work will be performed in English. 
Fluency in a second regional European language is a strong advantage"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_48e2e160-bde"},"title":"Senior Solutions Architect - Weights & Biases","description":"<p>Our Solutions Architecture team at Weights &amp; Biases is a unique hybrid organization, combining the deep technical skills of Site Reliability Engineering with the consultative expertise of Solutions Architecture. We focus on ensuring customers can successfully deploy and operate W&amp;B across cloud and on-prem environments while delivering a best-in-class experience that accelerates ML adoption at scale.</p>\n<p>As a Solutions Architect, you will be responsible for managing complex customer deployments across AWS, GCP, Azure, and on-prem environments. You’ll partner directly with customer engineering teams to provision and monitor services, debug and resolve infrastructure issues, and ensure performance and scalability using SRE best practices. This role blends hands-on technical problem-solving with customer-facing engagement, including technical discussions, demos, workshops, and enablement content creation. You’ll work closely with Sales Engineering, Field Engineering, Support, and Product to drive adoption and influence our product roadmap based on customer feedback.</p>\n<p>We believe in investing in our people, and value candidates who can bring their own diversified experiences to our teams – even if you aren&#39;t a 100% skill or experience match. Here are a few qualities we’ve found compatible with our team. If some of this describes you, we’d love to talk.</p>\n<ul>\n<li>You love diving into infrastructure problems and solving them systematically</li>\n<li>You’re curious about how to scale complex ML systems in production environments</li>\n<li>You’re an expert in building and running containerized, distributed systems</li>\n</ul>\n<p>We work hard, have fun, and move fast! 
We’re in an exciting stage of hyper-growth that you will not want to miss out on. We’re not afraid of a little chaos, and we’re constantly learning. Our team cares deeply about how we build our product and how we work together, which is represented through our core values:</p>\n<ul>\n<li>Be Curious at Your Core</li>\n<li>Act Like an Owner</li>\n<li>Empower Employees</li>\n<li>Deliver Best-in-Class Client Experiences</li>\n<li>Achieve More Together</li>\n</ul>\n<p>The base salary range for this role is $180,000 to $200,000. The starting salary will be determined based on job-related knowledge, skills, experience, and market location. We strive for both market alignment and internal equity when determining compensation. In addition to base salary, our total rewards package includes a discretionary bonus, equity awards, and a comprehensive benefits program (all based on eligibility).</p>\n<p>We offer a variety of benefits to support your needs, including:</p>\n<ul>\n<li>Medical, dental, and vision insurance</li>\n<li>100% paid for by CoreWeave</li>\n<li>Company-paid Life Insurance</li>\n<li>Voluntary supplemental life insurance</li>\n<li>Short and long-term disability insurance</li>\n<li>Flexible Spending Account</li>\n<li>Health Savings Account</li>\n<li>Tuition Reimbursement</li>\n<li>Ability to Participate in Employee Stock Purchase Program (ESPP)</li>\n<li>Mental Wellness Benefits through Spring Health</li>\n<li>Family-Forming support provided by Carrot</li>\n<li>Paid Parental Leave</li>\n<li>Flexible, full-service childcare support with Kinside</li>\n<li>401(k) with a generous employer match</li>\n<li>Flexible PTO</li>\n<li>Catered lunch each day in our office and data center locations</li>\n<li>A casual work environment</li>\n<li>A work culture focused on innovative disruption</li>\n</ul>","url":"https://yubhub.co/jobs/job_48e2e160-bde","directApply":true,"hiringOrganization":{"@type":"Organization","name":"CoreWeave","sameAs":"https://www.coreweave.com","logo":"https://logos.yubhub.co/coreweave.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/coreweave/jobs/4622845006","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$180,000 to $200,000","x-skills-required":["Docker","Kubernetes","Helm charts","Networking","Cloud-managed services (e.g., MySQL, Object Stores)","Infrastructure as Code (IaC), preferably Terraform","Linux/Unix command line experience","Python","ML workflows or tools"],"x-skills-preferred":["Deep proficiency in Kubernetes design patterns, including Operators","Familiarity with data engineering and MLOps tooling","Experience as an educator or facilitator for technical training sessions, workshops, or demos","SaaS, web service, or distributed systems operations experience"],"datePosted":"2026-04-18T15:54:07.692Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Livingston, NJ / New York, NY / San Francisco, CA / Sunnyvale, CA / Bellevue, WA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Docker, Kubernetes, Helm charts, Networking, Cloud-managed services (e.g., MySQL, Object Stores), Infrastructure as Code (IaC), preferably Terraform, Linux/Unix command line experience, Python, ML workflows or tools, Deep proficiency in Kubernetes design patterns, including Operators, Familiarity with data engineering and MLOps tooling, Experience as an educator or facilitator for technical training sessions, workshops, or demos, SaaS, web service, or distributed systems operations
experience","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":180000,"maxValue":200000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_9be280f4-cbc"},"title":"Software Engineer, Data Infrastructure","description":"<p>We&#39;re looking for an engineer to join our small, high-impact team responsible for architecting and scaling the core infrastructure behind distributed training pipelines, multimodal data catalogs, and intelligent processing systems that operate over petabytes of data.</p>\n<p>As a software engineer on our data infrastructure team, you&#39;ll design, build, and operate scalable, fault-tolerant infrastructure for LLM Research: distributed compute, data orchestration, and storage across modalities. You&#39;ll develop high-throughput systems for data ingestion, processing, and transformation, including training data catalogs, deduplication, quality checks, and search. You&#39;ll also build systems for traceability, reproducibility, and robust quality control at every stage of the data lifecycle.</p>\n<p>You&#39;ll collaborate with research teams to unlock new features, improve data quality, and accelerate training cycles. 
You&#39;ll implement and maintain monitoring and alerting to support platform reliability and performance.</p>\n<p>If you&#39;re excited by distributed systems, large-scale data mining, open-source tools like Spark, Kafka, Beam, Ray, and Delta Lake, and enjoy building from the ground up, we&#39;d love to hear from you.</p>","url":"https://yubhub.co/jobs/job_9be280f4-cbc","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Thinking Machines Lab","sameAs":"https://thinkingmachines.ai/","logo":"https://logos.yubhub.co/thinkingmachines.ai.png"},"x-apply-url":"https://job-boards.greenhouse.io/thinkingmachines/jobs/5013919008","x-work-arrangement":"onsite","x-experience-level":"entry|mid|senior","x-job-type":"full-time","x-salary-range":"$350,000 - $475,000 USD","x-skills-required":["backend language (Python or Rust)","distributed compute frameworks (Apache Spark or Ray)","cloud infrastructure","data lake architectures","batch and streaming pipelines"],"x-skills-preferred":["Kafka","dbt","Terraform","Airflow","web crawler","deduplication","data mining","search","file formats and storage systems"],"datePosted":"2026-04-18T15:54:00.309Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"backend language (Python or Rust), distributed compute frameworks (Apache Spark or Ray), cloud infrastructure, data lake architectures, batch and streaming pipelines, Kafka, dbt, Terraform, Airflow, web crawler, deduplication, data mining, search, file formats and storage 
systems","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":350000,"maxValue":475000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_76d3f53b-3c6"},"title":"Staff Software Engineer, Quality and Release Platform","description":"<p>About Us</p>\n<p>We&#39;re looking for a Staff Software Engineer to join our Quality and Release Platform (QARP) team and lead the technical direction of the platforms that power how dbt Labs builds, tests, and ships software.</p>\n<p>Our mission spans two critical areas: release engineering (making it easy for engineers to ship changes quickly, safely, and reliably) and code quality (building a platform that raises the bar for code quality across all of dbt Labs engineering).</p>\n<p>In this role, you&#39;ll work with tools like Helm, ArgoCD, Terraform, Python, GitHub Actions, and Kargo to architect and scale our deployment systems, while also helping design and build the tooling, frameworks, and automation that enable engineering teams to consistently produce high-quality code.</p>\n<p>This is a high-impact, staff-level role where you&#39;ll set architectural direction, mentor engineers, and drive initiatives that improve developer velocity, code quality, and reliability across the entire engineering organization.</p>\n<p>Responsibilities</p>\n<ul>\n<li>Define and drive the technical strategy and architecture for our CI/CD platform, release management systems, and code quality platform.</li>\n</ul>\n<ul>\n<li>Design and build tooling, frameworks, and automation that help engineering teams maintain and improve code quality across the organization.</li>\n</ul>\n<ul>\n<li>Lead high-impact initiatives that improve automation, observability, and self-service capabilities for engineers across the organization.</li>\n</ul>\n<ul>\n<li>Mentor and level up other engineers on the team, fostering a 
culture of technical excellence and continuous improvement.</li>\n</ul>\n<ul>\n<li>Collaborate across teams and with engineering leadership to identify systemic challenges in our delivery and quality processes and architect solutions to address them.</li>\n</ul>\n<ul>\n<li>Evolve our release architecture to support dbt Cloud&#39;s multi-cloud, cell-based infrastructure at scale.</li>\n</ul>\n<ul>\n<li>Establish best practices and standards for build pipelines, release workflows, code quality, and infrastructure-as-code that are adopted across engineering.</li>\n</ul>\n<ul>\n<li>Serve as a thought leader in engineering&#39;s internal AI strategy: evaluating AI-assisted development tools, defining adoption practices and guardrails, and enabling developers to use AI effectively across the org.</li>\n</ul>\n<p>Requirements</p>\n<ul>\n<li>8+ years of software engineering experience, with significant time in platform, infrastructure, release engineering, or developer tooling.</li>\n</ul>\n<ul>\n<li>A track record of leading technical strategy and architecture for complex, production-scale CI/CD, code quality, or platform systems.</li>\n</ul>\n<ul>\n<li>Deep experience with one or more of the following: Helm, ArgoCD, Terraform, GitHub Actions, or Kubernetes.</li>\n</ul>\n<ul>\n<li>Strong background in Python, Go, or Rust for automation, platform tooling, or systems development.</li>\n</ul>\n<ul>\n<li>Passion for code quality and experience building or improving tools, linters, static analysis, testing frameworks, or CI checks that help teams write better code.</li>\n</ul>\n<ul>\n<li>Demonstrated ability to drive cross-team initiatives and influence engineering-wide practices and standards.</li>\n</ul>\n<ul>\n<li>Excellent communication skills, able to translate complex technical concepts for diverse audiences and lead through influence.</li>\n</ul>\n<ul>\n<li>Demonstrated interest or hands-on experience with AI-assisted development tools and practices, with a 
perspective on how AI can improve engineering productivity and code quality.</li>\n</ul>\n<ul>\n<li>Experience working asynchronously as part of a fully remote, distributed team.</li>\n</ul>\n<p>Preferred Qualifications</p>\n<ul>\n<li>Experience with Kargo or similar progressive delivery systems.</li>\n</ul>\n<ul>\n<li>Hands-on experience with multi-cloud architectures (AWS, GCP, Azure).</li>\n</ul>\n<ul>\n<li>Experience building code quality platforms, static analysis tooling, or testing infrastructure at scale.</li>\n</ul>\n<ul>\n<li>Experience defining and rolling out engineering-wide code quality standards or best practices.</li>\n</ul>\n<ul>\n<li>A track record of improving developer productivity or release safety across a large engineering organization.</li>\n</ul>\n<ul>\n<li>Experience mentoring engineers and shaping team culture in a staff or principal-level role.</li>\n</ul>\n<ul>\n<li>Track record of evaluating, championing, and rolling out AI developer tools (e.g., Copilot, Cursor, Claude Code) within an engineering organization.</li>\n</ul>\n<ul>\n<li>Experience defining guidelines, guardrails, or best practices for AI-assisted development.</li>\n</ul>\n<p>Compensation &amp; Benefits</p>\n<p>Salary: We offer competitive compensation packages commensurate with experience, including salary, equity, and where applicable, performance-based pay.</p>\n<p>In select locations (including Boston, Chicago, Denver, Los Angeles, Philadelphia, New York Metro, San Francisco, DC Metro, Seattle, Austin), an alternate range may apply, as specified below.</p>\n<ul>\n<li>The typical starting salary range for this role is: $207,000 - $251,000 USD</li>\n</ul>\n<ul>\n<li>The typical starting salary range for this role in the select locations listed is: $230,000 - $279,000 USD</li>\n</ul>\n<p>Equity Stake Benefits</p>\n<ul>\n<li>dbt Labs offers: unlimited vacation, 401k w/3% guaranteed contribution, excellent healthcare, paid parental leave, wellness stipend, home office 
stipend, and more!</li>\n</ul>\n<p>Our Hiring Process</p>\n<ul>\n<li>Interview with a Talent Acquisition Partner (30 Mins)</li>\n</ul>\n<ul>\n<li>Technical Interview with Hiring Manager (60 Mins)</li>\n</ul>\n<ul>\n<li>Team Interviews - Technical (3 rounds, 60 Mins each)</li>\n</ul>\n<ul>\n<li>Values Interview (30 Mins)</li>\n</ul>","url":"https://yubhub.co/jobs/job_76d3f53b-3c6","directApply":true,"hiringOrganization":{"@type":"Organization","name":"dbt Labs","sameAs":"https://www.getdbt.com/","logo":"https://logos.yubhub.co/getdbt.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/dbtlabsinc/jobs/4666468005","x-work-arrangement":"remote","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Helm","ArgoCD","Terraform","Python","GitHub Actions","Kargo","Kubernetes"],"x-skills-preferred":["multi-cloud architectures","code quality platforms","static analysis tooling","testing infrastructure"],"datePosted":"2026-04-18T15:53:58.919Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"US - Remote"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Helm, ArgoCD, Terraform, Python, GitHub Actions, Kargo, Kubernetes, multi-cloud architectures, code quality platforms, static analysis tooling, testing infrastructure"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_c1903386-87b"},"title":"Staff Infrastructure Software Engineer (Kubernetes)","description":"<p>As a member of the infrastructure team, you will design, build, and advance our core infrastructure that allows the engineering team to execute quickly, productively, and securely.</p>\n<p>You will partner with engineers to build dev 
tools that empower developer workflows and deployment infrastructure.</p>\n<ul>\n<li>Ensure reliability of multi-cloud Kubernetes clusters and pipelines.</li>\n<li>Metrics, logging, analytics, and alerting for performance and security across all endpoints and applications.</li>\n<li>Infrastructure-as-code deployment tooling and supporting services on multiple cloud providers.</li>\n<li>Automate operations and engineering. Focus on automation so we can spend energy where it matters.</li>\n<li>Building machine learning infrastructure that enables AI teams to train, test, and deploy on large-scale datasets.</li>\n</ul>\n<p>We are looking for a highly skilled engineer with 5+ years of experience in DevOps, Site Reliability Engineering, Production Engineering, or equivalent field.</p>\n<ul>\n<li>Deep proficiency with coding languages such as Golang or Python.</li>\n<li>Deep familiarity with container-related security best practices.</li>\n<li>Production experience working with Kubernetes, and a deep understanding of the Kubernetes ecosystem, including popular open-source tooling such as cert-manager or external-dns.</li>\n<li>Experience with GPU-enabled clusters is a bonus.</li>\n<li>Production experience with Kubernetes templating tools such as Helm or Kustomize.</li>\n<li>Production experience with IAC tools such as Terraform or CloudFormation.</li>\n<li>Production experience working with AWS and services such as IAM, S3, EC2, and EKS.</li>\n<li>Production experience with other cloud providers such as Google Cloud and Azure is a bonus.</li>\n<li>Production experience with database software such as PostgreSQL.</li>\n<li>Experience with GitOps tooling such as Flux or Argo.</li>\n<li>Experience with CI/CD such as GitHub Actions.</li>\n</ul>\n<p>Perks and benefits include paid parental leave, monthly health and wellness allowance, and PTO.</p>\n<p>Compensation includes a base salary, equity, and a variety of benefits.</p>","url":"https://yubhub.co/jobs/job_c1903386-87b","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Cresta","sameAs":"https://www.cresta.ai/","logo":"https://logos.yubhub.co/cresta.ai.png"},"x-apply-url":"https://job-boards.greenhouse.io/cresta/jobs/4535898008","x-work-arrangement":"remote","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Golang","Python","Kubernetes","cert-manager","external-dns","GPU-enabled clusters","Helm","Kustomize","Terraform","CloudFormation","AWS","IAM","S3","EC2","EKS","Google Cloud","Azure","PostgreSQL","GitOps","Flux","Argo","CI/CD","GitHub Actions"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:53:57.717Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Germany (Remote)"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Golang, Python, Kubernetes, cert-manager, external-dns, GPU-enabled clusters, Helm, Kustomize, Terraform, CloudFormation, AWS, IAM, S3, EC2, EKS, Google Cloud, Azure, PostgreSQL, GitOps, Flux, Argo, CI/CD, GitHub Actions"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_26212e9e-5a8"},"title":"Infrastructure Engineer/SRE","description":"<p>We&#39;re seeking an experienced Infrastructure Engineer/SRE to join our engineering team. 
As a key member of our infrastructure team, you will be responsible for designing, building, and advancing our core infrastructure that allows the engineering team to execute quickly, productively, and securely.</p>\n<p>Ours is a collaborative but highly autonomous working environment: each member has a defined role with clear expectations, as well as the freedom to pursue projects they find interesting.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Partner with engineers to build dev tools that empower developer workflows and deployment infrastructure.</li>\n<li>Ensure reliability of multi-cloud Kubernetes clusters and pipelines.</li>\n<li>Metrics, logging, analytics, and alerting for performance and security across all endpoints and applications.</li>\n<li>Infrastructure-as-code deployment tooling and supporting services on multiple cloud providers.</li>\n<li>Automate operations and engineering. Focus on automation so we can spend energy where it matters.</li>\n<li>Building machine learning infrastructure that enables AI teams to train, test, and deploy on large-scale datasets.</li>\n</ul>\n<p>What we are looking for:</p>\n<ul>\n<li>5+ years experience in DevOps, Site Reliability Engineering, Production Engineering, or equivalent field.</li>\n<li>Deep proficiency with coding languages such as Golang or Python.</li>\n<li>Deep familiarity with container-related security best practices.</li>\n<li>Production experience working with Kubernetes, and a deep understanding of the Kubernetes ecosystem, including popular open-source tooling such as cert-manager or external-dns.</li>\n<li>Experience with GPU-enabled clusters is a bonus.</li>\n<li>Production experience with Kubernetes templating tools such as Helm or Kustomize.</li>\n<li>Production experience with IAC tools such as Terraform or CloudFormation.</li>\n<li>Production experience working with AWS and services such as IAM, S3, EC2, and EKS.</li>\n<li>Production experience with other cloud providers such as Google Cloud and 
Azure is a bonus.</li>\n<li>Production experience with database software such as PostgreSQL.</li>\n<li>Experience with GitOps tooling such as Flux or Argo.</li>\n<li>Experience with CI/CD such as GitHub Actions.</li>\n</ul>\n<p>Perks &amp; Benefits:</p>\n<ul>\n<li>We offer Cresta employees a variety of medical benefits designed to fit your stage of life.</li>\n<li>Flexible vacation time to promote a healthy work-life blend.</li>\n<li>Paid parental leave to support you and your family.</li>\n</ul>\n<p>Compensation for this position includes a base salary, equity, and a variety of benefits. Actual base salaries will be based on candidate-specific factors, including experience, skillset, and location, and local minimum pay requirements as applicable.</p>","url":"https://yubhub.co/jobs/job_26212e9e-5a8","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Cresta","sameAs":"https://www.cresta.ai/","logo":"https://logos.yubhub.co/cresta.ai.png"},"x-apply-url":"https://job-boards.greenhouse.io/cresta/jobs/5113847008","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Golang","Python","Kubernetes","cert-manager","external-dns","GPU-enabled clusters","Helm","Kustomize","Terraform","CloudFormation","AWS","IAM","S3","EC2","EKS","Google Cloud","Azure","PostgreSQL","GitOps","Flux","Argo","CI/CD","GitHub Actions"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:53:55.875Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Australia (Remote)"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Golang, Python, Kubernetes, cert-manager, external-dns, GPU-enabled clusters, Helm, Kustomize, Terraform, CloudFormation, AWS, IAM, S3, EC2, 
EKS, Google Cloud, Azure, PostgreSQL, GitOps, Flux, Argo, CI/CD, GitHub Actions"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_bb321e04-e73"},"title":"Senior Full Stack Engineer - Team Web","description":"<p>We&#39;re looking for a Senior Full Stack Engineer to join Team Web: someone passionate about crafting intuitive front-end experiences and building the backend systems and tools that power them. You&#39;ll play a key role in shaping the future of our website across the full stack, from UI to infrastructure, while collaborating with product marketers, designers, and engineers across the business.</p>\n<p>As a Senior Full Stack Engineer, you&#39;ll design, build, and maintain end-to-end web solutions, from modern UIs to backend services, APIs, and infrastructure. You&#39;ll collaborate with design, brand, marketing, and content teams to deliver seamless, performant experiences across web and mobile. You&#39;ll develop backend logic and APIs, manage data flows, and implement systems that integrate with third-party platforms.</p>\n<p>You&#39;ll optimize website performance by applying best practices in front-end development, including lazy loading and efficient asset management. You&#39;ll set up and manage infrastructure using tools like Vercel, AWS, CloudFront, Terraform, and CI/CD pipelines (e.g., CircleCI). You&#39;ll implement and maintain web analytics, and support A/B testing for data-driven decisions.</p>\n<p>You&#39;ll stay current with emerging technologies and trends to continually improve our development processes and user experience. You&#39;ll be comfortable writing backend software. We look for engineers to be able to unblock themselves end to end.</p>\n<p>You&#39;ll build using the best tools in the industry. 
We invest heavily in AI-powered developer tools that remove friction and help you focus on solving meaningful problems.</p>","url":"https://yubhub.co/jobs/job_bb321e04-e73","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Intercom","sameAs":"https://www.intercom.com/","logo":"https://logos.yubhub.co/intercom.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/intercom/jobs/7276257","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["JavaScript","HTML","CSS","React","Next.js","Tailwind","CMS platforms (Contentful and Sanity)","marketing tools (Google Tag Manager, Marketo)","CI/CD tools (CircleCI)","infrastructure as code tools (Terraform)","cloud platforms (AWS, Vercel, CloudFront, S3)"],"x-skills-preferred":["A/B testing","analytics tools","performance optimization techniques","best practices for fast-loading, responsive websites","testing frameworks (Jest, Mocha, Cypress)"],"datePosted":"2026-04-18T15:53:49.136Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"London, England"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"JavaScript, HTML, CSS, React, Next.js, Tailwind, CMS platforms (Contentful and Sanity), marketing tools (Google Tag Manager, Marketo), CI/CD tools (CircleCI), infrastructure as code tools (Terraform), cloud platforms (AWS, Vercel, CloudFront, S3), A/B testing, analytics tools, performance optimization techniques, best practices for fast-loading, responsive websites, testing frameworks (Jest, Mocha, Cypress)"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_3ac95264-313"},"title":"Staff Infrastructure Software Engineer 
(Kubernetes)","description":"<p>We&#39;re looking for a Staff Infrastructure Software Engineer (Kubernetes) to join our engineering team. As a member of the infrastructure team, you will be responsible for designing, building, and advancing our core infrastructure that allows the engineering team to execute quickly, productively, and securely.</p>\n<p>You will partner with engineers to build dev tools that empower developer workflows and deployment infrastructure. You will ensure the reliability of multi-cloud Kubernetes clusters and pipelines. You will also implement metrics, logging, analytics, and alerting for performance and security across all endpoints and applications.</p>\n<p>You will focus on automation so we can spend energy where it matters. You will build machine learning infrastructure that enables AI teams to train, test, and deploy on large-scale datasets.</p>\n<p>We&#39;re looking for someone with 5+ years of experience in DevOps, Site Reliability Engineering, Production Engineering, or equivalent field. You should have deep proficiency with coding languages such as Golang or Python. You should also have deep familiarity with container-related security best practices.</p>\n<p>Production experience working with Kubernetes, and a deep understanding of the Kubernetes ecosystem, including popular open-source tooling such as cert-manager or external-dns, is required. 
Experience with GPU-enabled clusters is a bonus.</p>\n<p>Production experience with Kubernetes templating tools such as Helm or Kustomize, and production experience working with IAC tools such as Terraform or CloudFormation, is a plus.</p>\n<p>Production experience working with AWS and services such as IAM, S3, EC2, and EKS, and production experience with other cloud providers such as Google Cloud and Azure, is a bonus.</p>\n<p>Experience with GitOps tooling such as Flux or Argo, and experience with CI/CD such as GitHub Actions, is a plus.</p>\n<p>Compensation for this position includes a base salary, equity, and a variety of benefits.</p>","url":"https://yubhub.co/jobs/job_3ac95264-313","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Cresta","sameAs":"https://www.cresta.ai/","logo":"https://logos.yubhub.co/cresta.ai.png"},"x-apply-url":"https://job-boards.greenhouse.io/cresta/jobs/4802840008","x-work-arrangement":"remote","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Golang","Python","Kubernetes","container-related security best practices","cert-manager","external-dns","Helm","Kustomize","Terraform","CloudFormation","AWS","IAM","S3","EC2","EKS","GitOps","Flux","Argo","CI/CD","GitHub Actions"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:53:47.350Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Romania (Remote)"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Golang, Python, Kubernetes, container-related security best practices, cert-manager, external-dns, Helm, Kustomize, Terraform, CloudFormation, AWS, IAM, S3, EC2, EKS, GitOps, Flux, Argo, CI/CD, GitHub 
Actions"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_bec4e006-74f"},"title":"Consultant, Developer Platform","description":"<p>About the role: Cloudflare provides advisory and hands-on-keyboard implementation and migration services for enterprise customers. As a Consultant for Developer Platform, you are an individual contributor working in the post-sales landscape, responsible for the technical execution of solutions and guidance to our customers, following a consultative approach, to get the most value possible from their Cloudflare investment.</p>\n<p>You are an expert in Developer Platform products or equivalent and will focus on building and deploying serverless applications with scale, performance, security and reliability leveraging: Workers, Workers KV, Workers AI, D1, R2, Images, and many other products.</p>\n<p>This position has working hours Monday to Friday 09:00 a.m. to 06:00 p.m. Occasionally, we support our customers during the weekends for specific changes that need to be done outside of their business hours. 
Travel is expected to be around 40%.</p>\n<p>Experience might include a combination of the skills below:</p>\n<ul>\n<li>Plan and deliver timely and organized services for customers, ensure customers see the full value in Cloudflare’s products and advise on product best practices.</li>\n<li>Gather business and technical requirements, use cases and any other information required to build, migrate and deliver a solution on behalf of the customer and transition the Cloudflare working environment to the customer.</li>\n<li>Produce a Solution Design, HLD, LLD, databuilds, procedures, scripts, test plans, drawings, deployment plan, migration plan, as-builts, and any other artifacts necessary to deliver the solution and transition smoothly into the customer’s technical teams.</li>\n<li>Implement changes on behalf of the customer in the Cloudflare environment following the customer’s change management process.</li>\n<li>Proven experience with Cloudflare or similar with Workers, Javascript/Typescript and Workers APIs.</li>\n<li>Troubleshoot implementation issues and collaborate with Customer Support, Engineering and other teams to assist technical escalations.</li>\n<li>Contribute towards the success of the organization through knowledge sharing activities such as contributing to internal and external documentation, answering technical Q&amp;A, and helping to iterate on best practices.</li>\n</ul>\n<p>Support building operational assets like templates, automation scripts, procedures, workflows, etc.</p>\n<p>Experience might include a combination of the skills below:</p>\n<ul>\n<li>3+ years of experience in a customer facing position as a Consultant delivering services.</li>\n<li>Demonstrated experience with:</li>\n</ul>\n<ul>\n<li>Developing serverless code in a CI/CD pipeline using an Agile methodology.</li>\n<li>Layers and protocols of the OSI model, such as TCP/IP, TLS, DNS, HTTP.</li>\n<li>A scripting language (e.g. Python, JavaScript, Bash) and a desire to expand those skills.</li>\n<li>Infrastructure as code tools like Terraform.</li>\n<li>Strong experience with APIs.</li>\n<li>CI/CD pipelines using Azure DevOps or Git.</li>\n<li>Implementation and troubleshooting experience, knowledge of tools to troubleshoot, observability, logs, etc.</li>\n</ul>\n<p>Good understanding and knowledge of:</p>\n<ul>\n<li>Internet and Security technologies such as DDoS, Web Application Firewall, Certificates, DNS, CDN, Analytics and Logs.</li>\n<li>Security aspects of an internet property, such as DNS, WAFs, Bot Management, Rate Limiting, (M)TLS, certificates, OWASP.</li>\n<li>Performance aspects of an internet property, such as Speed, Latency, Caching, HTTP/3, TLSv1.3.</li>\n</ul>\n<p>Strong advantage if:</p>\n<ul>\n<li>You have worked with a Cybersecurity company or products and have performed migrations using migration tools.</li>\n<li>You have developed application security and performance capabilities.</li>\n<li>You are able to manage a project, work to deadlines, prioritize between competing demands and manage uncertainty.</li>\n</ul>\n<p>The work will be performed in English. Fluency in a second regional European language is a strong advantage.</p>","url":"https://yubhub.co/jobs/job_bec4e006-74f","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Cloudflare","sameAs":"https://www.cloudflare.com/","logo":"https://logos.yubhub.co/cloudflare.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/cloudflare/jobs/7383013","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Developing serverless code in a CI/CD pipeline using an Agile methodology","Layers and protocols of the OSI model, such as TCP/IP, TLS, DNS, HTTP","Scripting languages","Infrastructure as code tools like Terraform","Strong experience with APIs","CI/CD pipelines using Azure DevOps or Git","Implementation and troubleshooting experience, knowledge of tools to troubleshoot, observability, 
logs, etc","Good understanding and knowledge of Internet and Security technologies such as DDoS, Web Application Firewall, Certificates, DNS, CDN, Analytics and Logs","Security aspects of an internet property, such as DNS, WAFs, Bot Management, Rate Limiting, (M)TLS, certificates, OWASP","Performance aspects of an internet property, such as Speed, Latency, Caching, HTTP/3, TLSv1.3"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:53:29.137Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Hybrid"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Developing serverless code in a CI/CD pipeline using an Agile methodology, Layers and protocols of the OSI model, such as TCP/IP, TLS, DNS, HTTP, Scripting languages, Infrastructure as code tools like Terraform, Strong experience with APIs, CI/CD pipelines using Azure DevOps or Git, Implementation and troubleshooting experience, knowledge of tools to troubleshoot, observability, logs, etc, Good understanding and knowledge of Internet and Security technologies such as DDoS, Web Application Firewall, Certificates, DNS, CDN, Analytics and Logs, Security aspects of an internet property, such as DNS, WAFs, Bot Management, Rate Limiting, (M)TLS, certificates, OWASP, Performance aspects of an internet property, such as Speed, Latency, Caching, HTTP/3, TLSv1.3"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_a438f945-411"},"title":"Senior Site Reliability Engineer (Resilience) - Platform Resilience","description":"<p>We&#39;re seeking a Senior Site Reliability Engineer (SRE) to join our Platform Engineering department. As an SRE, you will lead technical initiatives to automate system engineering efforts, ensuring the reliability of our global infrastructure. 
You will grow our global Platform infrastructure to meet increasing scaling demands by developing and maintaining software, tooling, and automations.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Develop and maintain software, tooling, and automations to ensure the reliability and scalability of our global infrastructure.</li>\n</ul>\n<ul>\n<li>Lead technical initiatives to automate system engineering efforts, ensuring the reliability of our global infrastructure.</li>\n</ul>\n<ul>\n<li>Collaborate with engineers to identify, implement, and deliver solutions that meet the needs of our customers.</li>\n</ul>\n<ul>\n<li>Champion an environment focused on collaboration, operational excellence, and uplifting others.</li>\n</ul>\n<ul>\n<li>Respond to and prevent repeated customer impact in response to major incidents and prioritized problem management.</li>\n</ul>\n<p>Requirements:</p>\n<ul>\n<li>Success and lessons of experiences from striving for &#39;progress not perfection&#39; in the name of Platform reliability.</li>\n</ul>\n<ul>\n<li>Background in software engineering to collaborate with engineers to expertly identify, implement, and deliver solutions.</li>\n</ul>\n<ul>\n<li>Experience in public cloud and managed Kubernetes services is advantageous.</li>\n</ul>\n<ul>\n<li>Passion for developing solutions that involve inclusive communication methods to grow and strengthen partner and team relationships.</li>\n</ul>\n<p>Preferred Qualifications:</p>\n<ul>\n<li>Operated a SaaS product in a public cloud ideally built using Infrastructure-as-Code tooling such as Crossplane or Terraform.</li>\n</ul>\n<ul>\n<li>Built or operated a Kubernetes-at-scale infrastructure, ideally across multiple cloud providers, and the vital automation to support it.</li>\n</ul>\n<ul>\n<li>Written non-trivial programs in Golang or other programming languages.</li>\n</ul>\n<ul>\n<li>Worked with containerized services (such as Docker).</li>\n</ul>\n<ul>\n<li>Proven experience in leading and 
improving alerting and major incident management standard processes metrics systems (e.g. Elastic Stack, Graphite, Prometheus, Influx) to diagnose issues and quantify impacts to present to others at varying levels of the organization.</li>\n</ul>\n<ul>\n<li>Experienced in system administration with professional skills in Linux on distributed systems at scale.</li>\n</ul>\n<ul>\n<li>Diagnosed or designed, implemented, and created solutions with the Elastic Stack.</li>\n</ul>\n<ul>\n<li>Thrived in a self-organizing and sharing in a globally distributed team environment.</li>\n</ul>\n<ul>\n<li>Strengthened team members in bringing out the best of each other by uplifting others with coaching and mentoring.</li>\n</ul>\n<p>Compensation:</p>\n<ul>\n<li>This role is eligible to participate in Elastic&#39;s stock program.</li>\n</ul>\n<ul>\n<li>Total rewards package includes a company-matched 401k with dollar-for-dollar matching up to 6% of eligible earnings, along with a range of other benefits offered with a holistic emphasis on employee well-being.</li>\n</ul>\n<ul>\n<li>Typical starting salary range for this role is $154,800-$195,600 USD.</li>\n</ul>","url":"https://yubhub.co/jobs/job_a438f945-411","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Elastic","sameAs":"https://www.elastic.co/","logo":"https://logos.yubhub.co/elastic.co.png"},"x-apply-url":"https://job-boards.greenhouse.io/elastic/jobs/7794016","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$154,800-$195,600 USD","x-skills-required":["Software engineering","Public cloud","Managed Kubernetes services","Infrastructure-as-Code tooling","Containerized services","System administration","Linux on distributed systems"],"x-skills-preferred":["Golang","Crossplane","Terraform","Docker","Elastic 
Stack","Graphite","Prometheus","Influx"],"datePosted":"2026-04-18T15:53:14.287Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"United States"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Software engineering, Public cloud, Managed Kubernetes services, Infrastructure-as-Code tooling, Containerized services, System administration, Linux on distributed systems, Golang, Crossplane, Terraform, Docker, Elastic Stack, Graphite, Prometheus, Influx","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":154800,"maxValue":195600,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_b0e99a49-d99"},"title":"Senior Engineering Manager - Infrastructure","description":"<p>About Us</p>\n<p>We&#39;re looking for an Infrastructure Senior Engineering Manager to help us build a seamless, reliable platform for the dbt platform across AWS, Azure, and GCP.</p>\n<p>Our team&#39;s mission is to create a seamless developer experience by providing a stable, observable, and easy-to-use infrastructure platform. Over the past year, we&#39;ve designed and operationalized a next-gen cell-based architecture, scaling the dbt platform across all three cloud providers. Now, we&#39;re focused on automation, self-service, and improving developer velocity through better tooling, processes, and infrastructure design.</p>\n<p>As a Senior Engineering Manager, you&#39;ll lead your team on infrastructure projects to refine our platform while ensuring performance, reliability, and an excellent developer experience. 
You&#39;ll collaborate across teams, tackle real infrastructure challenges, and help shape the future of the multi-cloud dbt platform.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Build, lead, and coach a team of 8-12 engineers to manage the infrastructure for the dbt platform and report to the Director of Infrastructure</li>\n<li>Empower your team to achieve big goals by giving them product and business context and supporting team ownership of the roadmap, product development lifecycle, and technical excellence</li>\n<li>Dive deep into our product to frame tradeoffs and make decisions about what, how, and when we build</li>\n<li>Partner with Product Marketing, Solutions Architecture, and Customer Support to build delightful migration experiences, helping our customers seamlessly move off legacy deployments</li>\n<li>Coach engineers in product thinking, quality, and software engineering. Build individualized growth plans and match interests and capabilities to team goals</li>\n<li>Work with peer managers to evolve organizational processes like product training, technical decision making, project execution, and planning</li>\n</ul>\n<p><strong>Requirements</strong></p>\n<ul>\n<li>5+ years in people management with a software or infrastructure engineering team</li>\n<li>Experience managing senior individual contributors (Staff+ level)</li>\n<li>Experience supporting a cloud-based infrastructure with complex resource requirements and global deployment strategy</li>\n<li>Deep understanding of Terraform and cloud infrastructure state management</li>\n</ul>\n<p><strong>Nice to Have</strong></p>\n<ul>\n<li>Experience leading teams through all parts of the product development lifecycle</li>\n<li>Have successfully partnered across teams and departments to coordinate cross-cutting initiatives</li>\n<li>You are interested in our mission and values. 
You are inspired to drive progress in the data and analytics ecosystem</li>\n</ul>\n<p><strong>Compensation &amp; Benefits</strong></p>\n<p>Salary: We offer competitive compensation packages commensurate with experience, including salary, equity, and where applicable, performance-based pay. Our Talent Acquisition Team can answer questions around dbt Labs&#39; total rewards during your interview process.</p>\n<p>In select locations (including Boston, Chicago, Denver, Los Angeles, Philadelphia, New York Metro, San Francisco, DC Metro, Seattle, Austin), an alternate range may apply, as specified below.</p>\n<p>The typical starting salary range for this role is: $223,000 - $270,000 USD</p>\n<p>The typical starting salary range for this role in the select locations listed is: $248,000 - $300,000 USD</p>\n<p>Equity Stake Benefits</p>\n<ul>\n<li>dbt Labs offers: unlimited vacation, 401k w/3% guaranteed contribution, excellent healthcare, paid parental leave, wellness stipend, home office stipend, and more!</li>\n</ul>\n<p><strong>Our Hiring Process</strong></p>\n<ul>\n<li>Interview with a Talent Acquisition Partner (30 Mins)</li>\n<li>Technical Interview with Hiring Manager (60 Mins)</li>\n<li>Team Interviews (3 rounds, 45 Mins each)</li>\n<li>Final Values Interview (30 Mins)</li>\n</ul>\n<p>If you’re passionate about building well-designed, high-impact software, we’d love to hear from you!</p>","url":"https://yubhub.co/jobs/job_b0e99a49-d99","directApply":true,"hiringOrganization":{"@type":"Organization","name":"dbt Labs","sameAs":"https://www.getdbt.com/","logo":"https://logos.yubhub.co/getdbt.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/dbtlabsinc/jobs/4686309005","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$223,000 - $270,000 
USD","x-skills-required":["Terraform","Cloud infrastructure state management","People management","Software engineering","Infrastructure engineering"],"x-skills-preferred":["Product development lifecycle","Technical decision making","Project execution","Process improvement"],"datePosted":"2026-04-18T15:53:12.296Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"US - Remote"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Terraform, Cloud infrastructure state management, People management, Software engineering, Infrastructure engineering, Product development lifecycle, Technical decision making, Project execution, Process improvement","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":223000,"maxValue":270000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_b909bcec-9b4"},"title":"Senior Product Manager, Security & Infra","description":"<p>We are seeking a Senior Product Manager, Security &amp; Infra to own the roadmap and delivery of CoreWeave&#39;s workforce security and identity platforms across IT Infrastructure and Operations.</p>\n<p>In this role, you will focus on identity and access management, secure collaboration, endpoint/VDI, and SaaS governance. You will partner closely with IT Engineering, Security, People, and Compliance to make access secure by default, reduce friction in onboarding and offboarding, and improve the overall employee experience.</p>\n<p>About the role:</p>\n<p>You will translate requirements from Security, Compliance, IT, and business stakeholders into clear product outcomes across platforms such as Okta, Opal, VDI, MDM, Google Workspace, Slack, Zoom, and other SaaS applications. 
You will help convert the existing Staff IT Systems Engineer responsibilities into scalable, measurable product capabilities that align with the IT Infra and Ops charter.</p>\n<p>Own the product vision, strategy, and roadmap for IAM and security-aligned IT infrastructure, including Okta, access governance, endpoint management, and VDI, aligned to IT Infra and Ops priorities.</p>\n<p>Act as a primary product partner and escalation point for Engineering and Security on identity, access, and secure remote access, helping prioritize and resolve high-impact issues.</p>\n<p>Define and improve end-to-end joiner, mover, and leaver flows, ensuring user provisioning, de-provisioning, and access changes are secure, automated, and reliable.</p>\n<p>Establish standards for SSO integrations and app onboarding and offboarding in Okta, working with engineers to onboard new applications, deprecate legacy patterns, and maintain a consistent SSO experience.</p>\n<p>Design and roll out access policies, baseline RBAC, and Just-in-Time and time-bound access patterns in partnership with Security, Compliance, and IT Engineering, with clear controls and auditability.</p>\n<p>Own the VDI and secure remote access experience as a product surface, partnering with engineering on performance, reliability, and hardening, and defining SLIs, SLOs, and incident playbooks.</p>\n<p>Collaborate with Security, People, and IT to define automated onboarding and offboarding processes that cover identity, devices, collaboration tools, and privileged access.</p>\n<p>Define requirements for automation, policies, and scripts using vendor APIs to manage company devices and SaaS services such as Google Workspace, Slack, Zoom, and MDM tools.</p>\n<p>Partner with IT Infra and Ops and Finance to support asset and SaaS lifecycle management, including inventory visibility, licensing utilization, and access governance for endpoints and applications.</p>\n<p>Ensure documentation, runbooks, and training material for 
IAM, VDI, and related security and IT processes are accurate, accessible, and kept up to date for internal employees and IT staff.</p>\n<p>Build strong relationships with IT Engineering, Security, Compliance, People, and business stakeholders, with clear intake, prioritization, and communication around roadmap, tradeoffs, and delivery.</p>\n<p>Define and track key metrics such as time-to-access for new hires, reduction in access-related tickets, SSO coverage, QAR completion, and control failures, and use these metrics to drive prioritization.</p>\n<p>Who You Are:</p>\n<p>7+ years of experience in IT product management, technical program management, or systems engineering roles focused on SaaS, identity, security, or endpoint platforms.</p>\n<p>Practical experience with administration or close partnership on platforms such as Okta (Identity Engine and Identity Governance), Google Workspace, Slack, Zoom, MDM or endpoint management tools, and VDI solutions.</p>\n<p>Strong understanding of identity and access management concepts and practices, including SSO, federation, RBAC, joiner and leaver lifecycle, privileged access, and quarterly access reviews.</p>\n<p>Experience collaborating with or supporting Microsoft, Linux, and Mac user environments, especially for secure access, device management, and productivity tooling.</p>\n<p>Demonstrated experience defining and driving automation workflows and integrations that remove manual IT and security tasks, such as access provisioning, group management, and device or SaaS lifecycle actions.</p>\n<p>Familiarity with CI or CD tools and Git, and the ability to work closely with engineers on configuration, releases, and environment management.</p>\n<p>Comfort working with at least one programming or scripting language, such as Python or Go, at a level sufficient to understand implementation options and tradeoffs.</p>\n<p>Experience partnering with Security and Compliance on access controls, ITGCs, and IT application controls for 
frameworks such as SOX or SOC 2, including control design and evidence collection.</p>\n<p>Strong written and verbal communication skills, with the ability to translate technical and security concepts into clear product narratives and decisions for both technical and non-technical audiences.</p>\n<p>Preferred:</p>\n<p>Experience in high-growth, cloud-native, or security-sensitive environments where identity, access, and endpoint posture are tightly linked to customer and compliance expectations.</p>\n<p>Prior ownership of IAM or security products such as Okta, Opal, CyberArk, VDI, or device management platforms as a product manager or TPM.</p>\n<p>Familiarity with automation and infrastructure-as-code tooling such as Terraform, Ansible, Chef, Intune, Jamf, or similar technologies.</p>\n<p>Experience contributing to or leading M&amp;A-related migrations involving identity consolidation, SSO rationalization, or access model redesign.</p>\n<p>Exposure to AI or agent-based IT support models and interest in using them to reduce IT ticket volume and improve time to resolution.</p>\n<p>Why CoreWeave?</p>\n<p>At CoreWeave, we work hard, have fun, and move fast! We’re in an exciting stage of hyper-growth that you will not want to miss out on. We’re not afraid of a little chaos, and we’re constantly learning. Our team cares deeply about how we build our product and how we work together, which is represented through our core values:</p>\n<p>Be Curious at Your Core</p>\n<p>Act Like an Owner</p>\n<p>Empower Employees</p>\n<p>Deliver Best-in-Class Client Experiences</p>\n<p>Achieve More Together</p>\n<p>We support and encourage an entrepreneurial outlook and independent thinking. We foster an environment that encourages collaboration and provides the opportunity to develop innovative solutions to complex problems. As we get set for takeoff, the growth opportunities within the organization are constantly expanding. 
You will be surrounded by some of the best talent in the industry, who will want to learn from you, too. Come join us!</p>\n<p>The base salary range for this role is $143,000 to $210,000. The starting salary will be determined based on job-related knowledge, skills, experience, and market location. We strive for both market alignment and internal equity when determining compensation. In addition to base salary, our total rewards package includes a discretionary bonus, equity awards, and a comprehensive benefits program (all based on eligibility).</p>\n<p>What We Offer</p>\n<p>The range we’ve posted represents the typical compensation range for this role. To determine actual compensation, we review the market rate for each candidate which can include a variety of factors. These include qualifications, experience, interview performance, and market conditions.</p>","url":"https://yubhub.co/jobs/job_b909bcec-9b4","directApply":true,"hiringOrganization":{"@type":"Organization","name":"CoreWeave","sameAs":"https://www.coreweave.com","logo":"https://logos.yubhub.co/coreweave.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/coreweave/jobs/4652655006","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$143,000 to $210,000","x-skills-required":["Okta","Google Workspace","Slack","Zoom","MDM","endpoint management tools","VDI solutions","CI or CD tools","Git","Python","Go","SOX","SOC 2","access controls","ITGCs","IT application controls"],"x-skills-preferred":["Terraform","Ansible","Chef","Intune","Jamf","CyberArk","Opal","device management platforms"],"datePosted":"2026-04-18T15:53:00.682Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Livingston, NJ / New York, NY / Sunnyvale, CA / Bellevue, WA/San Francisco, 
CA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Okta, Google Workspace, Slack, Zoom, MDM, endpoint management tools, VDI solutions, CI or CD tools, Git, Python, Go, SOX, SOC 2, access controls, ITGCs, IT application controls, Terraform, Ansible, Chef, Intune, Jamf, CyberArk, Opal, device management platforms","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":143000,"maxValue":210000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_8f734b63-903"},"title":"Application Security and Performance Consultant","description":"<p>About Us</p>\n<p>At Cloudflare, we are on a mission to help build a better Internet. Today the company runs one of the world’s largest networks that powers millions of websites and other Internet properties for customers ranging from individual bloggers to SMBs to Fortune 500 companies.</p>\n<p>We are looking for an Application Security and Performance Consultant to join our Professional Services team. 
As a Consultant, you will be responsible for the technical execution of solutions and guidance to our customers to get the most value possible from their Cloudflare investment.</p>\n<p>Responsibilities</p>\n<ul>\n<li>Plan and deliver timely and organized services for customers, ensure customers see the full value in Cloudflare’s products and advise on product best practices</li>\n<li>Gather business and technical requirements, use cases and any other information required to build, migrate and deliver a solution on behalf of the customer and transition the Cloudflare working environment to the customer</li>\n<li>Produce a Solution Design, HLD, LLD, databuilds, procedures, scripts, test plans, drawings, deployment plan, migration plan, as-builts, and any other artifacts necessary to deliver the solution and transition smoothly into the customer’s technical teams</li>\n<li>Implement changes on behalf of the customer in the Cloudflare environment following the customer’s change management process</li>\n<li>Provide guidance to the customer to configure their CPEs and integration points</li>\n<li>Troubleshoot implementation issues and collaborate with Customer Support, Engineering and other teams to assist technical escalations</li>\n</ul>\n<p>Requirements</p>\n<ul>\n<li>5+ years of experience in a customer facing position as a Customer Engineer, Security Engineer, Implementation Engineer, Onboarding Engineer, Consultant, Technical Support Engineer, Solutions Architect, and/or Systems Engineer</li>\n<li>Layers and protocols of the OSI model, such as TCP/IP, TLS, DNS, HTTP</li>\n<li>Understanding of Internet and Security technologies such as DDoS, Web Application Firewall, Certificates, DNS, CDN, Analytics and Logs</li>\n<li>Demonstrated experience with a scripting language (e.g. 
Python, JavaScript, Bash) and a desire to expand those skills</li>\n<li>Demonstrated experience with security aspects of an internet property, such as DNS, WAFs, Bot Management, Rate Limiting, (M)TLS and certificates</li>\n<li>Performance aspects of an internet property, such as Speed, Latency, Caching, HTTP/3, TLSv1.3</li>\n<li>Working experience with infrastructure as code tools like Terraform</li>\n<li>Strong experience with APIs</li>\n<li>Ability to manage a project, work to deadlines, prioritize between competing demands and manage uncertainty</li>\n</ul>\n<p>What Makes Cloudflare Special?</p>\n<p>We are not just a highly ambitious, large-scale technology company. We are a highly ambitious, large-scale technology company with a soul. Fundamental to our mission to help build a better Internet is protecting the free and open Internet.</p>\n<p>Project Galileo: Since 2014, we&#39;ve equipped more than 2,400 journalism and civil society organizations in 111 countries with powerful tools to defend themselves against attacks that would otherwise censor their work, technology already used by Cloudflare’s enterprise customers--at no cost.</p>\n<p>Athenian Project: In 2017, we created the Athenian Project to ensure that state and local governments have the highest level of protection and reliability for free, so that their constituents have access to election information and voter registration. Since the project, we&#39;ve provided services to more than 425 local government election websites in 33 states.</p>\n<p>1.1.1.1: We released 1.1.1.1 to help fix the foundation of the Internet by building a faster, more secure and privacy-centric public DNS resolver. This is available publicly for everyone to use - it is the first consumer-focused service Cloudflare has ever released.</p>\n<p>Sound like something you’d like to be a part of? We’d love to hear from you!</p>\n<p>This position may require access to information protected under U.S. 
export control laws, including the U.S. Export Administration Regulations. Please note that any offer of employment may be conditioned on your authorization to receive software or technology controlled under these U.S. export laws without sponsorship for an export license.</p>\n<p>Cloudflare is proud to be an equal opportunity employer. We are committed to providing equal employment opportunity for all people and place great value in both diversity and inclusiveness. All qualified applicants will be considered for employment without regard to their, or any other person&#39;s, perceived or actual race, color, religion, sex, gender, gender identity, gender expression, sexual orientation, national origin, ancestry, citizenship, age, physical or mental disability, medical condition, family care status, or any other basis protected by law.</p>","url":"https://yubhub.co/jobs/job_8f734b63-903","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Cloudflare","sameAs":"https://www.cloudflare.com/","logo":"https://logos.yubhub.co/cloudflare.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/cloudflare/jobs/6366748","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["TCP/IP","TLS","DNS","HTTP","DDoS","Web Application Firewall","Certificates","CDN","Analytics and Logs","Python","JavaScript","Bash","WAFs","Bot 
Management","Rate Limiting","(M)TLS and certificates","Speed","Latency","Caching","HTTP/3","TLSv1.3","Terraform","APIs"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:52:57.844Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Distributed; Hybrid"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"TCP/IP, TLS, DNS, HTTP, DDoS, Web Application Firewall, Certificates, CDN, Analytics and Logs, Python, JavaScript, Bash, WAFs, Bot Management, Rate Limiting, (M)TLS and certificates, Speed, Latency, Caching, HTTP/3, TLSv1.3, Terraform, APIs"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_1aad838f-387"},"title":"Staff+ Software Engineer, Data Infrastructure","description":"<p>We&#39;re looking for infrastructure engineers who thrive working at the intersection of data systems, security, and scalability. You&#39;ll tackle diverse challenges ranging from building financial reporting pipelines to architecting access control systems to ensuring cloud storage reliability.</p>\n<p>Within Data Infra, you may be matched to critical business areas including:</p>\n<ul>\n<li>Data Governance &amp; Access Control: Design and implement robust access control systems ensuring only authorized users can access sensitive data.</li>\n<li>Financial Data Infrastructure: Build and maintain data pipelines and warehouses powering business-critical reporting.</li>\n<li>Cloud Storage &amp; Reliability: Architect disaster recovery, backup, and replication systems for petabyte-scale data.</li>\n<li>Data Platform &amp; Tooling: Scale data processing infrastructure using technologies like BigQuery, BigTable, Airflow, dbt, and Spark.</li>\n</ul>\n<p>You&#39;ll work directly with data scientists, analysts, and business stakeholders while diving deep into cloud infrastructure primitives.</p>\n<p>To be successful in this role, you&#39;ll 
need:</p>\n<ul>\n<li>10+ years of experience in a Software Engineer role, building data infrastructure, storage systems, or related distributed systems.</li>\n<li>3+ years of experience leading large scale, complex projects or teams as an engineer or tech lead.</li>\n<li>Deep experience with at least one of:</li>\n<li>Strong proficiency in programming languages like Python, Go, Java, or similar.</li>\n<li>Experience with infrastructure-as-code (Terraform, Pulumi) and cloud platforms (GCP, AWS).</li>\n<li>Can navigate complex technical tradeoffs between performance, cost, security, and maintainability.</li>\n<li>Have excellent collaboration skills - you work well with both technical and non-technical stakeholders.</li>\n</ul>\n<p>Strong candidates may also have:</p>\n<ul>\n<li>Background in data warehousing, ETL/ELT pipelines, or analytics infrastructure.</li>\n<li>Experience with Kubernetes, containerization, and cloud-native architectures.</li>\n<li>Track record of improving data reliability, availability, or cost efficiency at scale.</li>\n<li>Knowledge of column-oriented databases, OLAP systems, or big data processing frameworks.</li>\n<li>Experience working in fintech, financial services, or highly regulated environments.</li>\n<li>Security engineering background with focus on data protection and access controls.</li>\n</ul>\n<p>Technologies We Use:</p>\n<ul>\n<li>Data: BigQuery, BigTable, Airflow, Cloud Composer, dbt, Spark, Segment, Fivetran.</li>\n<li>Storage: GCS, S3.</li>\n<li>Infrastructure: Terraform, Kubernetes, GCP, AWS.</li>\n<li>Languages: Python, Go, SQL.</li>\n</ul>\n<p>The annual compensation range for this role is $405,000-$485,000 USD.</p>","url":"https://yubhub.co/jobs/job_1aad838f-387","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com/","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5114768008","x-work-arrangement":"hybrid","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"$405,000-$485,000 USD","x-skills-required":["Python","Go","Java","Terraform","Pulumi","GCP","AWS","BigQuery","BigTable","Airflow","dbt","Spark","Segment","Fivetran","GCS","S3","Kubernetes","containerization","cloud-native architectures"],"x-skills-preferred":["data warehousing","ETL/ELT pipelines","analytics infrastructure","data reliability","availability","cost efficiency","column-oriented databases","OLAP systems","big data processing frameworks","fintech","financial services","highly regulated environments","security engineering","data protection","access controls"],"datePosted":"2026-04-18T15:52:47.297Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA | Seattle, WA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, Go, Java, Terraform, Pulumi, GCP, AWS, BigQuery, BigTable, Airflow, dbt, Spark, Segment, Fivetran, GCS, S3, Kubernetes, containerization, cloud-native architectures, data warehousing, ETL/ELT pipelines, analytics infrastructure, data reliability, availability, cost efficiency, column-oriented databases, OLAP systems, big data processing frameworks, fintech, financial services, highly regulated environments, security engineering, data protection, access 
controls","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":405000,"maxValue":485000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_78ab6fa5-133"},"title":"Staff Security Engineer, Defensive Cyber Engineering","description":"<p>Secure Every Identity, from AI to Human. Identity is the key to unlocking the potential of AI. Okta secures AI by building the trusted, neutral infrastructure that enables organisations to safely embrace this new era.</p>\n<p>This work requires a relentless drive to solve complex challenges with real-world stakes. We are looking for builders and owners who operate with speed and urgency and execute with excellence. This is an opportunity to do career-defining work. We&#39;re all in on this mission. If you are too, let&#39;s talk.</p>\n<p>Join Okta’s Defensive Cyber Engineering team as a Staff Engineer responsible for safeguarding Okta’s environments. You’ll work closely with the Security, Business Technology Engineering and Product teams to implement and manage security solutions and ensure that core infrastructure applications are protecting our workforce, endpoints, and corporate data.</p>\n<p>A strong desire to make tools and people work together to solve complex security problems is central to this role. This mandates an engineering-first approach: maximising the utility of existing security tools before strategically building or buying new solutions to address any remaining security gaps.</p>\n<p>To execute this vision, you will combine your enterprise security expertise with your hands-on engineering skills, leveraging automation, policy-as-code, and cloud-native technologies to deliver scalable, resilient, and secure solutions. 
Your work will ultimately set standards for security best practices across the organisation and influence the architecture of business-critical systems.</p>\n<p>What you bring:</p>\n<ul>\n<li>Hands-on experience with enterprise security tools such as Okta, Crowdstrike and Palo Alto suite covering EDR (Endpoint Detection and Response), CASB (Cloud Access Security Broker), DLP (Data Loss Prevention), MDM (Mobile Device Management), SASE (Secure Access Service Edge), and SSPM (SaaS Security Posture Management) capabilities.</li>\n</ul>\n<ul>\n<li>Strong coding and scripting skills are required for building automation and custom tooling. Python experience is preferred, but proficiency in other languages (e.g., Bash, PowerShell, Go) is a plus.</li>\n</ul>\n<ul>\n<li>Proven track record automating security controls and workflows using a cloud-first approach</li>\n</ul>\n<ul>\n<li>Experience with Terraform and other infrastructure-as-code tools to orchestrate security infrastructure</li>\n</ul>\n<ul>\n<li>Familiarity with CI/CD pipelines for security automation and drift management</li>\n</ul>\n<ul>\n<li>Strong communication skills across technical staff, support teams, executive leadership, and external vendors.</li>\n</ul>\n<p>What you’ll be doing:</p>\n<ul>\n<li>Serve as a security subject matter expert (SME) for solution engineering, architecture reviews, security assessment, and vulnerability mitigation</li>\n</ul>\n<ul>\n<li>Lead technical efforts evaluating, designing, and implementing new enterprise security systems and feature enhancements</li>\n</ul>\n<ul>\n<li>Build, maintain, and enhance custom automation and cloud infrastructure using Terraform or similar tools to support team workflows and the enforcement of security controls</li>\n</ul>\n<ul>\n<li>Develop integrations with APIs, cloud platforms (AWS, GCP, Azure), and security infrastructure to improve detection, response, and remediation</li>\n</ul>\n<ul>\n<li>Collaborate with cross-functional teams to tackle 
global technology and security challenges</li>\n</ul>\n<ul>\n<li>Write and maintain scripts and automation to streamline security operations, with an emphasis on Python-based solutions</li>\n</ul>\n<ul>\n<li>Establish monitoring and alerting for security posture, misconfigurations, and threats across endpoints, SaaS, and cloud workloads</li>\n</ul>\n<ul>\n<li>Proactively identify and remediate security gaps; stay updated on emerging threats, solutions, and tooling across the industry</li>\n</ul>\n<p>And extra credit if you have experience in any of the following!</p>\n<ul>\n<li>Working with advanced identity management technologies (MFA, SAML, OAuth, OIDC, WebAuthn)</li>\n</ul>\n<ul>\n<li>Deep understanding of Okta&#39;s ecosystem, including advanced configuration and integrations</li>\n</ul>\n<ul>\n<li>Experience with continuous compliance solutions (e.g., policy-as-code, automated evidence gathering)</li>\n</ul>\n<p>What you can look forward to as a Full-Time Okta employee!</p>\n<p>World-class benefits, flexibility, and growth opportunities</p>\n<p>The chance to shape the security posture of a global leader in identity</p>\n<p>Opportunities to make a social impact through technology and innovation</p>\n<p>Ready to join Okta and make security the foundation of our innovation? 
Apply today!</p>","url":"https://yubhub.co/jobs/job_78ab6fa5-133","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Okta","sameAs":"https://www.okta.com/","logo":"https://logos.yubhub.co/okta.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/okta/jobs/7476261","x-work-arrangement":"hybrid","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"$141,000-$211,000 CAD","x-skills-required":["Enterprise security tools","Okta","Crowdstrike","Palo Alto suite","EDR","CASB","DLP","MDM","SASE","SSPM","Python","Bash","PowerShell","Go","Terraform","Infrastructure-as-code tools","CI/CD pipelines","Security automation","Drift management"],"x-skills-preferred":["Advanced identity management technologies","Okta's ecosystem","Continuous compliance solutions"],"datePosted":"2026-04-18T15:52:38.855Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Toronto, Ontario, Canada; Vancouver, Canada"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Enterprise security tools, Okta, Crowdstrike, Palo Alto suite, EDR, CASB, DLP, MDM, SASE, SSPM, Python, Bash, PowerShell, Go, Terraform, Infrastructure-as-code tools, CI/CD pipelines, Security automation, Drift management, Advanced identity management technologies, Okta's ecosystem, Continuous compliance solutions","baseSalary":{"@type":"MonetaryAmount","currency":"CAD","value":{"@type":"QuantitativeValue","minValue":141000,"maxValue":211000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_62d39826-7f4"},"title":"Senior Product Acceleration Specialist","description":"<p>Secure Every Identity, from AI to Human. Identity is the key to unlocking the potential of AI. 
Okta secures AI by building the trusted, neutral infrastructure that enables organisations to safely embrace this new era.</p>\n<p>As a Senior Product Acceleration Specialist, you will play a critical role in driving Okta&#39;s success through the deployment and implementation of new products. You will serve as a trusted advisor to our Product and Go-To-Market (GTM) Field teams and customers, empowering them to unlock the full potential of our solutions.</p>\n<p>We are looking for an experienced, enthusiastic, and hands-on technical leader who has deep experience in the Okta platform and the broader Identity industry. You will be instrumental in driving the design, development, and optimisation of our products to deliver unparalleled customer experiences.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Collaborate closely with Okta stakeholders, including internal teams, customers, and external partners, to gather requirements, analyse needs, and document solutions.</li>\n<li>Provide clear and concise technical communication to stakeholders at various levels, including functional and technical requirements.</li>\n<li>Serve as a deep technical expert on Identity and Access Management, providing actionable recommendations to Product Management teams on integration concepts, feature gaps, and market opportunities.</li>\n<li>Design, build, and maintain the cutting-edge lab environment that powers all testing, solution validation, and high-impact product demonstrations.</li>\n<li>Deliver hands-on training and mentoring to customers and Okta Field teams on product features and functionality.</li>\n<li>Act as a bridge between customer needs and Product Management to inform product development and roadmap priorities.</li>\n<li>Lead cross-functional collaboration with internal teams such as Product Management, Engineering, Presales, Sales, Professional Services (PS), Enablement, and others to address complex issues and drive solutions.</li>\n<li>Engage with senior management and 
other stakeholders within Okta to drive strategic initiatives and partnerships.</li>\n<li>Conduct hands-on technical implementation, troubleshooting, and testing of products to ensure high-quality delivery.</li>\n<li>Manage multiple concurrent release programs simultaneously, prioritising tasks and resources effectively.</li>\n<li>Support a global, distributed team across multiple time zones by providing flexible working arrangements and regular communication.</li>\n<li>Mentor and guide less experienced team members, sharing expertise and best practices to drive knowledge sharing and growth within the team.</li>\n<li>Advocate for product adoption and usage through thought leadership content creation (e.g. blogs, videos, articles) and community engagement (e.g. forums).</li>\n</ul>\n<p>Qualifications:</p>\n<ul>\n<li>Highly motivated, experienced, and self-driven professional with a strong background in cyber security and identity and access management, looking for a technical role with a focus on delivering new product innovations to external customers.</li>\n<li>Strong background in identity and access management with expertise in various Okta product offerings</li>\n<li>Proven experience in cyber security, including Privileged Access Management (PAM), Identity Governance and Administration (IGA), Identity Threat Detection and Response (ITDR), and Security Posture Management (SPM)</li>\n<li>Experience with cyber security frameworks and standards (e.g., NIST, PCI-DSS)</li>\n<li>Knowledge of enterprise web technologies, cloud architectures, and complex IT landscapes</li>\n<li>Expert-level experience with Microsoft Active Directory, including Certificate Services (AD CS) and Federated Services (ADFS).</li>\n<li>Strong understanding of identity federation and user management protocols; SAML 2.0, WS-Federation, OAuth, OpenID Connect, SCIM</li>\n<li>Expertise in implementing robust access control models using RBAC, ABAC, IBAC, GBAC, and SOD</li>\n<li>Proficiency in 
governance frameworks such as HIPAA, PCI-DSS, or GDPR</li>\n<li>Experience installing, configuring, and managing server and desktop operating systems (Windows, Linux, macOS).</li>\n<li>Proven ability to automate infrastructure using scripting (e.g., PowerShell, Python) or IaC tools (e.g., Terraform).</li>\n<li>Experience with modern Endpoint Management systems (e.g., Microsoft Intune, Jamf, VMware Workspace One).</li>\n<li>Strong foundation in networking (on-prem and cloud) and experience managing virtualized environments</li>\n</ul>\n<p>Experience Level: Senior Employment Type: Full-time Workplace Type: Hybrid Category: Engineering Industry: Technology Salary Range: Not specified Required Skills:</p>\n<ul>\n<li>Identity and Access Management</li>\n<li>Cyber Security</li>\n<li>Microsoft Active Directory</li>\n<li>SAML 2.0</li>\n<li>WS-Federation</li>\n<li>OAuth</li>\n<li>OpenID Connect</li>\n<li>SCIM</li>\n<li>RBAC</li>\n<li>ABAC</li>\n<li>IBAC</li>\n<li>GBAC</li>\n<li>SOD</li>\n<li>HIPAA</li>\n<li>PCI-DSS</li>\n<li>GDPR</li>\n<li>PowerShell</li>\n<li>Python</li>\n<li>Terraform</li>\n<li>Microsoft Intune</li>\n<li>Jamf</li>\n<li>VMware Workspace One</li>\n</ul>\n<p>Preferred Skills:</p>\n<ul>\n<li>Cloud Architectures</li>\n<li>Complex IT Landscapes</li>\n<li>Enterprise Web Technologies</li>\n</ul>","url":"https://yubhub.co/jobs/job_62d39826-7f4","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Okta","sameAs":"https://www.okta.com/","logo":"https://logos.yubhub.co/okta.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/okta/jobs/7557879","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Identity and Access Management","Cyber Security","Microsoft Active Directory","SAML 2.0","WS-Federation","OAuth","OpenID 
Connect","SCIM","RBAC","ABAC","IBAC","GBAC","SOD","HIPAA","PCI-DSS","GDPR","PowerShell","Python","Terraform","Microsoft Intune","Jamf","VMware Workspace One"],"x-skills-preferred":["Cloud Architectures","Complex IT Landscapes","Enterprise Web Technologies"],"datePosted":"2026-04-18T15:52:25.738Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Bengaluru, India"}},"employmentType":"FULL_TIME","occupationalCategory":"engineering","industry":"technology","skills":"Identity and Access Management, Cyber Security, Microsoft Active Directory, SAML 2.0, WS-Federation, OAuth, OpenID Connect, SCIM, RBAC, ABAC, IBAC, GBAC, SOD, HIPAA, PCI-DSS, GDPR, PowerShell, Python, Terraform, Microsoft Intune, Jamf, VMware Workspace One, Cloud Architectures, Complex IT Landscapes, Enterprise Web Technologies"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_766c8fe9-693"},"title":"IT Operations Specialist","description":"<p>CoreWeave is seeking an IT Operations Specialist to play a key role in supporting and scaling its internal IT environment. As an IT Operations Specialist, you will blend hands-on end-user and systems support with automation, platform ownership, and process improvement. You will work daily across identity, endpoints, SaaS platforms, and office infrastructure, while contributing to repeatable, scalable solutions that support a growing, distributed workforce.</p>\n<p>This role requires strong technical depth, sound operational judgment, and comfort operating in a fast-moving environment. 
You will work closely with Security, Systems Engineering, People Ops, and Engineering to support the full employee lifecycle while continuously improving reliability, automation, and operational maturity.</p>\n<p>Key responsibilities include administering identity and access management platforms, managing macOS and Windows endpoints, administering ITSM platforms, and troubleshooting across SaaS, endpoint, identity, and network layers. You will also create and maintain technical documentation for systems and operational procedures.</p>\n<p>In addition to a competitive salary, we offer a variety of benefits to support your needs, including medical, dental, and vision insurance, company-paid life insurance, voluntary supplemental life insurance, short and long-term disability insurance, flexible spending account, health savings account, tuition reimbursement, ability to participate in employee stock purchase program (ESPP), mental wellness benefits through Spring Health, family-forming support provided by Carrot, paid parental leave, flexible PTO, catered lunch each day in our office and data center locations, and a casual work environment.</p>","url":"https://yubhub.co/jobs/job_766c8fe9-693","directApply":true,"hiringOrganization":{"@type":"Organization","name":"CoreWeave","sameAs":"https://www.coreweave.com","logo":"https://logos.yubhub.co/coreweave.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/coreweave/jobs/4664227006","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":"$98,000 to $130,000","x-skills-required":["identity and access management platforms","macOS and Windows endpoints","ITSM platforms","troubleshooting across SaaS, endpoint, identity, and network layers","technical documentation for systems and operational procedures","scripting experience in Python, Bash, or 
PowerShell","familiarity with Terraform or other infrastructure-as-code tools for automation"],"x-skills-preferred":["Kubernetes-based or containerized environments","compliance frameworks such as SOC 2 or ISO 27001","integrating SaaS platforms via APIs or automation tooling","office network topology, hardware, and physical infrastructure","high-growth startup or scale-up environments"],"datePosted":"2026-04-18T15:52:14.496Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Dallas, TX"}},"employmentType":"FULL_TIME","occupationalCategory":"IT","industry":"Technology","skills":"identity and access management platforms, macOS and Windows endpoints, ITSM platforms, troubleshooting across SaaS, endpoint, identity, and network layers, technical documentation for systems and operational procedures, scripting experience in Python, Bash, or PowerShell, familiarity with Terraform or other infrastructure-as-code tools for automation, Kubernetes-based or containerized environments, compliance frameworks such as SOC 2 or ISO 27001, integrating SaaS platforms via APIs or automation tooling, office network topology, hardware, and physical infrastructure, high-growth startup or scale-up environments","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":98000,"maxValue":130000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_fa9a54d7-549"},"title":"Senior Site Reliability Engineer, Data Infrastructure","description":"<p>As a Senior Site Reliability Engineer, you will own the reliability and performance of our Kubernetes-based data platform. You will design and operate highly available, multi-region systems, ensuring our services meet strict uptime and latency targets.</p>\n<p>Day-to-day, you’ll work on scaling infrastructure, improving deployment pipelines, and hardening our security posture. 
You’ll play a key role in evolving our DevSecOps practices while partnering closely with engineering teams to ensure services are built for reliability from day one.</p>\n<p>We operate with production-grade discipline, supporting mission-critical services with stringent uptime requirements and a focus on automation, observability, and resilience.</p>\n<p>The Platform &amp; Infrastructure Engineering team in the Data Infrastructure organization is responsible for the reliability, scalability, and security of the company’s data platform. The team builds and operates the foundational systems that power data ingestion, transformation, analytics, and internal AI workloads at scale.</p>\n<p>About the role:</p>\n<ul>\n<li>5+ years of experience in Site Reliability Engineering, Platform Engineering, or Infrastructure Engineering roles</li>\n<li>Deep expertise in Kubernetes and containerized software services, including cluster design, operations, and troubleshooting in production environments</li>\n<li>Strong experience building and operating CI/CD systems, including tools such as Argo CD and GitHub Actions</li>\n<li>Proven experience owning production systems with high availability requirements (≥99.99% uptime), including incident response, SLI/SLO/SLA definition, error budgets, and postmortems</li>\n<li>Hands-on experience designing and operating geo-replicated, multi-region, active-active systems, including traffic routing, failover strategies, and data consistency tradeoffs</li>\n<li>Strong experience building and owning observability components, including metrics, logging, and tracing (e.g., Prometheus, Grafana, OpenTelemetry).</li>\n<li>Experience with infrastructure as code (e.g., Helm, Terraform, Pulumi) and automated environment provisioning</li>\n<li>Strong understanding of system performance tuning, capacity planning, and resource optimization in distributed systems</li>\n<li>Experience implementing and operating security best practices in cloud-native 
environments (e.g., secrets management, network policies, vulnerability scanning)</li>\n</ul>\n<p>Preferred:</p>\n<ul>\n<li>Experience operating data platforms or data-intensive workloads (e.g., Spark, Airflow, Kafka, Flink)</li>\n<li>Familiarity with service mesh technologies (e.g., Istio, Linkerd)</li>\n<li>Experience working in regulated environments with compliance frameworks such as GDPR, SOC 2, HIPAA, or SOX</li>\n<li>Background in building internal developer platforms or self-service infrastructure</li>\n</ul>\n<p>Wondering if you’re a good fit?</p>\n<p>We believe in investing in our people, and value candidates who can bring their own diversified experiences to our teams – even if you aren’t a 100% skill or experience match.</p>\n<p>Here are a few qualities we’ve found compatible with our team. If some of this describes you, we’d love to talk.</p>\n<ul>\n<li>You love building highly reliable systems that operate at scale</li>\n<li>You’re curious about how to continuously improve system resilience, security, and operations</li>\n<li>You’re an expert in diagnosing and solving complex distributed systems problems</li>\n</ul>\n<p>Why CoreWeave?</p>\n<p>At CoreWeave, we work hard, have fun, and move fast! We’re in an exciting stage of hyper-growth that you will not want to miss out on. We’re not afraid of a little chaos, and we’re constantly learning.</p>\n<p>Our team cares deeply about how we build our product and how we work together, which is represented through our core values:</p>\n<ul>\n<li>Be Curious at Your Core</li>\n<li>Act Like an Owner</li>\n<li>Empower Employees</li>\n<li>Deliver Best-in-Class Client Experiences</li>\n<li>Achieve More Together</li>\n</ul>\n<p>We support and encourage an entrepreneurial outlook and independent thinking. 
We foster an environment that encourages collaboration and provides the opportunity to develop innovative solutions to complex problems.</p>\n<p>As we get set for takeoff, the growth opportunities within the organization are constantly expanding. You will be surrounded by some of the best talent in the industry, who will want to learn from you, too.</p>\n<p>Come join us!</p>\n<p>The base salary range for this role is $165,000 to $242,000. The starting salary will be determined based on job-related knowledge, skills, experience, and market location. We strive for both market alignment and internal equity when determining compensation.</p>\n<p>In addition to base salary, our total rewards package includes a discretionary bonus, equity awards, and a comprehensive benefits program (all based on eligibility).</p>\n<p>What We Offer</p>\n<p>The range we’ve posted represents the typical compensation range for this role. To determine actual compensation, we review the market rate for each candidate, which can include a variety of factors. 
These include qualifications, experience, interview performance, and location.</p>\n<p>In addition to a competitive salary, we offer a variety of benefits to support your needs, including:</p>\n<ul>\n<li>Medical, dental, and vision insurance</li>\n<li>100% paid for by CoreWeave</li>\n<li>Company-paid Life Insurance</li>\n<li>Voluntary supplemental life insurance</li>\n<li>Short and long-term disability insurance</li>\n<li>Flexible Spending Account</li>\n<li>Health Savings Account</li>\n<li>Tuition Reimbursement</li>\n<li>Ability to Participate in Employee Stock Purchase Program (ESPP)</li>\n<li>Mental Wellness Benefits through Spring Health</li>\n<li>Family-Forming support provided by Carrot</li>\n<li>Paid Parental Leave</li>\n<li>Flexible, full-service childcare support with Kinside</li>\n<li>401(k) with a generous employer match</li>\n<li>Flexible PTO</li>\n<li>Catered lunch each day in our office and data center locations</li>\n<li>A casual work environment</li>\n<li>A work culture focused on innovative disruption</li>\n</ul>\n<p>Our Workplace</p>\n<p>While we prioritize a hybrid work environment, remote work may be considered for candidates located more than 30 miles from an office, based on role requirements for specialized skill sets.</p>\n<p>New hires will be invited to attend onboarding at one of our hubs within their first month.</p>\n<p>Teams also gather quarterly to support collaboration.</p>\n<p>California Consumer Privacy Act - California applicants only</p>\n<p>CoreWeave is an equal opportunity employer, committed to fostering an inclusive and supportive workplace.</p>\n<p>All qualified applicants and candidates will receive consideration for employment without regard to race, color, religion, sex, disability, age, sexual orientation, gender identity, national origin, veteran status, or genetic information.</p>\n<p>As part of this commitment and consistent with the Americans with Disabilities Act (ADA), CoreWeave will ensure that qualified applicants 
and candidates with disabilities are provided reasonable accommodations for the hiring process, unless such accommodation would cause an undue hardship.</p>\n<p>If reasonable accommodation is needed, please contact: careers@coreweave.com.</p>\n<p>Export Control Compliance</p>\n<p>This position requires access to export controlled information.</p>\n<p>To conform to U.S. Government export regulations applicable to that information, applicant must either be (A) a U.S. person, defined as a (i) U.S. citizen or national, (ii) U.S. lawful permanent resident (green card holder), (iii) refugee under 8 U.S.C. § 1157, or (iv) asylee under 8 U.S.C. § 1158, (B) eligible to access the export controlled information without restrictions, or (C) otherwise exempt from the export regulations.</p>\n<p>If you are not a U.S. person, you will be required to provide documentation of your eligibility to access the export controlled information before being considered for this position.</p>\n<p>Please note that CoreWeave is subject to the requirements of the U.S. Department of Commerce&#39;s Export Administration Regulations (EAR) and the U.S. 
Department of State&#39;s International Traffic in Arms Regulations (ITAR).</p>\n<p>By applying for this position, you acknowledge that you have read and understood the export control requirements and that you will comply with them.</p>\n<p>If you have any questions or concerns regarding the export control requirements, please contact: careers@coreweave.com.</p>","url":"https://yubhub.co/jobs/job_fa9a54d7-549","directApply":true,"hiringOrganization":{"@type":"Organization","name":"CoreWeave","sameAs":"https://www.coreweave.com","logo":"https://logos.yubhub.co/coreweave.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/coreweave/jobs/4671535006","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$165,000 to $242,000","x-skills-required":["Kubernetes","containerized software services","cluster design","operations","troubleshooting","CI/CD systems","Argo CD","GitHub Actions","production systems","high availability","incident response","SLI/SLO/SLA definition","error budgets","postmortems","geo-replicated","multi-region","active-active systems","traffic routing","failover strategies","data consistency tradeoffs","observability components","metrics","logging","tracing","Prometheus","Grafana","OpenTelemetry","infrastructure as code","Helm","Terraform","Pulumi","automated environment provisioning","system performance tuning","capacity planning","resource optimization","distributed systems","security best practices","cloud-native environments","secrets management","network policies","vulnerability scanning"],"x-skills-preferred":["Spark","Airflow","Kafka","Flink","service mesh technologies","Istio","Linkerd","regulated environments","compliance frameworks","GDPR","SOC 2","HIPAA","SOX","internal developer platforms","self-service 
infrastructure"],"datePosted":"2026-04-18T15:51:59.035Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"New York, NY / Bellevue, WA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Kubernetes, containerized software services, cluster design, operations, troubleshooting, CI/CD systems, Argo CD, GitHub Actions, production systems, high availability, incident response, SLI/SLO/SLA definition, error budgets, postmortems, geo-replicated, multi-region, active-active systems, traffic routing, failover strategies, data consistency tradeoffs, observability components, metrics, logging, tracing, Prometheus, Grafana, OpenTelemetry, infrastructure as code, Helm, Terraform, Pulumi, automated environment provisioning, system performance tuning, capacity planning, resource optimization, distributed systems, security best practices, cloud-native environments, secrets management, network policies, vulnerability scanning, Spark, Airflow, Kafka, Flink, service mesh technologies, Istio, Linkerd, regulated environments, compliance frameworks, GDPR, SOC 2, HIPAA, SOX, internal developer platforms, self-service infrastructure","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":165000,"maxValue":242000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_1c69bbb7-4bb"},"title":"Intermediate Site Reliability Engineer, Environment Automation","description":"<p>Join the Dedicated team as a Site Reliability Engineer focused on Environment Automation, where your work will help power hundreds of isolated GitLab environments for our customers.</p>\n<p>In this role, you&#39;ll help keep these environments reliable, scalable, secure, and consistent by treating everything as code and contributing to automation across the entire lifecycle, from initial provisioning to 
day-to-day operations.</p>\n<p>Instead of operating a single platform, you&#39;ll collaborate with senior SREs to solve the unique challenges of managing many tenant environments in parallel, each with its own constraints and integration points.</p>\n<p>You&#39;ll help define, deploy, and maintain GitLab environments across cloud providers using infrastructure as code, deployment packages, and Kubernetes.</p>\n<p>Some examples of work you&#39;ll do:</p>\n<ul>\n<li>Contribute to the design and evolution of infrastructure automation using Terraform, Ansible, and Kubernetes to provision, upgrade, and operate many GitLab environments with minimal manual effort</li>\n</ul>\n<ul>\n<li>Help debug and resolve production issues across Kubernetes clusters, GitLab components, and cloud services, then assist in building automation and safeguards that prevent similar issues from recurring</li>\n</ul>\n<ul>\n<li>Assist in creating and maintaining deployment and orchestration tools, such as Helm Charts, omnibus-gitlab configurations, and multi-tenant workflows, that make it easy for teams to manage GitLab environments at scale</li>\n</ul>\n<p>You&#39;ll contribute to automating operational tasks across many GitLab environments, from initial provisioning and configuration updates to upgrades and routine maintenance, helping reduce manual work and improve reliability at scale under the guidance of senior team members.</p>\n<p>You&#39;ll help build and refine the observability stack for multi-tenant GitLab environments so we monitor the right signals across Kubernetes, cloud services, and GitLab applications, supporting early issue detection and basic capacity tracking.</p>\n<p>You&#39;ll assist in responding to platform alerts and incidents, collaborating with Environment Automation SREs and engineering teams to troubleshoot production issues across multiple tenants and document findings.</p>\n<p>You&#39;ll support planning and implementation of infrastructure changes, capacity 
expansions, and new service rollouts for Dedicated and other managed GitLab environments, contributing to efforts that improve resource efficiency and environment isolation.</p>\n<p>You&#39;ll develop and maintain scripts, automation tools, and infrastructure-as-code workflows that manage parts of the GitLab environment lifecycle, enabling more repeatable, self-service operations over time.</p>\n<p>You&#39;ll apply and help implement best practices for running GitLab on Kubernetes and cloud platforms, focusing on day-to-day reliability, performance, and security while learning how to keep environments consistent.</p>\n<p>You&#39;ll participate in the on-call rotation for production GitLab environments with appropriate support, helping triage and mitigate incidents across clusters and cloud providers and contributing to post-incident reviews.</p>\n<p>You&#39;ll document operational tasks, runbooks, and lessons learned so they become clear, repeatable processes and can be candidates for future automation, improving shared knowledge and reducing manual toil across the team.</p>\n<p>Experience working as an SRE or in a similar role operating production infrastructure, with an interest in automating the lifecycle of many environments or tenants in parallel, even if you have not yet done so at large scale.</p>\n<p>Hands-on experience with Golang (required) and the ability to read, understand, and modify infrastructure tools written in Go.</p>\n<p>Hands-on experience running Kubernetes-based workloads in production, including basic understanding of deployments, rollouts, and debugging common issues like crash loops, failed health checks, and scheduling problems.</p>\n<p>Familiarity with infrastructure automation and configuration management tools such as Terraform and Ansible, including experience working with modules, variables, and managing state safely for multiple environments.</p>\n<p>Solid understanding of Git-based workflows and infrastructure-as-code practices, 
with the ability to contribute to reusable modules, templates, and pipelines that make automation safer and more consistent.</p>\n<p>Experience working in distributed systems or cloud-based production environments, ideally in SaaS or managed service settings, with comfort participating in incident response and on-call rotations under guidance from more senior team members.</p>\n<p>A proactive mindset focused on automation and documentation: you look for opportunities to remove manual steps, improve runbooks, and turn repetitive tasks into reliable, self-service tools.</p>\n<p>Comfort working asynchronously across distributed teams and a desire to contribute to GitLab&#39;s values of collaboration, transparency, and iteration.</p>\n<p>About the team:</p>\n<p>We are responsible for building, running, and evolving the entire lifecycle of the GitLab environments that power the GitLab Dedicated platform.</p>\n<p>You&#39;ll be part of our team focused on owning the reliability, scalability, performance, and security of automated single-tenant GitLab instances and their supporting services.</p>\n<p>GitLab Dedicated provides fully managed, isolated environments for customers around the world, which means your work directly impacts how organizations of all sizes run their mission-critical software delivery on GitLab.</p>\n<p>We operate in a fully distributed, asynchronous environment across multiple regions, collaborating on everything from infrastructure automation and environment lifecycle design to incident response and capacity planning.</p>\n<p>You&#39;ll be solving novel challenges at scale, from orchestrating infrastructure-as-code workflows across hundreds of tenants to designing the automation that keeps those environments consistent, secure, and up to date.</p>\n<p>We continuously seek to reduce complexity and improve efficiency by leveraging cloud vendor-managed products.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_1c69bbb7-4bb","directApply":true,"hiringOrganization":{"@type":"Organization","name":"GitLab","sameAs":"https://about.gitlab.com/","logo":"https://logos.yubhub.co/about.gitlab.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/gitlab/jobs/8464417002","x-work-arrangement":"remote","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Golang","Kubernetes","Terraform","Ansible","Infrastructure as Code","Automation","Scripting","Cloud computing","Distributed systems"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:51:44.473Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Remote, India"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Golang, Kubernetes, Terraform, Ansible, Infrastructure as Code, Automation, Scripting, Cloud computing, Distributed systems"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_f94dea6d-70a"},"title":"Distributed Systems Engineer - Data Platform - Analytical Database Platform","description":"<p>About Us</p>\n<p>At Cloudflare, we are on a mission to help build a better Internet. Today the company runs one of the world&#39;s largest networks that powers millions of websites and other Internet properties for customers ranging from individual bloggers to SMBs to Fortune 500 companies.</p>\n<p>We protect and accelerate any Internet application online without adding hardware, installing software, or changing a line of code. Internet properties powered by Cloudflare all have web traffic routed through its intelligent global network, which gets smarter with every request. 
As a result, they see significant improvement in performance and a decrease in spam and other attacks.</p>\n<p>About Role</p>\n<p>We are looking for an experienced and highly motivated engineer to join our team and contribute to our analytical database platform. The platform is a critical component of Cloudflare Analytics which provides real-time visibility into the health and performance of Cloudflare customers&#39; online properties.</p>\n<p>The team builds and maintains a high-performance, scalable database platform powered by ClickHouse, optimized for analytical workloads. We help our customers, both internal and external, to gain a deeper understanding of their online properties, identify trends and patterns, and make informed decisions about how to optimize their web performance, security, and other key metrics.</p>\n<p>Our mission is to empower customers to leverage their data to drive better outcomes for their business.</p>\n<p>As a Distributed systems engineer - Analytical Database Platform, you will:</p>\n<ul>\n<li>Develop and implement new platform components for the Cloudflare Analytical Database Platform to improve functionality and performance.</li>\n<li>Add more database clusters to accommodate the growing volume of data generated by Cloudflare products and services.</li>\n<li>Monitor and maintain the performance and reliability of existing database platform clusters, and identify and troubleshoot any issues that may arise.</li>\n<li>Work to identify and remove bottlenecks within the analytics database platform, including optimizing query performance and streamlining data ingestion processes.</li>\n<li>Collaborate with the ClickHouse open-source community to add new features and functionality to the database, as well as contribute to the development of the upstream codebase.</li>\n<li>Collaborate with other teams across Cloudflare to understand their data needs and build solutions that empower them to make data-driven decisions.</li>\n<li>Participate 
in the development of the next generation of the database platform engine, including researching and evaluating new technologies and approaches that can improve the database&#39;s performance and scalability.</li>\n</ul>\n<p>Key qualifications:</p>\n<ul>\n<li>3+ years of experience working in software development covering distributed systems and databases.</li>\n<li>Strong programming skills (Golang, Python, or C++ preferred), as well as a deep understanding of software development best practices and principles.</li>\n<li>Strong knowledge of SQL and database internals, including experience with database design, optimization, and performance tuning.</li>\n<li>A solid foundation in computer science, including algorithms, data structures, distributed systems, and concurrency.</li>\n<li>Ability to work collaboratively in a team environment, as well as communicate effectively with other teams across Cloudflare.</li>\n<li>Strong analytical and problem-solving skills, as well as the ability to work independently and proactively identify and solve issues.</li>\n<li>Experience with ClickHouse is a plus.</li>\n<li>Experience with SALT or Terraform is a plus.</li>\n<li>Experience with Linux container technologies, such as Docker and Kubernetes, is a plus.</li>\n</ul>\n<p>If you&#39;re passionate about building scalable and performant databases using cutting-edge technologies, and want to work with a world-class team of engineers, then we want to hear from you!</p>\n<p>Join us in our mission to help build a better Internet for everyone!</p>\n<p>This role may require flexibility to be on-call outside of standard working hours to address technical issues as needed.</p>\n<p>What Makes Cloudflare Special?</p>\n<p>We&#39;re not just a highly ambitious, large-scale technology company. We&#39;re a highly ambitious, large-scale technology company with a soul. 
Fundamental to our mission to help build a better Internet is protecting the free and open Internet.</p>\n<p>Project Galileo: Since 2014, we&#39;ve equipped more than 2,400 journalism and civil society organizations in 111 countries with powerful tools to defend themselves against attacks that would otherwise censor their work, technology already used by Cloudflare’s enterprise customers, at no cost.</p>\n<p>Athenian Project: In 2017, we created the Athenian Project to ensure that state and local governments have the highest level of protection and reliability for free, so that their constituents have access to election information and voter registration. Since the project began, we&#39;ve provided services to more than 425 local government election websites in 33 states.</p>\n<p>1.1.1.1: We released 1.1.1.1 to help fix the foundation of the Internet by building a faster, more secure and privacy-centric public DNS resolver. This is available publicly for everyone to use; it is the first consumer-focused service Cloudflare has ever released.</p>\n<p>Here’s the deal: we never, ever store client IP addresses. We will continue to abide by our privacy commitment and ensure that no user data is sold to advertisers or used to target consumers.</p>\n<p>Sound like something you’d like to be a part of? We’d love to hear from you!</p>\n<p>This position may require access to information protected under U.S. export control laws, including the U.S. Export Administration Regulations. Please note that any offer of employment may be conditioned on your authorization to receive software or technology controlled under these U.S. export laws without sponsorship for an export license.</p>\n<p>Cloudflare is proud to be an equal opportunity employer. We are committed to providing equal employment opportunity for all people and place great value in both diversity and inclusiveness. 
All qualified applicants will be considered for employment without regard to their, or any other person&#39;s, perceived or actual race, color, religion, sex, gender, gender identity, gender expression, sexual orientation, national origin, ancestry, citizenship, age, physical or mental disability, medical condition, family care status, or any other basis protected by law. We are an AA/Veterans/Disabled Employer. Cloudflare provides reasonable accommodations to qualified individuals with disabilities. Please tell us if you require a reasonable accommodation to apply for a job. Examples of reasonable accommodations include, but are not limited to, changing the application process, providing documents in an alternate format, using a sign language interpreter, or using specialized equipment. If you require a reasonable accommodation to apply for a job, please contact us via e-mail at hr@cloudflare.com or via mail at</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_f94dea6d-70a","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Cloudflare","sameAs":"https://www.cloudflare.com/","logo":"https://logos.yubhub.co/cloudflare.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/cloudflare/jobs/4886734","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["distributed systems","databases","software development","Golang","python","C++","SQL","database design","optimization","performance tuning","algorithms","data structures","concurrency","ClickHouse","SALT","Terraform","Linux container 
technologies","Docker","Kubernetes"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:51:34.743Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Hybrid"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"distributed systems, databases, software development, Golang, python, C++, SQL, database design, optimization, performance tuning, algorithms, data structures, concurrency, ClickHouse, SALT, Terraform, Linux container technologies, Docker, Kubernetes"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_1125d83c-1eb"},"title":"Staff Software Engineer - Backend","description":"<p>As a Staff Software Engineer with a backend focus, you will work closely with your team and product management to prioritise, design, implement, test, and operate micro-services for the Databricks platform and product.</p>\n<p>This involves writing software in Scala/Java, building data pipelines (Apache Spark, Apache Kafka), integrating with third-party applications, and interacting with cloud APIs (AWS, Azure, CloudFormation, Terraform).</p>\n<p>You will be part of a team that builds highly technical products that fulfil real, important needs in the world. We constantly push the boundaries of data and AI technology, while simultaneously operating with the resilience, security and scale that is critical to making customers successful on our platform.</p>\n<p>Our engineering teams build one of the largest scale software platforms. 
The fleet consists of millions of virtual machines, generating terabytes of logs and processing exabytes of data per day.</p>\n<p>We run thousands of Kubernetes clusters across all regions and orchestrate millions of VMs on a daily basis.</p>\n<p>Competencies:</p>\n<ul>\n<li>BS/MS/PhD in Computer Science, or a related field</li>\n<li>10+ years of production level experience in one of: Java, Scala, C++, or similar language</li>\n<li>Comfortable working towards a multi-year vision with incremental deliverables</li>\n<li>Experience in architecting, developing, deploying, and operating large scale distributed systems</li>\n<li>Experience working on a SaaS platform or with Service-Oriented Architectures</li>\n<li>Good knowledge of SQL</li>\n<li>Experience with software security and systems that handle sensitive data</li>\n<li>Experience with cloud technologies, e.g. AWS, Azure, GCP, Docker, Kubernetes</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_1125d83c-1eb","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/6779233002","x-work-arrangement":"onsite","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"$182,400-$247,000 USD","x-skills-required":["Java","Scala","C++","Apache Spark","Apache Kafka","Cloud APIs","AWS","Azure","CloudFormation","Terraform","SQL","Software security","Cloud technologies"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:51:07.479Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Bellevue, Washington"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Java, Scala, C++, Apache Spark, Apache Kafka, Cloud APIs, AWS, Azure, 
CloudFormation, Terraform, SQL, Software security, Cloud technologies","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":182400,"maxValue":247000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_32334977-1bd"},"title":"Senior Infrastructure Engineer","description":"<p><strong>About Us</strong></p>\n<p>Descript is on a mission to make audio and video content creation and editing fast, easy, and accessible to all. We are building a cutting-edge media editor incorporating real time collaboration, ground-breaking UX, and cutting-edge AI.</p>\n<p><strong>Job Description</strong></p>\n<p>As a Senior Infrastructure Engineer, you will drive projects that let engineers better understand and improve the performance, availability, and quality of what they ship. You will be owning and improving the core production infrastructure and building blocks upon which other engineers depend.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Develop technical and business solutions that enable engineers to improve the quality and reliability of product features and systems that they build.</li>\n<li>Drive improvements to the reliability of our core infrastructure, such as production clusters, networking, databases, and observability systems.</li>\n<li>Champion best practices during reviews of code, technical designs, and launch plans.</li>\n<li>Own our incident management and fire drill processes.</li>\n<li>Work with engineering leadership to set goals and prioritize production reliability.</li>\n</ul>\n<p><strong>Requirements</strong></p>\n<ul>\n<li>5+ years experience in production/site-reliability engineering OR 5+ years of server-side software engineering with an interest in working on core infrastructure</li>\n<li>A solid understanding of at least two of: public cloud infrastructure, Linux systems administration, and DevOps 
tooling.</li>\n<li>Basic coding skills to work on automation and technical guardrails.</li>\n<li>Strong written and verbal communication skills, and the ability to collaborate with other functions.</li>\n<li>Experience mentoring engineers, including code reviews, architecture discussions, and leadership skills.</li>\n</ul>\n<p><strong>Nice to Haves</strong></p>\n<p>Experience with:</p>\n<ul>\n<li>TypeScript</li>\n<li>Kubernetes</li>\n<li>Google Cloud Platform</li>\n<li>Terraform</li>\n</ul>\n<p>The base salary range for this role is $191K-$250K.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_32334977-1bd","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Descript","sameAs":"https://descript.com/","logo":"https://logos.yubhub.co/descript.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/descript/jobs/7500000003","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$191K-$250K","x-skills-required":["public cloud infrastructure","Linux systems administration","DevOps tooling","basic coding skills","strong written and verbal communication skills"],"x-skills-preferred":["TypeScript","Kubernetes","Google Cloud Platform","Terraform"],"datePosted":"2026-04-18T15:51:04.434Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Remote, San Francisco, California, United States"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"public cloud infrastructure, Linux systems administration, DevOps tooling, basic coding skills, strong written and verbal communication skills, TypeScript, Kubernetes, Google Cloud Platform, 
Terraform","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":191000,"maxValue":250000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_ba0a936c-9b5"},"title":"Partner Solution Architect (pre-sales)","description":"<p>We are looking for a Partner Solutions Architect to lead technical strategy and enablement for our ecosystem in the ANZ region. This is a hands-on builder role. You will be responsible for ensuring our partners are not only articulating Elastic&#39;s value but are technically capable of architecting, building, and validating complex solutions.</p>\n<p>As a Partner Solutions Architect, you will:</p>\n<ul>\n<li>Own Technical Engagement Plans (TEPs) for focus partners, establishing long-term technical roadmaps at the CTO and Practice Lead level.</li>\n<li>Guide partners through high-stakes Technical Validation cycles, ensuring Elastic solutions are built to best-practice standards.</li>\n<li>Lead &#39;one-to-many&#39; technical &#39;Build-a-thons&#39; and hands-on laboratory sessions that empower partner engineers to lead their own implementations.</li>\n<li>Build deep relationships with partner pre-sales teams to guide them through the &#39;how-to&#39; of complex Search AI, Observability, and Security architectures at the configuration level.</li>\n<li>Collaborate on &#39;design wins&#39; by developing repeatable technical blueprints.</li>\n</ul>\n<p>To be successful in this role, you will require:</p>\n<ul>\n<li>Direct, hands-on experience with the Elastic Stack (ELK) or similar distributed search/analytics technologies (e.g., OpenSearch, Solr, Splunk, Datadog).</li>\n<li>8+ years of experience in technical roles.</li>\n<li>Proven ability to design and build technical prototypes, ingest complex datasets, and optimize search/indexing performance.</li>\n<li>Hands-on experience with Kubernetes, Docker, and 
Infrastructure as Code (Terraform) on AWS, Azure, or GCP.</li>\n<li>3+ years in a partner-facing role, with a focus on building technical practices and enabling third-party engineering teams.</li>\n<li>The ability to translate deep technical capabilities into scalable partner-led solutions.</li>\n</ul>\n<p>If you are a motivated and experienced professional with a passion for technology and partnership development, we encourage you to apply for this exciting opportunity.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_ba0a936c-9b5","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Elastic","sameAs":"https://www.elastic.co/","logo":"https://logos.yubhub.co/elastic.co.png"},"x-apply-url":"https://job-boards.greenhouse.io/elastic/jobs/7757097","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Elastic Stack (ELK)","OpenSearch","Solr","Splunk","Datadog","Kubernetes","Docker","Infrastructure as Code (Terraform)","AWS","Azure","GCP"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:51:00.609Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Sydney, Australia"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Elastic Stack (ELK), OpenSearch, Solr, Splunk, Datadog, Kubernetes, Docker, Infrastructure as Code (Terraform), AWS, Azure, GCP"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_c916726e-d71"},"title":"Principal Software Engineer (Networking) - Platform","description":"<p>As a Principal Software Engineer (Networking) - Platform, you will lead technical initiatives for automating network engineering efforts to guarantee the reliability of the global 
Elastic infrastructure. You will grow our global Platform infrastructure to meet the increasing scaling demands by developing and maintaining software, codebases, tooling and automations.</p>\n<p>Collaborate in an environment with an inclusive approach, and focus on operational perfection which uplifts others. Prevent repeated customer impact in response to major incidents and prioritized problem management. Our on-call rotation is spread well, and we address complex customer concerns too.</p>\n<p>You will participate in coding, innovating technical designs, crafting solutions, improving resilience, and prioritizing security, bug fixes, and features. For example, debugging Azure Networking for Elastic Cloud Serverless is part of our efforts, and we want your experience to contribute to a truly exceptional customer experience!</p>\n<p>Success and lessons of experiences from striving for &#39;progress not perfection&#39; in the name of Platform reliability. 
We want to hear about your customer-first approach in solving operational problems for both today and the future.</p>\n<p>Passion for developing solutions that involve inclusive communication methods to grow and strengthen partner and team relationships. Examples of working in distributed teams or working remotely are desirable.</p>\n<p>You have designed and built a SaaS product in a public cloud, ideally using Infrastructure-as-Code tooling such as Crossplane or Terraform.</p>\n<p>You have built Kubernetes-at-scale infrastructure, ideally across multiple cloud providers, and the vital automation to support it.</p>\n<p>You have written product features or functions in Golang or other programming languages.</p>\n<p>You have worked with containerized services (such as Docker).</p>\n<p>You have proven results in leading and improving cross-team engineering initiatives.</p>\n<p>You have experience in system administration with professional skills in Linux on distributed systems at scale.</p>\n<p>You have diagnosed, designed, and implemented solutions with the Elastic Stack.</p>\n<p>You are experienced in self-organizing and sharing within a globally distributed team environment.</p>\n<p>You bring out the best in team members by uplifting others through coaching and mentoring.</p>\n<p>Compensation for this role is in the form of base salary. This role does not have a variable compensation component. The typical starting salary range for new hires in this role is $189,800-$232,900 USD. In select locations (including Seattle WA, Los Angeles CA, the San Francisco Bay Area CA, and the New York City Metro Area), an alternate range may apply as specified below.</p>\n<p>Elastic believes that employees should have the opportunity to share in the value that we create together for our shareholders. Therefore, in addition to cash compensation, this role is currently eligible to participate in Elastic&#39;s stock program. 
Our total rewards package also includes a company-matched 401k with dollar-for-dollar matching up to 6% of eligible earnings, along with a range of other benefits offered with a holistic emphasis on employee well-being.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_c916726e-d71","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Elastic","sameAs":"https://www.elastic.co/","logo":"https://logos.yubhub.co/elastic.co.png"},"x-apply-url":"https://job-boards.greenhouse.io/elastic/jobs/7565185","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$189,800-$232,900 USD","x-skills-required":["Software Engineering","Cloud Network Solutions","Public Cloud","Go","Managed Kubernetes Services","Linux","Distributed Systems","Elastic Stack","Infrastructure-as-Code","Crossplane","Terraform","Kubernetes","Containerized Services","Docker","System Administration","Golang","Programming Languages"],"x-skills-preferred":["SaaS Product Development","Kubernetes-at-Scale Infrastructure","Automation","Self-Organizing Team Environment","Coaching and Mentoring"],"datePosted":"2026-04-18T15:50:27.950Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"United States"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Software Engineering, Cloud Network Solutions, Public Cloud, Go, Managed Kubernetes Services, Linux, Distributed Systems, Elastic Stack, Infrastructure-as-Code, Crossplane, Terraform, Kubernetes, Containerized Services, Docker, System Administration, Golang, Programming Languages, SaaS Product Development, Kubernetes-at-Scale Infrastructure, Automation, Self-Organizing Team Environment, Coaching and 
Mentoring","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":189800,"maxValue":232900,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_b5ce114e-dac"},"title":"Cloud Engineer – Factory Systems and Operational Technology","description":"<p>Anduril Industries is a defence technology company with a mission to transform U.S. and allied military capabilities with advanced technology. By bringing the expertise, technology and business model of the 21st century&#39;s most innovative companies to the defence industry, Anduril is changing how military systems are designed, built and sold.</p>\n<p>The company&#39;s family of systems is powered by Lattice OS, an AI-powered operating system that turns thousands of data streams into a real-time, 3D command and control centre.</p>\n<p>As the world enters an era of strategic competition, Anduril is committed to bringing cutting-edge autonomy, AI, computer vision, sensor fusion and networking technology to the military in months, not years.</p>\n<p>We are seeking a mission-driven Cloud Infrastructure Engineer to take a leading role in designing and implementing world-class defensive controls. 
This is a high-impact role with the autonomy to shape security architecture and protect the technology that is changing the future of defence.</p>\n<p>Key Responsibilities:</p>\n<ul>\n<li>Design and Own Security Architecture: Architect, build and deploy robust, scalable security controls for our corporate, development and production cloud environments (AWS, Azure, GCP).</li>\n</ul>\n<ul>\n<li>Automate Everything: Develop and automate infrastructure-as-code (IaC) to manage and scale our cloud deployments securely and efficiently.</li>\n</ul>\n<ul>\n<li>Proactively Defend: Continuously monitor, identify and remediate security weaknesses and configuration drift across our entire cloud footprint.</li>\n</ul>\n<ul>\n<li>Be a Force Multiplier: Partner with infrastructure, application and product teams to embed security best practices into their workflows and secure environments holding mission-critical data.</li>\n</ul>\n<ul>\n<li>Enable Scale and Reliability: Engineer systems and processes that ensure our platforms are highly available, resilient and prepared for rapid growth.</li>\n</ul>\n<ul>\n<li>Serve as a Cloud Security Expert: Act as the go-to subject matter expert for teams across Anduril, providing guidance, mentorship and paved-road solutions for building securely in the cloud.</li>\n</ul>\n<p>Requirements:</p>\n<ul>\n<li>Proven experience building and securing complex cloud environments, typically gained through 3+ years in a Cloud Security, DevOps or SRE role.</li>\n</ul>\n<ul>\n<li>Deep proficiency in at least one major cloud provider (AWS, Azure or GCP).</li>\n</ul>\n<ul>\n<li>Strong hands-on experience with Infrastructure as Code (e.g., Terraform, CloudFormation, Bicep).</li>\n</ul>\n<ul>\n<li>Solid programming/scripting ability in one or more languages (e.g., Python, Go, Rust).</li>\n</ul>\n<ul>\n<li>Firm understanding of public cloud networking principles (e.g., VPCs, subnets, routing, security groups).</li>\n</ul>\n<ul>\n<li>Must be a U.S. 
Person and eligible to obtain and maintain a U.S. Top Secret security clearance.</li>\n</ul>\n<p>Preferred Qualifications:</p>\n<ul>\n<li>Experience hardening and monitoring Kubernetes clusters (EKS, GKE, AKS).</li>\n</ul>\n<ul>\n<li>Experience with cloud security posture management (CSPM) or threat detection tooling.</li>\n</ul>\n<ul>\n<li>Familiarity with CI/CD pipelines and securing the software supply chain.</li>\n</ul>\n<ul>\n<li>Knowledge of compliance frameworks such as FedRAMP, MRL, SOC 2 or CMMC.</li>\n</ul>\n<ul>\n<li>On-premises network engineering experience.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_b5ce114e-dac","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anduril Industries","sameAs":"https://www.anduril.com/","logo":"https://logos.yubhub.co/anduril.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/andurilindustries/jobs/5087348007","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$129,000-$193,000 USD","x-skills-required":["Cloud Security","DevOps","SRE","Infrastructure as Code","Terraform","CloudFormation","Bicep","Python","Go","Rust","Public Cloud Networking","VPCs","Subnets","Routing","Security Groups"],"x-skills-preferred":["Kubernetes","Cloud Security Posture Management","Threat Detection Tooling","CI/CD Pipelines","Software Supply Chain Security","Compliance Frameworks","FedRAMP","MRL","SOC 2","CMMC","On-Premises Network Engineering"],"datePosted":"2026-04-18T15:49:59.253Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Costa Mesa, California, United States"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Cloud Security, DevOps, SRE, Infrastructure as Code, Terraform, CloudFormation, Bicep, Python, Go, Rust, Public Cloud 
Networking, VPCs, Subnets, Routing, Security Groups, Kubernetes, Cloud Security Posture Management, Threat Detection Tooling, CI/CD Pipelines, Software Supply Chain Security, Compliance Frameworks, FedRAMP, MRL, SOC 2, CMMC, On-Premises Network Engineering","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":129000,"maxValue":193000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_d34bbf18-2b2"},"title":"Senior Site Reliability Engineer (FinOps) - Platform","description":"<p>As a Senior Site Reliability Engineer (FinOps) - Platform, you will be part of the Platform Engineering department, responsible for designing, building, scaling, and maturing the multi-cloud platform for hosting internal and external services. You will lead technical initiatives for automating system engineering efforts to guarantee the reliability of the global Elastic infrastructure. You will also grow our global Platform infrastructure to meet the increasing scaling demands by developing and maintaining software, tooling, and automations.</p>\n<p>Key responsibilities include:</p>\n<ul>\n<li>Taking an engineering approach in leading technical initiatives for automating system engineering efforts to guarantee the reliability of the global Elastic infrastructure.</li>\n<li>Growing our global Platform infrastructure to meet the increasing scaling demands by developing and maintaining software, tooling, and automations.</li>\n<li>Using an inclusive approach at championing an environment focused on collaboration, operational excellence, and uplifting others.</li>\n<li>Responding to and preventing repeated customer impact in response to major incidents and prioritized problem management.</li>\n</ul>\n<p>The ideal candidate will have success and lessons of experiences from striving for &#39;progress not perfection&#39; in the name of Platform reliability. 
They will have a background in software engineering to collaborate with engineers to expertly identify, implement, and deliver solutions. An experience in public cloud and managed Kubernetes services is advantageous.</p>\n<p>The role requires passion for developing solutions that involve inclusive communication methods to grow and strengthen partner and team relationships. Examples of working in distributed teams or working remotely is desirable.</p>\n<p>Bonus points for experience in operating a SaaS product in a public cloud, building or operating a Kubernetes-at-scale infrastructure, writing non-trivial programs in Golang or other programming languages, working with containerized services, leading and improving alerting and major incident management standard processes metrics systems, and experience in system administration with professional skills in Linux on distributed systems at scale.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_d34bbf18-2b2","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Elastic","sameAs":"https://www.elastic.co/","logo":"https://logos.yubhub.co/elastic.co.png"},"x-apply-url":"https://job-boards.greenhouse.io/elastic/jobs/7565188","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Cloud computing","Kubernetes","Golang","Containerization","Linux","System administration","Alerting and incident management"],"x-skills-preferred":["Infrastructure-as-Code","Terraform","Crossplane","Distributed systems","Self-organizing teams"],"datePosted":"2026-04-18T15:49:53.439Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Spain"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Cloud computing, Kubernetes, 
Golang, Containerization, Linux, System administration, Alerting and incident management, Infrastructure-as-Code, Terraform, Crossplane, Distributed systems, Self-organizing teams"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_c571d7f7-d82"},"title":"Engineering Manager - Storage","description":"<p>At Databricks, we are building and running the world&#39;s best data and AI infrastructure platform. As an Engineering Manager, you will work with your team to build mission-critical Lakebase services on the Databricks Platform at scale.</p>\n<p>Key responsibilities include:</p>\n<ul>\n<li>Drive continuous delivery within a team of experts in storage technology, distributed systems and Rust.</li>\n<li>Manage the development and rollout of storage services that host millions of customer databases across dozens of regions</li>\n<li>Partner with peer engineering teams across Databricks to co-evolve Lakebase services with our global infrastructure.</li>\n<li>Lead operational excellence in 24/7 operation of our system</li>\n</ul>\n<p>The impact you will have:</p>\n<ul>\n<li>Hire great engineers to build an outstanding team.</li>\n<li>Support engineers in their career development by providing clear feedback and develop engineering leaders.</li>\n<li>Ensure high technical standards by instituting processes (architecture reviews, testing) and culture (engineering excellence).</li>\n<li>Work with engineering and product leadership to build a long-term roadmap.</li>\n<li>Coordinate execution and collaborate across teams to unblock cross-cutting projects.</li>\n</ul>\n<p>What we look for:</p>\n<ul>\n<li>Experience with building and shipping storage systems where correctness and performance are essential</li>\n<li>BS (or higher) in Computer Science, or a related field</li>\n<li>2+ years of experience building and leading a team of engineers working in a related system</li>\n<li>Experience with build, release 
and deployment infrastructure technologies such as Spinnaker, Jenkins, Airflow, Docker, Kubernetes, Terraform, Bazel, etc.</li>\n<li>Ability to attract, hire, and coach engineers who meet the Databricks hiring standards</li>\n<li>Comfort working on cross-functional projects with the ability to deeply understand product and customer personas</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_c571d7f7-d82","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com/","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8476581002","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["storage technology","distributed systems","Rust","Spinnaker","Jenkins","Airflow","Docker","Kubernetes","Terraform","Bazel"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:49:50.298Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"London, United Kingdom"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"storage technology, distributed systems, Rust, Spinnaker, Jenkins, Airflow, Docker, Kubernetes, Terraform, Bazel"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_83aa996d-190"},"title":"Senior Software Engineer, Data Center Infrastructure Tooling","description":"<p>We&#39;re building one of the world&#39;s largest AI-focused cloud infrastructure platforms. 
As a senior backend engineer on this team, you&#39;ll help design, build, and own the data layer, APIs, and services that power our tools.</p>\n<p>The goal is to build bespoke software to model our infrastructure at both a physical and logical level to drive planning, coordination, automation, of some of the most advanced AI datacenters.</p>\n<p>You&#39;ll work closely with frontend engineers to bring rich user experiences built on top of your backends, and own how these services are deployed and run in production including scaling, redundancy and monitoring.</p>\n<p>Key responsibilities include:</p>\n<ul>\n<li>Designing and building data models and APIs that capture the complexity of datacenter infrastructure</li>\n<li>Creating high-throughput API services in Go (gRPC, GraphQL, and/or REST) that support the data density and interaction speed the frontend demands</li>\n<li>Building the backend architecture from the ground up, including service structure, data access patterns, caching strategy, and API contracts designed to scale with the team and product scope</li>\n<li>Integrating with internal/external systems and data sources that feed infrastructure planning, ensuring the platform reflects real-world state and planned builds accurately</li>\n<li>Deployment and operational infrastructure for the services you build, including Kubernetes manifests, CI/CD pipelines, observability, and reliability practices</li>\n</ul>\n<p>Requirements include:</p>\n<ul>\n<li>Strong proficiency in Go</li>\n<li>Deep experience with relational databases, specifically PostgreSQL and CockroachDB</li>\n<li>Experience designing and building APIs (gRPC, GraphQL, and REST) with attention to type safety, pagination, caching, filtering, and error handling</li>\n<li>Proven experience of performance optimization on the backend</li>\n<li>Familiarity with authentication, authorization, and backend security best practices for internal tooling</li>\n<li>Experience owning deployment and operations 
for the services you build</li>\n<li>Genuine curiosity about (or direct experience with) physical datacenter infrastructure</li>\n<li>Strong data modeling instincts</li>\n<li>Ability to work directly with infrastructure engineers to understand their workflows, identify pain points, and translate messy real-world processes into clean data models and APIs</li>\n</ul>\n<p>Nice to have includes direct experience with datacenter operations, infrastructure planning, or familiarity with DCIM tools like NetBox, Infrahub or Sunbird, experience with CockroachDB specifically, experience building systems that handle complex graph-like or hierarchical relational data, exposure to Infrastructure-as-Code, Terraform, or GitOps workflows, and experience with event-driven architectures, change data capture, or audit logging for compliance-sensitive systems.</p>\n<p>At CoreWeave, we work hard, have fun, and move fast! We&#39;re in an exciting stage of hyper-growth that you will not want to miss out on. We&#39;re not afraid of a little chaos, and we&#39;re constantly learning. 
Our team cares deeply about how we build our product and how we work together, which is represented through our core values: Be Curious at Your Core, Act Like an Owner, Empower Employees, Deliver Best-in-Class Client Experiences, and Achieve More Together.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_83aa996d-190","directApply":true,"hiringOrganization":{"@type":"Organization","name":"CoreWeave","sameAs":"https://www.coreweave.com","logo":"https://logos.yubhub.co/coreweave.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/coreweave/jobs/4658311006","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$165,000 to $242,000","x-skills-required":["Go","PostgreSQL","CockroachDB","API design","Performance optimization","Authentication","Authorization","Backend security","Deployment and operations"],"x-skills-preferred":["Datacenter operations","Infrastructure planning","DCIM tools","Complex graph-like or hierarchical relational data","Infrastructure-as-Code","Terraform","GitOps workflows","Event-driven architectures","Change data capture","Audit logging"],"datePosted":"2026-04-18T15:49:42.328Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Livingston, NJ / New York, NY / Sunnyvale, CA / Bellevue, WA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Go, PostgreSQL, CockroachDB, API design, Performance optimization, Authentication, Authorization, Backend security, Deployment and operations, Datacenter operations, Infrastructure planning, DCIM tools, Complex graph-like or hierarchical relational data, Infrastructure-as-Code, Terraform, GitOps workflows, Event-driven architectures, Change data capture, Audit 
logging","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":165000,"maxValue":242000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_1a7635f5-a02"},"title":"Principal Software Engineer (Networking) - Platform","description":"<p>As a Principal Software Engineer (Networking) - Platform, you will be part of the Platform Engineering department, responsible for crafting, building, and improving the multi-cloud platform at scale for Elastic Cloud Hosted and Serverless. You will participate in coding, innovating technical designs, crafting solutions, improving resilience, and prioritizing security, bug fixes, and features.</p>\n<p>Key responsibilities include:</p>\n<ul>\n<li>Taking an engineering approach in leading technical initiatives for automating network engineering efforts to guarantee the reliability of the global Elastic infrastructure.</li>\n<li>Growing our global Platform infrastructure to meet the increasing scaling demands by developing and maintaining software, codebases, tooling, and automations.</li>\n<li>Collaborating in an environment with an inclusive approach, and focusing on operational perfection which uplifts others.</li>\n<li>Preventing repeated customer impact in response to major incidents and prioritized problem management.</li>\n</ul>\n<p>Requirements include:</p>\n<ul>\n<li>10+ years in Software Engineering with product success in delivering Cloud network solutions.</li>\n<li>Experience in public cloud, Go, and managed Kubernetes services is advantageous.</li>\n<li>Success and lessons of experiences from striving for &#39;progress not perfection&#39; in the name of Platform reliability.</li>\n<li>Passion for developing solutions that involve inclusive communication methods to grow and strengthen partner and team relationships.</li>\n</ul>\n<p>Bonus points include:</p>\n<ul>\n<li>Designing and 
building a SaaS product in a public cloud ideally built using Infrastructure-as-Code tooling such as Crossplane or Terraform.</li>\n<li>Building Kubernetes-at-scale infrastructure, ideally across multiple cloud providers, and the vital automation to support it.</li>\n<li>Writing product features or functions in Golang or other programming languages.</li>\n<li>Working with containerized services (such as Docker).</li>\n<li>Proven results in leading and improving cross-team engineering initiatives.</li>\n<li>Experience in system administration with professional skills in Linux on distributed systems at scale.</li>\n<li>Diagnosing or designing, implementing, and creating solutions with the Elastic Stack.</li>\n<li>Experienced in a self-organizing and sharing in a globally distributed team environment.</li>\n<li>Strengthening team members in bringing out the best of each other by uplifting others with coaching and mentoring.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_1a7635f5-a02","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Elastic","sameAs":"https://www.elastic.co/","logo":"https://logos.yubhub.co/elastic.co.png"},"x-apply-url":"https://job-boards.greenhouse.io/elastic/jobs/7713597","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Software Engineering","Cloud Network Solutions","Public Cloud","Go","Managed Kubernetes Services","Infrastructure-as-Code","Crossplane","Terraform","Golang","Containerized Services","Docker","System Administration","Linux","Distributed Systems"],"x-skills-preferred":["Kubernetes","Automation","Inclusive Communication","Coaching and 
Mentoring"],"datePosted":"2026-04-18T15:49:20.809Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Spain"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Software Engineering, Cloud Network Solutions, Public Cloud, Go, Managed Kubernetes Services, Infrastructure-as-Code, Crossplane, Terraform, Golang, Containerized Services, Docker, System Administration, Linux, Distributed Systems, Kubernetes, Automation, Inclusive Communication, Coaching and Mentoring"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_0fef4970-adb"},"title":"Security Software Engineer - Crypto Services","description":"<p><strong>About the Job</strong></p>\n<p>We&#39;re seeking a Security Software Engineer with a specialization in crypto services and key management to develop novel security tooling for securing our suite of products. 
The ideal candidate can develop, test, and debug embedded software with mission-critical security responsibilities.</p>\n<p><strong>What You&#39;ll Do</strong></p>\n<ul>\n<li>Design and develop cybersecurity tools for real-time embedded, embedded Linux, and Android systems.</li>\n<li>Design and develop resilient software supporting all phases of key handling on embedded systems - from key load through sanitization.</li>\n<li>Develop thorough testing and qualification procedures for security-critical components.</li>\n<li>Collaborate with cross-functional teams to identify specific security needs and implement solutions.</li>\n<li>Conduct code reviews and ensure adherence to security best practices.</li>\n<li>Stay updated on the latest security threats and technologies.</li>\n</ul>\n<p><strong>Required Qualifications</strong></p>\n<ul>\n<li>2+ years of software development experience in some combination of Golang, Rust, or C/C++.</li>\n<li>Experience selecting and utilizing embedded HSMs and Secure Elements.</li>\n<li>Experience with CI/CD and test automation, including for mobile and embedded devices.</li>\n<li>Experience debugging embedded systems using common test equipment - logic analyzers, oscilloscopes, etc.</li>\n<li>Solid understanding of cybersecurity principles and practices.</li>\n<li>Ability to obtain and hold a U.S. 
Secret security clearance.</li>\n</ul>\n<p><strong>Preferred Qualifications</strong></p>\n<ul>\n<li>Knowledge of security frameworks and compliance standards.</li>\n<li>Experience in mobile development, specifically on Android platforms.</li>\n<li>Familiarity with cloud infrastructure management (Terraform and/or AWS CDK).</li>\n<li>Experience implementing solutions compliant with US Government key handling requirements.</li>\n<li>Strong problem-solving and analytical skills.</li>\n<li>Excellent communication and teamwork abilities.</li>\n</ul>\n<p><strong>Compensation and Benefits</strong></p>\n<p>The salary range for this role is $126,000-$191,000 USD. Anduril offers top-tier benefits for full-time employees, including comprehensive medical, dental, and vision plans, income protection, generous time off, family planning and parenting support, mental health resources, professional development, commuter benefits, relocation assistance, and a retirement savings plan.</p>\n<p><strong>Protecting Yourself from Recruitment Scams</strong></p>\n<p>Anduril is committed to maintaining the integrity of our Talent acquisition process and the security of our candidates. We&#39;ve observed a rise in sophisticated phishing and fraudulent schemes where individuals impersonate Anduril representatives, luring job seekers with false interviews or job offers. These scammers often attempt to extract payment or sensitive personal information.</p>\n<p>To ensure your safety and help you navigate your job search with confidence, please keep the following critical points in mind:</p>\n<ul>\n<li>No Financial Requests: Anduril will never solicit payment or demand personal financial details (such as banking information, credit card numbers, or social security numbers) at any stage of our hiring process. 
Our legitimate recruitment is entirely free for candidates.</li>\n<li>Please always verify communications:</li>\n<li>Direct from Anduril: If you receive an email from one of our recruiters, it will only come from an @anduril.com address.</li>\n<li>Via Agency Partner: If contacted by a recruiting agency for an Anduril role, their email will clearly identify their agency. If you suspect any suspicious activity, please verify the agency&#39;s authenticity by reaching out to contact@anduril.com.</li>\n<li>Exercise Caution with Unsolicited Outreach: If you receive any communication that appears suspicious, contains grammatical errors, or makes unusual requests, do not engage. Always confirm the sender&#39;s email domain is @anduril.com before providing any personal information or clicking on links.</li>\n<li>What to Do If You Suspect Fraud: Should you encounter any questionable or fraudulent outreach claiming to be from Anduril, please report it immediately to contact@anduril.com. Your proactive caution is invaluable in protecting your personal information and upholding the security and trustworthiness of our recruitment efforts.</li>\n</ul>\n<p><strong>Data Privacy</strong></p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_0fef4970-adb","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anduril","sameAs":"https://www.anduril.com/","logo":"https://logos.yubhub.co/anduril.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/andurilindustries/jobs/5086919007","x-work-arrangement":"onsite","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":"$126,000-$191,000 USD","x-skills-required":["Golang","Rust","C/C++","Embedded HSMs","Secure Elements","CI/CD","Test Automation","Mobile Development","Android Platforms","Cloud Infrastructure Management","Terraform","AWS CDK","US Government Key Handling 
Requirements","Cybersecurity Principles","Security Best Practices"],"x-skills-preferred":["Security Frameworks","Compliance Standards","Problem-Solving","Analytical Skills","Communication","Teamwork"],"datePosted":"2026-04-18T15:49:06.117Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Atlanta, Georgia, United States"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Golang, Rust, C/C++, Embedded HSMs, Secure Elements, CI/CD, Test Automation, Mobile Development, Android Platforms, Cloud Infrastructure Management, Terraform, AWS CDK, US Government Key Handling Requirements, Cybersecurity Principles, Security Best Practices, Security Frameworks, Compliance Standards, Problem-Solving, Analytical Skills, Communication, Teamwork","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":126000,"maxValue":191000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_c7fe95f3-dcf"},"title":"Site Reliability Engineer (SRE)","description":"<p>You will work on the team responsible for the backend services that power our products such as grok.com and the API. We focus on writing and maintaining highly scalable and reliable services that can efficiently process tens of thousands of queries per second. The services are hosted on a number of Kubernetes clusters (on-prem &amp; cloud).</p>\n<p>Our team is small, highly motivated, and focused on engineering excellence. We operate with a flat organisational structure. All employees are expected to be hands-on and to contribute directly to the company&#39;s mission. 
Leadership is given to those who show initiative and consistently deliver excellence.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Work on the team that is responsible for the backend services that power our products such as grok.com and the API.</li>\n<li>Write and maintain highly scalable and reliable services that can efficiently process tens of thousands of queries per second.</li>\n<li>Ensure the services are hosted on a number of Kubernetes clusters (on-prem &amp; cloud).</li>\n</ul>\n<p>Basic Qualifications:</p>\n<ul>\n<li>Expert knowledge of Kubernetes.</li>\n<li>Expert knowledge of continuous deployment systems such as Buildkite and ArgoCD.</li>\n<li>Expert knowledge of monitoring technologies such as Prometheus, Grafana, and PagerDuty.</li>\n<li>Expert knowledge of infrastructure as code technologies such as Pulumi or Terraform.</li>\n<li>Familiarity with a systems programming language like Rust, C++ or Go.</li>\n<li>Experience with traffic management and HTTP proxies such as nginx and envoy.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_c7fe95f3-dcf","directApply":true,"hiringOrganization":{"@type":"Organization","name":"xAI","sameAs":"https://xai.com","logo":"https://logos.yubhub.co/xai.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/xai/jobs/4681662007","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Kubernetes","Buildkite","ArgoCD","Prometheus","Grafana","PagerDuty","Pulumi","Terraform","Rust","C++","Go","nginx","envoy"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:48:59.475Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"London, UK"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Kubernetes, Buildkite, ArgoCD, Prometheus, Grafana, 
PagerDuty, Pulumi, Terraform, Rust, C++, Go, nginx, envoy"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_eef55d3d-bf0"},"title":"Cloud Deployment Engineer, Space","description":"<p>Job Title: Cloud Deployment Engineer, Space</p>\n<p>Anduril Industries is a defense technology company with a mission to transform U.S. and allied military capabilities with advanced technology. By bringing the expertise, technology, and business model of the 21st century&#39;s most innovative companies to the defense industry, Anduril is changing how military systems are designed, built, and sold.</p>\n<p>As the world enters an era of strategic competition, Anduril is committed to bringing cutting-edge autonomy, AI, computer vision, sensor fusion, and networking technology to the military in months, not years.</p>\n<p><strong>ABOUT THE JOB</strong></p>\n<p>SDANet and other programs are standing up Lattice stacks on AWS and Azure environments to integrate with mission partners. In this role, you will be responsible for researching, understanding, and planning the deployment strategy into classified government cloud infrastructure. You will design cloud networking and engineering solutions to meet security, cost, and performance requirements, and deploy Anduril software into government infrastructure, promoting it through various stages.</p>\n<p>A significant part of your duties will involve identifying and triaging Kubernetes issues in the deployed environment, developing response and mitigation plans, and partnering with government platform management to address these issues effectively. You will be tasked with designing and implementing requirements for observability, alerting, and maintenance to ensure smooth operations.</p>\n<p>Additionally, you will deliver and maintain accreditation artifacts and standards for the environments and systems you are responsible for. 
You will stand up and maintain representative environments at the unclassified level for testing and development purposes, and provide direct in-person expertise during mission-critical periods.</p>\n<p>Ensuring the deployed system meets security and compliance requirements through regular updates and host OS patching will also be part of your responsibilities. Your role is crucial to maintaining the integrity and performance of the deployed infrastructure.</p>\n<p><strong>REQUIRED QUALIFICATIONS</strong></p>\n<ul>\n<li>5+ years of working experience in DevOps or SRE type roles</li>\n<li>Strongly proficient in utilizing cloud services like AWS, Azure, or Google Cloud Platform</li>\n<li>Experience with IaC (Terraform, Cloudformation, Puppet, Ansible, etc)</li>\n<li>Strong experience with containerization technologies such as Docker and orchestration tools like Kubernetes and Helm</li>\n<li>Deep understanding of networking concepts, TCP/IP protocols, and security best practices</li>\n<li>Programming ability in one or more of the general scripting languages (Python, Go, Bash, Rust, etc)</li>\n<li>Strong problem-solving skills and the ability to work well under pressure</li>\n<li>Excellent communication and collaboration skills to work effectively with cross-functional teams and develop internal roadmaps based on the needs of other teams</li>\n<li>Experience deploying complex and scalable infrastructure solutions</li>\n<li>Relevant certifications such as AWS Certified Solutions Architect, Microsoft Certified Solutions Expert, or Google Cloud Certified Professional</li>\n<li>Currently possesses and is able to maintain an active U.S. Secret security clearance</li>\n<li>Eligible to obtain and maintain an active U.S. 
Top Secret security clearance</li>\n</ul>\n<p><strong>PREFERRED QUALIFICATIONS</strong></p>\n<ul>\n<li>Extensive expertise in Kubernetes and Helm</li>\n<li>Hold a DoD 8570 IAT Level 1 or 2 certification</li>\n<li>Cisco Certified Network Associate (CCNA)</li>\n<li>Experience with government Cyber certification processes</li>\n<li>Experience installing, sustaining, and troubleshooting data systems for DoD or otherwise sensitive customers</li>\n<li>Familiarity with DoD-managed network enclaves (NIPR, SIPR, etc.)</li>\n<li>Military service background (particularly with Space experience)</li>\n</ul>\n<p>US Salary Range $129,000-$171,000 USD</p>\n<p>The salary range for this role is an estimate based on a wide range of compensation factors, inclusive of base salary only. Actual salary offer may vary based on (but not limited to) work experience, education and/or training, critical skills, and/or business considerations. Highly competitive equity grants are included in the majority of full-time offers; and are considered part of Anduril&#39;s total compensation package.</p>\n<p>Additionally, Anduril offers top-tier benefits for full-time employees, including:</p>\n<ul>\n<li>Healthcare Benefits - US Roles: Comprehensive medical, dental, and vision plans at little to no cost to you.</li>\n<li>UK &amp; AUS Roles: We cover full cost of medical insurance premiums for you and your dependents.</li>\n<li>IE Roles: We offer an annual contribution toward your private health insurance for you and your dependents.</li>\n<li>Income Protection: Anduril covers life and disability insurance for all employees.</li>\n<li>Generous time off: Highly competitive PTO plans with a holiday hiatus in December.</li>\n<li>Caregiver &amp; Wellness Leave is available to care for family members, bond with a new baby, or address your own medical needs.</li>\n<li>Family Planning &amp; Parenting Support: Coverage for fertility treatments (e.g., IVF, preservation), adoption, and gestational carriers, along 
with resources to support you and your partner from planning to parenting.</li>\n<li>Mental Health Resources: Access free mental health resources 24/7, including therapy and life coaching.</li>\n<li>Additional work-life services, such as legal and financial support, are also available.</li>\n<li>Professional Development: Annual reimbursement for professional development.</li>\n<li>Commuter Benefits: Company-funded commuter benefits based on your region.</li>\n<li>Relocation Assistance: Available depending on role eligibility.</li>\n<li>Retirement Savings Plan - US Roles: Traditional 401(k), Roth, and after-tax (mega backdoor Roth) options.</li>\n<li>UK &amp; IE Roles: Pension plan with employer match.</li>\n<li>AUS Roles: Superannuation plan.</li>\n</ul>\n<p>The recruiter assigned to this role can share more information about the specific compensation and benefit details associated with this role during the hiring process.</p>\n<p><strong>Protecting Yourself from Recruitment Scams</strong></p>\n<p>Anduril is committed to maintaining the integrity of our Talent acquisition process and the security of our candidates. We&#39;ve observed a rise in sophisticated phishing and fraudulent schemes where individuals impersonate Anduril representatives, luring job seekers with false interviews or job offers. These scammers often attempt to extract payment or sensitive personal information.</p>\n<p>To ensure your safety and help you navigate your job search with confidence, please keep the following critical points in mind:</p>\n<ul>\n<li>No Financial Requests: Anduril will never solicit payment or demand personal financial details (such as banking information, credit card numbers, or social security numbers) at any stage of our hiring process. 
Our legitimate recruitment is entirely free for candidates.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_eef55d3d-bf0","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anduril Industries","sameAs":"https://www.andurilindustries.com/","logo":"https://logos.yubhub.co/andurilindustries.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/andurilindustries/jobs/5016027007","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$129,000-$171,000 USD","x-skills-required":["cloud services","AWS","Azure","Google Cloud Platform","IaC","Terraform","Cloudformation","Puppet","Ansible","containerization","Docker","Kubernetes","Helm","networking","TCP/IP","security best practices","scripting languages","Python","Go","Bash","Rust","problem-solving","communication","collaboration","infrastructure solutions","relevant certifications","AWS Certified Solutions Architect","Microsoft Certified Solutions Expert","Google Cloud Certified Professional","U.S. Secret security clearance","U.S. 
Top Secret security clearance"],"x-skills-preferred":["extensive expertise in Kubernetes and Helm","DoD 8570 IAT Level 1 or 2 certification","Cisco Certified Network Associate","government Cyber certification processes","installing","sustaining","troubleshooting","familiarity with DoD-managed network enclaves","military service background"],"datePosted":"2026-04-18T15:48:49.675Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Costa Mesa, California, United States"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"cloud services, AWS, Azure, Google Cloud Platform, IaC, Terraform, Cloudformation, Puppet, Ansible, containerization, Docker, Kubernetes, Helm, networking, TCP/IP, security best practices, scripting languages, Python, Go, Bash, Rust, problem-solving, communication, collaboration, infrastructure solutions, relevant certifications, AWS Certified Solutions Architect, Microsoft Certified Solutions Expert, Google Cloud Certified Professional, U.S. Secret security clearance, U.S. Top Secret security clearance, extensive expertise in Kubernetes and Helm, DoD 8570 IAT Level 1 or 2 certification, Cisco Certified Network Associate, government Cyber certification processes, installing, sustaining, troubleshooting, familiarity with DoD-managed network enclaves, military service background","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":129000,"maxValue":171000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_1e275c7d-4a3"},"title":"Staff Systems Engineer, Identity","description":"<p>As a Staff Systems Engineer, Identity, you will serve as the primary technical owner of CoreWeave&#39;s enterprise identity ecosystem, with a focus on Okta and Opal. 
You will design, build, and operate identity lifecycle systems that are secure, automated, and scalable.</p>\n<p>This is a highly visible, high-impact role where identity sits at the center of security. You will define how access is granted, changed, and removed across the organization, enabling business velocity while enforcing least privilege and strong governance.</p>\n<p>Key responsibilities include:</p>\n<p><strong>Design and scale enterprise identity architecture that minimizes access sprawl and enforces least privilege</strong></p>\n<p><strong>Own and improve Joiner, Mover, and Leaver (JML) lifecycle processes across all critical systems</strong></p>\n<p><strong>Build and operate identity governance and administration (IGA) capabilities including birthright access models, role-based access control (RBAC), approval workflows and policy enforcement, access reviews and certification processes</strong></p>\n<p><strong>Administer and enhance Okta capabilities (SSO, MFA, adaptive policies, lifecycle management, SCIM, integrations)</strong></p>\n<p><strong>Build and scale access request workflows in Opal and integrated systems</strong></p>\n<p><strong>Integrate new applications into the identity ecosystem (SAML, OIDC, SCIM, role mapping)</strong></p>\n<p><strong>Develop automation and infrastructure-as-code to improve reliability and reduce manual effort</strong></p>\n<p><strong>Partner with Security to strengthen identity as a core control plane (Zero Trust, authentication, authorization)</strong></p>\n<p><strong>Align identity systems with PeopleOps and organizational changes</strong></p>\n<p><strong>Monitor and improve identity system health, observability, and performance</strong></p>\n<p><strong>Troubleshoot complex authentication, provisioning, and authorization issues</strong></p>\n<p><strong>Maintain documentation, runbooks, and architectural standards</strong></p>\n<p><strong>Serve as an escalation point for identity-related 
incidents</strong></p>\n<p><strong>Drive continuous improvement in identity architecture, governance, and user experience</strong></p>\n<p>Requirements include:</p>\n<p><strong>7–10+ years of experience in IT systems engineering, identity engineering, or systems architecture</strong></p>\n<p><strong>Deep hands-on experience with Okta in a complex enterprise environment</strong></p>\n<p><strong>Strong expertise in identity and access concepts (SSO, MFA, SAML, OAuth, OIDC, SCIM, RBAC, Zero Trust)</strong></p>\n<p><strong>Proven experience designing lifecycle automation (JML) and access governance frameworks</strong></p>\n<p><strong>Experience with IGA or access request platforms such as Opal</strong></p>\n<p><strong>Strong automation and infrastructure-as-code experience (Terraform, APIs, Python/PowerShell/Golang)</strong></p>\n<p><strong>Ability to integrate enterprise applications into centralized identity platforms</strong></p>\n<p><strong>Strong troubleshooting skills across identity, federation, and provisioning systems</strong></p>\n<p><strong>Excellent communication skills with the ability to influence cross-functional stakeholders</strong></p>\n<p>Preferred qualifications include familiarity with Active Directory, Entra ID, HRIS systems, and SaaS ecosystems, experience building identity observability and reporting, and relevant certifications (Okta, cloud, or security).</p>\n<p>Why CoreWeave?</p>\n<p>At CoreWeave, we work hard, have fun, and move fast! We&#39;re in an exciting stage of hyper-growth that you will not want to miss out on. We&#39;re not afraid of a little chaos, and we&#39;re constantly learning. 
Our team cares deeply about how we build our product and how we work together, which is represented through our core values:</p>\n<p><strong>Be Curious at Your Core</strong></p>\n<p><strong>Act Like an Owner</strong></p>\n<p><strong>Empower Employees</strong></p>\n<p><strong>Deliver Best-in-Class Client Experiences</strong></p>\n<p><strong>Achieve More Together</strong></p>\n<p>Why This Role Matters</p>\n<p>Identity is one of the most critical control planes in a modern enterprise. In this role, you will define how secure access is managed across CoreWeave, ensuring identity remains a foundational pillar of security, compliance, and scale.</p>\n<p>The base salary range for this role is $188,000 to $275,000. The starting salary will be determined based on job-related knowledge, skills, experience, and market location. We strive for both market alignment and internal equity when determining compensation. In addition to base salary, our total rewards package includes a discretionary bonus, equity awards, and a comprehensive benefits program (all based on eligibility).</p>\n<p>What We Offer</p>\n<p>The range we&#39;ve posted represents the typical compensation range for this role. To determine actual compensation, we review the market rate for each candidate which can include a variety of factors. 
These include qualifications, experience, interview performance, and location.</p>\n<p>In addition to a competitive salary, we offer a variety of benefits to support your needs, including medical, dental, and vision insurance, company-paid life insurance, voluntary supplemental life insurance, short and long-term disability insurance, flexible spending account, health savings account, tuition reimbursement, ability to participate in employee stock purchase program (ESPP), mental wellness benefits through Spring Health, family-forming support provided by Carrot, paid parental leave, flexible, full-service childcare support with Kinside, 401(k) with a generous employer match, flexible PTO, catered lunch each day in our office and data center locations, a casual work environment, and a work culture focused on innovative disruption.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_1e275c7d-4a3","directApply":true,"hiringOrganization":{"@type":"Organization","name":"CoreWeave","sameAs":"https://www.coreweave.com","logo":"https://logos.yubhub.co/coreweave.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/coreweave/jobs/4668575006","x-work-arrangement":"hybrid","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"Base salary range: $188,000 to $275,000","x-skills-required":["Okta","Opal","identity lifecycle systems","security","automation","infrastructure-as-code","Terraform","APIs","Python","PowerShell","Golang","identity and access concepts","SSO","MFA","SAML","OAuth","OIDC","SCIM","RBAC","Zero Trust","lifecycle automation","access governance frameworks","IGA","access request platforms","SaaS ecosystems","Active Directory","Entra ID","HRIS systems"],"x-skills-preferred":["identity observability and reporting","relevant 
certifications","cloud"],"datePosted":"2026-04-18T15:48:46.456Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Livingston, NJ / New York, NY / Sunnyvale, CA / Dallas, TX"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Okta, Opal, identity lifecycle systems, security, automation, infrastructure-as-code, Terraform, APIs, Python, PowerShell, Golang, identity and access concepts, SSO, MFA, SAML, OAuth, OIDC, SCIM, RBAC, Zero Trust, lifecycle automation, access governance frameworks, IGA, access request platforms, SaaS ecosystems, Active Directory, Entra ID, HRIS systems, identity observability and reporting, relevant certifications, cloud","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":188000,"maxValue":275000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_9df2a54c-80c"},"title":"Professional Services Engineer - META","description":"<p>Job Title: Professional Services Engineer - META</p>\n<p>GitLab is seeking a Professional Services Engineer to join our team in the United Arab Emirates. As a Professional Services Engineer, you will engage with customers to provide installation, migration, training, and advisory services. 
You will handle installations ranging from single-node Omnibus installs to our largest reference architectures utilizing IaC/CaC, migrations from multiple systems to GitLab SaaS or self-hosted, and advisory services across the entire GitLab feature stack.</p>\n<p>Key Responsibilities:</p>\n<ul>\n<li>Use a consultative approach to customer engagements</li>\n<li>Deliver on SOW with guidance from technical architects</li>\n<li>Scope may include installation and configuration of GitLab solutions in the customer environment, providing technical training sessions remotely and/or on-site, providing documentation for implementation, guides, maintenance, etc relevant to the customer requirements</li>\n<li>Manage creation of new and/or maintenance of existing artifacts and templates for deliverables and training</li>\n<li>Develop and implement migration plans for customer VCS &amp; data migration</li>\n<li>Contribute to the extension and maintenance of documentation/scripts for implementation and workflow to align with custom requirements</li>\n<li>Document opportunities to help the customer achieve their vision more effectively and efficiently</li>\n<li>Communicate opportunities to the customer project and account team</li>\n<li>Support engagement managers on quoting and scoping of SOWs</li>\n<li>Document and implement improvements for Professional Services engagement processes</li>\n</ul>\n<p>Requirements:</p>\n<ul>\n<li>Professional exposure with one or more IaC/CaC technologies: Terraform, Ansible, Packer, Puppet, Chef</li>\n<li>Professional exposure with one or more cloud providers: AWS, GCP, Azure</li>\n<li>Proficient in the English language, both written and verbal, sufficient for success in a remote and largely asynchronous work environment</li>\n<li>Experience using, deploying, or configuring GitLab</li>\n<li>Comfortable working in a fast-paced environment, sometimes with multiple customer engagements at once</li>\n<li>Positive disposition and solution-oriented 
mindset</li>\n<li>Effective communication skills: Regularly achieve consensus with peers, and provide clear status updates</li>\n<li>Self-motivated and self-managing, with strong organizational skills</li>\n<li>Shares GitLab values and works in accordance with those values</li>\n<li>Ability to thrive in a fully remote organization</li>\n</ul>\n<p>Benefits:</p>\n<ul>\n<li>Benefits to support your health, finances, and well-being</li>\n<li>Flexible Paid Time Off</li>\n<li>Team Member Resource Groups</li>\n<li>Equity Compensation &amp; Employee Stock Purchase Plan</li>\n<li>Growth and Development Fund</li>\n<li>Parental leave</li>\n<li>Home office support</li>\n</ul>\n<p>Note: We welcome interest from candidates with varying levels of experience; many successful candidates do not meet every single requirement.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_9df2a54c-80c","directApply":true,"hiringOrganization":{"@type":"Organization","name":"GitLab","sameAs":"https://about.gitlab.com/","logo":"https://logos.yubhub.co/about.gitlab.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/gitlab/jobs/8499907002","x-work-arrangement":"remote","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["IaC/CaC technologies","cloud providers","English language","GitLab","customer engagement","consultative approach","technical training","documentation","migration planning","workflow engineering"],"x-skills-preferred":["Terraform","Ansible","Packer","Puppet","Chef","AWS","GCP","Azure"],"datePosted":"2026-04-18T15:48:36.704Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Remote, United Arab Emirates"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"IaC/CaC technologies, cloud providers, English 
language, GitLab, customer engagement, consultative approach, technical training, documentation, migration planning, workflow engineering, Terraform, Ansible, Packer, Puppet, Chef, AWS, GCP, Azure"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_c73d22c6-873"},"title":"Senior Software Engineer (Golang, K8s & CI/Build Services)","description":"<p>Secure Every Identity, from AI to Human. Identity is the key to unlocking the potential of AI.</p>\n<p>Okta secures AI by building the trusted, neutral infrastructure that enables organisations to safely embrace this new era.</p>\n<p>We are looking for builders and owners who operate with speed and urgency and execute with excellence.</p>\n<p>This is an opportunity to do career-defining work.</p>\n<p>We&#39;re all in on this mission.</p>\n<p>If you are too, let&#39;s talk.</p>\n<p><strong>What You&#39;ll Own:</strong></p>\n<ul>\n<li>Unified Build Architectures: Design and implement modular, reusable build stages that define how all code at Okta is tested, secured, and packaged.</li>\n<li>Systems Innovation: Solve deep scaling bottlenecks (e.g., Monorepo segmentation, dependency resolution) to accelerate thousands of developers.</li>\n<li>Infrastructure as Code: Own the delivery of highly available build agents and artifact registries using Golang, Terraform, and AWS.</li>\n<li>Engineering Excellence: Champion &#39;Build-it-once&#39; philosophies, creating self-healing systems that reduce operational toil and eliminate reactive support.</li>\n</ul>\n<p><strong>What We Are Looking For:</strong></p>\n<ul>\n<li>Experience: 6+ years in Platform or Infrastructure Engineering, specifically building large-scale CI/Build Platforms.</li>\n<li>Expertise: Advanced proficiency in Golang for tooling and Terraform for infrastructure orchestration.</li>\n<li>Containerization: Mastery of Kubernetes (K8s) and container primitives for build execution.</li>\n<li>Scale 
Mindset: A proven track record of investigating distributed system failures and delivering performant solutions at scale.</li>\n<li>Ownership: You don&#39;t just write code; you own the reliability, cost-efficiency, and security guardrails of the entire ecosystem.</li>\n</ul>\n<p><strong>The Okta Experience</strong></p>\n<ul>\n<li>Supporting Your Well-Being</li>\n<li>Driving Social Impact</li>\n<li>Developing Talent and Fostering Connection + Community</li>\n</ul>\n<p>We are intentional about connection. Our global community, spanning over 20 offices worldwide, is united by a drive to innovate.</p>\n<p>Your journey begins with an immersive, in-person onboarding experience designed to accelerate your impact and connect you to our mission and team from day one.</p>\n<p>Okta is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, ancestry, marital status, age, physical or mental disability, or status as a protected veteran.</p>\n<p>We also consider for employment qualified applicants with arrest and convictions records, consistent with applicable laws.</p>\n<p>If reasonable accommodation is needed to complete any part of the job application, interview process, or onboarding please use this Form to request an accommodation.</p>\n<p>Notice for New York City Applicants &amp; Employees: Okta may use Automated Employment Decision Tools (AEDT), as defined by New York City Local Law 144, that use artificial intelligence, machine learning, or other automated processes to assist in our recruitment and hiring process.</p>\n<p>In accordance with NYC Local Law 144, if you are an applicant or employee residing in New York City, please click here to view our full NYC AEDT Notice.</p>\n<p>Okta is committed to complying with applicable data privacy and security laws and regulations.</p>\n<p>For more information, please see our Personnel and 
Job Candidate Privacy Notice at https://www.okta.com/legal/personnel-policy/</p>\n<p><strong>Benefits</strong></p>\n<ul>\n<li>Supporting Your Well-Being</li>\n<li>Driving Social Impact</li>\n<li>Developing Talent and Fostering Connection + Community</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_c73d22c6-873","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Okta","sameAs":"https://www.okta.com/","logo":"https://logos.yubhub.co/okta.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/okta/jobs/7810108","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Golang","Terraform","Kubernetes","Container primitives","Infrastructure as Code"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:48:22.125Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Bengaluru, India"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Golang, Terraform, Kubernetes, Container primitives, Infrastructure as Code"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_0787994a-b99"},"title":"Senior Cloud Deployment Engineer, Space","description":"<p>Anduril Industries is seeking a Senior Cloud Deployment Engineer to join their Space team. The successful candidate will be responsible for researching, understanding, and planning the deployment strategy into classified government cloud infrastructure. 
They will design cloud networking and engineering solutions to meet security, cost, and performance requirements, and deploy Anduril software into government infrastructure, promoting it through various stages.</p>\n<p>A significant part of the duties will involve identifying and triaging Kubernetes issues in the deployed environment, developing response and mitigation plans, and partnering with government platform management to address these issues effectively. The engineer will also be tasked with designing and implementing requirements for observability, alerting, and maintenance to ensure smooth operations.</p>\n<p>The role requires 8+ years of working experience in DevOps or SRE type roles, with strong proficiency in utilizing cloud services like AWS, Azure, or Google Cloud Platform. Experience with IaC (Terraform, Cloudformation, Puppet, Ansible, etc) and containerization technologies such as Docker and orchestration tools like Kubernetes and Helm is also required.</p>\n<p>The salary range for this role is $166,000-$220,000 USD per year, with highly competitive equity grants included in the majority of full-time offers. 
Anduril offers top-tier benefits for full-time employees, including comprehensive medical, dental, and vision plans, income protection, generous time off, and family planning and parenting support.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_0787994a-b99","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anduril Industries","sameAs":"https://www.andurilindustries.com/","logo":"https://logos.yubhub.co/andurilindustries.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/andurilindustries/jobs/5032429007","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$166,000-$220,000 USD","x-skills-required":["AWS","Azure","Google Cloud Platform","IaC","Kubernetes","Helm","Docker","Terraform","Cloudformation","Puppet","Ansible"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:48:16.791Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Costa Mesa, California, United States"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"AWS, Azure, Google Cloud Platform, IaC, Kubernetes, Helm, Docker, Terraform, Cloudformation, Puppet, Ansible","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":166000,"maxValue":220000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_b71a8e89-5f0"},"title":"Multinational Digital Infrastructure - Senior Cloud Engineer","description":"<p>Anduril Industries is seeking a Senior Cloud Engineer to join its Multinational Digital Infrastructure team. As a Senior Cloud Engineer, you will design and implement cloud environments that enable Anduril to effectively operate sovereign programmes in the U.K. 
and Australia, as well as expanding to other nations as Anduril&#39;s global presence increases.</p>\n<p>You will work across engineering, security, and product teams to ensure our digital infrastructure is secure, scalable, and ready to support emerging mission demands.</p>\n<p>Key responsibilities include:</p>\n<ul>\n<li>Design, deploy, and maintain enterprise cloud landing zones, security and infrastructure tooling.</li>\n<li>Collaborate with teams across the U.S. and Australia to enable secure connectivity between other sovereign cloud environments.</li>\n<li>Partner with government customers, authorizing officials (AOs), cybersecurity teams, and policy shops to accelerate accreditation, break through legacy barriers, and unlock access for cross-nation engineering teams.</li>\n<li>Implement infrastructure automation (IaC), observability tooling, and secure configuration baselines to support scalable, repeatable environment builds.</li>\n<li>Work closely with product, autonomy, Lattice, and Maritime engineering teams to integrate infrastructure capabilities with platform development, testing, and deployment workflows.</li>\n<li>Act as a technical leader during environment standup, troubleshooting, and validation events; ensure classified systems perform reliably in support of mission-critical needs.</li>\n<li>Support development of next-generation secure architectures for multinational development, data sharing, and mission system integration across Maritime platforms.</li>\n<li>Serve as a technical representative during customer events, exercises, and operational demonstrations to ensure infrastructure readiness and mission success.</li>\n</ul>\n<p>Required qualifications include:</p>\n<ul>\n<li>Ability to obtain and maintain a UK security clearance to SC level.</li>\n<li>Bachelor&#39;s degree in a STEM field or equivalent engineering experience.</li>\n<li>Technical depth in one or more areas, including cloud infrastructure, secure networking, systems 
engineering, DevSecOps, platform architecture, cybersecurity, identity &amp; access management.</li>\n<li>Specific technology includes: cloud - AWS, Azure; infrastructure as code - Terraform, CloudFormation; SCM - GitHub Enterprise; CI/CD - CircleCI, Gitlab; IDAM + SSO - Okta, AWS Identity Center.</li>\n<li>8+ years of relevant engineering, infrastructure, or technical program execution experience.</li>\n<li>Willingness to travel domestically and internationally as required.</li>\n</ul>\n<p>Preferred qualifications include:</p>\n<ul>\n<li>Experience with secure systems engineering, ideally within UK Government or Defence.</li>\n<li>Experience provisioning large enterprise cloud platforms for hundreds or thousands of users.</li>\n<li>Experience designing or maintaining distributed systems, secure networks, or infrastructure supporting autonomy, AI/ML, or big data workloads.</li>\n<li>Demonstrated ability to work across technical disciplines, influence without authority, and operate in ambiguous and fast-paced environments.</li>\n<li>Experience working with international partners or navigating multi-nation technical or policy workflows.</li>\n</ul>\n<p>The salary range for this role is competitive and includes highly competitive equity grants as part of Anduril&#39;s total compensation package.</p>\n<p>Additional benefits include:</p>\n<ul>\n<li>Comprehensive medical, dental, and vision plans at little to no cost to you.</li>\n<li>Generous time off, including a holiday hiatus in December.</li>\n<li>Family planning &amp; parenting support, including coverage for fertility treatments and adoption.</li>\n<li>Mental health resources, including access to free therapy and life coaching.</li>\n<li>Professional development opportunities, including annual reimbursement for professional development.</li>\n<li>Commuter benefits and relocation assistance.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_b71a8e89-5f0","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anduril Industries","sameAs":"https://www.anduril.com/","logo":"https://logos.yubhub.co/anduril.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/andurilindustries/jobs/5039728007","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Cloud infrastructure","Secure networking","Systems engineering","DevSecOps","Platform architecture","Cybersecurity","Identity & access management","AWS","Azure","Terraform","CloudFormation","GitHub Enterprise","CircleCI","Gitlab","Okta","AWS Identity Center"],"x-skills-preferred":["Secure systems engineering","Provisioning large enterprise cloud platforms","Designing or maintaining distributed systems","Infrastructure supporting autonomy, AI/ML, or big data workloads"],"datePosted":"2026-04-18T15:48:12.977Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"London, England, United Kingdom"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Cloud infrastructure, Secure networking, Systems engineering, DevSecOps, Platform architecture, Cybersecurity, Identity & access management, AWS, Azure, Terraform, CloudFormation, GitHub Enterprise, CircleCI, Gitlab, Okta, AWS Identity Center, Secure systems engineering, Provisioning large enterprise cloud platforms, Designing or maintaining distributed systems, Infrastructure supporting autonomy, AI/ML, or big data workloads"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_2d870a29-495"},"title":"Senior Solutions Engineer, Auth0","description":"<p><strong>Secure Every Identity, from AI to Human</strong></p>\n<p>Auth0 secures AI by building the trusted, neutral infrastructure that enables 
organisations to safely embrace this new era.</p>\n<p>We are looking for builders and owners who operate with speed and urgency and execute with excellence. This is an opportunity to do career-defining work. We&#39;re all in on this mission. If you are too, let&#39;s talk.</p>\n<p><strong>About Okta</strong></p>\n<p>Okta is the World’s Identity Company. We enable everyone to safely use any technology, anywhere, on any device or app. The Okta Platform and Auth0 Platform enable secure and flexible access, authentication, and automation, transforming people&#39;s experience in the digital world and putting Identity at the heart of business security and growth.</p>\n<p><strong>Job Description</strong></p>\n<p>As a member of the Okta Japan SE team, the Senior Solutions Engineer for the Auth0 Platform will serve as a key technical and business advisor to customers and partners. Leveraging a passion for technology and deep expertise, you will plan, propose, and demonstrate how Auth0 solutions solve critical business challenges for stakeholders ranging from field practitioners to C-level executives throughout the sales cycle. 
Consequently, you will directly contribute to achieving quarterly and annual sales targets in Japan.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Collaborate with the sales team and related departments to act as an authority on technology and the Identity Security domain, ensuring customers fully understand the value and potential of the Auth0 Platform.</li>\n</ul>\n<ul>\n<li>Communicate the value of implementing Auth0 solutions to customers in various positions (practitioners, managers, C-level) and earn their trust.</li>\n</ul>\n<ul>\n<li>Serve as a primary technical resource, confidently responding to product functionality questions and detailed technical inquiries from customers, partners, and internal Auth0 teams.</li>\n</ul>\n<ul>\n<li>Drive knowledge sharing by contributing to and utilizing best practices and reusable assets, proactively elevating the expertise and efficiency of the entire SE team.</li>\n</ul>\n<ul>\n<li>Support customer Proof of Concepts (PoCs) to technically demonstrate that implementing Auth0’s identity solutions will deliver value to the customer.</li>\n</ul>\n<ul>\n<li>Participate as a presenter and demonstrator in online or offline events aimed at evangelizing Auth0.</li>\n</ul>\n<ul>\n<li>Lead less experienced sales representatives and SEs, contributing to constructive performance improvement of the entire team through appropriate feedback.</li>\n</ul>\n<p><strong>Requirements</strong></p>\n<ul>\n<li>Experience in customer-facing roles as a Pre-sales SE or Technical Sales.</li>\n</ul>\n<ul>\n<li>Ability to build and maintain strong relationships of trust with customer technical staff and leaders throughout the sales process.</li>\n</ul>\n<ul>\n<li>Native-level Japanese proficiency with excellent communication and reading comprehension skills.</li>\n</ul>\n<ul>\n<li>Ability to ask questions and express opinions in English via email, Slack, or one-on-one conversations.</li>\n</ul>\n<ul>\n<li>Joy in spreading industry-leading 
solutions that benefit society.</li>\n</ul>\n<ul>\n<li>Enjoyment in collaborating with Okta’s expanding partner community in Japan and working with Pre-sales Engineers around the world.</li>\n</ul>\n<ul>\n<li>A desire to become a top-tier Solutions Engineer by refining pre-sales skills, not just product and authentication technology knowledge.</li>\n</ul>\n<p>And extra credit if you have experience in any of the following!</p>\n<ul>\n<li>Pre-sales experience in an organisation using sales methodologies such as SPIN, Solution Selling, or MEDDICC.</li>\n</ul>\n<ul>\n<li>Experience with Identity &amp; Access Management (IAM), Single Sign-On (SSO), Security, or API-based solutions.</li>\n</ul>\n<ul>\n<li>Experience using at least one standard network security protocol (OAuth 2.0, OpenID Connect, SAML, LDAP, etc.).</li>\n</ul>\n<ul>\n<li>Knowledge/Experience with Cloud Platforms (AWS, GCP, Azure) and tools such as Kubernetes, Terraform, or Serverless environments.</li>\n</ul>\n<ul>\n<li>Hands-on development experience in one or more of the following areas: Web Development (JavaScript, HTML, Front-end frameworks), Mobile Development (iOS, Android), Backend Development (Java, C#, Node.js, Python, PHP, Ruby), or IP-based real-time communications.</li>\n</ul>\n<ul>\n<li>Knowledge of modern technologies such as Generative AI, Intelligent Agents, and Conversational Bots.</li>\n</ul>\n<ul>\n<li>Understanding of core application security (Password Hashing, SSL/TLS, EAR, XSS, XSRF).</li>\n</ul>\n<ul>\n<li>Experience speaking at technical conferences/webinars or writing technical blogs.</li>\n</ul>\n<ul>\n<li>Experience working as a software developer.</li>\n</ul>\n<ul>\n<li>Experience communicating and working with overseas engineers in English.</li>\n</ul>\n<p><strong>What you can expect as an Okta employee</strong></p>\n<ul>\n<li>Benefits</li>\n</ul>\n<ul>\n<li>Social Impact (Okta for Good)</li>\n</ul>\n<ul>\n<li>Okta&#39;s People, Connections, and 
Community</li>\n</ul>\n<p>Okta builds a dynamic work environment by providing the best tools, technology, and benefits so that employees can work productively in an environment that suits their needs. Each organisation has a unique way of working with flexibility and mobility so that every employee can be their most creative and successful self, regardless of where they live.</p>\n<p>Find your place at Okta today!</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_2d870a29-495","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Okta","sameAs":"https://www.okta.com/","logo":"https://logos.yubhub.co/okta.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/okta/jobs/7515197","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Identity and Access Management","Single Sign-On","Security","API-based solutions","Cloud Platforms","Kubernetes","Terraform","Serverless environments","Web Development","Mobile Development","Backend Development","Generative AI","Intelligent Agents","Conversational Bots","Core application security"],"x-skills-preferred":["Pre-sales experience","Sales methodologies","Network security protocols","Hands-on development experience","Modern technologies","Technical writing","Public speaking"],"datePosted":"2026-04-18T15:47:54.594Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Tokyo, Japan"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Identity and Access Management, Single Sign-On, Security, API-based solutions, Cloud Platforms, Kubernetes, Terraform, Serverless environments, Web Development, Mobile Development, Backend Development, Generative AI, Intelligent Agents, Conversational Bots, Core application security, Pre-sales experience, 
Sales methodologies, Network security protocols, Hands-on development experience, Modern technologies, Technical writing, Public speaking"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_f516f0ef-a2d"},"title":"Senior Site Reliability Engineer (Auth0)","description":"<p>Secure Every Identity, from AI to Human Identity is the key to unlocking the potential of AI. Okta secures AI by building the trusted, neutral infrastructure that enables organisations to safely embrace this new era.</p>\n<p>This work requires a relentless drive to solve complex challenges with real-world stakes. We are looking for builders and owners who operate with speed and urgency and execute with excellence. This is an opportunity to do career-defining work. We&#39;re all in on this mission.</p>\n<p>As a Senior Site Reliability Engineer, you&#39;ll join our SRE team based in Europe to ensure our production systems are not only operational but also resilient, scalable, and ready for exponential growth. 
This isn&#39;t just about keeping the lights on; it&#39;s about directly contributing to the platform&#39;s core resiliency and robustness.</p>\n<p>You&#39;ll be a hands-on builder, crafting solutions that make our system more reliable by design.</p>\n<p><strong>Key Responsibilities:</strong></p>\n<ul>\n<li>Design and build custom software in Go to enhance the platform&#39;s reliability, resiliency, and redundancy.</li>\n<li>Partner with engineering teams to embed reliability principles, improving the availability, performance, and observability of our services.</li>\n<li>Use your deep understanding of infrastructure and observability principles to identify opportunities for improvement within the product and implement solutions.</li>\n<li>Contribute to our on-call rotation, providing rapid, effective response to critical incidents and using your expertise to troubleshoot, mitigate or accurately escalate production issues.</li>\n<li>Develop and refine our SRE tooling and processes, focusing on automation and operational efficiency.</li>\n<li>Define, document, and champion reliability best practices across the organisation.</li>\n</ul>\n<p><strong>Requirements:</strong></p>\n<ul>\n<li>A proactive and systematic approach to problem-solving, with a high degree of ownership.</li>\n<li>Proven experience in a production environment supporting large-scale, mission-critical applications with a high degree of autonomy.</li>\n<li>Proficiency in at least one programming language, with a preference for Go. 
You should be comfortable writing custom applications, not just scripts.</li>\n<li>Experience with infrastructure as code (Terraform), container orchestration (Kubernetes, Docker) and GitOps (ArgoCD).</li>\n<li>Demonstrable expertise in a major cloud provider (Azure, AWS, or GCP).</li>\n<li>A strong grasp of microservices architecture, databases (SQL, NoSQL), and networking fundamentals, so you can understand how custom code can solve platform-level issues.</li>\n<li>An understanding of core SRE principles, including SLIs, SLOs, and error budgets.</li>\n<li>Experience in an on-call rotation for a 24/7 cloud-based environment.</li>\n<li>Exceptional communication and collaboration skills, with a proven ability to work effectively in a remote, distributed team, where tasks may be self-driven.</li>\n</ul>\n<p>We&#39;re looking for someone who is not just looking for a job, but a career-defining opportunity to tackle complex challenges at a massive scale. If you&#39;re a curious and motivated engineer who&#39;s passionate about building reliability directly into the platform, we&#39;d love to hear from you.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_f516f0ef-a2d","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Okta","sameAs":"https://www.okta.com/","logo":"https://logos.yubhub.co/okta.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/okta/jobs/7791590","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$136,000-$187,000 CAD","x-skills-required":["Go","Terraform","Kubernetes","Docker","GitOps","Cloud provider (Azure, AWS, or GCP)","Microservices architecture","Databases (SQL, NoSQL)","Networking fundamentals","Core SRE principles (SLIs, SLOs, error 
budgets)"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:47:10.665Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Toronto, Ontario, Canada"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Go, Terraform, Kubernetes, Docker, GitOps, Cloud provider (Azure, AWS, or GCP), Microservices architecture, Databases (SQL, NoSQL), Networking fundamentals, Core SRE principles (SLIs, SLOs, error budgets)","baseSalary":{"@type":"MonetaryAmount","currency":"CAD","value":{"@type":"QuantitativeValue","minValue":136000,"maxValue":187000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_782a1c68-325"},"title":"Senior DevOps Engineer","description":"<p>At ZoomInfo, we&#39;re looking for a Senior DevOps Engineer to join our Infrastructure Engineering group. As a Senior DevOps Engineer, you will be responsible for innovation in infrastructure and automation for ZoomInfo Engineering. You will have a strong background in modern infrastructure, with a thorough understanding of industry best practices. 
You will have a high level of comfort participating in challenging technical discussions and advocating for best practices in a high-paced environment.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Thorough, clear, concise documentation of new and existing standards, procedures, and automated workflows</li>\n<li>Championing of best practices and standards around infrastructure configuration and management</li>\n<li>Experience in creating internal products and managing their software development lifecycle</li>\n<li>Deployment, configuration, and management of infrastructure via infrastructure as code</li>\n<li>Working hands on with cloud infrastructure (AWS, Azure, and GCP)</li>\n<li>Working hands on with container infrastructure (Docker, Kubernetes, ECS, EKS, GKE, GAE, etc.)</li>\n<li>Configuration and management of Linux based tools and third-party cloud services</li>\n<li>Continuous improvement of our infrastructure, ensuring that it is highly available and observable</li>\n</ul>\n<p>Minimum Requirements:</p>\n<ul>\n<li>Solid foundation of experience managing Linux systems in virtual environments (6+ years)</li>\n<li>Deploying and maintaining highly available infrastructure in one or more Cloud providers (5+ years, AWS or GCP preferred)</li>\n<li>Infrastructure as code using Terraform (4+ years)</li>\n<li>Creating, deploying, maintaining, and troubleshooting Docker images (4+ years)</li>\n<li>Scoping, deploying, maintaining and troubleshooting Kubernetes clusters (4+ years)</li>\n<li>Developing and maintaining an active codebase in Go, Python preferably (3+ years)</li>\n<li>Experience with PaaS technologies (5+ years, EKS and GKE preferred)</li>\n<li>Maintaining monitoring and observability tools (Datadog, Prometheus preferred)</li>\n<li>Thorough understanding of network infrastructure and concepts (VPNs, routers and routing protocols, TCP/IP, IPv4 and v6, UDP, OSI layers, etc.)</li>\n<li>Experience with load balancing and proxy technologies (Istio, Nginx, HAProxy, 
Apache, Cloud load balancers, etc.)</li>\n<li>Debugging and troubleshooting complex problems in cloud-native infrastructure.</li>\n<li>Slack native mentality.</li>\n<li>Bachelor’s Degree in Computer Science or a related technical discipline, or the equivalent combination of education, technical certifications, training, or work experience.</li>\n</ul>\n<p>Abilities Required:</p>\n<ul>\n<li>Demonstrated ability to learn new technologies quickly and independently</li>\n<li>Strong technical, organizational and interpersonal skills</li>\n<li>Strong written and verbal communication skills</li>\n<li>Must be able to read, understand, and communicate complex problems and solutions in English over a textual medium (such as Slack)</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_782a1c68-325","directApply":true,"hiringOrganization":{"@type":"Organization","name":"ZoomInfo","sameAs":"https://www.zoominfo.com/","logo":"https://logos.yubhub.co/zoominfo.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/zoominfo/jobs/8287254002","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Linux","Cloud infrastructure (AWS, Azure, GCP)","Container infrastructure (Docker, Kubernetes, ECS, EKS, GKE, GAE)","Infrastructure as code (Terraform)","Go","Python","PaaS technologies (EKS, GKE)","Monitoring and observability tools (Datadog, Prometheus)"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:47:10.427Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Ra'anana, Israel"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Linux, Cloud infrastructure (AWS, Azure, GCP), Container infrastructure (Docker, Kubernetes, ECS, EKS, GKE, GAE), Infrastructure as code (Terraform), Go, Python, PaaS 
technologies (EKS, GKE), Monitoring and observability tools (Datadog, Prometheus)"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_7f3f1713-f74"},"title":"Systems Reliability Engineer","description":"<p>About Us</p>\n<p>At Cloudflare, we&#39;re on a mission to help build a better Internet. We protect and accelerate any Internet application online without adding hardware, installing software, or changing a line of code.</p>\n<p>As a Systems Reliability Engineer on one of our Production Engineering teams, you&#39;ll be building the tools to help engineers deploy and operate the services that make Cloudflare work. Our mission is to provide a reliable, yet flexible, platform to help product teams release new software efficiently and safely.</p>\n<p>Core platforms we operate at Cloudflare include:</p>\n<ul>\n<li>Kubernetes</li>\n<li>Kafka</li>\n<li>Developer tools, CI, and CD systems</li>\n<li>Vault, Consul</li>\n<li>Terraform</li>\n<li>Temporal Workflows</li>\n<li>Cloudflare Developer Platform</li>\n</ul>\n<p>Responsibilities</p>\n<ul>\n<li>Build software that automates the operation of large, highly-available distributed systems.</li>\n<li>Ensure platform security, and guide security best practices</li>\n<li>Document your work and guide fellow developers towards optimal solutions</li>\n<li>Contribute back to the open source community</li>\n<li>Leave code better than we found it</li>\n</ul>\n<p>Requirements</p>\n<ul>\n<li>Recent career experience with Go or Python and at least 3 years experience in the role of full-time software engineer (any language). 
Rust is an added bonus.</li>\n<li>Experience with deploying and managing services using Docker on Linux</li>\n<li>A firm grasp of IP networking, load balancing and DNS</li>\n<li>Excellent debugging skills in a distributed systems environment</li>\n<li>Source control experience including branching, merging and rebasing (we use git)</li>\n<li>The ability to break down complex problems and drive towards a solution</li>\n</ul>\n<p>Bonus Points</p>\n<ul>\n<li>Experience with Deployment, StatefulSets, Persistent Volumes Claims, Ingresses, CRDs on Kubernetes</li>\n<li>Operational experience deploying and managing large systems on bare metal</li>\n<li>Experience as a Site Reliability Engineer (SRE) for a large-scale company</li>\n<li>You have practical knowledge of web and systems performance, and extensively used tracing tools like ebpf and strace.</li>\n<li>Alerting and monitoring (Prometheus/Alert Manager), Configuration Management (salt)</li>\n</ul>\n<p>What Makes Cloudflare Special?</p>\n<p>We&#39;re not just a highly ambitious, large-scale technology company. We&#39;re a highly ambitious, large-scale technology company with a soul. 
Fundamental to our mission to help build a better Internet is protecting the free and open Internet.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_7f3f1713-f74","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Cloudflare","sameAs":"https://www.cloudflare.com/","logo":"https://logos.yubhub.co/cloudflare.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/cloudflare/jobs/7453074","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Go","Python","Docker","Linux","IP networking","load balancing","DNS","source control","git","Kubernetes","Kafka","Vault","Consul","Terraform","Temporal Workflows","Cloudflare Developer Platform"],"x-skills-preferred":["Rust","Deployment","StatefulSets","Persistent Volumes Claims","Ingresses","CRDs","ebpf","strace","Prometheus","Alert Manager","salt"],"datePosted":"2026-04-18T15:47:02.171Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Hybrid"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Go, Python, Docker, Linux, IP networking, load balancing, DNS, source control, git, Kubernetes, Kafka, Vault, Consul, Terraform, Temporal Workflows, Cloudflare Developer Platform, Rust, Deployment, StatefulSets, Persistent Volumes Claims, Ingresses, CRDs, ebpf, strace, Prometheus, Alert Manager, salt"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_711f5c89-ed8"},"title":"Senior Staff Machine Learning Engineer, GenAI Platform","description":"<p>As a Senior Staff Machine Learning Engineer, you will help define and lead the vision for Reddit&#39;s large-scale GenAI Platform, shaping the strategy, architecture, and operating model that enable teams across the company to 
build, deploy, and scale generative AI products with confidence.</p>\n<p>Contribute to the design, implementation, and maintenance of the LLM Gateway, focusing on features like unified API endpoints for internal/externally hosted LLM, rate/token limit management, and intelligent failover mechanisms to boost uptime and reliability.</p>\n<p>Lead and execute the vision, strategy, and roadmap for Reddit&#39;s large-scale GenAI Platform.</p>\n<p>Define the platform architecture and operating model that enable teams to build, deploy, and scale GenAI products reliably.</p>\n<p>Drive the strategy for a unified LLM Gateway supporting internally and externally hosted LLMs through consistent APIs and abstractions.</p>\n<p>Set the direction for core platform capabilities such as rate and token limit management, intelligent failover, and production resilience.</p>\n<p>Shape Reddit&#39;s approach to an enterprise-grade RAG system.</p>\n<p>Establish the strategic direction for agentic AI workflows and tool-use patterns across the platform.</p>\n<p>Own the end-to-end platform strategy from concept through production adoption and long-term evolution.</p>\n<p>Drive MLOps and LLMOps standards across CI/CD, testing, versioning, evaluation, and lifecycle management.</p>\n<p>Define best practices for observability, monitoring, governance, and operational excellence across GenAI systems.</p>\n<p>Partner across engineering, product, and leadership to align platform investments with company priorities and user needs.</p>\n<p>Champion platform thinking with a strong focus on scalability, reliability, performance, and developer experience.</p>\n<p>Influence technical direction across teams by turning emerging AI capabilities into a scalable platform strategy.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_711f5c89-ed8","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Reddit","sameAs":"https://www.redditinc.com","logo":"https://logos.yubhub.co/redditinc.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/reddit/jobs/7772274","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$292,500-$409,500 USD","x-skills-required":["Machine Learning","GenAI Platform","LLM Gateway","API Endpoints","Rate/Token Limit Management","Intelligent Failover","Kubernetes","Cloud-Based Technologies","AWS","Google Cloud Storage","Infrastructure-as-Code","Terraform","Go","Python","CI/CD","Testing","Versioning","Evaluation","Lifecycle Management","Observability","Monitoring","Governance","Operational Excellence"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:46:48.652Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Remote - United States"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Machine Learning, GenAI Platform, LLM Gateway, API Endpoints, Rate/Token Limit Management, Intelligent Failover, Kubernetes, Cloud-Based Technologies, AWS, Google Cloud Storage, Infrastructure-as-Code, Terraform, Go, Python, CI/CD, Testing, Versioning, Evaluation, Lifecycle Management, Observability, Monitoring, Governance, Operational Excellence","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":292500,"maxValue":409500,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_87c43ead-4a1"},"title":"Staff Site Reliability Engineer, Security- GCP","description":"<p>Secure Every Identity</p>\n<p>Okta secures AI by building the trusted, neutral infrastructure that enables organisations to safely 
embrace this new era.</p>\n<p>We are looking for builders and owners who operate with speed and urgency and execute with excellence. This is an opportunity to do career-defining work.</p>\n<p>Okta&#39;s Workforce Identity Cloud Security Engineering group is looking for an experienced and passionate Staff Site Reliability Engineer to join a team focused on designing and developing Security solutions to harden our cloud infrastructure.</p>\n<p>We encourage you to prescribe defence-in-depth measures, industry security standards and enforce the principle of least privilege to help take our Security posture to the next level.</p>\n<p>Our Infrastructure Security team has a niche skill-set that balances Security domain expertise with the ability to design, implement, rollout infrastructure across multiple cloud environments without adding friction to product functionality or performance.</p>\n<p>We are responsible for the ever-growing need to improve our customer safety and privacy by providing security services that are coupled with the core Okta product.</p>\n<p>This is a high-impact role in a security-centric, fast-paced organisation that is poised for massive growth and success.</p>\n<p>You will act as a liaison between the Security org and the Engineering org to build technical leverage and influence the security roadmap.</p>\n<p>You will focus on engineering security aspects of the systems used across our services.</p>\n<p>Join us and be part of a company that is about to change the cloud computing landscape forever.</p>\n<p>As a Staff Engineer, you should be able to identify gaps, propose innovative solutions, and contribute to roadmaps while driving alignment across multiple teams within the organisation.</p>\n<p>Additionally, you should serve as a role model, providing technical mentorship to junior team members and fostering a culture of learning and growth</p>\n<p>What are we looking for?</p>\n<p>We are looking for a security-first SRE engineer who doesn&#39;t 
just &#39;flag&#39; issues but builds the automation to solve them.</p>\n<p>You should have a deep-seated intuition for cloud-native security and a proven track record of hardening large-scale GCP and AWS environments.</p>\n<p>As a Technical SME, you will design and build production infrastructure with a &#39;security-at-scale&#39; mindset.</p>\n<p>What You Will Work On?</p>\n<p>Security Evangelism: Lead initiatives to strengthen our security posture for critical infrastructure and promote best practices across the engineering organisation.</p>\n<p>Incident Response &amp; Reliability: Respond to production security incidents, perform root cause analysis, and build automated preventions to ensure high performance and reliability.</p>\n<p>Automated Hardening: Identify manual security processes and automate them using custom tooling and CI/CD integrations.</p>\n<p>Architecture &amp; Documentation: Develop technical documentation, runbooks, and procedures for a 24x7 online environment.</p>\n<p>Platform Evolution: Continuously evolve our monitoring platforms, moving from simple auditing to active, automated prevention.</p>\n<p>Minimum Required Knowledge, Skills, &amp; Abilities:</p>\n<p>Experience: 8+ years of experience architecting and running complex cloud networking and infrastructure, with at least 7+ years specialised in DevSecOps or Cloud Security.</p>\n<p>GCP Expertise: Minimum 3+ years of deep, hands-on experience securing GCP (GKE, GCE, Shared VPC etc).</p>\n<p>Infrastructure as Code (IaC): 10+ years of experience using Terraform and Chef to manage complex cloud resources and OS hardening.</p>\n<p>Automation Mastery: Expert-level proficiency in Go, Python, or Ruby for building custom security tooling and automated remediation.</p>\n<p>Hardened Containers: Proven track record of securing containerised workloads, including image scanning, K8s RBAC, and runtime security tools (e.g., CrowdStrike Falcon, Falco, or gVisor).</p>\n<p>Unflappable Troubleshooting: A 
&#39;see a problem, fix the problem&#39; mindset with the ability to debug complex networking, IAM, or performance issues under pressure.</p>\n<p>Security Foundations: Strong grasp of Linux internals, OS hardening (CIS benchmarks), and IP protocols (TLS/SSL, DNSSEC, BGP).</p>\n<p>Education: BS in Computer Science or equivalent professional experience.</p>\n<p>Key Responsibilities:</p>\n<p>IAM &amp; Secrets Management: Design and maintain large-scale production IAM policies and secrets management workflows.</p>\n<p>Infrastructure Hardening: Implement and maintain Public Key Infrastructure (PKI) and ensure all GCE/GKE environments meet strict compliance standards.</p>\n<p>Operational Excellence: Utilise industry-standard tools like OSQuery, Splunk, Chronicle, Nessus, or Qualys/Crowdstrike to monitor system health and security telemetry.</p>\n<p>Strategic Rollouts: Lead the phased transition of security policies from Audit/Detection mode to Blocking/Prevention mode, ensuring zero impact on production uptime.</p>\n<p>Bonus Points For:</p>\n<p>Multi-Cloud IAM Governance: Experience designing a unified IAM framework across AWS and GCP, utilising federated Identities such as Workload, Workforce Identity Federation with understanding of SAML &amp; OIDC auth mechanism and automated &#39;Least Privilege&#39; enforcement.</p>\n<p>Cloud-Native Reliability Engineering: Deep understanding of multi-cloud reliability patterns, maintaining high availability (HA) during security patching or infrastructure-wide hardening.</p>\n<p>Hardened Kubernetes Orchestration: Advanced experience securing GKE, EKS, and kOps, specifically implementing Pod Security Standards, Network Policies, and Admission Controllers for a &#39;Zero-Trust&#39; posture.</p>\n<p>Threat Modeling: Security Reviews &amp; Threat Modeling at both Design &amp; Implementation scope.</p>\n<p>The Okta Experience - Supporting Your Well-Being - Driving Social Impact - Developing Talent and Fostering Connection + 
Community</p>\n<p>We are intentional about connection. Our global community, spanning over 20 offices worldwide, is united by a drive to innovate. Your journey begins with an immersive, in-person onboarding experience designed to accelerate your impact and connect you to our mission and team from day one.</p>\n<p>Okta is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, colour, religion, sex, sexual orientation, gender identity, national origin, ancestry, marital status, age, physical or mental disability, or status as a protected veteran. We also consider for employment qualified applicants with arrest and convictions records, consistent with applicable laws.</p>\n<p>If reasonable accommodation is needed to complete any part of the job application, interview process, or onboarding please use this Form to request an accommodation.</p>\n<p>Notice for New York City Applicants &amp; Employees: Okta may use Automated Employment Decision Tools (AEDT), as defined by New York City Local Law 144, that use artificial intelligence, machine learning, or other automated processes to assist in our recruitment and hiring process. In accordance with NYC Local Law 144, if you are an applicant or employee residing in New York City, please click here to view our full NYC AEDT Notice.</p>\n<p>Okta is committed to complying with applicable data privacy and security laws and regulations. 
For more information, please see our Personnel and Job Candidate Privacy Notice at https://www.okta.com/legal/personnel-policy/</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_87c43ead-4a1","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Okta","sameAs":"https://www.okta.com/","logo":"https://logos.yubhub.co/okta.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/okta/jobs/6671260","x-work-arrangement":"onsite","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["cloud-native security","GCP","AWS","DevSecOps","Cloud Security","Terraform","Chef","Go","Python","Ruby","containerised workloads","image scanning","K8s RBAC","runtime security tools","Linux internals","OS hardening","IP protocols","TLS/SSL","DNSSEC","BGP"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:46:47.221Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Bengaluru, India"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"cloud-native security, GCP, AWS, DevSecOps, Cloud Security, Terraform, Chef, Go, Python, Ruby, containerised workloads, image scanning, K8s RBAC, runtime security tools, Linux internals, OS hardening, IP protocols, TLS/SSL, DNSSEC, BGP"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_cbeabfab-916"},"title":"Software Engineer, Observability","description":"<p>As a Software Engineer on the Observability team, you will design, build, and maintain scalable systems that process and surface telemetry data across distributed environments.</p>\n<p>You&#39;ll contribute production-quality code in languages like Go and Python, while improving system reliability through enhanced monitoring, alerting, and incident response 
practices.</p>\n<p>Day to day, you&#39;ll collaborate with cross-functional engineering teams to implement observability best practices, support production systems, and help optimize performance across large-scale infrastructure.</p>\n<p>You will also participate in on-call rotations and contribute to continuous improvements based on real-world system behavior.</p>\n<p>The ideal candidate will have experience with Go and Python, as well as a strong understanding of system reliability and observability best practices.</p>\n<p>In addition to your technical skills, you should be able to collaborate effectively with cross-functional teams and communicate complex technical concepts to non-technical stakeholders.</p>\n<p>If you&#39;re passionate about building scalable systems and improving system reliability, we&#39;d love to hear from you!</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_cbeabfab-916","directApply":true,"hiringOrganization":{"@type":"Organization","name":"CoreWeave","sameAs":"https://www.coreweave.com","logo":"https://logos.yubhub.co/coreweave.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/coreweave/jobs/4587675006","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":"$109,000 to $145,000","x-skills-required":["Go","Python","Kubernetes","containerization","microservices architectures","observability systems","metrics","logging","tracing"],"x-skills-preferred":["ClickHouse","Elastic","Loki","VictoriaMetrics","Prometheus","Thanos","OpenTelemetry","Grafana","Terraform","modern testing frameworks","deployment strategies","data 
streaming technologies","AI/ML infrastructure"],"datePosted":"2026-04-18T15:46:41.788Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"New York, NY / Sunnyvale, CA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Go, Python, Kubernetes, containerization, microservices architectures, observability systems, metrics, logging, tracing, ClickHouse, Elastic, Loki, VictoriaMetrics, Prometheus, Thanos, OpenTelemetry, Grafana, Terraform, modern testing frameworks, deployment strategies, data streaming technologies, AI/ML infrastructure","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":109000,"maxValue":145000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_8a144188-686"},"title":"Solutions Engineer, Benelux","description":"<p>At Cloudflare, we&#39;re on a mission to help build a better Internet. As a Solutions Engineer, you will be part of the Pre-Sales Solution Engineering organisation, owning the technical sale of the Cloudflare solution portfolio. You will work closely with our customers and partners to educate, empower, and ensure their success delivering Cloudflare security, reliability, and performance solutions.</p>\n<p>Your role will be to build champions and enable technical teams alongside our Benelux sales organisation to drive pipeline and close deals. 
As the technical advocate inside Cloudflare, you will work closely with teams across Sales, Product, Engineering, Customer Support and our channel partners to ensure our customers succeed with Cloudflare security, reliability, and performance solutions.</p>\n<p>We are looking for someone with strong experience in pre-sales, partner and account management, and excellent verbal and written communication skills in Dutch and English, suited for both technical and executive-level engagement. You should be comfortable speaking about the Cloudflare vision with all audiences.</p>\n<p>Specifically, we are looking for you to:</p>\n<ul>\n<li>Build and maintain long-term technical relationships with prospects, customers and ecosystem organisations across Benelux through demonstrating value, enablement, and uncovering new areas of potential revenue</li>\n<li>Drive technical solution design conversations through use case qualification and collaborative technical wins through demonstrations and proofs-of-concept</li>\n<li>Develop passionate technical champions within the technology ranks of your accounts, helping them drive sales for identified opportunities and build revenue pipeline</li>\n<li>Evangelize and represent Cloudflare through technical thought leadership and expertise</li>\n<li>Be the voice of the market internally at Cloudflare, engaging with and influencing Product and Engineering teams to meet the needs of your accounts and their customers</li>\n</ul>\n<p>You will be required to travel within the Benelux to support engagements, attend conferences and industry events, and collaborate with Cloudflare teammates.</p>\n<p>Examples of desirable skills, knowledge and experience include:</p>\n<ul>\n<li>Fluency in Dutch and English (verbal and written)</li>\n<li>Ability to communicate complex technical concepts to both technical and non-technical audiences, including C-level stakeholders</li>\n<li>Strong presentation and storytelling skills (whiteboarding, demos, executive 
briefings)</li>\n<li>Experience managing technical sales cycles end-to-end</li>\n<li>Ability to articulate business value and ROI of technical solutions, not just features</li>\n<li>Experience working within an integrated account team (alongside Account Executives, Customer Success, BDRs, and channel partners)</li>\n<li>Networking technologies including TCP/IP, UDP, DNS (authoritative and recursive, DNSSEC), IPv4/IPv6, BGP routing, Autonomous Systems, subnetting</li>\n<li>Tunneling and connectivity: GRE, IPsec, MPLS, SDWAN</li>\n<li>Cloud networking concepts: VPCs, peering, interconnect</li>\n<li>DDoS attack types (L3/L4/L7) and mitigation strategies</li>\n<li>Web Application Firewall (WAF) rule configuration and tuning</li>\n<li>VPN concepts and their limitations relative to Zero Trust approaches</li>\n<li>API security: API Gateway, rate limiting, schema validation, abuse prevention</li>\n<li>Bot management concepts and detection techniques</li>\n<li>SASE concepts and Zero Trust Networking architectures (ZTNA, CASB, SWG, DLP, RBI as integrated platform)</li>\n<li>Zero Trust Network Access (ZTNA) vs. traditional VPN architecture</li>\n<li>HTTP technologies and reverse proxy architecture: WAF, CDN, caching mechanics</li>\n<li>Detailed understanding of the flow from user to application, including hybrid cloud architectures</li>\n<li>Working knowledge of major cloud platforms: AWS, Azure, GCP (architecture patterns, native security tooling, VPC/peering models)</li>\n<li>Familiarity with Infrastructure-as-Code concepts (e.g. 
Terraform)</li>\n<li>Cloudflare Workers and the edge compute model (JavaScript/TypeScript)</li>\n<li>Familiarity with related primitives: KV, Object storage, serverless compute</li>\n<li>Familiarity with the competitive landscape across Cloudflare&#39;s product areas</li>\n<li>Understanding of why customers move from on-premises appliances to cloud-delivered security</li>\n<li>Awareness of relevant industry verticals: Financial Services, eCommerce, Gaming, Media, SaaS, Healthcare</li>\n</ul>\n<p>We value intellectual curiosity, adaptability, and a collaborative spirit. On the Solutions Engineering team, you will find an environment where everyone brings different strengths and jumps in to help each other. If you are passionate about technology and look forward to helping customers and ecosystem organisations realise the full promise of Cloudflare, we&#39;d love to hear from you.</p>\n<p>What makes Cloudflare special? We’re not just a highly ambitious, large-scale technology company. We’re a highly ambitious, large-scale technology company with a soul. Fundamental to our mission to help build a better Internet is protecting the free and open Internet. 
Project Galileo: Since 2014, we&#39;ve equipped more than 2,400 journalism and civil society organizations in 111 countries with powerful tools to defend</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_8a144188-686","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Cloudflare","sameAs":"https://www.cloudflare.com/","logo":"https://logos.yubhub.co/cloudflare.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/cloudflare/jobs/7742347","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Networking technologies including TCP/IP, UDP, DNS (authoritative and recursive, DNSSEC), IPv4/IPv6, BGP routing, Autonomous Systems, subnetting","Tunneling and connectivity: GRE, IPsec, MPLS, SDWAN","Cloud networking concepts: VPCs, peering, interconnect","DDoS attack types (L3/L4/L7) and mitigation strategies","Web Application Firewall (WAF) rule configuration and tuning","VPN concepts and their limitations relative to Zero Trust approaches","API security: API Gateway, rate limiting, schema validation, abuse prevention","Bot management concepts and detection techniques","SASE concepts and Zero Trust Networking architectures (ZTNA, CASB, SWG, DLP, RBI as integrated platform)","Zero Trust Network Access (ZTNA) vs. traditional VPN architecture","HTTP technologies and reverse proxy architecture: WAF, CDN, caching mechanics","Detailed understanding of the flow from user to application, including hybrid cloud architectures","Working knowledge of major cloud platforms: AWS, Azure, GCP (architecture patterns, native security tooling, VPC/peering models)","Familiarity with Infrastructure-as-Code concepts (e.g. 
Terraform)","Cloudflare Workers and the edge compute model (JavaScript/TypeScript)","Familiarity with related primitives: KV, Object storage, serverless compute","Familiarity with the competitive landscape across Cloudflare's product areas","Understanding of why customers move from on-premises appliances to cloud-delivered security","Awareness of relevant industry verticals: Financial Services, eCommerce, Gaming, Media, SaaS, Healthcare"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:46:26.177Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Hybrid"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Networking technologies including TCP/IP, UDP, DNS (authoritative and recursive, DNSSEC), IPv4/IPv6, BGP routing, Autonomous Systems, subnetting, Tunneling and connectivity: GRE, IPsec, MPLS, SDWAN, Cloud networking concepts: VPCs, peering, interconnect, DDoS attack types (L3/L4/L7) and mitigation strategies, Web Application Firewall (WAF) rule configuration and tuning, VPN concepts and their limitations relative to Zero Trust approaches, API security: API Gateway, rate limiting, schema validation, abuse prevention, Bot management concepts and detection techniques, SASE concepts and Zero Trust Networking architectures (ZTNA, CASB, SWG, DLP, RBI as integrated platform), Zero Trust Network Access (ZTNA) vs. traditional VPN architecture, HTTP technologies and reverse proxy architecture: WAF, CDN, caching mechanics, Detailed understanding of the flow from user to application, including hybrid cloud architectures, Working knowledge of major cloud platforms: AWS, Azure, GCP (architecture patterns, native security tooling, VPC/peering models), Familiarity with Infrastructure-as-Code concepts (e.g. 
Terraform), Cloudflare Workers and the edge compute model (JavaScript/TypeScript), Familiarity with related primitives: KV, Object storage, serverless compute, Familiarity with the competitive landscape across Cloudflare's product areas, Understanding of why customers move from on-premises appliances to cloud-delivered security, Awareness of relevant industry verticals: Financial Services, eCommerce, Gaming, Media, SaaS, Healthcare"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_3922bc3d-027"},"title":"Staff Software Engineer - Backend","description":"<p>At Databricks, we are obsessed with enabling data teams to solve the world&#39;s toughest problems, from security threat detection to cancer drug development. We do this by building and running the world&#39;s best data and AI infrastructure platform, so our customers can focus on the high-value challenges that are central to their own missions.</p>\n<p>As a software engineer with a backend focus, you will work closely with your team and product management to prioritise, design, implement, test, and operate micro-services for the Databricks platform and product. This implies, among others, writing software in Scala/Java, building data pipelines (Apache Spark, Apache Kafka), integrating with third-party applications, and interacting with cloud APIs (AWS, Azure, CloudFormation, Terraform).</p>\n<p>Some example teams you can join include:</p>\n<p>Data Science and Machine Learning Infrastructure: Build services and infrastructure at the intersection of machine learning and distributed systems. Compute Fabric: Build the resource management infrastructure powering all the big data and machine learning workloads on the Databricks platform in a robust, flexible, secure, and cloud-agnostic way. 
Data Plane Storage: Deliver reliable and high-performance services and client libraries for storing and accessing humongous amounts of data on cloud storage backends, e.g., AWS S3, Azure Blob Store. Enterprise Platform: Offer a simple and powerful experience for onboarding and managing data teams across tens of thousands of users on the Databricks platform. Observability: Provide a world-class platform for Databricks engineers to comprehensively observe and introspect their applications and services. Service Platform: Build high-quality services and manage the services in all environments in a unified way. Core Infra: Build the core infrastructure that powers Databricks, making it available across all geographic regions and Cloud providers.</p>\n<p>The ideal candidate will have:</p>\n<ul>\n<li>BS/MS/PhD in Computer Science, or a related field</li>\n<li>10+ years of production-level experience in one of: Java, Scala, C++, or similar language</li>\n<li>Comfortable working towards a multi-year vision with incremental deliverables</li>\n<li>Experience in architecting, developing, deploying, and operating large-scale distributed systems</li>\n<li>Experience working on a SaaS platform or with Service-Oriented Architectures</li>\n<li>Good knowledge of SQL</li>\n<li>Experience with software security and systems that handle sensitive data</li>\n<li>Experience with cloud technologies, e.g. 
AWS, Azure, GCP, Docker, Kubernetes</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_3922bc3d-027","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/6544443002","x-work-arrangement":"onsite","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"$192,000-$260,000 USD","x-skills-required":["Java","Scala","C++","Apache Spark","Apache Kafka","Cloud APIs","AWS","Azure","CloudFormation","Terraform","SQL","Software security","Cloud technologies"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:46:24.664Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Mountain View, California"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Java, Scala, C++, Apache Spark, Apache Kafka, Cloud APIs, AWS, Azure, CloudFormation, Terraform, SQL, Software security, Cloud technologies","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":192000,"maxValue":260000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_fca5411d-4fb"},"title":"Staff Site Reliability Engineer - Kubernetes","description":"<p>Secure Every Identity, from AI to Human</p>\n<p>Identity is the key to unlocking the potential of AI. Okta secures AI by building the trusted, neutral infrastructure that enables organisations to safely embrace this new era. This work requires a relentless drive to solve complex challenges with real-world stakes. 
We are looking for builders and owners who operate with speed and urgency and execute with excellence.</p>\n<p>This is an opportunity to do career-defining work. We&#39;re all in on this mission. If you are too, let&#39;s talk.</p>\n<p>Workforce Identity Cloud</p>\n<p>Okta Workforce Identity Cloud (WIC) provides easy, secure access for your workforce so you can focus on other strategic priorities, like reducing costs and doing more for your customers.</p>\n<p>If you like to be challenged and have a passion for solving large-scale automation, testing, and tuning problems, we would love to hear from you. The ideal candidate is someone who exemplifies the ethic of “If you have to do something more than once, automate it” and who can rapidly self-educate on new concepts and tools.</p>\n<p><strong>Position Overview:</strong></p>\n<p>The Site Reliability Engineer (SRE) will play a key role in building and managing Kubernetes platforms that support cloud-native applications and services. This position focuses on architecting and managing reliable, scalable, and secure Kubernetes-based platforms on AWS, ensuring high availability and performance while optimising costs and automation. The ideal candidate will have hands-on experience with AWS infrastructure, Kubernetes platform creation, Helm charts, Karpenter scaling, and Istio service mesh.</p>\n<p><strong>Key Responsibilities:</strong></p>\n<ul>\n<li>Kubernetes Platform Creation: Design, implement, and maintain highly available, scalable, and fault-tolerant Kubernetes platforms. Ensure clusters are optimised for production workloads, providing high resilience and operational efficiency.</li>\n</ul>\n<ul>\n<li>AWS Infrastructure Management: Build, manage, and optimise AWS cloud infrastructure, including EKS, ECS, S3, VPCs, RDS, IAM, and more. 
Implement best practices for cost management, scaling, and security within AWS.</li>\n</ul>\n<ul>\n<li>Helm Management: Utilise Helm to automate and streamline the deployment of applications and services to Kubernetes clusters. Create, maintain, and manage Helm charts for production-ready deployments.</li>\n</ul>\n<ul>\n<li>Karpenter Implementation: Implement and manage Karpenter to dynamically scale Kubernetes clusters in response to workload demands.</li>\n</ul>\n<ul>\n<li>Istio Service Mesh Management: Configure and manage Istio to provide service-to-service communication, security, and observability within the Kubernetes clusters. Enable fine-grained traffic management, service discovery, and policy enforcement.</li>\n</ul>\n<ul>\n<li>Platform Automation &amp; Scaling: Automate the deployment, scaling, and management of infrastructure and applications. Work with CI/CD pipelines to ensure a seamless flow from development to production with minimal downtime.</li>\n</ul>\n<ul>\n<li>Incident Management &amp; Troubleshooting: Respond to incidents, troubleshoot, and resolve system issues related to performance, availability, and security in a timely and effective manner.</li>\n</ul>\n<ul>\n<li>Security &amp; Compliance: Design and implement secure cloud infrastructure with appropriate access controls, network security, and compliance frameworks.</li>\n</ul>\n<ul>\n<li>Documentation &amp; Knowledge Sharing: Create and maintain detailed documentation for Kubernetes platform setup, operational procedures, and best practices. Promote knowledge sharing across teams.</li>\n</ul>\n<p><strong>Required Qualifications:</strong></p>\n<ul>\n<li>4+ years of experience with Kubernetes/Helm.</li>\n</ul>\n<ul>\n<li>4+ years of experience with Terraform.</li>\n</ul>\n<ul>\n<li>5+ years of experience with AWS.</li>\n</ul>\n<ul>\n<li>Experience with multi-region cloud environments.</li>\n</ul>\n<ul>\n<li>Proven experience with AWS (EC2, RDS, S3, CloudFormation, IAM, etc.) 
and solid understanding of cloud-native architectures.</li>\n</ul>\n<ul>\n<li>Strong expertise in Kubernetes platform creation, management, and optimisation (e.g., setting up highly available clusters, networking, and storage).</li>\n</ul>\n<ul>\n<li>Hands-on experience with Helm for Kubernetes application deployment and management.</li>\n</ul>\n<ul>\n<li>Practical experience with Karpenter for dynamic scaling of Kubernetes clusters and optimising resource usage.</li>\n</ul>\n<ul>\n<li>Expertise in managing and securing Istio for service mesh, including traffic management, security, and observability features.</li>\n</ul>\n<ul>\n<li>Proficiency in CI/CD pipelines and automation tools (e.g., Jenkins, GitLab, CircleCI, Terraform, Ansible, Spinnaker).</li>\n</ul>\n<ul>\n<li>Strong scripting and automation skills in Python, Bash, or Go for infrastructure management and platform automation.</li>\n</ul>\n<ul>\n<li>Experience with monitoring, logging, and alerting tools such as Prometheus, Grafana, CloudWatch, and ELK Stack.</li>\n</ul>\n<p><strong>Preferred Qualifications:</strong></p>\n<ul>\n<li>Understanding of security best practices for cloud platforms and Kubernetes (e.g., role-based access control (RBAC), encryption, and compliance frameworks).</li>\n</ul>\n<ul>\n<li>Familiarity with Docker and containerization principles.</li>\n</ul>\n<ul>\n<li>Bachelor’s degree in Computer Science, Engineering, or related field (or equivalent professional experience).</li>\n</ul>\n<ul>\n<li>Certifications (Preferred): CKA (Certified Kubernetes Administrator), CKAD (Certified Kubernetes Application Developer), or AWS Certified DevOps Engineer are highly desirable.</li>\n</ul>\n<p>Additional requirements:</p>\n<ul>\n<li>This position requires the ability to access federal environments and/or have access to protected federal data. As a condition of employment for this position, the successful candidate must be able to submit documentation establishing U.S. Person status (e.g. a U.S. 
Citizen, National, Lawful Permanent Resident, Refugee, or Asylee; 22 CFR 120.15) upon hire.</li>\n</ul>\n<ul>\n<li>Requires in-person onboarding and travel to our San Francisco, CA HQ office or our Chicago office during the first week of employment.</li>\n</ul>\n<p>#LI-Hybrid</p>\n<p>#LI-LSS1</p>\n<p>Requisition ID: P16373_3396241</p>\n<p>The annual base salary range for this position for candidates located in the San Francisco Bay Area is between: $194,000-$267,000 USD</p>\n<p>Below is the annual base salary range for candidates located in California (excluding San Francisco Bay Area), Colorado, Illinois, New York and Washington. Your actual base salary will depend on factors such as your skills, qualifications, experience, and work location. In addition, Okta offers equity (where applicable), bonus, and benefits, including health, dental and vision insurance, 401(k), flexible spending account, and paid leave (including PTO and parental leave) in accordance with our applicable plans and policies. To learn more about our Total Rewards program, please visit: https://rewards.okta.com/us.</p>\n<p>The annual base salary range for this position for candidates located in California (excluding San Francisco Bay Area), Colorado, Illinois, New York, and Washington is between: $174,000-$214,000 USD</p>\n<p>The Okta Experience</p>\n<ul>\n<li>Supporting Your Well-Being</li>\n</ul>\n<ul>\n<li>Driving Social Impact</li>\n</ul>\n<ul>\n<li>Developing Talent and Fostering Connection + Community</li>\n</ul>\n<p>We are intentional about connection. Our global community, spanning over 20 offices worldwide, is united by a drive to innovate. 
Your journey begins with an immersive, in-person onboarding experience designed to accelerate your impact and connect you to our mission and team from day one.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_fca5411d-4fb","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Okta","sameAs":"https://www.okta.com/","logo":"https://logos.yubhub.co/okta.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/okta/jobs/7743339","x-work-arrangement":"hybrid","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"$174,000-$214,000 USD","x-skills-required":["Kubernetes","Helm","Terraform","AWS","Cloud-native architectures","Kubernetes platform creation","Kubernetes management","Kubernetes optimisation","Helm for Kubernetes application deployment","Karpenter for dynamic scaling","Istio for service mesh","CI/CD pipelines","Automation tools","Python","Bash","Go","Monitoring","Logging","Alerting"],"x-skills-preferred":["Security best practices for cloud platforms and Kubernetes","Docker and containerization principles","Certified Kubernetes Administrator","Certified Kubernetes Application Developer","AWS Certified DevOps Engineer"],"datePosted":"2026-04-18T15:46:19.185Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Bellevue, Washington; Chicago, Illinois; New York, New York; San Francisco, California; Washington, DC"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Kubernetes, Helm, Terraform, AWS, Cloud-native architectures, Kubernetes platform creation, Kubernetes management, Kubernetes optimisation, Helm for Kubernetes application deployment, Karpenter for dynamic scaling, Istio for service mesh, CI/CD pipelines, Automation tools, Python, Bash, Go, Monitoring, Logging, Alerting, Security best practices for cloud platforms 
and Kubernetes, Docker and containerization principles, Certified Kubernetes Administrator, Certified Kubernetes Application Developer, AWS Certified DevOps Engineer","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":174000,"maxValue":214000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_6984004d-b3f"},"title":"Intermediate Backend Engineer, Gitlab Delivery: Upgrades","description":"<p>As a Backend Engineer on the GitLab Upgrades team, you&#39;ll help self-managed customers run GitLab with assurance by building and supporting the deployment tooling, infrastructure, and automation behind how GitLab is installed, upgraded, and operated.</p>\n<p>You&#39;ll work across Omnibus GitLab, GitLab Helm Charts, the GitLab Environment Toolkit (GET), and the GitLab Operator to improve reliability, security, and scalability in production-grade environments. 
This is a hands-on role where you&#39;ll partner with Distribution Engineers, Site Reliability Engineers, Release Managers, Security, and Development teams to make self-managed GitLab easier to use across a wide range of platforms.</p>\n<p>Some examples of our projects:</p>\n<ul>\n<li>Evolve Omnibus GitLab, Helm Charts, GET, and the GitLab Operator to support new GitLab features and architectures</li>\n</ul>\n<ul>\n<li>Improve installation, upgrade, and validation automation for large-scale self-managed GitLab deployments</li>\n</ul>\n<ul>\n<li>Maintain and improve the Omnibus GitLab package so GitLab components work reliably in self-managed deployments</li>\n</ul>\n<ul>\n<li>Develop and support GitLab Helm Charts for scalable, production-ready Kubernetes deployments</li>\n</ul>\n<ul>\n<li>Enhance the GitLab Environment Toolkit (GET) and validated reference architectures used by enterprise and internal users</li>\n</ul>\n<ul>\n<li>Support and extend the GitLab Operator for Kubernetes-native lifecycle management of GitLab installations</li>\n</ul>\n<ul>\n<li>Improve the installation, upgrade, and day-to-day operating experience across supported self-managed platforms</li>\n</ul>\n<ul>\n<li>Collaborate with Security to address vulnerabilities and strengthen secure defaults and configurations across the deployment stack</li>\n</ul>\n<ul>\n<li>Build and maintain automation and continuous integration and continuous deployment pipelines that validate deployment tooling across Omnibus, Charts, GET, and the Operator</li>\n</ul>\n<ul>\n<li>Partner with Distribution Engineers, Site Reliability Engineers, Release Managers, and Development teams to integrate new features and keep user-facing documentation accurate and useful</li>\n</ul>\n<ul>\n<li>Experience building and maintaining backend services in production environments, especially in deployment, infrastructure, or platform tooling</li>\n</ul>\n<ul>\n<li>Practical knowledge of Kubernetes operations, including authoring and maintaining Helm charts</li>\n</ul>\n<ul>\n<li>Proficiency with Ruby and Go, along with scripting skills to automate workflows and tooling</li>\n</ul>\n<ul>\n<li>Familiarity with Terraform and infrastructure as code practices across cloud and on-premises environments</li>\n</ul>\n<ul>\n<li>Hands-on experience with relational databases, especially PostgreSQL, including performance and reliability considerations</li>\n</ul>\n<ul>\n<li>Understanding of secure, scalable, and supportable deployment practices, along with observability tools such as Prometheus and Grafana</li>\n</ul>\n<ul>\n<li>Experience collaborating in large codebases and distributed teams, including writing clear user-facing documentation and implementation guides</li>\n</ul>\n<ul>\n<li>Openness to learning new technologies and applying transferable skills across different parts of the GitLab deployment stack</li>\n</ul>\n<p>The Upgrades team is part of GitLab Delivery and delivers GitLab to self-managed users through supported, validated deployment tooling. The team maintains Omnibus GitLab, Helm Charts, the GitLab Operator, and the GitLab Environment Toolkit (GET) to help self-managed users deploy GitLab securely and reliably across diverse environments. You&#39;ll join a distributed group of backend engineers that works asynchronously across time zones and collaborates closely with Site Reliability Engineering, Release, Security, and Development teams. 
The team is focused on improving installation and upgrade workflows, strengthening automation and security, and helping self-managed customers run GitLab successfully at any scale.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_6984004d-b3f","directApply":true,"hiringOrganization":{"@type":"Organization","name":"GitLab","sameAs":"https://about.gitlab.com/","logo":"https://logos.yubhub.co/about.gitlab.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/gitlab/jobs/8463951002","x-work-arrangement":"remote","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Ruby","Go","Kubernetes","Helm charts","Terraform","infrastructure as code","PostgreSQL","relational databases","observability tools","Prometheus","Grafana"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:46:16.737Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Remote, India"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Ruby, Go, Kubernetes, Helm charts, Terraform, infrastructure as code, PostgreSQL, relational databases, observability tools, Prometheus, Grafana"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_9057e192-450"},"title":"Security Engineer Lead, Corporate Security","description":"<p>We&#39;re looking for a Security Engineering Lead to own and drive Anthropic&#39;s Corporate Security program. 
This is a player-coach Tech Lead Manager (TLM) role: you&#39;ll be both the most senior technical individual contributor on corporate security and the people leader for a lean, high-impact team of Security Engineers.</p>\n<p>You will set the technical direction, write code and ship tooling alongside your team, and build the culture and processes that allow the team to scale.</p>\n<p>Corporate Security at Anthropic encompasses everything that protects our people, endpoints, networks, SaaS ecosystem, and corporate data: the full surface area outside of production infrastructure.</p>\n<p>The scope is broad and the team is deliberately small, which means you&#39;ll need deep technical skills across multiple domains, strong judgment about where to invest, and a bias toward automation and engineering-driven solutions over manual process.</p>\n<p>You&#39;ll report into Security leadership and partner closely with IT, Infrastructure Security, Detection &amp; Response, and GRC teams.</p>\n<p>This role is high-visibility and high-autonomy: you&#39;ll be expected to define the roadmap, make architectural decisions, and represent Corporate Security across the company.</p>\n<p><strong>Responsibilities:</strong></p>\n<p><strong>Technical Leadership &amp; Hands-on Engineering</strong></p>\n<ul>\n<li>Own the security architecture, tooling, and controls for Anthropic&#39;s corporate environment end-to-end, including endpoint fleets (macOS, Windows, ChromeOS), campus and office networks, SaaS applications, mobile devices</li>\n</ul>\n<ul>\n<li>Design, build, and ship security automation, integrations, and internal tooling, including leveraging Claude and LLMs to accelerate security workflows</li>\n</ul>\n<ul>\n<li>Define and enforce security baselines, hardening standards, and configuration policies across all corporate platforms</li>\n</ul>\n<ul>\n<li>Define what it means to operate safely in an environment where AI agents act more like humans than actual 
humans</li>\n</ul>\n<ul>\n<li>Evaluate, select, deploy, and operate corporate security tools (EDR/XDR, MDM, ZTNA, CASB/SSPM, email security, DLP, browser security, etc.)</li>\n</ul>\n<ul>\n<li>Drive vulnerability management for corporate assets, including patch orchestration, risk-based prioritization, and exception management</li>\n</ul>\n<ul>\n<li>Lead security reviews of new SaaS adoptions, corporate infrastructure changes, and IT projects</li>\n</ul>\n<p><strong>People Leadership &amp; Team Building</strong></p>\n<ul>\n<li>Manage, mentor, and grow a purposefully lean team of Security Engineers; set clear expectations, run effective 1:1s, and create an environment where engineers do the best work of their careers</li>\n</ul>\n<ul>\n<li>Hire and build the team as scope expands; own the hiring bar and pipeline for Corporate Security Engineering roles</li>\n</ul>\n<ul>\n<li>Balance your own IC contributions with the team’s needs; know when to go deep on a problem yourself and when to delegate and coach</li>\n</ul>\n<ul>\n<li>Foster a culture of operational excellence, blameless incident review, and continuous improvement</li>\n</ul>\n<p><strong>Strategy &amp; Cross-Functional Partnership</strong></p>\n<ul>\n<li>Define and own the Corporate Security roadmap, aligning investments to Anthropic’s risk profile and growth trajectory</li>\n</ul>\n<ul>\n<li>Partner with IT Operations to ensure security is embedded in endpoint provisioning, network design, and SaaS lifecycle management</li>\n</ul>\n<ul>\n<li>Collaborate with Detection &amp; Response on telemetry coverage, detection engineering, and incident handling for corporate-sourced events</li>\n</ul>\n<ul>\n<li>Partner with Infrastructure and Security Engineering teams to ensure security standards are consistent across all of Anthropic</li>\n</ul>\n<ul>\n<li>Communicate security posture, risks, and investment needs to Security leadership and cross-functional stakeholders clearly and 
persuasively</li>\n</ul>\n<p><strong>Requirements</strong></p>\n<ul>\n<li>8+ years of Security Engineering experience in a corporate/enterprise security domain (endpoint security, network security, SaaS security, identity, or a combination)</li>\n</ul>\n<ul>\n<li>2+ years of experience managing or tech-leading a team of engineers, with a demonstrated track record of developing talent and shipping results through others</li>\n</ul>\n<ul>\n<li>Are a strong engineer who still writes code regularly: you can prototype a tool, write a detection, build an integration, or debug a complex configuration issue</li>\n</ul>\n<ul>\n<li>Have deep experience with macOS fleet security (this is our primary platform) and solid working knowledge of Windows and ChromeOS security</li>\n</ul>\n<ul>\n<li>Have hands-on experience deploying and operating EDR/XDR, MDM, ZTNA/zero trust, and identity security solutions at scale</li>\n</ul>\n<ul>\n<li>Understand modern SaaS security challenges: shadow IT, OAuth token sprawl, data exfiltration paths, SaaS-to-SaaS integrations, and SSPM/CASB tooling</li>\n</ul>\n<ul>\n<li>Can work independently with high autonomy, manage ambiguity, and make sound risk-based prioritization decisions in a fast-paced environment</li>\n</ul>\n<ul>\n<li>Have excellent communication skills and can translate complex security topics into clear recommendations for technical and non-technical audiences</li>\n</ul>\n<p><strong>Strong Candidates May Have</strong></p>\n<ul>\n<li>Experience securing corporate environments at high-growth AI, cloud, or developer-tools companies</li>\n</ul>\n<ul>\n<li>Experience maturing a Corporate Security function from early stage, including defining scope, selecting the initial toolset, and hiring the founding team</li>\n</ul>\n<ul>\n<li>Advanced macOS security (system extensions, endpoint security framework, MDM profile engineering, Declarative Device Management)</li>\n</ul>\n<ul>\n<li>Network security architecture for hybrid/multi-office environments, including 
SD-WAN, ZTNA, DNS security, and network segmentation</li>\n</ul>\n<ul>\n<li>Browser security and isolation technologies (e.g., Island, Talon/Palo Alto, Chrome Enterprise)</li>\n</ul>\n<ul>\n<li>Proficiency in Python, Go, or similar languages for building security tooling and automation</li>\n</ul>\n<ul>\n<li>Experience leveraging LLMs/AI to augment security operations, build investigative tooling, or automate policy enforcement</li>\n</ul>\n<ul>\n<li>Familiarity with IaC (Terraform), CI/CD pipelines, and DevSecOps practices as they apply to corporate infrastructure management</li>\n</ul>\n<ul>\n<li>Mobile security for iOS/Android in a BYOD and corporate-managed device environment</li>\n</ul>\n<ul>\n<li>Data Loss Prevention (DLP) program design and implementation across endpoints, email, SaaS, and cloud storage</li>\n</ul>\n<p><strong>Logistics</strong></p>\n<ul>\n<li>Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience</li>\n</ul>\n<ul>\n<li>Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience</li>\n</ul>\n<ul>\n<li>Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position</li>\n</ul>\n<ul>\n<li>Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</li>\n</ul>\n<ul>\n<li>Visa sponsorship: We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. 
But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</li>\n</ul>","url":"https://yubhub.co/jobs/job_9057e192-450","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.co/","logo":"https://logos.yubhub.co/anthropic.co.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5135098008","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$320,000-$405,000 USD","x-skills-required":["macOS fleet security","Windows and ChromeOS security","EDR/XDR","MDM","ZTNA/zero trust","identity security solutions","SaaS security challenges","shadow IT","OAuth token sprawl","data exfiltration paths","SaaS-to-SaaS integrations","SSPM/CASB tooling"],"x-skills-preferred":["Python","Go","LLMs/AI","IaC (Terraform)","CI/CD pipelines","DevSecOps practices","mobile security","Data Loss Prevention (DLP)"],"datePosted":"2026-04-18T15:46:05.148Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA | New York City, NY"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"macOS fleet security, Windows and ChromeOS security, EDR/XDR, MDM, ZTNA/zero trust, identity security solutions, SaaS security challenges, shadow IT, OAuth token sprawl, data exfiltration paths, SaaS-to-SaaS integrations, SSPM/CASB tooling, Python, Go, LLMs/AI, IaC (Terraform), CI/CD pipelines, DevSecOps practices, mobile security, Data Loss Prevention 
(DLP)","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":320000,"maxValue":405000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_a514157f-198"},"title":"Senior Manager, Site Reliability Engineering - Infrastructure Platform","description":"<p>Secure Every Identity, from AI to Human. Identity is the key to unlocking the potential of AI. Okta secures AI by building the trusted, neutral infrastructure that enables organisations to safely embrace this new era.</p>\n<p>This work requires a relentless drive to solve complex challenges with real-world stakes. We are looking for builders and owners who operate with speed and urgency and execute with excellence. This is an opportunity to do career-defining work. We&#39;re all in on this mission. If you are too, let&#39;s talk.</p>\n<p><strong>The Infrastructure Platform and Shared Services Team</strong></p>\n<p>Okta authenticates, authorises and provisions millions of users a day. The service is hosted on Amazon Web Services (AWS) across multiple availability zones and geographically separated regions. The service is designed for high throughput and 99.999% availability.</p>\n<p>We&#39;re looking for a technical leader to help us continue to scale the service with great people and reliable, cost-effective, and efficient infrastructure, processes, and tooling.</p>\n<p>As the Sr. 
Manager of Infrastructure Platform and Shared Services, you will oversee multiple teams focused on Edge networking, K8s platform, CI/CD, Observability, automation platform &amp; tooling.</p>\n<p>Responsibilities</p>\n<ul>\n<li>Lead the Infra platform and shared services org and various initiatives across SRE &amp; Infrastructure organisation.</li>\n</ul>\n<ul>\n<li>Lead the DevOps transformation, microservice journey, and next generation Infra platform capabilities in partnership with architects and product engineering.</li>\n</ul>\n<ul>\n<li>Build a world-class observability platform and monitoring capabilities enabled with self-service.</li>\n</ul>\n<ul>\n<li>Accelerate the velocity of SRE and product engineering by developing robust platforms, powerful tooling, and intuitive self-service capabilities.</li>\n</ul>\n<ul>\n<li>Own the design and operation of scalable, self-service Cloud infrastructure platforms (e.g., Kubernetes, service mesh, CI/CD pipelines, IaC &amp; Edge Infrastructure).</li>\n</ul>\n<ul>\n<li>Lead, mentor, and grow a high-performing team of engineers and managers across platform, infrastructure, and shared services domains.</li>\n</ul>\n<ul>\n<li>Perform engineering design evaluations and ensure the completion of projects within resource, budget, and scheduling constraints.</li>\n</ul>\n<ul>\n<li>Improve SDLC processes for Cloud infrastructure as code, including the maturity of CI/CD pipelines, change and release management.</li>\n</ul>\n<ul>\n<li>Manage service and business expectations and prioritise resource allocation.</li>\n</ul>\n<ul>\n<li>Maintain a deep knowledge of industry best practices, evolving trends, and technologies.</li>\n</ul>\n<p>Requirements</p>\n<ul>\n<li>6+ years of experience in technical leadership &amp; people management.</li>\n</ul>\n<ul>\n<li>Extensive experience using Agile and DevOps methodologies to build product infrastructure and shared service at scale.</li>\n</ul>\n<ul>\n<li>3+ years of experience running 
large-scale infrastructure platforms supporting a SaaS/Cloud service in a public Cloud, preferably AWS. Experience supporting a multi-Cloud environment will be a plus.</li>\n</ul>\n<ul>\n<li>Strong expertise in cloud-native architectures, containerisation (Kubernetes), IaC (Terraform), and CI/CD pipelines.</li>\n</ul>\n<ul>\n<li>Strong background and hands-on experience in SW development, PaaS and automation.</li>\n</ul>\n<ul>\n<li>Deep experience with building and operating observability platforms and monitoring tools (Grafana, Splunk, APM etc.) in a large scale environment.</li>\n</ul>\n<ul>\n<li>Demonstrated ability to lead cross-functional teams and manage large-scale programs.</li>\n</ul>\n<ul>\n<li>Effective verbal, written communication and interpersonal skills.</li>\n</ul>\n<ul>\n<li>Computer Science Degree or related degree or equivalent experience.</li>\n</ul>\n<p>Additional requirements:</p>\n<ul>\n<li>This position requires the ability to access federal environments and/or have access to protected federal data. As a condition of employment for this position, the successful candidate must be able to submit documentation establishing U.S. Person status (e.g. a U.S. Citizen, National, Lawful Permanent Resident, Refugee, or Asylee. 
22 CFR 120.15) upon hire.</li>\n</ul>","url":"https://yubhub.co/jobs/job_a514157f-198","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Okta","sameAs":"https://www.okta.com/","logo":"https://logos.yubhub.co/okta.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/okta/jobs/7317857","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$176,000-$264,000 USD","x-skills-required":["cloud-native architectures","containerisation (Kubernetes)","IaC (Terraform)","CI/CD pipelines","SW development","PaaS and automation","observability platforms and monitoring tools (Grafana, Splunk, APM etc.)"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:45:57.955Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Bellevue, Washington; San Francisco, California"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"cloud-native architectures, containerisation (Kubernetes), IaC (Terraform), CI/CD pipelines, SW development, PaaS and automation, observability platforms and monitoring tools (Grafana, Splunk, APM etc.)","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":176000,"maxValue":264000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_9f2e3373-2d6"},"title":"Senior Software Engineer - Platform Network","description":"<p><strong>Secure Every Identity</strong></p>\n<p>Okta secures AI by building the trusted, neutral infrastructure that enables organisations to safely embrace this new era.</p>\n<p><strong>The Platform Network Engineering Team</strong></p>\n<p>Auth0 by Okta is an easy-to-implement authentication and 
authorization platform designed by developers for developers. We make access to applications safe, secure, and seamless for over 100 million daily logins worldwide.</p>\n<p>Our modern approach to identity enables this Tier 0 global service to deliver convenience, privacy, and security so customers can focus on innovation.</p>\n<p><strong>The Senior Software Engineer Opportunity</strong></p>\n<p>You will be part of the Platform Network engineering team responsible for all connectivity of Auth0. You will play a key engineering role as we evolve our network architecture to meet the demands of enormous growth and support the hundreds of millions of users who rely on us to provide uninterrupted access.</p>\n<p><strong>What you’ll be doing</strong></p>\n<p>Implement internal and edge networking infrastructure and design solutions that work at global scale and with multi-cloud and multi-region constraints.</p>\n<p>Carry cross-team initiatives from end to end: code reviews, design reviews, operational robustness, security hygiene, etc.</p>\n<p>Design and develop new services, tools, and automation to expose network functionality to other Okta engineering and operations teams.</p>\n<p>Research and implement solutions addressing cross-cutting concerns such as routing, failover, and scaling.</p>\n<p>Participate in the team’s on-call rotation.</p>\n<p><strong>What you’ll bring to the role</strong></p>\n<p>Have 3+ years of software development experience in cloud-native services such as APIs.</p>\n<p>Demonstrable knowledge of TCP/IP, DNS, HTTP, TLS.</p>\n<p>Have DevOps experience using cloud-agnostic, cloud-native technologies.</p>\n<p>Have experience managing infrastructure with Terraform.</p>\n<p>Have experience contributing to Go-based services.</p>\n<p>Have a passion for working on global distributed systems that are highly reliable, maintainable, scalable, and secure.</p>\n<p>Tend to deliver work incrementally to get feedback and iterate 
over solutions.</p>\n<p>Bring the right attitude to the team: ownership, accountability, and attention to detail.</p>\n<p>And extra credit if you have experience in any of the following!</p>\n<p>A &#39;Product Mindset&#39; toward infrastructure: building internal networking tools that are self-service, well-documented, and easy for application teams to consume.</p>\n<p>Experience with using cloud providers such as AWS or Azure and major content delivery networks.</p>\n<p>Experience implementing and scaling Service Mesh architectures to manage service-to-service communication, observability, and security.</p>\n<p>Knowledge of Istio/Envoy Proxy and the Kubernetes Gateway API to provide flexible, self-service ingress solutions for product teams.</p>\n<p>Experience designing and maintaining multi-cloud networking topologies and hybrid connectivity (Direct Connect, Cloud Interconnect) at scale.</p>\n<p><strong>Salary and Benefits</strong></p>\n<p>The annual base salary range for this position for candidates located in Canada is between $136,000-$187,000 CAD.</p>\n<p>Okta offers equity (where applicable), bonus, and benefits, including health, dental, and vision insurance, RRSP with a match, healthcare spending, telemedicine, and paid leave (including PTO and parental leave) in accordance with our applicable plans and policies.</p>\n<p>To learn more about our Total Rewards program, please visit: https://rewards.okta.com/can</p>\n<p><strong>The Okta Experience</strong></p>\n<p>Supporting Your Well-being</p>\n<p>Driving Social Impact</p>\n<p>Developing Talent and Fostering Connection + Community</p>\n<p>We are intentional about connection. 
Our global community, spanning over 20 offices worldwide, is united by a drive to innovate.</p>\n<p>Your journey begins with an immersive, in-person onboarding experience designed to accelerate your impact and connect you to our mission and team from day one.</p>\n<p>Okta is an Equal Opportunity Employer.</p>\n<p>All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, ancestry, marital status, age, physical or mental disability, or status as a protected veteran.</p>\n<p>We also consider for employment qualified applicants with arrest and conviction records, consistent with applicable laws.</p>\n<p>If reasonable accommodation is needed to complete any part of the job application, interview process, or onboarding, please use this Form to request an accommodation.</p>\n<p>Notice for New York City Applicants &amp; Employees: Okta may use Automated Employment Decision Tools (AEDT), as defined by New York City Local Law 144, that use artificial intelligence, machine learning, or other automated processes to assist in our recruitment and hiring process.</p>\n<p>In accordance with NYC Local Law 144, if you are an applicant or employee residing in New York City, please click here to view our full NYC AEDT Notice.</p>\n<p>Okta is committed to complying with applicable data privacy and security laws and regulations.</p>\n<p>For more information, please see our Personnel and Job Candidate Privacy Notice at https://www.okta.com/legal/personnel-policy/</p>
","url":"https://yubhub.co/jobs/job_9f2e3373-2d6","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Okta","sameAs":"https://www.okta.com","logo":"https://logos.yubhub.co/okta.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/okta/jobs/7653477","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$136,000-$187,000 CAD","x-skills-required":["software development experience in cloud-native services like API","TCP/IP","DNS","HTTP","TLS","DevOps experience using cloud-agnostic, cloud-native technologies","infrastructure with Terraform","Go-based services"],"x-skills-preferred":["Product Mindset toward infrastructure","cloud providers such as AWS or Azure","major content delivery networks","Service Mesh architectures","Istio/Envoy Proxy","Kubernetes Gateway API","multi-cloud networking topologies","hybrid connectivity"],"datePosted":"2026-04-18T15:45:29.712Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Toronto, Ontario, Canada"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"software development experience in cloud-native services like API, TCP/IP, DNS, HTTP, TLS, DevOps experience using cloud-agnostic, cloud-native technologies, infrastructure with Terraform, Go-based services, Product Mindset toward infrastructure, cloud providers such as AWS or Azure, major content delivery networks, Service Mesh architectures, Istio/Envoy Proxy, Kubernetes Gateway API, multi-cloud networking topologies, hybrid connectivity","baseSalary":{"@type":"MonetaryAmount","currency":"CAD","value":{"@type":"QuantitativeValue","minValue":136000,"maxValue":187000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_7f64d6ed-6a9"},"title":"Senior Software Engineer","description":"<p>We&#39;re 
looking for a Senior Software Engineer to join our team. As a Senior Software Engineer, you will build, evolve, and operate backend services at scale for ZoomInfo. You&#39;ll work primarily with Node.js/TypeScript (NestJS preferred), design robust REST/GraphQL APIs, optimize MongoDB/Redis, and deploy on cloud (GCP preferred or AWS) with a strong focus on reliability, performance, security, and cost efficiency.</p>\n<p>Key Responsibilities:</p>\n<ul>\n<li>Design, implement, and own microservices and REST/GraphQL APIs in Node.js/TypeScript (NestJS preferred)</li>\n<li>Translate product requirements into technical designs; break down work, estimate, and deliver incrementally</li>\n<li>Model data and optimize queries in MongoDB; implement effective caching with Redis (TTL, eviction, hot-key mitigation)</li>\n<li>Ship production-ready code with unit/integration tests; participate in on-call, incident response, and postmortems</li>\n<li>Containerize and deploy via Docker/Kubernetes; automate builds and releases with CI/CD (blue/green or canary)</li>\n<li>Instrument services for logs, metrics, and traces (p95/p99); continuously improve latency, reliability, and cost</li>\n<li>Review code, document designs, and mentor SE II/III engineers; contribute to shared standards and best practices</li>\n</ul>\n<p>Requirements:</p>\n<ul>\n<li>7+ years of software engineering experience, including 3+ years building backend services in Node.js/TypeScript</li>\n<li>Strong API fundamentals: versioning, pagination, authN/Z (OAuth/OIDC), and secure coding (OWASP)</li>\n<li>Hands-on with NestJS/Express/Fastify; familiarity with microservices patterns and event-driven workflows</li>\n<li>MongoDB expertise (schema design, indexing, basic sharding concepts) and Redis caching patterns</li>\n<li>Cloud experience on GCP (preferred) or AWS; Docker; working knowledge of Kubernetes; CI/CD with GitHub Actions/Jenkins/GitLab</li>\n<li>Observability skills: Datadog/OpenTelemetry/Prometheus/Grafana; 
confident debugging in production</li>\n</ul>\n<p>Nice to Have:</p>\n<ul>\n<li>Kafka or Pub/Sub; API Gateway/Ingress; feature flags; rate limiting and quotas</li>\n<li>Terraform/Helm; security tooling (SonarQube), dependency hygiene, secret management</li>\n<li>Performance profiling, load testing, and practical cost optimization</li>\n</ul>","url":"https://yubhub.co/jobs/job_7f64d6ed-6a9","directApply":true,"hiringOrganization":{"@type":"Organization","name":"ZoomInfo","sameAs":"https://www.zoominfo.com/","logo":"https://logos.yubhub.co/zoominfo.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/zoominfo/jobs/8305634002","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Node.js","TypeScript","NestJS","MongoDB","Redis","Docker","Kubernetes","CI/CD","API fundamentals","Microservices","Event-driven workflows","Observability"],"x-skills-preferred":["Kafka","Pub/Sub","API Gateway","Ingress","Feature flags","Rate limiting","Quotas","Terraform","Helm","Security tooling","Dependency hygiene","Secret management","Performance profiling","Load testing","Cost optimization"],"datePosted":"2026-04-18T15:45:25.045Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Bengaluru, Karnataka, India"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Node.js, TypeScript, NestJS, MongoDB, Redis, Docker, Kubernetes, CI/CD, API fundamentals, Microservices, Event-driven workflows, Observability, Kafka, Pub/Sub, API Gateway, Ingress, Feature flags, Rate limiting, Quotas, Terraform, Helm, Security tooling, Dependency hygiene, Secret management, Performance profiling, Load testing, Cost 
optimization"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_bdf949b3-c66"},"title":"Databricks Enterprise Lead Security Architect - Principal IT Software Engineer","description":"<p>We are seeking a highly skilled Lead Security Architect to join our team within Databricks IT. As a Lead Security Architect, you will be responsible for designing and implementing a secure and scalable architecture to protect our corporate assets. You will focus on key areas of IT security, including Identity and Access Management, Zero Trust architecture, and endpoint security, while also working to secure critical business applications and sensitive data.</p>\n<p>Your expertise will be crucial in building proactive security strategies that align with our business goals and protect the company from an ever-evolving threat landscape. This position demands deep expertise in security principles and a comprehensive understanding of the entire infrastructure stack and IAM systems to design robust, future-ready security solutions.</p>\n<p>You will be instrumental in safeguarding our systems&#39; resilience and integrity against ever-evolving cyber threats. You will play a critical role in shaping our security strategy for modern platforms across AWS, Azure, GCP, network infrastructure, storage, and SaaS solutions, helping establish a strong least privilege (PoLP) model, providing specialized IAM expertise, and securely supporting SaaS with sensitive information (NHI).</p>\n<p>You will also be a key contributor in building our internal strategy for secure AI development. 
Additionally, you will support the secure integration of SaaS platforms such as Google Workspace, collaboration tools, and GTM systems, maintaining alignment with enterprise security standards.</p>\n<p>Close collaboration with cross-functional teams is essential to embed security throughout the technology stack.</p>\n<p>The impact you will have:</p>\n<ul>\n<li>Design and implement secure, scalable reference architectures for Databricks IT across Cloud Infra (Compute, DBs, Network, Storage), SaaS, Custom Built Applications, Data &amp; AI systems.</li>\n<li>Establish and enforce security controls for the following core security areas.</li>\n<li>Databricks Workspace Management: Workspace isolation, Unity Catalog for data governance.</li>\n<li>Secure Networking: VPC configs, PrivateLink, IP Allow Lists.</li>\n<li>Identity and Access Management (IAM): SSO, SCIM user provisioning, RBAC via Un, Strong MFA best practices for enterprise identities and customers.</li>\n<li>Data Encryption: At rest and in transit, customer-managed keys for critical assets.</li>\n<li>Data Exfiltration Prevention: Admin console settings, VPC endpoint controls.</li>\n<li>Cluster Security: User isolation, compliance with enhanced security monitoring/Compliance Security Profiles (HIPAA, PCI-DSS, FedRAMP).</li>\n<li>Offensive Security: Test and challenge the effectiveness of the organization’s security defenses by mimicking the tactics, techniques, and procedures used by actual attackers.</li>\n<li>Specialized Security Functions - Non-human Identity Management: Design and implement secure authentication and authorization for automated systems (service accounts, API keys, machine identities), focusing on automation and integration with existing identity management systems.</li>\n<li>IAM Best Practices: Develop and document comprehensive Identity and Access Management policies, including user provisioning, de-provisioning, access reviews, privileged access management, and multi-factor authentication, ensuring 
security and compliance.</li>\n<li>Data Loss Prevention (DLP): Implement DLP solutions to identify, monitor, and protect sensitive data across endpoints, networks, and cloud environments, preventing unauthorized access, use, or transmission.</li>\n<li>SaaS Proxy Design and Implementation: Design and implement cloud-based proxies for SaaS applications (SASE solutions) to provide secure access, enforce security policies, monitor user activity, and protect against threats.</li>\n<li>Cloud Infrastructure Best Practices: Establish and document best practices for VPC configurations, cloud networking, and infrastructure as code using Terraform, ensuring secure network segmentation, routing, firewalls, and VPNs for consistent, automated, and secure deployments.</li>\n<li>Least Privilege Access for Data Security: Design and implement data security controls based on the principle of least privilege, ensuring users and systems have only the minimum necessary access through fine-grained controls, data classification, and regular access reviews.</li>\n<li>Guide internal IT on Databricks’ security and compliance certifications (SOC 2, ISO 27001/27017/27018, HIPAA, PCI-DSS, FedRAMP), and support security reviews/audits.</li>\n<li>Support incident response, vulnerability management, threat modeling, and red teaming using audit logs, cluster policies, and enhanced monitoring.</li>\n<li>Stay current on industry trends and emerging threats in GenAI, AI Agentic flow, MCPs to enhance security posture.</li>\n<li>Advise executive leadership on security architecture, risks, and mitigation.</li>\n<li>Mentor security engineers and developers on secure design and best practices.</li>\n</ul>\n<p>What we look for:</p>\n<ul>\n<li>Bachelor’s degree in Computer Science, Information Security, Engineering, or a related field</li>\n<li>Master’s degree in Computer Science specifically in Information Security or a related discipline is strongly preferred</li>\n<li>Minimum 12 years in cybersecurity, 
with 5+ in security architecture or senior technical roles.</li>\n<li>Experience in FedRAMP High systems/ GovCloud preferred.</li>\n<li>Must have direct experience designing and securing enterprise platforms in complex multi-cloud environments, deep knowledge of enterprise architecture and security features (control plane/data plane separation, network infra, workspace hardening, network segmentation/ isolation), and hands-on experience automating security controls with Terraform and scripting.</li>\n<li>Proven expertise securing data analytics pipelines, SaaS integrations, and workload isolation in enterprise ecosystems.</li>\n<li>Experience with Enterprise Security Analysis Tools and monitoring/security policy optimization.</li>\n<li>Deep experience in threat modeling, design, PoC, and implementing large-scale enterprise solutions.</li>\n<li>Extensive hands-on experience in AWS cloud security, network security, with knowledge of Zero Trust, Data Protection, and Appsec.</li>\n<li>Strong understanding of enterprise IAM systems (Okta, SailPoint, VDI, Entra ID) and Data Protection.</li>\n<li>Expert experience with SIEM platforms, XDR, and cloud-native threat detection tools.</li>\n<li>Expert in web application security, OWASP, API security, and secure design and testing.</li>\n<li>Hands-on experience with security automation is required, with proficiency in AI-assisted development, Python, Cursor, Lambda, Terraform, or comparable scripting/IaC tools for operational efficiency.</li>\n<li>Industry certifications like CISSP, CCSP, CEH, AWS Certified Security – Specialty, AWS Certified Solutions Architect – Professional, or AWS Certified Advanced Networking – Specialty (or equivalent) are preferred.</li>\n<li>Ability to influence stakeholders and drive alignment.</li>\n<li>Strategic thinker with a passion for security innovation, continuous improvement, and building scalable defenses.</li>\n</ul>\n<p>Pay Range Transparency</p>\n<p>Databricks is committed to fair and 
equitable compensation practices. The pay range(s) for this role is listed below and represents the expected salary range for non-commissionable roles or on-target earnings for commissionable roles. Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location. Based on the factors above, Databricks anticipates utilizing the full width of the range. The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_bdf949b3-c66","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8207910002","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Security Architecture","Identity and Access Management","Zero Trust","Endpoint Security","Data Encryption","Data Exfiltration Prevention","Cluster Security","Offensive Security","Non-human Identity Management","IAM Best Practices","Data Loss Prevention","SaaS Proxy Design and Implementation","Cloud Infrastructure Best Practices","Least Privilege Access for Data Security","Guide internal IT on Databricks’ security and compliance certifications","Support incident response, vulnerability management, threat modeling, and red teaming","Stay current on industry trends and emerging threats in GenAI, AI Agentic flow, MCPs","Advise executive leadership on security architecture, risks, and mitigation","Mentor security engineers and developers on secure design and best 
practices"],"x-skills-preferred":["Terraform","Python","Cursor","Lambda","AWS cloud security","Network security","Data Protection","Appsec","SIEM platforms","XDR","cloud-native threat detection tools","Web application security","OWASP","API security","Secure design and testing","AI-assisted development","Security automation","Scripting/IaC tools","CISSP","CCSP","CEH","AWS Certified Security – Specialty","AWS Certified Solutions Architect – Professional","AWS Certified Advanced Networking – Specialty"],"datePosted":"2026-04-18T15:45:19.828Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Mountain View, California; San Francisco, California"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Security Architecture, Identity and Access Management, Zero Trust, Endpoint Security, Data Encryption, Data Exfiltration Prevention, Cluster Security, Offensive Security, Non-human Identity Management, IAM Best Practices, Data Loss Prevention, SaaS Proxy Design and Implementation, Cloud Infrastructure Best Practices, Least Privilege Access for Data Security, Guide internal IT on Databricks’ security and compliance certifications, Support incident response, vulnerability management, threat modeling, and red teaming, Stay current on industry trends and emerging threats in GenAI, AI Agentic flow, MCPs, Advise executive leadership on security architecture, risks, and mitigation, Mentor security engineers and developers on secure design and best practices, Terraform, Python, Cursor, Lambda, AWS cloud security, Network security, Data Protection, Appsec, SIEM platforms, XDR, cloud-native threat detection tools, Web application security, OWASP, API security, Secure design and testing, AI-assisted development, Security automation, Scripting/IaC tools, CISSP, CCSP, CEH, AWS Certified Security – Specialty, AWS Certified Solutions Architect – Professional, AWS Certified Advanced Networking – 
Specialty"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_682f5f72-49b"},"title":"Senior Site Reliability Engineer, Edge - TS/SCI","description":"<p>Secure Every Identity, from AI to Human</p>\n<p>Identity is the key to unlocking the potential of AI. Okta secures AI by building the trusted, neutral infrastructure that enables organisations to safely embrace this new era. This work requires a relentless drive to solve complex challenges with real-world stakes. We are looking for builders and owners who operate with speed and urgency and execute with excellence.</p>\n<p>This is an opportunity to do career-defining work. We&#39;re all in on this mission. If you are too, let&#39;s talk.</p>\n<p><strong>About the Team</strong></p>\n<p>At Okta, our motto is &quot;Always On.&quot; Within the Technical Operations (TechOps) team, we live this mission by building the most reliable and performant systems on the planet. We empower organisations to do their most significant work by securely connecting any person, on any device, to the technologies they need.</p>\n<p><strong>The Role</strong></p>\n<p>We are seeking a Senior Site Reliability Engineer (SRE) to lead the evolution of our large-scale production systems. This role is designed for a technical expert who thrives on solving complex problems at scale and lives by the ethic: &quot;If you have to do it twice, automate it.&quot; Based in the Washington, D.C. area, you will ensure our infrastructure maintains uncompromising reliability and performance while supporting critical national security missions in secure, restricted environments.</p>\n<p>Security Requirement: Must be able to obtain and maintain a U.S. security clearance (Secret or Top Secret) to the extent required by U.S. Government contracts.</p>\n<p>The selected candidate may be subject to drug testing to the extent required by U.S. 
Government contracts.</p>\n<p><strong>What You’ll Do</strong></p>\n<ul>\n<li>Infrastructure Leadership: Design, build, and oversee Okta’s production infrastructure, ensuring architectural integrity and peak performance.</li>\n</ul>\n<ul>\n<li>Incident Engineering: Act as a senior escalation point for production incidents, conducting deep-dive root cause analysis and implementing permanent, automated preventive solutions.</li>\n</ul>\n<ul>\n<li>Strategic Automation: Eliminate manual toil by developing sophisticated automation frameworks, evolving monitoring tools, and establishing rigorous technical documentation.</li>\n</ul>\n<ul>\n<li>System Resilience: Optimize a highly available, large-scale environment, ensuring &quot;Always On&quot; service delivery across complex network topologies.</li>\n</ul>\n<ul>\n<li>Mentorship: Provide technical guidance to the engineering organisation, championing SRE best practices and a culture of self-education.</li>\n</ul>\n<p><strong>What You’ll Bring</strong></p>\n<p><strong>Core Requirements</strong></p>\n<ul>\n<li>Clearance: Active TS/SCI with Polygraph.</li>\n</ul>\n<ul>\n<li>Compliance Expertise: Deep professional experience with FedRAMP and DoD IL6 frameworks.</li>\n</ul>\n<ul>\n<li>Education: B.S. 
in Computer Science or equivalent technical experience.</li>\n</ul>\n<p><strong>Technical Expertise</strong></p>\n<ul>\n<li>Networking &amp; Cloud Architecture: Mastery of AWS networking and security, including Transit Gateways, VPCs, Route Tables, ELBs, and NACLs.</li>\n</ul>\n<ul>\n<li>Infrastructure as Code (IaC): Advanced experience automating enterprise-scale infrastructure via Terraform or CloudFormation.</li>\n</ul>\n<ul>\n<li>Systems &amp; Scripting: Expert-level Linux systems administration with proficiency in Go, Python, Bash, or Ruby.</li>\n</ul>\n<ul>\n<li>Production Support: Proven success managing Docker containers and Java-based stacks (Apache/Tomcat) in high-security production environments.</li>\n</ul>\n<p>Protocol Knowledge: Solid understanding of networking concepts, IP protocols, and multi-cloud infrastructure.</p>\n<p>#LI-TM</p>\n<p>#LI-Hybrid</p>\n<p>P24505</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_682f5f72-49b","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Okta","sameAs":"https://www.okta.com/","logo":"https://logos.yubhub.co/okta.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/okta/jobs/7562925","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$159,000-$218,900 USD","x-skills-required":["AWS networking and security","Terraform or CloudFormation","Linux systems administration","Go, Python, Bash, or Ruby","Docker containers and Java-based stacks"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:45:12.178Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Washington, DC"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"AWS networking and security, Terraform or CloudFormation, Linux systems administration, Go, Python, Bash, or Ruby, 
Docker containers and Java-based stacks","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":159000,"maxValue":218900,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_64fb6c63-a4b"},"title":"Senior Product Security Engineer, Red Team","description":"<p>Secure Every Identity, from AI to Human Identity is the key to unlocking the potential of AI. Okta secures AI by building the trusted, neutral infrastructure that enables organisations to safely embrace this new era.</p>\n<p>This work requires a relentless drive to solve complex challenges with real-world stakes. We are looking for builders and owners who operate with speed and urgency and execute with excellence. This is an opportunity to do career-defining work. We&#39;re all in on this mission. If you are too, let&#39;s talk.</p>\n<p>Within the Product Security team, our Red Team delivers robust security assurance for Okta&#39;s products, services, and infrastructure. You will be the team&#39;s dedicated infrastructure and tooling engineer, the first person in this role for a small team of operators. You will work alongside operators but not report through an operator chain; you&#39;ll collaborate as a peer focused on a different discipline.</p>\n<p>We seek a Staff Security Infrastructure Engineer to own the engineering backbone that enables our operations. This is not a traditional operator role but a dedicated infrastructure, tooling, and automation engineering position embedded within the Red Team.</p>\n<p>You will design, build, maintain, and continuously improve the platforms, infrastructure, and custom tooling that our operators depend on to execute engagements. 
Your work directly enables the team to operate at a higher maturity level: faster infrastructure deployment, more resilient and OPSEC-aware architecture, automated workflows, and reliable custom tooling, freeing operators to focus on the mission.</p>\n<p>Your role will also extend to cultivating stakeholder collaboration and elevating our company’s security posture through strategic engagement and proactive measures. As the team matures, this role can evolve toward platform leadership, custom capability development, or a hybrid operator/engineer path.</p>\n<p><strong>Responsibilities</strong></p>\n<p><strong>Infrastructure Engineering &amp; Automation:</strong></p>\n<ul>\n<li>Own the full lifecycle of red team infrastructure: design, provisioning, configuration, maintenance, and teardown</li>\n<li>Build and maintain Infrastructure-as-Code (IaC) using Terraform (or equivalent) to automate deployment of C2 servers, redirectors, phishing infrastructure, payload-delivery systems, and supporting services.</li>\n<li>Manage resource and asset lifecycles: track domains, certificates, cloud accounts, recurring expenses, and infrastructure resources, and handle acquisition, rotation, and retirement.</li>\n</ul>\n<p><strong>Tooling Development &amp; Maintenance:</strong></p>\n<ul>\n<li>Develop, maintain, and improve custom tools, scripts, and automation to support red team operations (e.g., payload generation pipelines, log aggregation, C2 profile management, infrastructure health checks), providing on-demand infrastructure/tooling support when issues or gaps arise.</li>\n<li>Collaborate closely with operators during engagement planning to understand infrastructure requirements, OPSEC constraints, and operational timelines.</li>\n<li>Build and maintain a representative test environment for pre-operation validation of tools and tradecraft against a security stack similar to the target.</li>\n<li>Maintain the team&#39;s source code repository with merge/pull 
request processes, documentation, and code quality standards.</li>\n<li>Ensure engagement evidence, infrastructure logs, and operational data are centrally collected and accessible for reporting and after-action reviews.</li>\n<li>Contribute to and maintain metrics that demonstrate infrastructure maturity, operational efficiency, and readiness (e.g., deployment time, rebuild time, infrastructure availability during engagements).</li>\n</ul>\n<p><strong>Security &amp; OPSEC:</strong></p>\n<ul>\n<li>Design infrastructure with OPSEC as a first-class requirement: network segmentation, traffic separation between operations, credential management, and access controls</li>\n<li>Implement and manage secure access to red team infrastructure</li>\n<li>Create and update operational runbooks, infrastructure documentation, and SOPs for the team.</li>\n<li>Maintain clear records of infrastructure ownership and attribution to support deconfliction processes.</li>\n</ul>\n<p><strong>Requirements</strong></p>\n<ul>\n<li>5+ years of professional experience in infrastructure engineering, DevOps, platform engineering, or a similar role with significant automation responsibilities</li>\n<li>Strong familiarity with Terraform (or equivalent IaC tooling) for multi-cloud infrastructure provisioning and management</li>\n<li>Experience operating in cloud-native, SaaS, or identity-focused environments</li>\n<li>Strong proficiency with configuration management tools (Ansible or equivalent)</li>\n<li>Proficiency in at least one systems programming or scripting language (Python, Go, Bash) with disciplined development practices (version control, code review, testing, documentation)</li>\n<li>Solid understanding of Linux systems administration, networking fundamentals (DNS, HTTP/S, TCP/IP, proxying, TLS), and cloud platforms (AWS, GCP, or Azure)</li>\n<li>Understanding of OPSEC principles as they apply to offensive infrastructure: you know why redirector chains, domain categorization, traffic 
separation, and certificate management matter.</li>\n</ul>\n<p><strong>Desired Qualifications</strong></p>\n<ul>\n<li>Experience building and maintaining CI/CD pipelines (GitHub Actions, GitLab CI, Jenkins, or similar)</li>\n<li>Familiarity with containerization and orchestration (Docker, Kubernetes) as applicable to tooling and lab environments</li>\n<li>Familiarity with C2 frameworks (Cobalt Strike, Mythic, Sliver, or similar) from an infrastructure and deployment perspective: you don&#39;t need to operate them, but you need to understand what operators need from the infrastructure</li>\n<li>Familiarity with detection evasion concepts as they relate to infrastructure (e.g., traffic shaping, hosting provider reputation, certificate transparency)</li>\n</ul>\n<p><strong>Nice to Have</strong></p>\n<ul>\n<li>Working knowledge of Blue Team operations and related technologies</li>\n<li>Experience with security tool development (implant development, payload engineering, evasion tooling); this role can grow in that direction</li>\n<li>Familiarity with Red Team maturity models and how infrastructure/tooling capabilities map to organisational maturity</li>\n</ul>\n<p>Note: This is not an operator role. You will not be the person running hands-on-keyboard engagements as your primary function. While you may participate in operations to understand requirements or provide support, your core mission is ensuring the team&#39;s infrastructure, workflows, tooling, and automation are reliable, repeatable, and mature. 
You are the engineering foundation the operators build on.</p>\n<p>#LI-TM #LI-Hybrid (P22302_3403905)</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_64fb6c63-a4b","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Okta","sameAs":"https://www.okta.com/","logo":"https://logos.yubhub.co/okta.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/okta/jobs/7773769","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$114,000-$157,300 USD","x-skills-required":["Terraform","Infrastructure-as-Code","Linux systems administration","Networking fundamentals","Cloud platforms","Configuration management tools","Systems programming or scripting language","OPSEC principles"],"x-skills-preferred":["CI/CD pipelines","Containerization and orchestration","C2 frameworks","Detection evasion concepts"],"datePosted":"2026-04-18T15:44:54.409Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Toronto, Ontario, Canada"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Terraform, Infrastructure-as-Code, Linux systems administration, Networking fundamentals, Cloud platforms, Configuration management tools, Systems programming or scripting language, OPSEC principles, CI/CD pipelines, Containerization and orchestration, C2 frameworks, Detection evasion concepts","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":114000,"maxValue":157300,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_81e928a2-c9f"},"title":"Senior Site Reliability Engineer (Auth0)","description":"<p>Secure Every Identity</p>\n<p>We are looking for a Senior Site Reliability Engineer to join our SRE 
team based in Europe. As a Senior Site Reliability Engineer, you&#39;ll ensure our production systems are not only operational but also resilient, scalable, and ready for exponential growth.</p>\n<p>This isn&#39;t just about keeping the lights on; it&#39;s about directly contributing to the platform&#39;s core resiliency and robustness. You&#39;ll be a hands-on builder, crafting solutions that make our system more reliable by design.</p>\n<p>Responsibilities</p>\n<ul>\n<li>Design and build custom software in Go to enhance the platform&#39;s reliability, resiliency, and redundancy.</li>\n<li>Partner with engineering teams to embed reliability principles, improving the availability, performance, and observability of our services.</li>\n<li>Use your deep understanding of infrastructure and observability principles to identify opportunities for improvement within the product and implement solutions.</li>\n<li>Contribute to our on-call rotation, providing rapid, effective response to critical incidents and using your expertise to troubleshoot, mitigate or accurately escalate production issues.</li>\n<li>Develop and refine our SRE tooling and processes, focusing on automation and operational efficiency.</li>\n<li>Define, document, and champion reliability best practices across the organisation.</li>\n</ul>\n<p>What you&#39;ll need to be successful</p>\n<p>This role requires a unique blend of a software engineer&#39;s mindset and operational expertise. You&#39;ll thrive in this role if you have:</p>\n<ul>\n<li>A proactive and systematic approach to problem-solving, with a high degree of ownership.</li>\n<li>Proven experience in a production environment supporting large-scale, mission-critical applications with a high degree of autonomy.</li>\n<li>Proficiency in at least one programming language, with a preference for Go. 
You should be comfortable writing custom applications, not just scripts.</li>\n<li>Experience with infrastructure as code (Terraform), container orchestration (Kubernetes, Docker) and GitOps (ArgoCD).</li>\n<li>Demonstrable expertise in a major cloud provider (Azure, AWS, or GCP).</li>\n<li>A strong grasp of microservices architecture, databases (SQL, NoSQL), and networking fundamentals, so you can understand how custom code can solve platform-level issues.</li>\n<li>An understanding of core SRE principles, including SLIs, SLOs, and error budgets.</li>\n<li>Experience in an on-call rotation for a 24/7 cloud-based environment.</li>\n<li>Exceptional communication and collaboration skills, with a proven ability to work effectively in a remote, distributed team, where tasks may be self-driven.</li>\n</ul>\n<p>The Okta Experience</p>\n<ul>\n<li>Supporting Your Well-Being</li>\n<li>Driving Social Impact</li>\n<li>Developing Talent and Fostering Connection + Community</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_81e928a2-c9f","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Okta","sameAs":"https://www.okta.com/","logo":"https://logos.yubhub.co/okta.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/okta/jobs/7418982","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Go","Terraform","Kubernetes","Docker","GitOps","Cloud provider (Azure, AWS, or GCP)","Microservices architecture","Databases (SQL, NoSQL)","Networking fundamentals","Core SRE principles (SLIs, SLOs, error budgets)"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:44:50.552Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Barcelona, 
Spain"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Go, Terraform, Kubernetes, Docker, GitOps, Cloud provider (Azure, AWS, or GCP), Microservices architecture, Databases (SQL, NoSQL), Networking fundamentals, Core SRE principles (SLIs, SLOs, error budgets)"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_8a326112-c31"},"title":"Professional Services, Technical Architect - West","description":"<p>As a Professional Services Technical Architect at GitLab, you&#39;ll lead the technical direction of customer engagements from early scoping and discovery through delivery. You&#39;ll design high-level architectures and implementation plans for GitLab infrastructure and functionality, and ensure deliverables align with customer requirements and the Statement of Work (SOW).</p>\n<p>You&#39;ll coordinate and guide implementation work across GitLab Professional Services and partner consultants, support customers deploying GitLab in cloud and on-premises environments using tools like Terraform, Ansible, and the GitLab Environment Toolkit, and perform migrations to GitLab using Congregate.</p>\n<p>You&#39;ll provide DevOps and DevSecOps consulting and best practices, contribute reusable collateral such as documentation, delivery kits, and training materials, and share product release updates to help the team deliver consistent outcomes.</p>\n<p>This role combines deep technical expertise with customer-facing leadership in GitLab&#39;s remote, asynchronous, and values-driven environment.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Designing and delivering GitLab reference architecture implementations for both self-managed and cloud environments, using infrastructure as code practices</li>\n<li>Leading source code management and CI/CD migrations to GitLab, including large-scale enterprise moves using 
Congregate</li>\n<li>Building repeatable delivery kits, documentation, and enablement materials that help Professional Services and partners deploy and adopt GitLab best practices</li>\n<li>Lead the full technical delivery lifecycle for GitLab Professional Services engagements, from early scoping and technical discovery through implementation and handoff.</li>\n<li>Produce high-level and detailed technical designs for GitLab infrastructure and functionality, ensuring deliverables align with customer requirements and expectations.</li>\n<li>Evaluate and communicate scalability and security considerations (including compliance constraints) in proposed GitLab reference architectures and implementation plans.</li>\n<li>Deploy and configure GitLab in customer environments, including on-premises and major cloud providers, using Terraform, Ansible, and the GitLab Environment Toolkit (GET) aligned to reference architectures.</li>\n<li>Plan and execute source system migrations to GitLab using Congregate, partnering closely with customer stakeholders to reduce risk, protect data integrity, and minimize downtime.</li>\n<li>Provide DevOps and DevSecOps consulting and best-practice guidance, including advising on internal frameworks such as the Delivery Governance Framework (DGF) and GitLab Flow.</li>\n<li>Coordinate and oversee implementation work across GitLab team members, partners, and customer points of contact (POCs), ensuring clear asynchronous communication and decision capture, effective execution, and high-quality outcomes.</li>\n<li>Mentor Professional Services and partner consultants by contributing documentation, delivery kits, and training materials, and by leading enablement sessions on how to deliver and position service offerings.</li>\n<li>Maintain and improve delivery automation assets with an emphasis on code cleanliness, maintainability, and appropriate unit/integration testing.</li>\n<li>Support scoping and Statement of Work (SOW) creation with 
Professional Services Engagement Managers, stay current on monthly GitLab releases, and help Regional Delivery Managers with technical vetting and staffing assessments while maintaining a 55% billable utilization.</li>\n<li>Review and provide input to Professional Services training materials and presentations</li>\n<li>Develop case studies, presentations, design documentation, and best-practice methodologies</li>\n<li>Work closely with customer project teams to ensure accurate task-level articulation of work required</li>\n</ul>\n<p>Requirements:</p>\n<ul>\n<li>Strong written and verbal communication skills, including the ability to lead technical discussions with customers and partners and communicate risks/trade-offs clearly in an asynchronous environment.</li>\n<li>Preference for candidates based in San Francisco, the Bay Area, or PST/MST time zones.</li>\n<li>Up to 50% travel at times.</li>\n<li>Demonstrated experience delivering two or more of the following consulting services: source code management migration, cloud architecture, DevOps engineering, or continuous integration and continuous delivery (CI/CD).</li>\n<li>Enterprise software development experience, with the ability to translate requirements into clear technical designs and implementation plans.</li>\n<li>Progressive DevOps platform experience, including designing and implementing reliable, scalable systems with clear performance and security trade-offs.</li>\n<li>Hands-on experience deploying and managing infrastructure in cloud providers and on-premises environments, including using tools such as Terraform and Ansible.</li>\n<li>Ability to write clean, maintainable automation/integration code (e.g., Terraform modules, Ansible roles, scripts) and validate changes with appropriate testing and code review.</li>\n<li>Experience performing migrations to GitLab, including using Congregate or similar migration tooling.</li>\n<li>Working knowledge of data consistency and integrity concepts (e.g., ACID properties) and 
how they impact migration design and performance trade-offs.</li>\n<li>Strong problem-solving, decision-making, organizational, and time management skills, with the ability to manage multiple priorities with minimal supervision.</li>\n<li>Comfort working in a remote, asynchronous environment, using documented decisions (e.g., issues, proposals, and runbooks) to keep work unblocked across time zones while collaborating effectively across GitLab team members, partners, and customer stakeholders.</li>\n<li>Bachelor&#39;s Degree in Information Technology, Computer Science, or other advanced technical degree, or equivalent experience</li>\n</ul>\n<p>About the Team: The Professional Services Technical Architect is part of GitLab&#39;s Professional Services organization. We partner with customers who are transitioning to GitLab, expanding how they use an existing GitLab installation, or planning complex upgrades to their infrastructure and processes. Our team brings a diverse set of skills across GitLab deployment, maintenance, and day-to-day usage, along with deep technical knowledge of adjacent tools and platforms. Working closely with Engagement Managers, partner consultants, and customer stakeholders, we focus on delivering high-quality outcomes, sharing best practices, and helping customers realize faster time to value from their GitLab investment. 
We operate in GitLab&#39;s remote, asynchronous, and values-driven environment.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_8a326112-c31","directApply":true,"hiringOrganization":{"@type":"Organization","name":"GitLab","sameAs":"https://about.gitlab.com/","logo":"https://logos.yubhub.co/about.gitlab.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/gitlab/jobs/8452994002","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Terraform","Ansible","GitLab Environment Toolkit","Congregate","DevOps","DevSecOps","Cloud architecture","Source code management","Continuous integration and continuous delivery","Infrastructure as code","Scalability","Security","Compliance","Data consistency and integrity","ACID properties"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:44:32.604Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Remote, US"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Terraform, Ansible, GitLab Environment Toolkit, Congregate, DevOps, DevSecOps, Cloud architecture, Source code management, Continuous integration and continuous delivery, Infrastructure as code, Scalability, Security, Compliance, Data consistency and integrity, ACID properties"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_7203380b-a7c"},"title":"Software Engineer (L3) Infrastructure","description":"<p>We are seeking a Software Engineer (L3) Infrastructure to join our Developer Platform Experience team under Platform Engineering. 
As a key member of our team, you will help users interact with Twilio&#39;s internal developer platform, manage our software taxonomy and cloud infrastructure inventory, accelerate developer productivity via self-service tools, and drive adoption of engineering best practices throughout the company.</p>\n<p>In this role, you will develop, test, and deploy backend, frontend, and client-side applications for internal use at Twilio. You will collaborate with teammates and guest contributors via peer reviews, planning exercises, and pair programming. You will also mentor junior engineers as necessary, write tickets, testing plans, and runbooks for the team, as well as internal documentation for users.</p>\n<p>You will support internal users and ensure system uptime by participating in a 24x7 weekly on-call rotation. You will continuously improve Twilio&#39;s internal developer platform interfaces, local development tools, and platform onboarding processes. You will independently own medium-sized features, authoring specifications and designs for features of moderate complexity.</p>\n<p>Twilio values diverse experiences from all kinds of industries, and we encourage everyone who meets the required qualifications to apply. 
If your career is just starting or hasn&#39;t followed a traditional path, don&#39;t let that stop you from considering Twilio.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_7203380b-a7c","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Twilio","sameAs":"https://www.twilio.com/","logo":"https://logos.yubhub.co/twilio.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/twilio/jobs/7767260","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"CAD $132,640.00 - CAD $165,800.00","x-skills-required":["Typescript","Python","Go","Terraform","Bash","AWS cloud environment","Relational database concepts and operations","5+ years of full-time job experience in a software engineering role"],"x-skills-preferred":["Prior experience working with a platform engineering focus in a software engineering organization","Strong opinions on developer experience and local development best practices","Familiarity with front-end web application development and frameworks such as React, Angular, or Vue","Familiarity with internal developer platform frameworks such as Backstage, OpsLevel, Cortex, or Battlestar","Fluency with AI platforms such as Claude, ChatGPT, and/or Copilot to accelerate software development"],"datePosted":"2026-04-18T15:44:05.160Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Remote - Canada"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Typescript, Python, Go, Terraform, Bash, AWS cloud environment, Relational database concepts and operations, 5+ years of full-time job experience in a software engineering role, Prior experience working with a platform engineering focus in a software engineering organization, Strong opinions on developer 
experience and local development best practices, Familiarity with front-end web application development and frameworks such as React, Angular, or Vue, Familiarity with internal developer platform frameworks such as Backstage, OpsLevel, Cortex, or Battlestar, Fluency with AI platforms such as Claude, ChatGPT, and/or Copilot to accelerate software development","baseSalary":{"@type":"MonetaryAmount","currency":"CAD","value":{"@type":"QuantitativeValue","minValue":132640,"maxValue":165800,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_93c1356c-a95"},"title":"Principal Software Engineer, Web Data - Tech Lead","description":"<p>We&#39;re looking for an exceptional Principal Software Engineer to serve as the de facto Technical Lead for our Web Data Acquisition (WDA) team. This is a highly visible, hands-on technical leadership role where you&#39;ll own the architectural direction for crawling systems, evolve and unify crawling platforms into a best-in-class stack, and elevate a high-performing engineering team.</p>\n<p>As a Principal Software Engineer, you&#39;ll solve complex distributed systems challenges, build modular tooling that accelerates delivery, and set the standard for observability and operational excellence. You&#39;ll have a dedicated manager handling all HR and administrative responsibilities. A product manager connects business needs with technical work. Your focus is 100% technical leadership, mentorship, and hands-on execution.</p>\n<p>Key Responsibilities:</p>\n<ul>\n<li>Technical Leadership &amp; System Design: Proven experience building web crawling or large-scale data systems from scratch. Strong architectural skills designing scalable, fault-tolerant distributed systems. 
Track record leading complex technical initiatives and driving architecture direction for teams.</li>\n</ul>\n<ul>\n<li>Data Engineering Expertise: Deep background in large-scale data engineering (terabytes daily). Hands-on experience with cloud data warehouses (BigQuery, Snowflake). Experience with Apache Kafka, Kubernetes (GKE/EKS), and orchestration tools (Airflow).</li>\n</ul>\n<ul>\n<li>Web Crawling &amp; Data Extraction: Deep expertise in web crawling technologies and advanced scraping (Scrapy or similar). Experience extracting structured/unstructured web data and SERP extraction. Knowledge of proxy infrastructure management, anti-bot detection, and ethical crawling.</li>\n</ul>\n<ul>\n<li>Leadership &amp; Team Development: Experience mentoring engineers at all levels and fostering collaborative culture. Strong ability to influence technical direction and establish best practices. Track record hiring, coaching, and developing senior engineers.</li>\n</ul>\n<p>Ideal Candidate Profile:</p>\n<ul>\n<li>10+ years software engineering experience. 5+ years focused on data engineering. 3+ years in senior/principal-level technical leadership.</li>\n</ul>\n<ul>\n<li>Strong CS fundamentals (algorithms, data structures, distributed systems). 
Self-starter who thrives in fast-paced environments.</li>\n</ul>\n<p>Core Technical Stack:</p>\n<ul>\n<li>Python &amp; Java</li>\n<li>Apache Kafka</li>\n<li>GCP (BigQuery, GKE, Vertex AI)</li>\n<li>Snowflake &amp; Starburst/Trino</li>\n<li>Terraform</li>\n<li>Scrapy / Web Scraping Frameworks</li>\n<li>Proxy Management Systems</li>\n<li>Distributed Systems &amp; Kubernetes</li>\n<li>Apache Airflow</li>\n<li>Large-Scale ETL Pipelines</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_93c1356c-a95","directApply":true,"hiringOrganization":{"@type":"Organization","name":"ZoomInfo","sameAs":"https://www.zoominfo.com/","logo":"https://logos.yubhub.co/zoominfo.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/zoominfo/jobs/8378092002","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$163,800-$257,400 USD","x-skills-required":["Python","Java","Apache Kafka","Kubernetes","GCP","Snowflake","Terraform","Scrapy","Proxy Management Systems","Distributed Systems","Apache Airflow","Large-Scale ETL Pipelines"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:43:50.896Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Remote"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, Java, Apache Kafka, Kubernetes, GCP, Snowflake, Terraform, Scrapy, Proxy Management Systems, Distributed Systems, Apache Airflow, Large-Scale ETL Pipelines","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":163800,"maxValue":257400,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_9238107d-204"},"title":"Software Architect, Reliability 
Engineering","description":"<p>Join the team as Twilio&#39;s next Reliability Architect.</p>\n<p>As an Architect in SRE, you will drive the technical strategy, vision and outcomes for Twilio&#39;s Reliability Engineering organisation. You will define and lead solutions and initiatives that ensure Twilio products are reliable worldwide, and you will define standards and guide engineering teams on best practices for designing, building, and operating resilient systems.</p>\n<p>This role is pivotal to Twilio&#39;s commitment to operational excellence, scalability, and pragmatic, large-scale systems design in the cloud.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Partner with senior technical leaders across Twilio to set and communicate the reliability strategy, translating business goals into measurable outcomes.</li>\n<li>Influence company-wide architectural decisions while balancing long-term vision with near-term and compliance needs.</li>\n<li>Lead the design, implementation, and operation of scalable solutions and paved roads that enable reliable, high-traffic services;</li>\n<li>Influence company-wide architectural decisions to focus on availability, performance, resilience, and cost efficiency using Kubernetes, AWS, Terraform, and modern observability.</li>\n<li>Ensure integrity and quality across the service lifecycle; design fault-tolerant architectures, incident response, disaster recovery, and capacity/cost management.</li>\n<li>Collaborate with product and cross-functional teams to identify reliability risks and convert them into actionable designs, programs, and tooling.</li>\n<li>Establish and champion reliability practices and drive systemic improvements.</li>\n<li>Mentor and grow engineers and technical leaders</li>\n<li>Track and apply emerging SRE, cloud, and large-scale systems best practices; introduce pragmatic innovations that improve reliability at scale.</li>\n</ul>\n<p>Qualifications:</p>\n<ul>\n<li>15+ years of experience in Reliability 
Engineering, Software Engineering, DevOps roles with a focus on infrastructure, backend systems, and reliability, including as a principal/architect.</li>\n<li>Strong experience in driving strategic technical decisions and defining long-term technical vision.</li>\n<li>In-depth understanding of the role of Reliability Engineering in a large and diverse SaaS organisation.</li>\n<li>Experience driving cross-org technical architecture outcomes.</li>\n<li>Knowledge of cloud architecture, devops practices, and large-scale systems design with microservices.</li>\n<li>Bachelor&#39;s or Master&#39;s degree in Computer Science, Engineering, or a related field (or equivalent experience).</li>\n<li>Strong production experience, including operational management, scaling, partitioning strategies, and tuning for performance and reliability in high-scale environments.</li>\n<li>Hands-on experience with Kubernetes (e.g., EKS), deploying and managing stateful services, and cloud services like AWS.</li>\n<li>Proficiency in infrastructure-as-code tools such as Terraform or CloudFormation for automating infrastructure.</li>\n<li>Expertise in observability tools (e.g., Prometheus, Grafana, Datadog) for monitoring distributed systems and setting up alerting.</li>\n<li>Proficient in at least one programming language (e.g., Go, Python, Java) for building automation and tooling.</li>\n<li>Experience designing incident response processes, SLOs/SLIs, runbooks, and participating in on-call rotations.</li>\n<li>Experience running cross-functional post-incident reviews and driving improvements.</li>\n<li>Strong understanding of distributed systems principles, including consensus, durability, throughput, and availability tradeoffs.</li>\n<li>Proven track record of leading reliability improvements in data-intensive or mission-critical systems and collaborating with engineering teams.</li>\n<li>Excellent problem-solving, analytical, verbal, and written communication skills, with the ability to 
work in cross-functional and distributed environments.</li>\n<li>Demonstrated leadership in mentoring teams, influencing decisions, and balancing long-term objectives with short-term needs.</li>\n<li>Ability to influence and build effective working relationships with all levels of the organisation.</li>\n</ul>\n<p>Desired:</p>\n<ul>\n<li>Specific experience owning and operating large AWS footprints.</li>\n<li>Knowledge of Kubernetes architecture and concepts.</li>\n<li>Experience with data technologies like Apache Kafka, AWS MSK, or similar for reliable streaming.</li>\n<li>Passion for building reliable products, with prior projects in high-availability systems</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_9238107d-204","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Twilio","sameAs":"https://www.twilio.com/","logo":"https://logos.yubhub.co/twilio.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/twilio/jobs/7658259","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$227,840.00 - $284,800.00 per year","x-skills-required":["Reliability Engineering","Software Engineering","DevOps","Cloud Architecture","Microservices","Kubernetes","AWS","Terraform","Observability Tools","Programming Languages","Incident Response","Distributed Systems Principles"],"x-skills-preferred":["Apache Kafka","AWS MSK","Kubernetes Architecture","Data Technologies"],"datePosted":"2026-04-18T15:42:56.209Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Remote - US"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Reliability Engineering, Software Engineering, DevOps, Cloud Architecture, Microservices, Kubernetes, AWS, Terraform, Observability Tools, Programming 
Languages, Incident Response, Distributed Systems Principles, Apache Kafka, AWS MSK, Kubernetes Architecture, Data Technologies","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":227840,"maxValue":284800,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_1044456b-79a"},"title":"Staff Software Engineer - Backend","description":"<p>We are obsessed with enabling data teams to solve the world&#39;s toughest problems. As a software engineer with a backend focus, you will work closely with your team and product management to prioritise, design, implement, test, and operate micro-services for the Databricks platform and product.</p>\n<p>This implies, among other things, writing software in Scala/Java, building data pipelines (Apache Spark, Apache Kafka), integrating with third-party applications, and interacting with cloud APIs (AWS, Azure, CloudFormation, Terraform).</p>\n<p>You will be part of one of the following teams:</p>\n<p>Data Science and Machine Learning Infrastructure: Build services and infrastructure at the intersection of machine learning and distributed systems. Compute Fabric: Build the resource management infrastructure powering all the big data and machine learning workloads on the Databricks platform in a robust, flexible, secure, and cloud-agnostic way. Data Plane Storage: Deliver reliable and high-performance services and client libraries for storing and accessing humongous amounts of data on cloud storage backends, e.g., AWS S3, Azure Blob Store. Enterprise Platform: Offer a simple and powerful experience for onboarding and managing all of their data teams across 10ks of users on the Databricks platform. Observability: Provide a world-class platform for Databricks engineers to comprehensively observe and introspect their applications and services. 
Service Platform: Build high-quality services and manage the services in all environments in a unified way. Core Infra: Build the core infrastructure that powers Databricks, making it available across all geographic regions and Cloud providers.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_1044456b-79a","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/6779232002","x-work-arrangement":"onsite","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"$182,400-$247,000 USD","x-skills-required":["Scala","Java","Apache Spark","Apache Kafka","Cloud APIs (AWS, Azure, CloudFormation, Terraform)","SQL","Software security","Cloud technologies (AWS, Azure, GCP, Docker, Kubernetes)"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:42:26.705Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Seattle, Washington"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Scala, Java, Apache Spark, Apache Kafka, Cloud APIs (AWS, Azure, CloudFormation, Terraform), SQL, Software security, Cloud technologies (AWS, Azure, GCP, Docker, Kubernetes)","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":182400,"maxValue":247000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_9b8fb427-b59"},"title":"Elastic AI Engineer - Canada (Remote)","description":"<p>We are looking for an innovative Elastic AI Engineer to join our team to build autonomous, enterprise-grounded agents that don&#39;t just answer questions; they complete complex business tasks 
to accelerate productivity across the entire organization.</p>\n<p>The ideal candidate is an Elastic product expert (including but not limited to Agent Builder and Workflows), using the full power of the Elastic Stack to provide the &#39;brain&#39; and &#39;memory&#39; for our agentic ecosystem.</p>\n<p>As the company behind the popular open-source projects Elasticsearch, Kibana, Logstash, and Beats, we help people around the world do great things with their data.</p>\n<p>The Elastic family unites employees across 40+ countries into one coherent team, while the broader community spans over 100 countries.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Agentic Strategy &amp; Design: Invent and implement sophisticated agentic workflows that use reasoning and tools to complete end-to-end business processes.</li>\n</ul>\n<ul>\n<li>Enterprise Grounding: Apply Retrieval Augmented Generation (RAG) and the Elasticsearch Relevance Engine (ESRE) to ensure agents are deeply grounded in enterprise knowledge for high-accuracy task completion.</li>\n</ul>\n<ul>\n<li>AI Model &amp; Tool Integration: Develop and fine-tune LLMs and integrate them with internal APIs and third-party SaaS tools to enable autonomous action.</li>\n</ul>\n<ul>\n<li>Scalable Infrastructure: Firm understanding of cloud-based environments (AWS, Azure, GCP) in order to support the high-concurrency demands of enterprise agents.</li>\n</ul>\n<ul>\n<li>Lifecycle Management: Oversee the training, deployment, and performance optimization of agents, ensuring they remain secure, reliable, and compliant.</li>\n</ul>\n<ul>\n<li>Technical Leadership: Act as a domain expert on the Elastic Stack, making technical recommendations that push the boundaries of AI-driven productivity.</li>\n</ul>\n<ul>\n<li>Documentation: Maintain comprehensive documentation of AI workflows, cloud infrastructure, and deployment processes.</li>\n</ul>\n<ul>\n<li>Security: Implement standards for security and data privacy to 
protect sensitive information and ensure compliance with relevant regulations.</li>\n</ul>\n<p><strong>Requirements</strong></p>\n<ul>\n<li>3-5 years of work experience in a relevant field.</li>\n</ul>\n<ul>\n<li>Minimum 1 year of experience building with the Elastic Stack.</li>\n</ul>\n<ul>\n<li>Knowledge of Elasticsearch Relevance Engine (ESRE), Jina AI, and advanced RAG patterns is critical.</li>\n</ul>\n<ul>\n<li>Proven success in delivering independent GenAI projects, specifically those involving autonomous task completion or complex workflow automation.</li>\n</ul>\n<ul>\n<li>Agentic Frameworks: Familiarity with LangGraph, LangChain, and LangSmith for building and debugging multi-agent systems.</li>\n</ul>\n<ul>\n<li>Expertise in Enterprise Agentic &amp; Workflow Platforms: Deep familiarity with leading agentic AI and workflow automation platforms (such as Microsoft Copilot Studio, Salesforce Agentforce, ServiceNow AI Agents).</li>\n</ul>\n<ul>\n<li>Market Trend Integration: Proven ability to apply emerging market trends, such as Multi-Agent Orchestration and Model Context Protocol (MCP), to build high-impact, cost-optimized solutions that scale across the enterprise.</li>\n</ul>\n<ul>\n<li>Programming: Experience with Python or TypeScript for backend logic and agent orchestration.</li>\n</ul>\n<ul>\n<li>Cloud &amp; Orchestration: Familiarity with Kubernetes (Operators/Controllers), Docker, and Terraform for automated deployment.</li>\n</ul>\n<ul>\n<li>Model Expertise: Hands-on experience with LLM providers.</li>\n</ul>\n<p><strong>Bonus Points</strong></p>\n<ul>\n<li>Bachelor’s or Master’s degree in Computer Science or a related engineering field.</li>\n</ul>\n<ul>\n<li>Strong communication skills with the ability to translate business requirements into technical agent architectures.</li>\n</ul>\n<ul>\n<li>A commitment to Ethical AI and responsible development practices.</li>\n</ul>\n<ul>\n<li>Experience with containerization and orchestration (e.g., Docker, 
Kubernetes).</li>\n</ul>\n<ul>\n<li>Knowledge of DevOps practices for model deployment and automation.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_9b8fb427-b59","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Elastic","sameAs":"https://www.elastic.co/","logo":"https://logos.yubhub.co/elastic.co.png"},"x-apply-url":"https://job-boards.greenhouse.io/elastic/jobs/7792839","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$101,900-$161,200 CAD","x-skills-required":["Elasticsearch Relevance Engine (ESRE)","Jina AI","advanced RAG patterns","LangGraph","LangChain","LangSmith","Microsoft Copilot Studio","Salesforce Agentforce","ServiceNow AI Agents","Multi-Agent Orchestration","Model Context Protocol (MCP)","Python","TypeScript","Kubernetes","Docker","Terraform","LLM providers"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:42:23.055Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Canada"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Elasticsearch Relevance Engine (ESRE), Jina AI, advanced RAG patterns, LangGraph, LangChain, LangSmith, Microsoft Copilot Studio, Salesforce Agentforce, ServiceNow AI Agents, Multi-Agent Orchestration, Model Context Protocol (MCP), Python, TypeScript, Kubernetes, Docker, Terraform, LLM providers","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":101900,"maxValue":161200,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_21860f67-527"},"title":"Staff Software Engineer - Backend","description":"<p>At Databricks, we are obsessed with enabling data teams to solve 
the world&#39;s toughest problems. We do this by building and running the world&#39;s best data and AI infrastructure platform, so our customers can focus on the high-value challenges that are central to their own missions.</p>\n<p>As a software engineer with a backend focus, you will work closely with your team and product management to prioritize, design, implement, test, and operate micro-services for the Databricks platform and product. This implies, among others, writing software in Scala/Java, building data pipelines (Apache Spark™, Apache Kafka), integrating with third-party applications, and interacting with cloud APIs (AWS, Azure, CloudFormation, Terraform).</p>\n<p>Some example teams you can join:</p>\n<p>Data Science and Machine Learning Infrastructure: Build services and infrastructure at the intersection of machine learning and distributed systems. Compute Fabric: Build the resource management infrastructure powering all the big data and machine learning workloads on the Databricks platform in a robust, flexible, secure, and cloud-agnostic way. Data Plane Storage: Deliver reliable and high-performance services and client libraries for storing and accessing humongous amounts of data on cloud storage backends, e.g., AWS S3, Azure Blob Store. Enterprise Platform: Offer a simple and powerful experience for onboarding and managing all of their data teams across 10ks of users on the Databricks platform. Observability: Provide a world-class platform for Databricks engineers to comprehensively observe and introspect their applications and services. Service Platform: Build high-quality services and manage the services in all environments in a unified way. 
Core Infra: Build the core infrastructure that powers Databricks, making it available across all geographic regions and Cloud providers.</p>\n<p>Competencies:</p>\n<ul>\n<li>BS/MS/PhD in Computer Science, or a related field</li>\n<li>10+ years of production-level experience in one of: Java, Scala, C++, or similar language</li>\n<li>Comfortable working towards a multi-year vision with incremental deliverables</li>\n<li>Experience in architecting, developing, deploying, and operating large-scale distributed systems</li>\n<li>Experience working on a SaaS platform or with Service-Oriented Architectures</li>\n<li>Good knowledge of SQL</li>\n<li>Experience with software security and systems that handle sensitive data</li>\n<li>Experience with cloud technologies, e.g. AWS, Azure, GCP, Docker, Kubernetes.</li>\n</ul>\n<p>Pay Range Transparency: The pay range(s) for this role is listed below and represents the expected salary range for non-commissionable roles or on-target earnings for commissionable roles. Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location. 
Based on the factors above, Databricks anticipates utilizing the full width of the range.</p>\n<p>The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_21860f67-527","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/5408888002","x-work-arrangement":"onsite","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"$192,000-$260,000 USD","x-skills-required":["Java","Scala","C++","Apache Spark","Apache Kafka","Cloud APIs","AWS","Azure","CloudFormation","Terraform","SQL","Software security","Cloud technologies"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:41:55.276Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, California"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Java, Scala, C++, Apache Spark, Apache Kafka, Cloud APIs, AWS, Azure, CloudFormation, Terraform, SQL, Software security, Cloud technologies","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":192000,"maxValue":260000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_a372c4e5-b8f"},"title":"Data Engineer II - Platform Analytics - Kibana Platform - AppEx","description":"<p>We&#39;re looking for a Data Engineer to join our Platform Analytics team. 
In this role, you&#39;ll help build and maintain scalable data pipelines and analytics solutions that support business, product, and technical use cases across Elastic. You&#39;ll work closely with cross-functional partners to deliver reliable, high-quality data in a fast-moving, distributed environment.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Build, enhance, and maintain data ingestion and transformation pipelines</li>\n<li>Develop and optimize analytics datasets using BigQuery and dbt</li>\n<li>Support and maintain existing data systems as needed to ensure continuity and data reliability</li>\n<li>Design scalable data models that enable trusted analytics and reporting</li>\n<li>Partner with product managers, analysts, and solution teams to translate ambiguous requirements into effective data solutions</li>\n<li>Monitor data quality and system health to ensure accurate, timely insights</li>\n</ul>\n<p><strong>Requirements</strong></p>\n<ul>\n<li>Strong experience with SQL and Python</li>\n<li>3+ years of experience in Data Engineering, preferably on Google Cloud Platform (GCP)</li>\n<li>Experience designing and operating production data pipelines at scale</li>\n<li>Good knowledge of architecture and design (patterns, reliability, scalability, quality) of complex systems</li>\n<li>Familiarity with BigQuery and modern ELT tools (e.g., dbt)</li>\n<li>Experience with AI tools and workflows</li>\n<li>Strong analytical and problem-solving skills</li>\n<li>Clear written and verbal communication skills</li>\n</ul>\n<p><strong>Bonus Points</strong></p>\n<ul>\n<li>Experience with Buildkite and Terraform</li>\n<li>Experience with Dataflow on GCP</li>\n<li>Experience with Elasticsearch</li>\n<li>Experience with Kubernetes</li>\n</ul>\n<p><strong>Additional Information</strong></p>\n<p>As a distributed company, diversity drives our identity. 
Whether you&#39;re looking to launch a new career or grow an existing one, Elastic is the type of company where you can balance great work with great life. Your age is only a number. It doesn&#39;t matter if you&#39;re just out of college or your children are; we need you for what you can do.</p>\n<p>We strive to have parity of benefits across regions and while regulations differ from place to place, we believe taking care of our people is the right thing to do.</p>\n<ul>\n<li>Competitive pay based on the work you do here and not your previous salary</li>\n<li>Health coverage for you and your family in many locations</li>\n<li>Ability to craft your calendar with flexible locations and schedules for many roles</li>\n<li>Generous number of vacation days each year</li>\n<li>Increase your impact - We match up to $2000 (or local currency equivalent) for financial donations and service</li>\n<li>Up to 40 hours each year to use toward volunteer projects you love</li>\n<li>Embracing parenthood with minimum of 16 weeks of parental leave</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_a372c4e5-b8f","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Elastic","sameAs":"https://www.elastic.co/","logo":"https://logos.yubhub.co/elastic.co.png"},"x-apply-url":"https://job-boards.greenhouse.io/elastic/jobs/7614519","x-work-arrangement":"remote","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["SQL","Python","BigQuery","dbt","Google Cloud Platform (GCP)","AI tools and workflows"],"x-skills-preferred":["Buildkite","Terraform","Dataflow on 
GCP","Elasticsearch","Kubernetes"],"datePosted":"2026-04-18T15:41:36.319Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Greece"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"SQL, Python, BigQuery, dbt, Google Cloud Platform (GCP), AI tools and workflows, Buildkite, Terraform, Dataflow on GCP, Elasticsearch, Kubernetes"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_abaa6feb-362"},"title":"Staff Security Software Engineer","description":"<p>We are seeking a Staff Security Software Engineer to join our Security Continuous Monitoring team. As a member of this team, you will help build and scale Databricks Security systems built on top of the Databricks platform. Your responsibilities will include designing, testing, and implementing data pipelines to assess security configurations of Cloud, SaaS, and on-premise tooling.</p>\n<p>You will also design and deploy robust supporting security tools for managing and assessing security state, integrate with third-party applications, and interact with cloud APIs (AWS, Azure, GCP, Terraform). Additionally, you will plan and lead end-to-end projects supporting data collection and integration with vulnerability and threat detection efforts.</p>\n<p>To succeed in this role, you will need 8+ years of software engineering experience, with 4+ years specifically in security-related engineering. You should have experience with Python, Git/GitHub, CI/CD automation, and Terraform. Expertise in securing at least one major cloud environment (AWS, Azure, GCP) is also required. 
Experience with software security and systems that handle sensitive data, as well as data correlation engines, is preferred.</p>\n<p>As a leader on our team, you will be expected to mentor peers, drive strategic initiatives, and influence the organization&#39;s security direction. You should have excellent communication skills, with the ability to collaborate effectively across teams and present complex ideas clearly to stakeholders at all levels.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_abaa6feb-362","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/7932280002","x-work-arrangement":"remote","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Python","Git/GitHub","CI/CD automation","Terraform","Cloud security","Software security","Data correlation engine"],"x-skills-preferred":["FedRAMP Moderate/High experience"],"datePosted":"2026-04-18T15:41:07.654Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"United States"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, Git/GitHub, CI/CD automation, Terraform, Cloud security, Software security, Data correlation engine, FedRAMP Moderate/High experience"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_1b773e5c-b51"},"title":"IT Systems Engineer, Corporate Systems & Infrastructure","description":"<p><strong>About the role</strong></p>\n<p>The Corporate Infrastructure team builds and operates the platform layer the rest of IT Engineering runs on: cloud infrastructure hosting our 
internal services, the CI/CD that ships IT&#39;s own code, the observability stack across the corporate environment, and the cross-system automation that wires together tools never designed to talk to each other.</p>\n<p>You&#39;ll build deployment pipelines and internal tooling that let IT Engineering ship like a product team. You&#39;ll define SLOs for corporate services, build the monitoring to know when we&#39;re missing them, and run on-call for the things you deploy. You&#39;ll partner with our network and AV engineers as their infrastructure counterpart, automating physical-world systems and building the telemetry that tells us an office is degraded before someone files a ticket. The scope is broad and the team is deliberately small, which means you&#39;ll need depth across cloud, CI, and observability, strong judgment about where to invest, and a bias toward infrastructure-as-code over heroic manual fixes.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Build and operate the cloud infrastructure that hosts IT&#39;s internal services</li>\n<li>Design CI/CD pipelines that let IT Engineering ship through code review and automated testing</li>\n<li>Own observability for corporate infrastructure: monitoring, alerting, dashboards, and SLOs</li>\n<li>Write cross-system automation to integrate third-party systems and internal services</li>\n<li>Partner with network, audiovisual, and physical security to deliver robust infrastructure solutions</li>\n<li>Build internal tools (CLIs, bots, dashboards) that make other IT engineers faster</li>\n<li>Run on-call for corporate infrastructure with post-incident reviews that drive durable fixes</li>\n<li>Deploy infrastructure as code</li>\n</ul>\n<p><strong>Requirements</strong></p>\n<ul>\n<li>8+ years building secure IT systems in complex environments</li>\n<li>Excel at solving ambiguous problems with multiple stakeholders</li>\n<li>Communicate technical concepts clearly to any audience</li>\n<li>View IT Engineering as 
requiring product engineering rigor</li>\n<li>Successfully deliver complex projects from conception to production</li>\n<li>Write clear documentation as a natural part of your workflow</li>\n<li>Have shipped Infrastructure as Code in production (Terraform or similar), with modules and state you maintained</li>\n<li>Have run services with SLOs, on-call rotations, and post-incident reviews</li>\n<li>Have built internal platforms or tooling that other engineers depend on</li>\n</ul>\n<p><strong>Strong candidates may also</strong></p>\n<ul>\n<li>Have transformed traditional IT operations into engineering-driven organizations</li>\n<li>Have built strong partnerships with Security and Engineering teams</li>\n<li>Practice modern development methods (code reviews, testing, CI/CD)</li>\n<li>Work effectively in distributed teams</li>\n<li>Have experience with ECS, Kubernetes, or other container orchestration for internal services</li>\n<li>Have automated physical-world infrastructure deployment (e.g., network configuration, office technology, physical security systems)</li>\n<li>Have worked with enterprise integration or workflow automation platforms (e.g., Workato, n8n, Tines, or equivalents)</li>\n</ul>\n<p><strong>Technical Skills</strong></p>\n<ul>\n<li>Python, golang, etc.</li>\n<li>Terraform and Infrastructure as Code</li>\n<li>Cloud platforms (AWS, GCP, Azure)</li>\n<li>CI/CD pipeline design</li>\n<li>Observability tooling (e.g., Prometheus, Grafana, Datadog, Honeycomb, or equivalent)</li>\n<li>Linux systems administration</li>\n<li>Strong networking skills</li>\n<li>Configuration management</li>\n</ul>\n<p>Experience Level: senior. Employment Type: full-time. Workplace Type: remote. Category: Engineering. Industry: Technology. Salary Range: $275,000-$325,000 USD</p>\n<p>Required Skills:</p>\n<ul>\n<li>Python</li>\n<li>Terraform</li>\n<li>Cloud platforms</li>\n<li>CI/CD pipeline design</li>\n<li>Observability tooling</li>\n<li>Linux systems 
administration</li>\n<li>Strong networking skills</li>\n<li>Configuration management</li>\n</ul>\n<p>Preferred Skills:</p>\n<ul>\n<li>golang</li>\n<li>ECS</li>\n<li>Kubernetes</li>\n<li>Enterprise integration or workflow automation platforms</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_1b773e5c-b51","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com/","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/4887952008","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$275,000-$325,000 USD","x-skills-required":["Python","Terraform","Cloud platforms","CI/CD pipeline design","Observability tooling","Linux systems administration","Strong networking skills","Configuration management"],"x-skills-preferred":["golang","ECS","Kubernetes","Enterprise integration or workflow automation platforms"],"datePosted":"2026-04-18T15:40:30.321Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Remote-Friendly (Travel-Required) | San Francisco, CA | Seattle, WA | New York City, NY"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, Terraform, Cloud platforms, CI/CD pipeline design, Observability tooling, Linux systems administration, Strong networking skills, Configuration management, golang, ECS, Kubernetes, Enterprise integration or workflow automation 
platforms","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":275000,"maxValue":325000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_61c7772c-bed"},"title":"Senior Product Manager, Cloudflare WAN","description":"<p>We are looking for an experienced, empathetic, and highly technical Senior Product Manager to join our Network Services team. The Cloudflare WAN PM will be entrepreneurial-minded and thrive in a fast-paced and goal-driven environment. They will have outstanding communication and collaboration skills and will be able to work with a diverse group, get consensus, and drive the product forward.</p>\n<p>The ideal candidate will have several years of experience working with enterprise, NaaS, or security technologies. They will have a passion for building NaaS products and want to solve the problems of performance, security, and reliability of the Internet.</p>\n<p>Role Responsibilities:</p>\n<ul>\n<li>Owning the product vision for your area. Ensure that it aligns with the overall product and company vision.</li>\n<li>Representing the customer. Be the champion and voice of customers. Build meaningful, personal customer relationships. Bring the customer&#39;s voice into the creation process.</li>\n<li>Managing the roadmap. Make tough tactical prioritization decisions while helping the company think long-term. Build trust with stakeholders by maintaining an understandable, accurate roadmap.</li>\n<li>Authoring use cases and prioritizing requirements. Translate market observations and customer feedback into a prioritized product backlog.</li>\n<li>Collaborating across teams. We win or lose as a team. Product managers play a critical role in creating alignment between engineering teams and other stakeholders. A collaborative attitude is essential to the job.</li>\n<li>Measuring success. 
Own the measures used to define success for your product. Success measures must be defined at the inception of a product and tracked throughout its lifecycle. Make measures visible to all stakeholders and interpret them into actionable conclusions and new hypotheses.</li>\n</ul>\n<p>Must-Have Skills:</p>\n<ul>\n<li>5-10 years working as a Product Manager or Technical Product Manager shipping enterprise-class software, or relevant experience</li>\n<li>Strong technical understanding of L3-L7 networking, including routing protocols, common network architectures, and secure connectivity within cloud environments (AWS, Azure, GCP)</li>\n<li>Experience working alongside multiple disciplines (e.g., design, engineering, support, sales) and decision makers to ship software</li>\n<li>Excellent customer-facing skills (empathy, problem solving, clear and succinct communication)</li>\n<li>Ability to work with common software and networking tools, such as Postman or curl to interact with RESTful Web APIs and Terraform for automated provisioning of cloud networking resources</li>\n<li>Experience with up-to-date AI tools (agents and IDEs) for vibecoding and prototyping</li>\n</ul>\n<p>Nice-to-Have Skills:</p>\n<ul>\n<li>Experience with modern SASE or SSE solutions (either building or implementing)</li>\n<li>Experience with large-scale distributed systems and cloud-native architectures</li>\n<li>Active participation in industry conferences, publications, or open-source projects related to networking or security</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_61c7772c-bed","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Cloudflare","sameAs":"https://www.cloudflare.com/","logo":"https://logos.yubhub.co/cloudflare.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/cloudflare/jobs/7600080","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$156,000 - $215,000","x-skills-required":["Product Management","Networking","Cloud Computing","Security","Customer Facing","Communication","Collaboration","Technical Understanding","AI Tools","RESTful Web APIs","Terraform"],"x-skills-preferred":["SASE","SSE","Distributed Systems","Cloud-Native Architectures","Industry Conferences","Publications","Open-Source Projects"],"datePosted":"2026-04-18T15:40:28.916Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Hybrid"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Product Management, Networking, Cloud Computing, Security, Customer Facing, Communication, Collaboration, Technical Understanding, AI Tools, RESTful Web APIs, Terraform, SASE, SSE, Distributed Systems, Cloud-Native Architectures, Industry Conferences, Publications, Open-Source Projects","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":156000,"maxValue":215000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_04884ef5-f9e"},"title":"Software Engineer, Compute (8+ YOE)","description":"<p>We&#39;re looking for an experienced software engineer to help lead the next phase of platform maturity in how we run Kubernetes at Airtable. 
As a member of the Compute Platform team, you&#39;ll be responsible for building and evolving the infrastructure that powers Airtable&#39;s services at scale.</p>\n<p>Your primary focus will be on designing, implementing, and scaling core Kubernetes platform capabilities used across ~70 clusters, spread across multiple environments. You&#39;ll also lead foundational modernization efforts, such as migrating to a new CNI plugin to overhaul IP security rule management across clusters and regions.</p>\n<p>In addition to your technical expertise, you&#39;ll collaborate closely with product and security teams to power a rapidly growing enterprise business. You&#39;ll spend roughly 70% of your time in hands-on engineering and 30% in design reviews, mentorship, and cross-team collaboration.</p>\n<p>To succeed in this role, you&#39;ll need 8+ years of software engineering experience, with deep expertise building and operating a Kubernetes-based internal service platform. You&#39;ll also need a strong understanding of Kubernetes internals, including controllers/operators, CRDs, networking, and cluster architecture.</p>\n<p>If you&#39;re excited about building internal platforms, shaping infrastructure strategy, and partnering closely with product and security teams, we&#39;d love to hear from you.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_04884ef5-f9e","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Airtable","sameAs":"https://airtable.com/","logo":"https://logos.yubhub.co/airtable.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/airtable/jobs/8442397002","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Kubernetes","Typescript","Golang","Cloud Native Infrastructure","CI/CD","Infrastructure as 
Code","Terraform","CloudFormation","OpenTofu","Pulumi"],"x-skills-preferred":["AWS infrastructure","EKS","Spinnaker","ArgoCD","Flux","Jenkins"],"datePosted":"2026-04-18T15:39:46.997Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA; New York, NY; Remote - US"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Kubernetes, Typescript, Golang, Cloud Native Infrastructure, CI/CD, Infrastructure as Code, Terraform, CloudFormation, OpenTofu, Pulumi, AWS infrastructure, EKS, Spinnaker, ArgoCD, Flux, Jenkins"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_bdf9dc88-fbe"},"title":"Infrastructure Security Engineer","description":"<p>We are seeking a talented and motivated Cloud/Infrastructure Security Engineer to join our security team.</p>\n<p>In this role, you will design, implement, and maintain secure cloud infrastructure and ensure the integrity of our cloud-native applications.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Design and implement secure cloud architectures across multiple cloud platforms (e.g., AWS, GCP, Azure)</li>\n<li>Develop and maintain Infrastructure as Code (IaC) templates with embedded security controls</li>\n<li>Conduct regular security assessments and audits of cloud infrastructure and services</li>\n<li>Implement and manage cloud security tools and services (e.g., CSPM, CWPP, CASB)</li>\n<li>Collaborate with development teams to ensure security best practices are integrated into CI/CD pipelines</li>\n<li>Monitor and respond to security events and incidents in cloud environments</li>\n<li>Develop and maintain cloud security policies, standards, and procedures</li>\n<li>Stay current with emerging cloud security threats and mitigation strategies</li>\n</ul>\n<p>Basic Qualifications:</p>\n<ul>\n<li>Bachelor&#39;s degree in Computer Science, 
Cybersecurity, or a related field</li>\n<li>3-5 years of experience in cloud security or related roles</li>\n<li>Strong understanding of cloud security principles, compliance frameworks, and best practices</li>\n<li>Proficiency in at least one cloud platform (AWS, GCP, or Azure) and associated security services</li>\n<li>Experience with Infrastructure as Code tools (e.g., Terraform, CloudFormation)</li>\n<li>Familiarity with containerization technologies and their security implications</li>\n<li>Knowledge of network security concepts and protocols</li>\n<li>Experience with scripting languages (e.g., Python, Bash) for automation and tool development</li>\n</ul>\n<p>Preferred Skills and Experience:</p>\n<ul>\n<li>Relevant security certifications (e.g., CCSP, CCSK, AWS Security Specialty)</li>\n<li>Experience with multi-cloud environments and cloud-to-cloud security</li>\n<li>Knowledge of DevSecOps practices and tools</li>\n<li>Experience with Kubernetes and container security</li>\n<li>Experience in building custom cloud security tools or integrations</li>\n<li>Interest in leveraging AI for cloud security monitoring and automation</li>\n<li>Contributions to open-source cloud security projects</li>\n<li>Experience with securing AI/ML workloads in cloud environments</li>\n</ul>\n<p>Compensation and Benefits:</p>\n<p>$200,000 - $340,000 USD</p>\n<p>Base salary is just one part of our total rewards package at xAI, which also includes equity, comprehensive medical, vision, and dental coverage, access to a 401(k) retirement plan, short &amp; long-term disability insurance, life insurance, and various other discounts and perks.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_bdf9dc88-fbe","directApply":true,"hiringOrganization":{"@type":"Organization","name":"xAI","sameAs":"https://www.xai.com/","logo":"https://logos.yubhub.co/xai.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/xai/jobs/5090998007","x-work-arrangement":"onsite","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":"$200,000 - $340,000 USD","x-skills-required":["Cloud security principles","Compliance frameworks","Best practices","Cloud platform (AWS, GCP, or Azure)","Infrastructure as Code tools (Terraform, CloudFormation)"],"x-skills-preferred":["Relevant security certifications (CCSP, CSSK, AWS Security Specialty)","Multi-cloud environments and cloud-to-cloud security","DevSecOps practices and tools","Kubernetes and container security","Building custom cloud security tools or integrations"],"datePosted":"2026-04-18T15:23:29.833Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Palo Alto, CA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Cloud security principles, Compliance frameworks, Best practices, Cloud platform (AWS, GCP, or Azure), Infrastructure as Code tools (Terraform, CloudFormation), Relevant security certifications (CCSP, CSSK, AWS Security Specialty), Multi-cloud environments and cloud-to-cloud security, DevSecOps practices and tools, Kubernetes and container security, Building custom cloud security tools or integrations","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":200000,"maxValue":340000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_e2392ba0-1bc"},"title":"Staff Engineer AI Agents","description":"<p>About Zuma</p>\n<p>Zuma is pioneering the future of agentic AI in property management. 
We build AI agents that act as property managers, handling the full spectrum of interactions with both prospects and current residents on behalf of our clients.</p>\n<p>Our agents don’t just assist human workflows; they own them end-to-end, operating across leasing, collections, and resident communications. Zuma aims to continue expanding into adjacent work activities within property management.</p>\n<p>This is a rare chance to shape the future of how an entire industry operates: not in theory, but in production, at scale, touching real customers and physical assets every day. At Zuma, human and AI agents work side by side, and you&#39;ll help define what that collaboration looks like at its best.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Own E2E projects that cross all areas of software development, including full-stack web apps, agentic AI solutions across multiple work activities, extensive integrations with PMS and CRM systems, infrastructure, and internal tooling.</li>\n</ul>\n<ul>\n<li>Architect, build, and deploy production AI agents using modern agent frameworks, owning the full lifecycle from design to reliability in production.</li>\n</ul>\n<ul>\n<li>Define the technical patterns and standards for how software is built across the engineering org; you will be setting the playbook others follow.</li>\n</ul>\n<ul>\n<li>Strengthen our core systems, including our onboarding/configuration system, integration frameworks, and AI performance analytics infrastructure.</li>\n</ul>\n<ul>\n<li>Collaborate directly with the VPE and product leadership to translate product vision into delivery, making high-stakes technical trade-offs with confidence.</li>\n</ul>\n<ul>\n<li>Own system reliability, observability, and continuous improvement, defining how we measure, monitor, and iterate on our agents and web products in production.</li>\n</ul>\n<ul>\n<li>Work across the stack (backend services, LLM orchestration, integrations, data 
pipelines, frontends) to ship agents and products that are robust and scalable.</li>\n</ul>\n<ul>\n<li>Tame legacy code and lay down new foundations; the patterns and architecture you create will be inherited by the engineers who come after you.</li>\n</ul>\n<ul>\n<li>Be a close partner to the product and operations teams, turning their domain needs into intelligent automated workflows without requiring domain expertise upfront.</li>\n</ul>\n<p><strong>Nice to Have</strong></p>\n<ul>\n<li>Prior experience at a startup or high-growth company; comfort shipping fast and iterating in production.</li>\n</ul>\n<ul>\n<li>AWS experience with IaC (Terraform) and comfort working with infrastructure/DevOps.</li>\n</ul>\n<ul>\n<li>Background in building self-serve platforms or integration infrastructure.</li>\n</ul>\n<ul>\n<li>Experience with workflow automation platforms or business process orchestration.</li>\n</ul>\n<ul>\n<li>Experience with telephony integrations (Twilio or similar) and building voice-capable agents or chatbots across text and voice channels.</li>\n</ul>\n<ul>\n<li>Familiarity with speech-to-text, text-to-speech, or real-time audio streaming pipelines in production AI systems.</li>\n</ul>\n<ul>\n<li>Classical ML experience: supervised/unsupervised learning, feature engineering, and model training and evaluation outside of LLM contexts.</li>\n</ul>\n<p><strong>Our Stack</strong></p>\n<ul>\n<li>Python, TypeScript/Node.js</li>\n</ul>\n<ul>\n<li>OpenAI, Anthropic</li>\n</ul>\n<ul>\n<li>LangGraph, OpenAI Agents SDK, custom orchestration layers</li>\n</ul>\n<ul>\n<li>AWS, AWS ECS, PostgreSQL, Redis</li>\n</ul>\n<ul>\n<li>RealPage, Entrata, Yardi, and other property management systems</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_e2392ba0-1bc","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Zuma","sameAs":"https://www.zuma.com/","logo":"https://logos.yubhub.co/zuma.com.png"},"x-apply-url":"https://jobs.lever.co/getzuma/16961f6d-ab02-469d-8f99-3a68bf5a5026","x-work-arrangement":"onsite","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"$180-220 per year","x-skills-required":["Python","TypeScript","OpenAI","Anthropic","LangGraph","OpenAI Agents SDK","AWS","AWS ECS","PostgreSQL","Redis","RealPage","Entrata","Yardi"],"x-skills-preferred":["AWS IaC (Terraform)","Infrastructure / Dev Ops","Self-serve platforms","Integration infrastructure","Workflow automation platforms","Business process orchestration","Telephony integrations (Twilio)","Voice-capable agents or chatbots","Speech-to-text","Text-to-speech","Real-time audio streaming pipelines","Classical ML"],"datePosted":"2026-04-17T13:12:33.765Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco Bay Area"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, TypeScript, OpenAI, Anthropic, LangGraph, OpenAI Agents SDK, AWS, AWS ECS, PostgreSQL, Redis, RealPage, Entrata, Yardi, AWS IaC (Terraform), Infrastructure / Dev Ops, Self-serve platforms, Integration infrastructure, Workflow automation platforms, Business process orchestration, Telephony integrations (Twilio), Voice-capable agents or chatbots, Speech-to-text, Text-to-speech, Real-time audio streaming pipelines, Classical ML"}]}