{"version":"0.1","company":{"name":"YubHub","url":"https://yubhub.co","jobsUrl":"https://yubhub.co/jobs/skill/cloud-environment"},"x-facet":{"type":"skill","slug":"cloud-environment","display":"Cloud Environment","count":73},"x-feed-size-limit":100,"x-feed-sort":"enriched_at desc","x-feed-notice":"This feed contains at most 100 jobs (the most recently enriched). For the full corpus, use the paginated /stats/by-facet endpoint or /search.","x-generator":"yubhub-xml-generator","x-rights":"Free to redistribute with attribution: \"Data by YubHub (https://yubhub.co)\"","x-schema":"Each entry in `jobs` follows https://schema.org/JobPosting. YubHub-native raw fields carry `x-` prefix.","jobs":[{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_fb257514-ae0"},"title":"Architect for Scalable AI Solutions","description":"<p>Are you enthusiastic about innovative technologies and Generative AI? Do you want to design architectures and make KI solutions productive, build scalable systems, and support customers in integrating modern AI? 
Then join our team and shape the future of KI-supported architectures, applications, and workflows with us.</p>\n<p>Your tasks will include:</p>\n<ul>\n<li>Designing scalable KI architectures: developing high-performance architectures and integrating ML and GenAI models into customer environments (e.g., SAP, CRM, Microservices)</li>\n<li>Implementing pipelines and workflows: building scalable data and AI architectures, integrating them into existing pipelines, and developing XOps solutions</li>\n<li>Backend services and system integration: developing high-performance services to integrate models into productive workflows and ensuring smooth transitions between training, deployment, and application</li>\n<li>Deployment, monitoring, and optimization: implementing prototypes and MVPs in cloud environments, optimizing performance, and ensuring scalability and security</li>\n<li>Identifying use cases: analyzing business processes, recognizing potential for GenAI, and deriving technical solutions</li>\n<li>Project and stakeholder management: moderating workshops, closely coordinating with interdisciplinary teams, international project partners, and customers</li>\n</ul>\n<p>To be well-prepared for your path, you should have the following qualifications:</p>\n<ul>\n<li>Completed studies in computer science, software engineering, data science, or a comparable field with at least 4 years of professional experience, ideally in consulting and (Gen)AI</li>\n<li>Passion for AI and Generative AI, scalable systems, cloud technologies, and building high-performance AI infrastructure</li>\n<li>Expertise in Python, ML, LLMs, RAG, cloud environments (Azure, AWS, GCP), Docker, Kubernetes, REST APIs, CI/CD</li>\n<li>Knowledge in software architecture, cloud-native design, MLOps, and AI security</li>\n<li>Your work style is characterized by self-responsibility, goal orientation, teamwork, and hands-on mentality</li>\n</ul>\n<p>Before departure:</p>\n<ul>\n<li>Start date: after agreement 
- always at the beginning of a month</li>\n<li>Working hours: full-time (40 hours) and/or part-time possible; 30 vacation days</li>\n<li>Employment relationship: permanent</li>\n<li>Field: consulting</li>\n<li>Language: confident German and English</li>\n<li>Flexibility and travel readiness</li>\n<li>Other: valid work permit; if necessary, we can apply for a work permit within our recruitment process. The procedure takes time and affects the start date</li>\n</ul>\n<p>At MHP, you grow continuously in an innovative and supportive environment. This makes us the perfect sparring partner for your career, for both professional input and networking. We offer you:</p>\n<ul>\n<li>Appreciation. We support and appreciate colleagues as they are and celebrate our successes together</li>\n<li>We always welcome creativity and new ideas</li>\n<li>Flexibility. Time-wise and location-wise - depending on the project, at home, in the office, or at the customer site</li>\n<li>You have the opportunity to grow with us in tasks, knowledge, and responsibility</li>\n</ul>\n<p>To apply, please submit your application as soon as possible, simply online through our Job Locator. There, you can send your application documents, such as resume, certificates, and possibly project lists, to us in just a few clicks. A cover letter is not required.</p>\n<p>By the way: once your application reaches us, our recruiting team checks across departments whether there is a suitable position for you. 
Irrespective of current job postings, we try to find the right job for you at MHP.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_fb257514-ae0","directApply":true,"hiringOrganization":{"@type":"Organization","name":"MHP","sameAs":"https://www.mhp.com","logo":"https://logos.yubhub.co/mhp.com.png"},"x-apply-url":"https://jobs.porsche.com/index.php?ac=jobad&id=18795","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"unspecified","x-skills-required":["Python","ML","LLMs","RAG","cloud environments","Docker","Kubernetes","REST APIs","CI/CD","software architecture","cloud-native design","MLOps","AI security"],"x-skills-preferred":[],"datePosted":"2026-04-22T17:26:52.405Z","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, ML, LLMs, RAG, cloud environments, Docker, Kubernetes, REST APIs, CI/CD, software architecture, cloud-native design, MLOps, AI security"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_a6c6e1c7-2a8"},"title":"Assistant Manager, SOX IT Lead","description":"<p>As the Assistant Manager, SOX IT Lead, you will lead the design, implementation, monitoring, and testing of IT General Controls (ITGC) and IT Application Controls (ITAC) under SOX compliance for American Honda Finance Corporation. 
This role ensures robust governance and risk management practices to mitigate risks and support the overall reliability of financial reporting by serving as the primary SME for complex IT control environments, system architectures, and emerging technologies impacting AHFC&#39;s SOX compliance.</p>\n<p>Key responsibilities will include:</p>\n<ul>\n<li>Leading the planning, execution, and monitoring of ITGC and ITAC for annual SOX compliance activities.</li>\n<li>Acting as the primary liaison between AHM IT GRC, CT IT, internal auditors, and external auditors for ITGC and ITAC testing.</li>\n<li>Maintaining Risk Control Matrices (RCMs), data flow diagrams, and control documentation.</li>\n<li>Collaborating on technology projects to ensure SOX compliance requirements are integrated.</li>\n<li>Providing guidance and training to CH IT and AHFC Management on SOX requirements and control expectations.</li>\n</ul>\n<p>To be successful in this role, you will need:</p>\n<ul>\n<li>A minimum of 8-10 years of experience in IT Audit, IT compliance, or IT risk management.</li>\n<li>Strong understanding of SOX, ITGCs, and frameworks such as COBIT, COSO, NIST.</li>\n<li>Experience working with ERP Systems.</li>\n<li>Experience in a public company or Big 4 audit environment.</li>\n<li>Experience as a technical SME for IT controls.</li>\n</ul>\n<p>In addition to the above requirements, you will also need to possess excellent communication and stakeholder management skills, as well as the ability to interpret technical concepts and translate them into control requirements.</p>","url":"https://yubhub.co/jobs/job_a6c6e1c7-2a8","directApply":true,"hiringOrganization":{"@type":"Organization","name":"American Honda Finance 
Corporation","sameAs":"https://careers.honda.com","logo":"https://logos.yubhub.co/careers.honda.com.png"},"x-apply-url":"https://careers.honda.com/us/en/job/10377/Asst-Manager-SOX-IT-Lead","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$94,900.00 - $142,400.00","x-skills-required":["SOX","ITGC","ITAC","COBIT","COSO","NIST","ERP Systems","public company","Big 4 audit environment","technical SME"],"x-skills-preferred":["cloud environments","AWS","Azure","logical access","change","backup","incident management","application controls"],"datePosted":"2026-04-22T17:24:09.349Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Torrance"}},"employmentType":"FULL_TIME","occupationalCategory":"Finance","industry":"Finance","skills":"SOX, ITGC, ITAC, COBIT, COSO, NIST, ERP Systems, public company, Big 4 audit environment, technical SME, cloud environments, AWS, Azure, logical access, change, backup, incident management, application controls","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":94900,"maxValue":142400,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_d6e7c226-e8c"},"title":"Technical Lead, MFT MDE Analytics Engineering","description":"<p>The SPEED Market Data team at Equity IT is seeking a hands-on Technical Lead to own and drive a critical workstream focused on architecting, implementing, monitoring, and supporting low-latency C++ systems. As a Technical Lead, you will shape the future of the industry by working alongside exceptional engineers and strategists to solve significant engineering problems.</p>\n<p>We are looking for a strong technical leader with financial markets technology experience and real-time market data expertise to design, build, and support our global real-time market data platform. 
This role emphasizes technical leadership, architectural ownership, and cross-team coordination rather than people management.</p>\n<p>Principal Responsibilities:</p>\n<ul>\n<li>Act as the technical owner for a major market data workstream, setting technical direction, defining architecture, and driving execution across the full lifecycle.</li>\n<li>Collaborate with hardware and software teams across divisions to design and build real-time market data processing and distribution systems.</li>\n<li>Lead and drive new technical initiatives for the team, including evaluating technologies, defining standards, and establishing best practices.</li>\n<li>Design and develop systems, interfaces, and tools for historical market data and trading simulations that increase research productivity.</li>\n<li>Architect and implement components of an enterprise market data platform, including components for caching, aggregation, conflation and value-added data enrichment.</li>\n<li>Optimise platform performance using network and systems programming, and advanced low-latency techniques (CPU, NIC, kernel, and application-level tuning).</li>\n<li>Lead the design and maintenance of automated test and benchmark frameworks, and tools for risk management, performance tracking, and system validation.</li>\n<li>Provide technical leadership for the support and operation of both enterprise real-time market data environments, including coordinating internal, vendor, and exchange-driven changes.</li>\n<li>Design and engineer components to automate support and management of the market data platform, including monitoring, real-time and historical metrics collection/visualisation, and self-service administrative/user tools.</li>\n<li>Serve as a primary technical liaison for users of the market data environment (Portfolio Managers, trading desks, and core technology teams), translating requirements into robust technical solutions.</li>\n<li>Lead the enhancement of processes and workflows for 
operating the market data platform (release/deployment, incident management and remediation, exchange notification handling, defining and enforcing SLAs).</li>\n<li>Mentor and influence other engineers through code reviews, design reviews, and hands-on guidance, fostering a culture of technical excellence and accountability.</li>\n</ul>\n<p>Qualifications / Skills Required:</p>\n<ul>\n<li>Degree in Computer Science or a related field with a strong background in data structures, algorithms, and object-oriented programming in modern C++.</li>\n<li>Deep understanding of Linux system internals and networking, especially in low-latency and high-throughput environments.</li>\n<li>Strong knowledge of CPU architecture and the ability to leverage CPU capabilities for performance optimisation.</li>\n<li>Demonstrated experience acting as a technical lead or senior engineer owning complex systems or workstreams end-to-end (design, delivery, and operations).</li>\n<li>Able to prioritise and make trade-offs in a fast-moving, high-pressure, constantly changing environment; strong sense of urgency, ownership, and follow-through.</li>\n<li>Strong belief in and practice of extreme ownership, with a track record of taking accountability for systems in production.</li>\n<li>Effective communication and stakeholder management skills: able to work closely with business and technology users, understand their needs, and drive appropriate technical solutions.</li>\n<li>Experience building solutions on cloud environments such as GCP and AWS.</li>\n<li>Knowledge of additional programming languages such as Java, Python, or scripting (Perl, shell).</li>\n<li>Technical background in application development on complex market data systems (e.g., Bloomberg, Thomson Reuters, etc.).</li>\n<li>Experience supporting market data environments within a global organisation, including internally developed DMA feed handlers and distribution infrastructure.</li>\n<li>Strong understanding of market data 
concepts and functionality, including data models (fields/messages), protocols (e.g., snapshot + delta), order book representations (L1/L2/L3), recovery, and reliability.</li>\n<li>Hands-on Site Reliability Engineering or DevOps experience, including system administration, automation, measurement, and release/deployment management.</li>\n<li>Experience with monitoring, metrics, and command/control tooling for distributed market data platforms, with the ability to evaluate existing solutions and drive enhancements across development and operations.</li>\n<li>Ability to operate with a high level of thoroughness and attention to detail, demonstrating strong ownership of deliverables and production systems.</li>\n</ul>\n<p>Millennium pays a total compensation package which includes a base salary, discretionary performance bonus, and a comprehensive benefits package. The estimated base salary range for this position is $175,000 to $250,000, which is specific to New York and may change in the future. 
When finalising an offer, we take into consideration an individual&#39;s experience level and the qualifications they bring to the role to formulate a competitive total compensation package.</p>","url":"https://yubhub.co/jobs/job_d6e7c226-e8c","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Equity IT","sameAs":"https://mlp.eightfold.ai","logo":"https://logos.yubhub.co/mlp.eightfold.ai.png"},"x-apply-url":"https://mlp.eightfold.ai/careers/job/755954905529","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$175,000 to $250,000","x-skills-required":["C++","Linux system internals","Networking","CPU architecture","Object-oriented programming","Cloud environments","Java","Python","Scripting","Market data systems","Site Reliability Engineering","DevOps","Monitoring","Metrics","Command/control tooling"],"x-skills-preferred":[],"datePosted":"2026-04-18T22:13:18.645Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"New York, New York, United States of America"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Finance","skills":"C++, Linux system internals, Networking, CPU architecture, Object-oriented programming, Cloud environments, Java, Python, Scripting, Market data systems, Site Reliability Engineering, DevOps, Monitoring, Metrics, Command/control tooling","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":175000,"maxValue":250000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_c7e58f60-5fa"},"title":"Software Engineer - Learning Engineering and Data (LEaD) Program","description":"<p>As a member of our Miami-based Learning Engineering and Data (LEaD) 
program, you will work alongside technology mentors and leaders to develop and maintain applications and tools spanning front-office, middle-office, and back-office functions in a dynamic and fast-paced environment.</p>\n<p>Our technology teams are looking for Software Engineers with C++, Python, or Java to design, implement, and maintain systems supporting our technology business functions.</p>\n<p>The candidate is expected to:</p>\n<ul>\n<li>Work closely with technology teams to develop requirements and specifications for varying projects</li>\n<li>Take part in the development and enhancement of the backend distributed system</li>\n<li>Apply AI/ML (deep learning, natural language processing, large language models) to practical and comprehensive technology solutions</li>\n</ul>\n<p>Qualifications/Skills Required:</p>\n<ul>\n<li>2-5 years of experience working with C++, Python, or Java</li>\n<li>Experience with ML libraries, Pandas, NumPy, FastAPI (Python), Boost (C++), Spring Boot (Java)</li>\n<li>Must be comfortable working in both Unix/Linux and Windows environments</li>\n<li>Good understanding of various design patterns</li>\n<li>Strong analytical and mathematical skills along with an interest/ability to quickly learn additional languages and quantitative concepts</li>\n<li>Solid communication skills</li>\n<li>Able to work collaboratively in a fast-paced environment with a passion for solving complex problems</li>\n<li>Detail-oriented, organized, demonstrating thoroughness and strong ownership of work</li>\n</ul>\n<p>Desirable Skills/Knowledge:</p>\n<ul>\n<li>Bachelor&#39;s or Master&#39;s degree in Computer Science, Applied Mathematics, Statistics, Data Science/ML/AI, or a related technical or engineering field</li>\n<li>Demonstrable passion for developing LLM-powered products whether that is through commercial experience or open source/academic projects you have worked on in your own time</li>\n<li>Hands-on experience building ML and data pipeline 
architectures</li>\n<li>Understanding of distributed messaging systems</li>\n<li>Experience with Docker/Kubernetes, microservices architecture in a cloud environment (AWS, GCP preferred)</li>\n<li>Experience with relational and non-relational database platforms</li>\n</ul>","url":"https://yubhub.co/jobs/job_c7e58f60-5fa","directApply":true,"hiringOrganization":{"@type":"Organization","name":"IT LEaD Program","sameAs":"https://mlp.eightfold.ai","logo":"https://logos.yubhub.co/mlp.eightfold.ai.png"},"x-apply-url":"https://mlp.eightfold.ai/careers/job/755953879362","x-work-arrangement":"onsite","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["C++","Python","Java","ML libraries","Pandas","NumPy","FastAPI","Boost","Spring Boot"],"x-skills-preferred":["Bachelor or Master's degree in Computer Science, Applied Mathematics, Statistics, Data Science/ML/AI, or a related technical or engineering field","Demonstrable passion for developing LLM-powered products","Hands-on experience building ML and data pipeline architectures","Understanding of distributed messaging systems","Experience with Docker/Kubernetes, microservices architecture in a cloud environment (AWS, GCP preferred)","Experience with relational and non-relational database platforms"],"datePosted":"2026-04-18T22:13:11.242Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Miami, Florida, United States of America"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Finance","skills":"C++, Python, Java, ML libraries, Pandas, NumPy, FastAPI, Boost, Spring Boot, Bachelor or Master's degree in Computer Science, Applied Mathematics, Statistics, Data Science/ML/AI, or a related technical or engineering field, Demonstrable passion for developing LLM-powered products, Hands-on 
experience building ML and data pipeline architectures, Understanding of distributed messaging systems, Experience with Docker/Kubernetes, microservices architecture in a cloud environment (AWS, GCP preferred), Experience with relational and non-relational database platforms"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_21f5f6c3-734"},"title":"Data Engineer","description":"<p><strong>About the Role</strong></p>\n<p>We are at a pivotal scaling point where our data ambitions have outpaced our current setup, and we need a Data Engineer to architect the professional-grade foundations of our platform.</p>\n<p>This role exists to bridge the gap between &quot;getting data&quot; and &quot;engineering data,&quot; moving us from manual syncs to a fully automated ecosystem. By building custom pipelines and implementing a robust orchestration layer, you will directly enable our Operations teams and leadership to transition from basic reporting to sophisticated, AI-ready data products.</p>\n<p>Your primary focus will be on Infrastructure-as-Code, orchestration, and building a resilient &quot;plumbing&quot; system that serves as the backbone for our entire Product and GTM strategy.</p>\n<p><strong>Your 12-Month Journey</strong></p>\n<p>During the first 3 months: You will learn about our existing stack (GCP, BigQuery, Airbyte, dbt) and understand the current pain points in our data flow. You will identify and execute &quot;low-hanging fruit&quot; improvements to our product usage analytics, providing immediate value to the Product and GTM teams. You’ll begin designing the blueprint for our custom data pipelines and the migration strategy for moving our infrastructure into Terraform.</p>\n<p>Within 6 months: You will have deployed our new orchestration layer (e.g., Airflow or Dagster) and successfully transitioned our first set of custom pipelines to production. 
Collaborating with the Analytics Engineer, you will enable a unified view of our customer journey by successfully merging product usage data with CRM and billing data. At this point, a significant portion of our data infrastructure will be defined as code, reducing manual overhead and increasing deployment reliability.</p>\n<p>After 1 year: You will take full strategic ownership of the data platform and its long-term architecture. You will act as the go-to technical expert for the leadership team, advising on the scalability of new data-driven features. You will lay the groundwork for AI and Machine Learning initiatives by ensuring our data warehouse has the right quality controls, governance, and low-latency access patterns in place.</p>\n<p><strong>What You’ll Be Doing</strong></p>\n<p>Architect Scalable Infrastructure-as-Code: Take our existing foundations to the next level by migrating all GCP and BigQuery resources into Terraform. You will establish automated CI/CD patterns to ensure our entire data environment is reproducible, version-controlled, and enterprise-ready.</p>\n<p>Deploy State-of-the-Art Pipelines: Design, deploy, and operate high-quality production ELT pipelines. You will implement a modern orchestration layer (e.g., Airflow or Dagster) to build custom Python-based integrations while maintaining and optimizing our existing syncs.</p>\n<p>Champion Data Quality &amp; Performance: Act as the guardian of our data platform. You will implement rigorous testing and monitoring protocols to ensure data is accurate and timely. You will proactively identify BigQuery bottlenecks, optimizing query performance and resource utilization.</p>\n<p>Technical Roadmap &amp; Ownership: Scope and architect end-to-end data flows from production source to warehouse. Manage your own technical backlog, prioritizing infrastructure stability over technical debt. 
You will ensure platform security and SOC2 compliance through PII masking, data contracts, and robust access controls.</p>\n<p>Collaboration: You will work in a tight loop with the Analytics Engineer to turn raw data into actionable products. You will partner daily with DataOps and RevOps to understand business requirements, with occasional strategic syncs with DevOps and R&amp;D to align on production schema changes and global infrastructure standards.</p>\n<p><strong>What You Bring</strong></p>\n<ul>\n<li>Solid experience in Data Engineering, with a track record of building and evolving data ingestion infrastructure in cloud environments.</li>\n<li>The Modern Data Stack: Familiarity with dbt and Airbyte/Fivetran. You understand how these tools fit into a broader ecosystem.</li>\n<li>Expertise in BigQuery (partitioning, clustering, IAM) and the broader GCP ecosystem; Infrastructure-as-Code (Terraform).</li>\n<li>Hands-on experience with Airflow, Dagster, or similar orchestration tools. You know how to design DAGs that are resilient and easy to debug.</li>\n<li>DevOps practices in the data context: familiarity with CI/CD best practices as they apply to data (data testing, automated deployments).</li>\n<li>Programming: Expert-level Python and advanced SQL. You are comfortable writing clean, testable, and modular code.</li>\n<li>Comfortable in a fast-paced environment.</li>\n<li>Project management skills: capable of managing stakeholders, explaining complicated technical trade-offs to non-technical users, and taking care of your own project scoping and backlog management.</li>\n<li>Fluency in English, both written and spoken, at a minimum C1 level.</li>\n</ul>\n<p><strong>What We Offer</strong></p>\n<ul>\n<li>Flexibility to work from home in the Netherlands and from our beautiful canal-side office in Amsterdam</li>\n<li>A chance to be part of and shape one of the most ambitious scale-ups in Europe</li>\n<li>Work in a diverse and multicultural team</li>\n<li>€1,500 annual training budget plus internal training</li>\n<li>Pension plan, travel reimbursement, and wellness perks</li>\n<li>28 paid holiday days + 2 additional days to relax in 2026</li>\n<li>Work from anywhere for 4 weeks/year</li>\n<li>An inclusive and international work environment with a whole lot of fun thrown in!</li>\n<li>Apple MacBook and tools</li>\n<li>€200 Home Office budget</li>\n</ul>","url":"https://yubhub.co/jobs/job_21f5f6c3-734","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Tellent","sameAs":"https://careers.tellent.com","logo":"https://logos.yubhub.co/careers.tellent.com.png"},"x-apply-url":"https://careers.tellent.com/o/data-engineer","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":"EUR 70000–90000 / year","x-skills-required":["Data Engineering","Cloud environments","dbt","Airbyte/Fivetran","BigQuery","GCP ecosystem","Infrastructure-as-Code","Terraform","Airflow","Dagster","Python","SQL","CI/CD best practices","DevOps practices"],"x-skills-preferred":[],"datePosted":"2026-04-18T22:12:06.548Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Amsterdam"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Data Engineering, Cloud environments, dbt, Airbyte/Fivetran, BigQuery, GCP ecosystem, Infrastructure-as-Code, Terraform, Airflow, Dagster, Python, SQL, CI/CD best practices, DevOps 
practices","baseSalary":{"@type":"MonetaryAmount","currency":"EUR","value":{"@type":"QuantitativeValue","minValue":70000,"maxValue":90000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_6ddce508-2c7"},"title":"ML Systems Engineer, Robotics","description":"<p>We&#39;re looking for an experienced ML Systems Engineer to join our Physical AI team. As an ML Systems Engineer, you will design and build platforms for scalable, reliable, and efficient serving of foundation models specifically tailored for physical agents. Our platform powers cutting-edge research and production systems, supporting both internal research discovery and external customer use cases for autonomous vehicles and robotics.</p>\n<p>In this role, you will:</p>\n<ul>\n<li>Build &amp; Scale: Maintain fault-tolerant, high-performance systems for serving robotics-related models and foundation models at scale, ensuring low latency for real-time applications.</li>\n<li>Platform Development: Build an internal platform to empower model capability discovery, enabling faster iteration cycles for research teams working on robotics.</li>\n<li>Collaborate: Work closely with Robotics researchers and Computer Vision engineers to integrate and optimize models for production and research environments.</li>\n<li>Design Excellence: Conduct architecture and design reviews to uphold best practices in system scalability, reliability, and security.</li>\n<li>Observability: Develop monitoring and observability solutions to ensure system health and real-time performance tracking of model inference.</li>\n<li>Lead: Own projects end-to-end, from requirements gathering to implementation, in a fast-paced, cross-functional environment.</li>\n</ul>\n<p>Ideally, you&#39;d have:</p>\n<ul>\n<li>Experience: 4+ years of experience building large-scale, high-performance backend systems, with deep experience in machine learning 
infrastructure.</li>\n<li>Algorithm Optimization: Deep experience optimizing computer vision and other machine learning algorithms for cloud environments, including GPU-level algorithm optimizations (e.g., CUDA, kernel tuning).</li>\n<li>Programming: Strong skills in one or more systems-level languages (e.g., Python, Go, Rust, C++).</li>\n<li>Systems Fundamentals: Deep understanding of serving and routing fundamentals (e.g., rate limiting, load balancing, compute budgets, concurrency) for data-intensive applications.</li>\n<li>Infrastructure: Experience with containers (Docker), orchestration (Kubernetes), and cloud providers (AWS/GCP).</li>\n<li>IaC: Familiarity with infrastructure as code (e.g., Terraform).</li>\n<li>Mindset: Proven ability to solve complex problems and work independently in fast-moving environments.</li>\n</ul>\n<p>Nice to Haves:</p>\n<ul>\n<li>Exposure to Vision-Language-Action (VLA) models.</li>\n<li>Knowledge of high-performance video processing (e.g., FFmpeg, NVDEC/NVENC) or 3D data handling (point clouds).</li>\n<li>Familiarity with robotics middleware (e.g., ROS/ROS2) or AV data formats.</li>\n</ul>\n<p>Compensation packages at Scale for eligible roles include base salary, equity, and benefits. 
The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position, determined by work location and additional factors, including job-related skills, experience, interview performance, and relevant education or training.</p>","url":"https://yubhub.co/jobs/job_6ddce508-2c7","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Scale","sameAs":"https://www.scale.com/","logo":"https://logos.yubhub.co/scale.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/scaleai/jobs/4663053005","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$227,200-$284,000 USD","x-skills-required":["Machine Learning","Backend Systems","Cloud Environments","GPU-Level Algorithm Optimizations","Systems-Level Languages","Containerization","Orchestration","Cloud Providers","Infrastructure as Code"],"x-skills-preferred":["Vision-Language-Action Models","High-Performance Video Processing","3D Data Handling","Robotics Middleware","AV Data Formats"],"datePosted":"2026-04-18T15:59:25.195Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Machine Learning, Backend Systems, Cloud Environments, GPU-Level Algorithm Optimizations, Systems-Level Languages, Containerization, Orchestration, Cloud Providers, Infrastructure as Code, Vision-Language-Action Models, High-Performance Video Processing, 3D Data Handling, Robotics Middleware, AV Data 
Formats","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":227200,"maxValue":284000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_e355a4a3-c92"},"title":"Senior Database Reliability Engineer (DBRE) - PostgreSQL","description":"<p>We are looking for a highly skilled Database Reliability Engineer (DBRE) with deep expertise in PostgreSQL at scale and solid experience with MySQL. In this role, you will design, operationalize, and optimize the data persistence layer that powers our large-scale, mission-critical systems.</p>\n<p>You will work closely with SRE, Platform, and Engineering teams to ensure performance, reliability, automation, and operational excellence across our database environment. This is a hands-on engineering role focused on building resilient data infrastructure, not just administering it.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Design, implement, and operate highly available PostgreSQL clusters (physical replication, logical replication, sharding/partitioning, failover automation).</li>\n<li>Optimise query performance, indexing strategies, schema design, and storage engines.</li>\n<li>Perform capacity planning, growth forecasting, and workload modelling.</li>\n<li>Own high-availability strategies including automatic failover, multi-AZ/multi-region setups, and disaster recovery.</li>\n</ul>\n<p><strong>Automation &amp; Tooling</strong></p>\n<ul>\n<li>Develop automation for any and all tasks including but not limited to: provisioning, configuration, backups, failovers, vacuum tuning, and schema management using tools such as Terraform, Ansible, Kubernetes Operators, or custom tooling.</li>\n<li>Build monitoring, alerting, and self-healing systems for PostgreSQL and MySQL.</li>\n</ul>\n<p><strong>Operations &amp; Incident Response</strong></p>\n<ul>\n<li>Lead response during database 
incidents, performance regressions, replication lag, deadlocks, bloat issues, storage failures, etc.</li>\n<li>Conduct root-cause analysis and implement permanent fixes.</li>\n</ul>\n<p><strong>Cross-Functional Collaboration</strong></p>\n<ul>\n<li>Partner with software engineers to review SQL, optimise schemas, and ensure efficient use of PostgreSQL features.</li>\n<li>Provide guidance on database-related design patterns, migrations, version upgrades, and best practices.</li>\n</ul>\n<p><strong>Required Qualifications</strong></p>\n<ul>\n<li>4+ years of hands-on PostgreSQL experience in high-volume, distributed, or large-scale production environments.</li>\n<li>Strong knowledge of PostgreSQL internals (WAL, MVCC, bloat/vacuum tuning, query planner, indexing, logical replication).</li>\n<li>Production experience with MySQL (InnoDB internals, replication, performance tuning).</li>\n<li>Advanced SQL and strong understanding of schema design and query optimisation.</li>\n<li>Experience with Linux systems, networking fundamentals, and systems troubleshooting.</li>\n<li>Experience building automation with Go or Python.</li>\n<li>Production experience with monitoring tools (Prometheus, Grafana, Datadog, PMM, pg_stat_statements, etc.).</li>\n<li>Hands-on experience with cloud environments (AWS or GCP).</li>\n</ul>\n<p><strong>Preferred/Bonus Qualifications</strong></p>\n<ul>\n<li>Experience with PgBouncer, HAProxy, or other connection-pooling/load-balancing layers.</li>\n<li>Exposure to event streaming (Kafka, Debezium) and change data capture.</li>\n<li>Experience supporting 24/7 production environments with on-call rotation.</li>\n<li>Contributions to the open-source PostgreSQL ecosystem.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_e355a4a3-c92","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Okta","sameAs":"https://www.okta.com/","logo":"https://logos.yubhub.co/okta.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/okta/jobs/7437947","x-work-arrangement":"hybrid","x-experience-level":"mid-senior","x-job-type":"full-time","x-salary-range":"$152,000-$228,000 USD","x-skills-required":["PostgreSQL","MySQL","SQL","Linux","Networking","Automation","Cloud Environments","Monitoring Tools"],"x-skills-preferred":["PgBouncer","HAProxy","Event Streaming","Change Data Capture"],"datePosted":"2026-04-18T15:57:53.990Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, California"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"PostgreSQL, MySQL, SQL, Linux, Networking, Automation, Cloud Environments, Monitoring Tools, PgBouncer, HAProxy, Event Streaming, Change Data Capture","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":152000,"maxValue":228000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_588dfb0e-611"},"title":"Solutions Architect - Kubernetes","description":"<p>As a Solutions Architect at CoreWeave, you will play a vital role in helping customers succeed with our cloud infrastructure offerings, focusing on Kubernetes solutions within high-performance compute (HPC) environments.</p>\n<p>Your responsibilities will include serving as the primary technical point of contact for customers, establishing strong technical relationships and ensuring their success with CoreWeave&#39;s cloud infrastructure offerings.</p>\n<p>You will collaborate closely with customers to understand their unique business needs and create, prototype, and deploy tailored 
solutions that align with their requirements.</p>\n<p>You will lead proof of concept initiatives to showcase the value and viability of CoreWeave&#39;s solutions within specific environments.</p>\n<p>You will drive technical leadership and direction during customer meetings, presentations, and workshops, addressing any technical queries or concerns that arise.</p>\n<p>You will act as a virtual member of CoreWeave&#39;s Kubernetes product and engineering teams, identifying opportunities for product enhancement and collaborating with engineers to implement your suggestions.</p>\n<p>You will offer valuable insights on product features, functionality, and performance, contributing regularly to discussions about product strategy and architecture.</p>\n<p>You will conduct periodic technical reviews and assessments of customer workloads, pinpointing opportunities for workload optimization and suggesting suitable solutions.</p>\n<p>You will stay informed of the latest developments and trends in Kubernetes, cloud computing and infrastructure, sharing your thought leadership with customers and internal stakeholders.</p>\n<p>You will lead the prototyping and initiation of research and development efforts for emerging products and solutions, delivering prototypes and key insights for internal consumption.</p>\n<p>You will represent CoreWeave at conferences and industry events, with occasional travel as required.</p>\n<p>To be successful in this role, you will need to have a B.S. 
in Computer Science or a related technical discipline, or equivalent experience.</p>\n<p>You will also need to have 7+ years of proven experience as a Solutions Architect, engineer, researcher, or technical account manager in cloud infrastructure, focusing on building distributed systems or HPC/cloud services, with expertise focused on scalable Kubernetes solutions.</p>\n<p>You will need to be fluent in cloud computing concepts, architecture, and technologies with hands-on experience in designing and implementing cloud solutions.</p>\n<p>You will need a proven track record of building customer relationships, communicating clearly, and breaking down complex technical concepts for both technical and non-technical audiences.</p>\n<p>You will need to be familiar with NVIDIA GPUs typically used in AI/ML applications and associated technologies such as Infiniband and NVIDIA Collective Communications Library (NCCL).</p>\n<p>You will need to have experience with running large-scale Artificial Intelligence/Machine Learning (AI/ML) training and inference workloads on technologies such as Slurm and Kubernetes.</p>\n<p>Preferred qualifications include code contributions to open-source inference frameworks, experience with scripting and automation related to Kubernetes clusters and workloads, experience with building solutions across multi-cloud environments, and client or customer-facing publications/talks on latency, optimization, or advanced model-server architectures.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_588dfb0e-611","directApply":true,"hiringOrganization":{"@type":"Organization","name":"CoreWeave","sameAs":"https://www.coreweave.com","logo":"https://logos.yubhub.co/coreweave.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/coreweave/jobs/4557835006","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$165,000 to $220,000","x-skills-required":["Kubernetes","Cloud Computing","High-Performance Compute (HPC)","Distributed Systems","Cloud Infrastructure","Scalable Solutions","NVIDIA GPUs","Infiniband","NVIDIA Collective Communications Library (NCCL)","Slurm","Kubernetes Clusters"],"x-skills-preferred":["Code Contributions to Open-Source Inference Frameworks","Scripting and Automation Related to Kubernetes Clusters and Workloads","Building Solutions Across Multi-Cloud Environments","Client or Customer-Facing Publications/Talks on Latency, Optimization, or Advanced Model-Server Architectures"],"datePosted":"2026-04-18T15:57:29.779Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Livingston, NJ / New York, NY / Sunnyvale, CA / Bellevue, WA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Kubernetes, Cloud Computing, High-Performance Compute (HPC), Distributed Systems, Cloud Infrastructure, Scalable Solutions, NVIDIA GPUs, Infiniband, NVIDIA Collective Communications Library (NCCL), Slurm, Kubernetes Clusters, Code Contributions to Open-Source Inference Frameworks, Scripting and Automation Related to Kubernetes Clusters and Workloads, Building Solutions Across Multi-Cloud Environments, Client or Customer-Facing Publications/Talks on Latency, Optimization, or Advanced Model-Server 
Architectures","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":165000,"maxValue":220000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_f4cd384f-6ed"},"title":"Senior Software Engineer, Release Engineering","description":"<p>We are seeking a Senior Software Engineer to join our Release Engineering team, focused on building and improving the systems that enable automated, reliable, and scalable software delivery across Temporal&#39;s platform.</p>\n<p>In this role, you will participate in the full software lifecycle, from design and implementation to deployment and long-term operation, and will collaborate with engineering teams to evolve release automation, improve tooling, and reduce manual steps in how we build and ship Temporal.</p>\n<p>Key responsibilities include designing, building, and maintaining tools and systems that support release automation and deployment workflows, writing clean, reliable, and concurrent code that supports distributed systems, collaborating with cross-functional teams to understand and improve release quality and developer productivity, documenting technical designs, deployment practices, and operational procedures, and participating in small-team design reviews and contributing practical engineering solutions.</p>\n<p>As a Senior Software Engineer, you will have the opportunity to explore new ways to use Temporal to power the release and deployment lifecycle, deepen your understanding of Temporal&#39;s architecture and service interactions, and experiment with new automation patterns, testing strategies, and workflow designs that increase release confidence.</p>\n<p>To be successful in this role, you will need strong coding ability, especially in languages used at Temporal (e.g., Go, Java, or similar), a solid understanding of concurrency, distributed systems, and multi-threaded 
programming, experience contributing to backend systems, tooling, infrastructure, or developer workflows, a track record of solving moderately complex problems with reliable, maintainable solutions, and the ability to collaborate effectively in a remote, fast-paced environment.</p>\n<p>Additionally, you will have familiarity with release automation concepts, CI/CD pipelines, build tools, or deployment orchestration, experience with cloud environments (AWS, GCP) and container tooling, and exposure to distributed systems orchestration, observability tooling, or platform engineering.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_f4cd384f-6ed","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Temporal","sameAs":"https://temporal.io/","logo":"https://logos.yubhub.co/temporal.io.png"},"x-apply-url":"https://job-boards.greenhouse.io/temporaltechnologies/jobs/5090613007","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$176,000 - $237,600","x-skills-required":["Go","Java","Concurrency","Distributed Systems","Multi-threaded Programming","Backend Systems","Tooling","Infrastructure","Developer Workflows","Release Automation","CI/CD Pipelines","Build Tools","Deployment Orchestration","Cloud Environments","Container Tooling","Distributed Systems Orchestration","Observability Tooling","Platform Engineering"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:57:07.513Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"United States - Remote Opportunity"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Go, Java, Concurrency, Distributed Systems, Multi-threaded Programming, Backend Systems, Tooling, Infrastructure, Developer Workflows, Release Automation, CI/CD 
Pipelines, Build Tools, Deployment Orchestration, Cloud Environments, Container Tooling, Distributed Systems Orchestration, Observability Tooling, Platform Engineering","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":176000,"maxValue":237600,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_9cb24149-c62"},"title":"Principal Software Engineer, Productivity","description":"<p>We are looking for a Principal-level engineer who is passionate about building and evolving the developer productivity ecosystem used by the entire Workflows Engineering organisation.</p>\n<p>As a productivity engineer, you&#39;ll work with both our Engineering and Site Reliability teams, owning our developer CLI (Golang) and Kubernetes tooling, automated release processes, and CI/CD systems in CircleCI.</p>\n<p>Job Duties and Responsibilities:</p>\n<ul>\n<li>Collaborate with the SRE and Engineering teams to manage, extend, and enhance existing developer productivity and platform tooling for local and remote Kubernetes environments</li>\n<li>Own and optimise CI/CD pipelines in CircleCI</li>\n<li>Assist in weekly release orchestration</li>\n<li>Automate and improve processes via Golang tooling and Okta Workflows</li>\n</ul>\n<p>Minimum Required Knowledge, Skills, and Abilities:</p>\n<ul>\n<li>10+ years of deep understanding of software engineering processes, agile framework, tools (e.g.: programming proficiency in a language, preferably Go or similar compiled language), methods, test development, algorithms, and data structures</li>\n<li>Experience with Cloud Native Technologies (Kubernetes, ArgoCD, Crossplane, Docker)</li>\n<li>Passionate about learning new technical ecosystems</li>\n<li>Interested in working with container deployment and orchestration technologies at scale, with familiarity of the fundamentals to include service discovery, 
deployments, monitoring, scheduling, and load balancing</li>\n</ul>\n<p>Preferred Qualifications:</p>\n<ul>\n<li>Experience with CI/CD Systems (such as CircleCI or GitHub Actions)</li>\n<li>Experience with development and deployment in a hosted cloud environment, preferably AWS</li>\n</ul>\n<p>Education and Training:</p>\n<p>BS, MS, or PhD in Computer Science or related field</p>\n<p>The annual base salary range for this position for candidates located in Canada is between $177,000-$265,000 CAD.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_9cb24149-c62","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Okta","sameAs":"https://www.okta.com/","logo":"https://logos.yubhub.co/okta.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/okta/jobs/7361555","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$177,000-$265,000 CAD","x-skills-required":["software engineering processes","agile framework","Go","Kubernetes","ArgoCD","Crossplane","Docker"],"x-skills-preferred":["CI/CD Systems","development and deployment in a hosted cloud environment"],"datePosted":"2026-04-18T15:56:47.868Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Toronto, Ontario, Canada"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"software engineering processes, agile framework, Go, Kubernetes, ArgoCD, Crossplane, Docker, CI/CD Systems, development and deployment in a hosted cloud environment","baseSalary":{"@type":"MonetaryAmount","currency":"CAD","value":{"@type":"QuantitativeValue","minValue":177000,"maxValue":265000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_d799d883-0dd"},"title":"Solutions Architect - 
Networking","description":"<p>As a Solutions Architect at CoreWeave, you will play a vital role in helping customers succeed with our cloud infrastructure offerings. You will have the opportunity to demonstrate thought leadership and engage hands-on throughout our customers&#39; entire lifecycle. From establishing their Kubernetes environment to developing proofs of concept, onboarding, and optimizing workloads, you will lead innovation at every turn.</p>\n<p>In this role, you will:</p>\n<p>Serve as the primary technical point of contact for customers, establishing strong technical relationships and ensuring their success with CoreWeave&#39;s cloud infrastructure offerings, focusing on networking technologies within high-performance compute (HPC) environments. Collaborate closely with customers to understand their unique business needs and create, prototype, and deploy tailored solutions that align with their requirements. Lead proof of concept initiatives to showcase the value and viability of CoreWeave&#39;s solutions within specific environments. Drive technical leadership and direction during customer meetings, presentations, and workshops, addressing any technical queries or concerns that arise. Act as a virtual member of CoreWeave&#39;s Networking product and engineering teams, identifying opportunities for product enhancement and collaborating with engineers to implement your suggestions. Offer valuable insights on product features, functionality, and performance, contributing regularly to discussions about product strategy and architecture. Conduct periodic technical reviews and assessments of customer workloads, pinpointing opportunities for workload optimization and suggesting suitable solutions. Stay informed of the latest developments and trends in Kubernetes, cloud computing and infrastructure, sharing your thought leadership with customers and internal stakeholders. 
Lead the prototyping and initiation of research and development efforts for emerging products and solutions, delivering prototypes and key insights for internal consumption. Represent CoreWeave at conferences and industry events, with occasional travel as required.</p>\n<p>Who You Are:</p>\n<p>B.S. in Computer Science or a related technical discipline, or equivalent experience. 7+ years of proven experience as a Solutions Architect, engineer, researcher, or technical account manager in cloud infrastructure focusing on building distributed systems or HPC/cloud services, with expertise focused on infrastructure networking. Fluency in cloud computing concepts, architecture, and technologies with hands-on experience in designing and implementing cloud solutions. Proven track record of building customer relationships, communicating clearly, and breaking down complex technical concepts for both technical and non-technical audiences. Expertise with a broad range of networking technologies and topics, and the familiarity to understand needs and use cases as they relate to securing and enabling high-performance networking environments. 
Experience with managing infrastructure networking, Kubernetes CSI management, and private networking concepts. Familiarity with NVIDIA GPUs typically used in AI/ML applications and associated technologies such as Infiniband and NVIDIA Collective Communications Library (NCCL).</p>\n<p>Preferred:</p>\n<p>Code contributions to open-source inference frameworks. Experience with scripting and automation related to network technologies. Experience with building solutions across multi-cloud environments. Client or customer-facing publications/talks on latency, optimization, or advanced model-server architectures.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_d799d883-0dd","directApply":true,"hiringOrganization":{"@type":"Organization","name":"CoreWeave","sameAs":"https://www.coreweave.com","logo":"https://logos.yubhub.co/coreweave.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/coreweave/jobs/4568528006","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$165,000 to $220,000","x-skills-required":["cloud computing","Kubernetes","infrastructure networking","high-performance computing","networking technologies","NVIDIA GPUs","Infiniband","NVIDIA Collective Communications Library (NCCL)"],"x-skills-preferred":["open-source inference frameworks","scripting and automation","multi-cloud environments","latency, optimization, or advanced model-server architectures"],"datePosted":"2026-04-18T15:56:27.053Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Livingston, NJ / New York, NY / Sunnyvale, CA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"cloud computing, Kubernetes, infrastructure networking, high-performance computing, networking technologies, NVIDIA GPUs, Infiniband, NVIDIA Collective Communications 
Library (NCCL), open-source inference frameworks, scripting and automation, multi-cloud environments, latency, optimization, or advanced model-server architectures","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":165000,"maxValue":220000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_cef9a3ff-75c"},"title":"Technical Program Manager, Platform","description":"<p>As a Technical Program Manager for Platform, you&#39;ll own the programs that stand up and operate Anthropic&#39;s APIs and serving infrastructure across multiple cloud environments.</p>\n<p>This means driving deployments from scoping through production, running the platform work that spans them, and working across API, Platform Foundations, Security, our cloud provider counterparts, and whoever else is on the critical path when dependencies and tradeoffs pile up.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Own end-to-end program execution for Anthropic’s API across major cloud deployments, from scoping through production launch and steady-state operations</li>\n</ul>\n<ul>\n<li>Drive the platform programs that cut across individual deployments: the shared foundations that get built once and reused, not rebuilt per cloud</li>\n</ul>\n<ul>\n<li>Act as a primary coordination point with cloud provider counterparts, keeping engagement clean across multiple internal teams with touchpoints into the same partner</li>\n</ul>\n<ul>\n<li>Partner with engineering leadership to turn technical direction into executable plans with clear owners, dependencies, and risk tracking</li>\n</ul>\n<ul>\n<li>Build the program scaffolding (roadmaps, status reporting, decision logs, escalation paths) that lets a fast-moving org stay aligned without slowing down</li>\n</ul>\n<ul>\n<li>Drive the hard sequencing conversations when partner commitments, engineering bandwidth, and priorities 
are in tension, and surface them to leadership with a recommendation</li>\n</ul>\n<ul>\n<li>Identify where program coverage is thin relative to the load and help shape how we staff around it</li>\n</ul>\n<p>You may be a good fit if you:</p>\n<ul>\n<li>Have 10+ years of technical program management experience, including ownership of large infrastructure or platform programs with many engineering teams and external partners in the mix</li>\n</ul>\n<ul>\n<li>Have deep technical fluency in cloud APIs, infrastructure, distributed systems, or platform engineering, enough to be a credible partner to senior engineers on architecture and sequencing, not just a tracker of their decisions</li>\n</ul>\n<ul>\n<li>Have run programs spanning organizational boundaries where you had no direct authority over most of the people whose work you depended on, and delivered anyway</li>\n</ul>\n<ul>\n<li>Have direct experience with multi-cloud or hybrid cloud environments, large-scale migrations, or building platform abstraction layers</li>\n</ul>\n<ul>\n<li>Have worked with major cloud providers (AWS, GCP, Azure) or similar large technology partners, and know how to keep those relationships productive when priorities diverge</li>\n</ul>\n<ul>\n<li>Are comfortable operating in ambiguity on the long arc while being ruthlessly concrete on what ships this quarter and who owns it</li>\n</ul>\n<ul>\n<li>Have a track record of making a program get cheaper to run the second and third time, not just landing the first instance</li>\n</ul>\n<ul>\n<li>Thrive in environments where the plan you wrote last month needs rewriting, without losing the thread on what matters</li>\n</ul>\n<p>Strong candidates may also:</p>\n<ul>\n<li>Have experience with production serving infrastructure, inference systems, or ML platform work</li>\n</ul>\n<ul>\n<li>Have moved between senior IC and management roles, or have interest in doing so</li>\n</ul>\n<ul>\n<li>Have worked at a company rebuilding systems and org in 
flight during rapid scale-up</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_cef9a3ff-75c","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com/","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5157003008","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$365,000-$435,000 USD","x-skills-required":["Cloud APIs","Infrastructure","Distributed Systems","Platform Engineering","Program Management","Cloud Providers","Multi-Cloud Environments","Hybrid Cloud Environments","Large-Scale Migrations","Platform Abstraction Layers"],"x-skills-preferred":["Production Serving Infrastructure","Inference Systems","ML Platform Work","Senior IC and Management Roles","Rapid Scale-Up"],"datePosted":"2026-04-18T15:55:49.869Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA | New York City, NY | Seattle, WA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Cloud APIs, Infrastructure, Distributed Systems, Platform Engineering, Program Management, Cloud Providers, Multi-Cloud Environments, Hybrid Cloud Environments, Large-Scale Migrations, Platform Abstraction Layers, Production Serving Infrastructure, Inference Systems, ML Platform Work, Senior IC and Management Roles, Rapid Scale-Up","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":365000,"maxValue":435000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_611720bf-294"},"title":"Senior Application Security Engineer","description":"<p>Why join us</p>\n<p>Brex is a financial 
platform that enables companies to spend smarter and move faster in over 200 markets. It combines global corporate cards and banking with intuitive spend management, bill pay, and travel software.</p>\n<p>As a Senior Application Security Engineer, you will focus on finding and responding to security vulnerabilities across the Brex platform. In this role, you will perform code reviews, design reviews, penetration testing, and vulnerability management. You will develop and maintain tooling to perform static and dynamic testing of the Brex platform and tooling which supports secure developer workflows.</p>\n<p>Application Security is part of our wider Financial Scale organization, which means you will work closely with Security Operations, GRC, Product Security, Front End Platform, and IT Infrastructure teams.</p>\n<p>We’re looking for individuals with a strong background and interest in penetration testing. You should have a demonstrated ability to find vulnerabilities in complex systems and craft exploits to demonstrate business impact.</p>\n<p>This role is highly cross-functional and collaborative; you will have the opportunity to work with every engineering team across Brex.</p>\n<p>Building a world-class financial service requires world-class security. 
Brex is pioneering the next wave of AI-driven financial services for dynamic, high-impact companies like Coinbase, Robinhood, and Anthropic.</p>\n<p>Responsibilities</p>\n<ul>\n<li>Identify vulnerabilities, demonstrate business impact, and articulate the risk of specific vulnerabilities to drive prioritization efforts</li>\n</ul>\n<ul>\n<li>Perform penetration testing and design reviews, looking for vulnerabilities and insecure designs, and work with engineering and product to design secure product features</li>\n</ul>\n<ul>\n<li>Maintain and build internal tools to automate security efforts, perform SAST and DAST testing of the Brex platform, and support secure development practices</li>\n</ul>\n<ul>\n<li>Build and contribute to a culture of collaborative security excellence through technical leadership, learning sessions, and mentorship within the team and wider organization</li>\n</ul>\n<p>Requirements</p>\n<ul>\n<li>5+ years of work experience in an Application Security or related role</li>\n</ul>\n<ul>\n<li>Ability to find vulnerabilities in complex systems, demonstrating business impact through custom attack chains</li>\n</ul>\n<ul>\n<li>Experience with a wide range of secure development activities including threat modeling, developer education, and incident response</li>\n</ul>\n<ul>\n<li>Knowledge of Python, scripting languages, and AI/agentic workflows to automate tasks, build tools, and improve productivity</li>\n</ul>\n<ul>\n<li>Collaborative mindset paired with strong written and verbal communication skills</li>\n</ul>\n<p>Bonus points</p>\n<ul>\n<li>Proficiency with Kotlin, gRPC, GraphQL, Kubernetes</li>\n</ul>\n<ul>\n<li>Previous experience as a software engineer</li>\n</ul>\n<ul>\n<li>Consultancy experience performing web application security reviews</li>\n</ul>\n<ul>\n<li>Experience with securing distributed systems in AWS and cloud environments</li>\n</ul>\n<ul>\n<li>Experience with pentesting and securing agentic features and 
systems</li>\n</ul>\n<ul>\n<li>Contributions to the wider technical community, open source, public research, mentorship, community organizing, blogging, CVEs, presentations, etc</li>\n</ul>\n<p>Experience submitting to bug bounty programs or responsible disclosure programs</p>\n<p>Compensation</p>\n<p>The expected salary range for this role is $192,000 - $240,000. However, the starting base pay will depend on a number of factors including the candidate’s location, skills, experience, market demands, and internal pay parity.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_611720bf-294","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Brex","sameAs":"https://brex.com/","logo":"https://logos.yubhub.co/brex.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/brex/jobs/8249884002","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$192,000 - $240,000","x-skills-required":["Python","Secure development activities","Threat modeling","Developer education","Incident response","AI/agentic workflows","Collaborative mindset","Strong written and verbal communication skills"],"x-skills-preferred":["Kotlin","gRPC","GraphQL","Kubernetes","Software engineering","Web application security reviews","Distributed systems in AWS and cloud environments","Pentesting and securing agentic features and systems","Contributions to the wider technical community"],"datePosted":"2026-04-18T15:55:36.756Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Seattle, Washington, United States"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Finance","skills":"Python, Secure development activities, Threat modeling, Developer education, Incident response, AI/agentic workflows, Collaborative mindset, Strong written and verbal communication 
skills, Kotlin, gRPC, GraphQL, Kubernetes, Software engineering, Web application security reviews, Distributed systems in AWS and cloud environments, Pentesting and securing agentic features and systems, Contributions to the wider technical community","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":192000,"maxValue":240000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_0e8e8a8f-db0"},"title":"Staff Software Engineer - Node.js (JavaScript or TypeScript)","description":"<p>Staff Software Engineer - Node.js (JavaScript or TypeScript)</p>\n<p>Company Overview</p>\n<p>Okta is a developer-friendly identity platform that simplifies authentication and authorization for applications.</p>\n<p>Role Description</p>\n<p>We are hiring for a new team within Core Identity, the Engineering organization entrusted with the very heart of the Auth0 application. Our teams own the authentication pipeline, identity protocols, user sessions, and all the fundamental concepts and foundational elements that underpin our entire product.</p>\n<p>As a Staff Engineer for this new team, named Core Frontier, you will lead at the vital intersection of deep product innovation and the global customer experience. Your mission is to ensure that the sophisticated features developed across the Core Identity organization (such as Native to Web, Cross-App Access, and Custom Token Exchange) are hardened, scaled, and seamlessly integrated into the Auth0 ecosystem.</p>\n<p>Responsibilities</p>\n<ul>\n<li>Be a founding Staff member of this new team in Bengaluru, setting the technical bar and engineering culture for our growing presence in the region and working in collaboration with our global teams in Europe and North America</li>\n<li>Lead the design and delivery of innovative features that extend the capabilities of Auth0’s platform to help our customers innovate around the world securely and delightfully</li>\n<li>Take ownership of the strategic technical quality, security, reliability, and scalability of our systems. You&#39;ll drive architectural improvements and advocate for engineering best practices</li>\n<li>Identify architectural gaps in our current &quot;frontier&quot; and advocate for long-term improvements that benefit the entire Core Identity organization</li>\n<li>Thrive in a highly collaborative and cross-functional environment, working with talented engineers and partners across Product, Security, Design, Architecture, and QA to deliver features that delight our customers and ensure a unified technical vision</li>\n<li>Deepen or gain expertise in identity, security, and modern cloud technologies (AWS, Azure) while working on distributed systems at scale</li>\n<li>Mentor other engineers and contribute to our culture of technical excellence and continuous improvement</li>\n<li>Participate in an on-call rotation to ensure our critical services remain healthy and reliable</li>\n</ul>\n<p>Requirements</p>\n<ul>\n<li>8+ years of professional software development experience, or equivalent</li>\n<li>Proficiency in designing and building services with Node.js (JavaScript or TypeScript)</li>\n<li>Experience creating and maintaining public and secure APIs, as well as front ends</li>\n<li>Experience designing, building, and operating distributed systems in a cloud environment (e.g., AWS, Azure)</li>\n<li>A strong commitment to quality, with experience in various testing strategies (e.g., unit, integration, end-to-end)</li>\n<li>A proven track record of driving technical alignment across multiple teams and mentoring senior-level individual contributors</li>\n<li>A product-oriented mindset, with the ability to understand customer needs and work collaboratively to find effective solutions</li>\n</ul>\n<p>Nice to Have</p>\n<ul>\n<li>Experience in the Identity and Access Management (IAM) domain</li>\n<li>Knowledge of security engineering principles and application security best practices</li>\n</ul>\n<p>What You Can Look Forward To</p>\n<ul>\n<li>Amazing Benefits</li>\n<li>Making Social Impact</li>\n<li>Developing Talent and Fostering Connection + Community at Okta</li>\n</ul>\n<p>Okta Experience</p>\n<p>Okta cultivates a dynamic work environment, providing the best tools, technology and benefits to empower our employees to work productively in a setting that best and uniquely suits their needs.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_0e8e8a8f-db0","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Okta","sameAs":"https://www.okta.com/","logo":"https://logos.yubhub.co/okta.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/okta/jobs/7602354","x-work-arrangement":"hybrid","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Node.js","JavaScript","TypeScript","APIs","Frontends","Distributed Systems","Cloud Environment","AWS","Azure","Security","Identity and Access Management"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:53:28.361Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Bengaluru, 
India"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Node.js, JavaScript, TypeScript, APIs, Frontends, Distributed Systems, Cloud Environment, AWS, Azure, Security, Identity and Access Management"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_1c9a6540-bc6"},"title":"Senior Security Operations Engineer","description":"<p>Join Brex, the intelligent finance platform that enables companies to spend smarter and move faster in more than 200 markets. As a Senior Security Operations Engineer, you will focus on preventing, detecting, and responding to security threats across Brex&#39;s corporate and cloud environments. You will use existing systems and develop tools to improve our security capabilities.</p>\n<p>Our team is responsible for functions across the corporate security, detection &amp; response, and infrastructure security domains, and we perform systems engineering and automation to support those functions. Security Operations is part of our wider Trust &amp; IT organization, which means you will have the opportunity to work closely with Application Security, Corporate Engineering, GRC, and IT to improve security configurations, drive positive employee behaviors, and generally work to prevent events from becoming incidents.</p>\n<p>You will also help build and maintain our team’s open source project Substation and have the opportunity to contribute to the Brex Tech Blog. You’ll be part of a team that actively contributes to the wider security community and has a commitment to mentorship and engineering excellence.</p>\n<p>We’re looking for individuals with a strong background and interest in detecting, responding to, and resolving security incidents and security challenges. You should be comfortable dealing with lots of moving pieces, changing priorities, and new technologies, while having a keen eye for detail. 
Most importantly, you should be enthusiastic about working with a variety of backgrounds, roles, and people across Brex.</p>\n<p>Building a world-class financial service requires world-class security.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_1c9a6540-bc6","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Brex","sameAs":"https://brex.com/","logo":"https://logos.yubhub.co/brex.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/brex/jobs/8339287002","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$192,000 CAD - $240,000 CAD","x-skills-required":["CI/CD systems","DevOps workflows","Cloud environments","Security services and tools","Go and Python programming"],"x-skills-preferred":["Go","Securing distributed systems in AWS, cloud and Kubernetes environments"],"datePosted":"2026-04-18T15:53:26.384Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Vancouver, British Columbia, Canada"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Finance","skills":"CI/CD systems, DevOps workflows, Cloud environments, Security services and tools, Go and Python programming, Go, Securing distributed systems in AWS, cloud and Kubernetes environments","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":192000,"maxValue":240000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_1c7dc0cb-87c"},"title":"Solutions Architect - Storage","description":"<p>As a Solutions Architect at CoreWeave, you will play a vital and dynamic role in helping customers succeed with our cloud infrastructure offerings. 
You will serve as the primary technical point of contact for customers, establishing strong technical relationships and ensuring their success with CoreWeave&#39;s cloud infrastructure offerings, focusing on storage technologies within high-performance compute (HPC) environments.</p>\n<p>Collaborate closely with customers to understand their unique business needs and create, prototype, and deploy tailored solutions that align with their requirements. Lead proof of concept initiatives to showcase the value and viability of CoreWeave&#39;s solutions within specific environments.</p>\n<p>Drive technical leadership and direction during customer meetings, presentations, and workshops, addressing any technical queries or concerns that arise. Act as a virtual member of CoreWeave&#39;s Storage product and engineering teams, identifying opportunities for product enhancement and collaborating with engineers to implement your suggestions.</p>\n<p>Offer valuable insights on product features, functionality, and performance, contributing regularly to discussions about product strategy and architecture. Conduct periodic technical reviews and assessments of customer workloads, pinpointing opportunities for workload optimization and suggesting suitable solutions.</p>\n<p>Stay informed of the latest developments and trends in Kubernetes, cloud computing and infrastructure, sharing your thought leadership with customers and internal stakeholders. 
Lead the prototyping and initiation of research and development efforts for emerging products and solutions, delivering prototypes and key insights for internal consumption.</p>\n<p>Represent CoreWeave at conferences and industry events, with occasional travel as required.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_1c7dc0cb-87c","directApply":true,"hiringOrganization":{"@type":"Organization","name":"CoreWeave","sameAs":"https://www.coreweave.com","logo":"https://logos.yubhub.co/coreweave.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/coreweave/jobs/4568531006","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$165,000 to $220,000","x-skills-required":["cloud computing concepts","architecture","technologies","storage solutions","Kubernetes","cloud infrastructure","high-performance compute (HPC)","storage technologies","file system protocols","infrastructure systems"],"x-skills-preferred":["code contributions to open-source inference frameworks","scripting and automation related to storage technologies","building solutions across multi-cloud environments","client or customer-facing publications/talks on latency, optimization, or advanced model-server architectures"],"datePosted":"2026-04-18T15:52:39.508Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Livingston, NJ / New York, NY / Sunnyvale, CA / Bellevue, WA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"cloud computing concepts, architecture, technologies, storage solutions, Kubernetes, cloud infrastructure, high-performance compute (HPC), storage technologies, file system protocols, infrastructure systems, code contributions to open-source inference frameworks, scripting and automation related to storage technologies, building 
solutions across multi-cloud environments, client or customer-facing publications/talks on latency, optimization, or advanced model-server architectures","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":165000,"maxValue":220000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_6a7d182d-c49"},"title":"Solutions Architect - Kubernetes","description":"<p>As a Solutions Architect at CoreWeave, you will play a vital role in helping customers succeed with our cloud infrastructure offerings, focusing on Kubernetes solutions within high-performance compute (HPC) environments.</p>\n<p>Your primary responsibility will be to serve as the primary technical point of contact for customers, establishing strong technical relationships and ensuring their success with CoreWeave&#39;s cloud infrastructure offerings.</p>\n<p>You will collaborate closely with customers to understand their unique business needs and create, prototype, and deploy tailored solutions that align with their requirements.</p>\n<p>You will lead proof of concept initiatives to showcase the value and viability of CoreWeave&#39;s solutions within specific environments.</p>\n<p>You will drive technical leadership and direction during customer meetings, presentations, and workshops, addressing any technical queries or concerns that arise.</p>\n<p>You will act as a virtual member of CoreWeave&#39;s Kubernetes product and engineering teams, identifying opportunities for product enhancement and collaborating with engineers to implement your suggestions.</p>\n<p>You will offer valuable insights on product features, functionality, and performance, contributing regularly to discussions about product strategy and architecture.</p>\n<p>You will conduct periodic technical reviews and assessments of customer workloads, pinpointing opportunities for workload optimization and 
suggesting suitable solutions.</p>\n<p>You will stay informed of the latest developments and trends in Kubernetes, cloud computing and infrastructure, sharing your thought leadership with customers and internal stakeholders.</p>\n<p>You will lead the prototyping and initiation of research and development efforts for emerging products and solutions, delivering prototypes and key insights for internal consumption.</p>\n<p>You will represent CoreWeave at conferences and industry events, with occasional travel as required.</p>\n<p>To be successful in this role, you will need a proven track record of working as a Solutions Architect, engineer, researcher, or technical account manager in cloud infrastructure, focusing on building distributed systems or HPC/cloud services, with expertise in scalable Kubernetes solutions.</p>\n<p>You will also need fluency in cloud computing concepts, architecture, and technologies, with hands-on experience in designing and implementing cloud solutions.</p>\n<p>In addition, you will need a proven track record of building customer relationships, communicating clearly, and breaking down complex technical concepts for both technical and non-technical audiences.</p>\n<p>Preferred qualifications include code contributions to open-source inference frameworks, experience with scripting and automation related to Kubernetes clusters and workloads, experience with building solutions across multi-cloud environments, and client or customer-facing publications/talks on latency, optimization, or advanced model-server architectures.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_6a7d182d-c49","directApply":true,"hiringOrganization":{"@type":"Organization","name":"CoreWeave","sameAs":"https://www.coreweave.com","logo":"https://logos.yubhub.co/coreweave.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/coreweave/jobs/4649036006","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$165,000 to $225,000 SGD","x-skills-required":["Cloud computing concepts","Kubernetes solutions","High-performance compute (HPC) environments","Distributed systems","Cloud infrastructure"],"x-skills-preferred":["Code contributions to open-source inference frameworks","Scripting and automation related to Kubernetes clusters and workloads","Building solutions across multi-cloud environments","Client or customer-facing publications/talks on latency, optimization, or advanced model-server architectures"],"datePosted":"2026-04-18T15:52:11.835Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Singapore"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Cloud computing concepts, Kubernetes solutions, High-performance compute (HPC) environments, Distributed systems, Cloud infrastructure, Code contributions to open-source inference frameworks, Scripting and automation related to Kubernetes clusters and workloads, Building solutions across multi-cloud environments, Client or customer-facing publications/talks on latency, optimization, or advanced model-server architectures","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":165000,"maxValue":225000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_de3eafc7-e74"},"title":"Cloudforce One REACT Principal Consultant","description":"<p>About Us</p>\n<p>At Cloudflare, 
we are on a mission to help build a better Internet. We run one of the world&#39;s largest networks that powers millions of websites and other Internet properties for customers ranging from individual bloggers to SMBs to Fortune 500 companies.</p>\n<p>Cloudflare protects and accelerates any Internet application online without adding hardware, installing software, or changing a line of code. Internet properties powered by Cloudflare all have web traffic routed through its intelligent global network, which gets smarter with every request. As a result, they see significant improvement in performance and a decrease in spam and other attacks.</p>\n<p>About the Team</p>\n<p>Cloudforce One is Cloudflare&#39;s threat operations and research team, responsible for identifying and disrupting cyber threats ranging from sophisticated cyber criminal activity to nation-state advanced persistent threats (APTs).</p>\n<p>About the Role</p>\n<p>We are seeking a talented Senior Manager, Incident Response to join us in growing our Cloudforce One organization, where you will be instrumental in building a proactive and threat intelligence-driven approach to protecting Cloudflare and its customers from sophisticated and evolving threat actors.</p>\n<p>Responsibilities</p>\n<p>As a REACT Consultant, you will respond to customer security incidents in on-premises and cloud environments. You will detect and disrupt cyber threat activity across customer networks and cloud environments. You will engage with customers at all levels including Executive, VP, Director, and managerial levels. 
You will play an integral role in the discovery and analysis of cyber threat intrusions, working alongside forensic analysts, threat researchers, detection engineers, and malware analysts to detect and mitigate malicious activity.</p>\n<p>The findings you uncover will help identify the Tactics, Techniques, and Procedures (TTPs) of ongoing threat activity to protect your customer and the greater Cloudflare customer base.</p>\n<p>Requirements</p>\n<p>Our ideal candidate will have 1-2 years of previous experience in cybersecurity, with at least 1+ years in Digital Forensics or Incident Response. Candidates will have experience with hands-on forensic analysis in Windows, Mac, and Linux environments. Ideally, this candidate will have experience triaging malware using static or dynamic analysis on Windows, macOS, or UNIX-based platforms.</p>\n<p>You will be responsible for correlating threat actor activity across the customer&#39;s environment. Outstanding candidates will possess excellent verbal and written communication skills. 
You will also have experience with incident response reports and reliably be able to write simple scripts in Python or Golang.</p>\n<p>Examples of desirable skills, knowledge and experience include:</p>\n<ul>\n<li>Bachelor&#39;s degree in Computer Science, Information Systems, Cybersecurity, a related technical field, or equivalent training/practical experience</li>\n<li>3+ years of previous experience in cybersecurity</li>\n<li>2+ years of Incident Response experience</li>\n<li>1+ years in a customer-facing role</li>\n<li>Incident Response: experience conducting or managing incident response investigations for organizations, investigating targeted threats such as Advanced Persistent Threats, Organized Crime, and Hacktivists</li>\n<li>Computer Forensic Analysis: a background using a variety of forensic analysis tools in incident response investigations to determine the extent and scope of compromise</li>\n<li>Network Forensic Analysis: strong knowledge of network protocols, network analysis tools like Bro/Zeek or Suricata, and the ability to perform analysis of associated network logs</li>\n<li>Reverse Engineering: ability to understand the capabilities of static and dynamic malware analysis</li>\n<li>Incident Remediation: strong understanding of targeted attacks and the ability to create customized tactical and strategic remediation plans for compromised organizations</li>\n<li>Network Operations and Architecture/Engineering: strong understanding of secure network architecture and a strong background in performing network operations</li>\n<li>Cloud Incident Response: knowledge in any of the following areas: AWS, Azure, GCP incident response methodologies</li>\n<li>Communications: strong ability to communicate executive and/or detailed level findings to clients; ability to effectively communicate tasks, guidance, and methodology with internal teams</li>\n<li>Strong written and verbal communication skills, with the ability to establish and maintain strong working relationships 
with business groups</li>\n<li>Technical knowledge of common network protocols and design patterns including TCP/IP, HTTPS, FTP, SFTP, SSH, RDP, CIFS/SMB, NFS</li>\n<li>Familiarity with various cloud environments (AWS, Azure, O365, Google, Cloudflare)</li>\n<li>Understanding of MITRE ATT&amp;CK and NIST Cyber Security Frameworks standards and requirements</li>\n<li>In-depth understanding of Windows operating systems and general knowledge of Unix, Linux, and Mac operating systems</li>\n</ul>\n<p>Bonus Points:</p>\n<ul>\n<li>Proficient in Python or Golang, capable of writing modular code that can be installed on a remote system</li>\n<li>Proficient with Yara and writing rules to detect similar malware samples</li>\n<li>Understanding of source code, hex, binary, regular expression, data correlation, and analysis such as network flow and system logs</li>\n<li>Practical malware analysis experience with static, dynamic, and automated malware analysis techniques</li>\n<li>Possess mid-level experience as a Malware Analyst able to reverse engineer various file formats and analyze complex malware samples</li>\n<li>Reverse engineering experience with APT malware with an understanding of common infection vectors</li>\n<li>Knowledgeable of current malware techniques to evade detection and obstruct analysis</li>\n<li>Experience writing malware reports on unique and interesting aspects of malware</li>\n<li>Experience with malware attribution</li>\n<li>Experience with tracking and identifying threats through Indicator of Compromise (IOCs) pivoting and infrastructure enumeration</li>\n<li>Familiarity with bash command line executables to conduct static analysis and investigate IOCs</li>\n</ul>\n<p>Travel requirements</p>\n<p>Ability to travel up to 20% of the time</p>\n<p>Position may require foreign and domestic travel, passport will be required</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_de3eafc7-e74","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Cloudflare","sameAs":"https://www.cloudflare.com/","logo":"https://logos.yubhub.co/cloudflare.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/cloudflare/jobs/7389902","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Digital Forensics","Incident Response","Cybersecurity","Network Forensic Analysis","Reverse Engineering","Malware Analysis","Cloud Incident Response","Communication","Network Protocols","Cloud Environments","MITRE ATT&CK","NIST Cyber Security Frameworks","Windows Operating Systems","Unix","Linux","Mac Operating Systems"],"x-skills-preferred":["Python","Golang","Yara","Source Code","Hex","Binary","Regular Expression","Data Correlation","Network Flow","System Logs","Static Analysis","Dynamic Analysis","Automated Malware Analysis","Malware Attribution","Indicator of Compromise","Infrastructure Enumeration","Bash Command Line Executables"],"datePosted":"2026-04-18T15:52:02.967Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Hybrid"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Digital Forensics, Incident Response, Cybersecurity, Network Forensic Analysis, Reverse Engineering, Malware Analysis, Cloud Incident Response, Communication, Network Protocols, Cloud Environments, MITRE ATT&CK, NIST Cyber Security Frameworks, Windows Operating Systems, Unix, Linux, Mac Operating Systems, Python, Golang, Yara, Source Code, Hex, Binary, Regular Expression, Data Correlation, Network Flow, System Logs, Static Analysis, Dynamic Analysis, Automated Malware Analysis, Malware Attribution, Indicator of Compromise, Infrastructure Enumeration, Bash Command Line 
Executables"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_6e0ce11b-ddf"},"title":"Senior Software Engineer - Live Pay","description":"<p>We&#39;re seeking an experienced backend software engineer to join our Live Pay team. As a member of our team, you&#39;ll work cross-functionally with various teams to design and develop key platform services. You&#39;ll need to be strong in JVM programming languages and event-driven architecture, in addition to AWS.</p>\n<p>Your responsibilities will include driving the design and implementation of new features, creating high-quality, maintainable code, and collaborating with other engineers. You&#39;ll also work cross-functionally with other teams, including data science, design, product, marketing, and analytics.</p>\n<p>To succeed in this role, you&#39;ll need 4+ years of development experience in software engineering, proficiency in at least one JVM programming language, and experience with major frameworks such as Spring and Spring Boot. You&#39;ll also need hands-on experience with SQL databases, cloud environments, and streaming and messaging technologies.</p>\n<p>This is a full-time position with a salary range of $199,000-$244,000, plus equity and benefits. 
The role will be hybrid from our Vancouver office, with 2 days a week in the office required.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_6e0ce11b-ddf","directApply":true,"hiringOrganization":{"@type":"Organization","name":"EarnIn","sameAs":"https://www.earnin.com/","logo":"https://logos.yubhub.co/earnin.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/earnin/jobs/7747628","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$199,000-$244,000","x-skills-required":["JVM programming languages","Event-driven architecture","AWS","Spring, Spring Boot","SQL databases","Cloud environments","Streaming and messaging technologies"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:51:51.955Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Vancouver, Canada"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"JVM programming languages, Event-driven architecture, AWS, Spring, Spring Boot, SQL databases, Cloud environments, Streaming and messaging technologies","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":199000,"maxValue":244000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_9166d234-4c5"},"title":"Solutions Architect - HPC/AI/ML","description":"<p>As a Solutions Architect at CoreWeave, you will play a vital and dynamic role in helping customers establish their Kubernetes environment, develop proofs of concept, onboard, and optimise workloads. 
You will serve as the primary technical point of contact for customers, establishing strong technical relationships and ensuring their success with CoreWeave&#39;s cloud infrastructure offerings, focusing on AI/ML workloads within high-performance compute (HPC) environments.</p>\n<p>Collaborate closely with customers to understand their unique business needs and create, prototype, and deploy tailored solutions that align with their requirements. Lead proof of concept initiatives to showcase the value and viability of CoreWeave&#39;s solutions within specific environments.</p>\n<p>Drive technical leadership and direction during customer meetings, presentations, and workshops, addressing any technical queries or concerns that arise. Act as a virtual member of CoreWeave&#39;s Kubernetes product and engineering teams, identifying opportunities for product enhancement and collaborating with engineers to implement your suggestions.</p>\n<p>Offer valuable insights on product features, functionality, and performance, contributing regularly to discussions about product strategy and architecture. Conduct periodic technical reviews and assessments of customer workloads, pinpointing opportunities for workload optimisation and suggesting suitable solutions.</p>\n<p>Stay informed of the latest developments and trends in Kubernetes, cloud computing and infrastructure, sharing your thought leadership with customers and internal stakeholders. 
Lead the prototyping and initiation of research and development efforts for emerging products and solutions, delivering prototypes and key insights for internal consumption.</p>\n<p>Represent CoreWeave at conferences and industry events, with occasional travel as required.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_9166d234-4c5","directApply":true,"hiringOrganization":{"@type":"Organization","name":"CoreWeave","sameAs":"https://www.coreweave.com","logo":"https://logos.yubhub.co/coreweave.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/coreweave/jobs/4649044006","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$165,000 to $225,000 SGD","x-skills-required":["cloud computing concepts","architecture","technologies","NVIDIA GPUs","Infiniband","NVIDIA Collective Communications Library (NCCL)","Slurm","Kubernetes"],"x-skills-preferred":["code contributions to open-source inference frameworks","scripting and automation related to AI/ML workloads","building solutions across multi-cloud environments","client or customer-facing publications/talks on latency, optimisation, or advanced model-server architectures"],"datePosted":"2026-04-18T15:51:30.371Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Singapore"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"cloud computing concepts, architecture, technologies, NVIDIA GPUs, Infiniband, NVIDIA Collective Communications Library (NCCL), Slurm, Kubernetes, code contributions to open-source inference frameworks, scripting and automation related to AI/ML workloads, building solutions across multi-cloud environments, client or customer-facing publications/talks on latency, optimisation, or advanced model-server 
architectures","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":165000,"maxValue":225000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_09c520cf-f62"},"title":"Systems Engineer, Kernel","description":"<p>CoreWeave is seeking a highly skilled and motivated Systems Kernel Engineer to join our HAVOCK Team, reporting into the Manager of Systems Engineering. In this role, you will be a key contributor to the stability, performance, and evolution of CoreWeave&#39;s Linux-based infrastructure.</p>\n<p>As a kernel generalist, you will be responsible for debugging kernel-level issues, analysing and fixing crashes, panics, dumps, and upstreaming fixes and features that improve the performance and reliability of our stack.</p>\n<p>This position is ideal for someone who thrives in low-level systems engineering, and understands how modern workloads stress kernels, and is excited to work across a diverse hardware/software ecosystem including CPUs, GPUs, DPUs, networking, and storage.</p>\n<p>Kernel Hardware - Acceleration - Virtualization - Operating Systems - Containerization - Kubelet</p>\n<p>Our Team&#39;s Stack:</p>\n<ul>\n<li>Python, Go, bash/sh, C</li>\n</ul>\n<ul>\n<li>Prometheus, Victoria Metrics, Grafana</li>\n</ul>\n<ul>\n<li>Linux Kernel (custom build), Ubuntu</li>\n</ul>\n<ul>\n<li>Intel/AMD/ARM CPUs, Nvidia GPUs, DPUs, Infiniband and Ethernet NICs</li>\n</ul>\n<ul>\n<li>Docker, kubernetes (k8s), KubeVirt, containerd, kubelet</li>\n</ul>\n<p>Focus Areas:</p>\n<ul>\n<li>Kernel Debugging – Analyse kernel crashes, oopses, panics, and dumps to identify root causes and propose fixes.</li>\n</ul>\n<ul>\n<li>Upstream Contributions – Develop patches for the Linux kernel and upstream them where applicable (networking, storage, virtualization, GPU/DPU enablement).</li>\n</ul>\n<ul>\n<li>Stack-Wide Support – Ensure kernel support 
and stability across:</li>\n</ul>\n<ul>\n<li>Virtualization (KubeVirt, QEMU, VFIO)</li>\n</ul>\n<ul>\n<li>Container runtimes (containerd, nydus, kubelet)</li>\n</ul>\n<ul>\n<li>HPC/AI workloads (CUDA, GPUDirect, RoCE/InfiniBand)</li>\n</ul>\n<ul>\n<li>Kernel-Hardware Enablement – Support new hardware bring-up across Intel, AMD, ARM CPUs, NVIDIA GPUs, DPUs, and NICs.</li>\n</ul>\n<ul>\n<li>Performance &amp; Stability – Tune kernel subsystems for latency, throughput, and scalability in distributed HPC/AI clusters.</li>\n</ul>\n<p>About the role:</p>\n<ul>\n<li>Triage and fix kernel crashes and performance regressions.</li>\n</ul>\n<ul>\n<li>Develop, test, and upstream kernel patches relevant to CoreWeave’s hardware/software environment.</li>\n</ul>\n<ul>\n<li>Collaborate with hardware vendors and the Linux community on feature enablement.</li>\n</ul>\n<ul>\n<li>Implement diagnostics and tooling for kernel-level observability.</li>\n</ul>\n<ul>\n<li>Work closely with HPC and Fleet teams to ensure kernel readiness for production workloads.</li>\n</ul>\n<ul>\n<li>Provide kernel-level expertise during incident response and root-cause investigations.</li>\n</ul>\n<p>Who You Are:</p>\n<ul>\n<li>5+ years of professional experience in Linux kernel engineering or systems-level development.</li>\n</ul>\n<ul>\n<li>Deep understanding of kernel internals (memory management, scheduling, networking, storage, drivers).</li>\n</ul>\n<ul>\n<li>Experience debugging kernel crashes, dumps, and panics using tools like crash, gdb, kdump.</li>\n</ul>\n<ul>\n<li>Strong C programming skills with the ability to write maintainable and upstream-quality code.</li>\n</ul>\n<ul>\n<li>Experience working with kernel modules, drivers, and subsystems.</li>\n</ul>\n<ul>\n<li>Strong problem-solving abilities with a “full-stack” systems perspective.</li>\n</ul>\n<p>Preferred:</p>\n<ul>\n<li>Contributions to the Linux kernel or related open-source projects.</li>\n</ul>\n<ul>\n<li>Familiarity with 
virtualization (KVM, QEMU, VFIO) and container runtimes.</li>\n</ul>\n<ul>\n<li>Networking stack expertise (InfiniBand, RoCE, TCP/IP performance tuning).</li>\n</ul>\n<ul>\n<li>GPU/DPU bring-up and driver experience.</li>\n</ul>\n<ul>\n<li>Experience in HPC or large-scale distributed systems.</li>\n</ul>\n<ul>\n<li>Familiarity with QA/QE best practices</li>\n</ul>\n<ul>\n<li>Experience working in Cloud environments</li>\n</ul>\n<ul>\n<li>Experience as a software engineer writing large-scale applications</li>\n</ul>\n<ul>\n<li>Experience with machine learning is a huge bonus</li>\n</ul>\n<p>The base salary range for this role is $165,000 to $242,000. The starting salary will be determined based on job-related knowledge, skills, experience, and market location. We strive for both market alignment and internal equity when determining compensation. In addition to base salary, our total rewards package includes a discretionary bonus, equity awards, and a comprehensive benefits program (all based on eligibility).</p>\n<p>What We Offer</p>\n<p>The range we’ve posted represents the typical compensation range for this role. To determine actual compensation, we review the market rate for each candidate which can include a variety of factors. 
These include qualifications, experience, interview performance, and location.</p>\n<p>In addition to a competitive salary, we offer a variety of benefits to support your needs, including:</p>\n<ul>\n<li>Medical, dental, and vision insurance - 100% paid for by CoreWeave</li>\n</ul>\n<ul>\n<li>Company-paid Life Insurance</li>\n</ul>\n<ul>\n<li>Voluntary supplemental life insurance</li>\n</ul>\n<ul>\n<li>Short and long-term disability insurance</li>\n</ul>\n<ul>\n<li>Flexible Spending Account</li>\n</ul>\n<ul>\n<li>Health Savings Account</li>\n</ul>\n<ul>\n<li>Tuition Reimbursement</li>\n</ul>\n<ul>\n<li>Ability to Participate in Employee Stock Purchase Program (ESPP)</li>\n</ul>\n<ul>\n<li>Mental Wellness Benefits through Spring Health</li>\n</ul>\n<ul>\n<li>Family-Forming support provided by Carrot</li>\n</ul>\n<ul>\n<li>Paid Parental Leave</li>\n</ul>\n<ul>\n<li>Flexible, full-service childcare support with Kinside</li>\n</ul>\n<ul>\n<li>401(k) with a generous employer match</li>\n</ul>\n<ul>\n<li>Flexible PTO</li>\n</ul>\n<ul>\n<li>Catered lunch each day in our office and data center locations</li>\n</ul>\n<ul>\n<li>A casual work environment</li>\n</ul>\n<ul>\n<li>A work culture focused on innovative disruption</li>\n</ul>\n<p>Our Workplace</p>\n<p>While we prioritize a hybrid work environment, remote work may be considered for candidates located more than 30 miles from an office, based on role requirements for specialized skill sets. New hires will be invited to attend onboarding at one of our hubs within their first month. 
Teams also gather quarterly to support collaboration.</p>\n<p>California Consumer Privacy Act - California applicants only</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_09c520cf-f62","directApply":true,"hiringOrganization":{"@type":"Organization","name":"CoreWeave","sameAs":"https://www.coreweave.com","logo":"https://logos.yubhub.co/coreweave.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/coreweave/jobs/4599319006","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$165,000 to $242,000","x-skills-required":["Linux kernel engineering","Systems-level development","C programming","Kernel modules","Drivers","Subsystems","Kernel debugging","Upstream contributions","Stack-wide support","Virtualization","Container runtimes","HPC/AI workloads","Kernel-hardware enablement","Performance & stability"],"x-skills-preferred":["Contributions to the Linux kernel","Networking stack expertise","GPU/DPU bring-up and driver experience","Experience in HPC or large-scale distributed systems","QA/QE best practices","Cloud environments","Software engineer writing large-scale applications","Machine learning"],"datePosted":"2026-04-18T15:51:21.252Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Livingston, NJ / New York, NY / Sunnyvale, CA / Bellevue, WA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Linux kernel engineering, Systems-level development, C programming, Kernel modules, Drivers, Subsystems, Kernel debugging, Upstream contributions, Stack-wide support, Virtualization, Container runtimes, HPC/AI workloads, Kernel-hardware enablement, Performance & stability, Contributions to the Linux kernel, Networking stack expertise, GPU/DPU bring-up and driver experience, Experience in HPC or large-scale 
distributed systems, QA/QE best practices, Cloud environments, Software engineer writing large-scale applications, Machine learning","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":165000,"maxValue":242000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_f296b6b0-e66"},"title":"Senior Software Security Engineer","description":"<p>Job Title: Senior Software Security Engineer</p>\n<p>About the Role: The Security Engineering team&#39;s mission is to safeguard our AI systems and maintain the trust of our users and society at large. Whether we&#39;re developing critical security infrastructure, building secure development practices, or partnering with our research and product teams, we are committed to operating as a world-class security organization and keeping the safety and trust of our users at the forefront of everything we do.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Build security for large-scale AI clusters, implementing robust cloud security architecture including IAM, network segmentation, and encryption controls</li>\n</ul>\n<ul>\n<li>Design secure-by-design workflows, secure CI/CD pipelines across our services, help build secure cloud infrastructure, with expertise in various cloud environments, Kubernetes security, container orchestration and identity management</li>\n</ul>\n<ul>\n<li>Ship and operate secure, high-reliability services using Infrastructure-as-Code (IaC) practices and GitOps workflows</li>\n</ul>\n<ul>\n<li>Apply deep expertise in threat modeling and risk assessment to secure complex multi-cloud environments</li>\n</ul>\n<ul>\n<li>Mentor engineers and contribute to hiring and growth of the Security team</li>\n</ul>\n<p>Requirements:</p>\n<ul>\n<li>5-15+ years of software engineering experience implementing and maintaining critical systems at scale</li>\n</ul>\n<ul>\n<li>Bachelor&#39;s degree in 
Computer Science/Software Engineering or equivalent industry experience</li>\n</ul>\n<ul>\n<li>Strong software engineering skills in Python or at least one systems language (Go, Rust, C/C++)</li>\n</ul>\n<ul>\n<li>Experience managing infrastructure at scale with DevOps and cloud automation best practices</li>\n</ul>\n<ul>\n<li>Track record of driving engineering excellence through high standards, constructive code reviews, and mentorship</li>\n</ul>\n<ul>\n<li>Proven ability to lead cross-functional security initiatives and navigate complex organizational dynamics</li>\n</ul>\n<ul>\n<li>Outstanding communication skills, translating technical concepts effectively across all organizational levels</li>\n</ul>\n<ul>\n<li>Demonstrated success in bringing clarity and ownership to ambiguous technical problems</li>\n</ul>\n<ul>\n<li>Strong systems thinking with ability to identify and mitigate risks in complex environments</li>\n</ul>\n<ul>\n<li>Low ego, high empathy engineer who attracts talent and supports diverse, inclusive teams</li>\n</ul>\n<ul>\n<li>Experience supporting fast-paced startup engineering teams</li>\n</ul>\n<ul>\n<li>Passionate about AI safety and alignment, with keen interest in making AI systems more interpretable and aligned with human values</li>\n</ul>\n<p>Salary: The annual compensation range for this role is £240,000-£325,000 GBP.</p>\n<p>Experience Level: senior Employment Type: full-time Workplace Type: hybrid Category: Engineering Industry: Technology Salary Range: £240,000-£325,000 GBP Required Skills:</p>\n<ul>\n<li>Cloud security architecture</li>\n<li>IAM</li>\n<li>Network segmentation</li>\n<li>Encryption controls</li>\n<li>Kubernetes security</li>\n<li>Container orchestration</li>\n<li>Identity management</li>\n<li>Infrastructure-as-Code (IaC)</li>\n<li>GitOps</li>\n<li>Threat modeling</li>\n<li>Risk assessment</li>\n<li>DevOps</li>\n<li>Cloud 
automation</li>\n<li>Python</li>\n<li>Go</li>\n<li>Rust</li>\n<li>C/C++</li>\n</ul>\n<p>Preferred Skills:</p>\n<ul>\n<li>Secure-by-design workflows</li>\n<li>CI/CD pipelines</li>\n<li>Secure cloud infrastructure</li>\n<li>Cloud environments</li>\n<li>Containerization</li>\n<li>Identity and access management</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_f296b6b0-e66","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com/","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5022845008","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"£240,000-£325,000 GBP","x-skills-required":["Cloud security architecture","IAM","Network segmentation","Encryption controls","Kubernetes security","Container orchestration","Identity management","Infrastructure-as-Code (IaC)","GitOps","Threat modeling","Risk assessment","DevOps","Cloud automation","Python","Go","Rust","C/C++"],"x-skills-preferred":["Secure-by-design workflows","CI/CD pipelines","Secure cloud infrastructure","Cloud environments","Containerization","Identity and access management"],"datePosted":"2026-04-18T15:51:17.687Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"London, UK"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Cloud security architecture, IAM, Network segmentation, Encryption controls, Kubernetes security, Container orchestration, Identity management, Infrastructure-as-Code (IaC), GitOps, Threat modeling, Risk assessment, DevOps, Cloud automation, Python, Go, Rust, C/C++, Secure-by-design workflows, CI/CD pipelines, Secure cloud infrastructure, Cloud environments, Containerization, Identity and access 
management","baseSalary":{"@type":"MonetaryAmount","currency":"GBP","value":{"@type":"QuantitativeValue","minValue":240000,"maxValue":325000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_95061695-858"},"title":"Director of Engineering, Media & Entertainment (M&E)","description":"<p>CoreWeave is seeking a Director of Engineering, Media &amp; Entertainment (M&amp;E) to lead the development of next-generation cloud platforms and tools that power modern content creation workflows. This role will drive the engineering strategy and execution for solutions that support visual effects (VFX), animation, rendering, and post-production pipelines used by studios, artists, and creative teams worldwide.</p>\n<p>As a senior engineering leader, you will build and lead high-performing engineering teams responsible for designing scalable infrastructure, developer tools, and user-facing systems that enable creative professionals to run complex production workloads in the cloud. You will collaborate closely with product, design, infrastructure, and customer teams to translate real-world production workflows into reliable, high-performance software platforms.</p>\n<p>This role combines deep engineering leadership with domain expertise in M&amp;E workflows, ensuring that the platform delivers exceptional performance, reliability, and usability for demanding creative workloads.</p>\n<p><strong>Leadership &amp; Strategy</strong></p>\n<p>-Build and scale high-performing engineering teams focused on cloud platforms for media production workloads including rendering, simulation, and content processing. -Recruit, mentor, and develop engineering managers and senior engineers while fostering a culture of innovation, accountability, and collaboration. -Define and execute the long-term engineering strategy for Media &amp; Entertainment products and services. 
-Partner with Product and Design leaders to translate industry workflows and customer needs into scalable platform capabilities. -Establish engineering best practices for reliability, security, observability, and operational excellence. -Drive roadmap alignment between engineering initiatives and strategic business objectives.</p>\n<p><strong>Technical Leadership</strong></p>\n<p>-Lead the design and development of scalable backend services, APIs, and developer interfaces that power M&amp;E cloud workflows. -Build platforms that support demanding workloads such as rendering, asset processing, and distributed compute pipelines. -Drive architecture decisions for cloud-native systems leveraging technologies such as Kubernetes, distributed services, and infrastructure-as-code. -Ensure the platform enables self-service provisioning, automation, and repeatable workflows for production pipelines. -Establish engineering standards around performance, scalability, and security for enterprise-grade SaaS/PaaS systems. -Oversee system reliability and operational readiness through clear SLOs, monitoring, and runbook-driven on-call practices.</p>\n<p><strong>Product &amp; Workflow Collaboration</strong></p>\n<p>-Work closely with product leadership to define technical requirements aligned with real customer workflows in animation, VFX, and media production. -Engage directly with studios, artists, and technical directors to understand pipeline challenges and incorporate feedback into product development. -Translate industry needs into clear engineering priorities and technical roadmaps. -Guide development teams through product milestones including specification, development, testing, and release. 
-Ensure engineering efforts balance customer requirements, technical feasibility, and business goals.</p>\n<p>Customer and industry collaboration is critical in identifying workflow needs and transforming them into actionable development plans for engineering teams.</p>\n<p><strong>Operational Excellence</strong></p>\n<p>-Implement engineering processes that support scalable development, including CI/CD pipelines, testing strategies, and code review standards. -Manage development timelines and resource allocation across multiple engineering teams. -Track key operational and customer metrics including performance, reliability, and cost efficiency. -Drive continuous improvement in engineering productivity and system performance. -Partner with QA, support, and customer success teams to ensure high-quality releases and strong user satisfaction.</p>\n<p><strong>Who You Are:</strong></p>\n<p><strong>Required Qualifications</strong></p>\n<p>-10+ years of software engineering experience, including leadership of engineering teams and managers -Proven experience building and scaling cloud-based platforms or distributed systems. -Strong understanding of cloud infrastructure, microservices architecture, and automation technologies. -Experience delivering enterprise SaaS or PaaS products used by external customers. -Excellent leadership, communication, and cross-functional collaboration skills. -Ability to operate strategically while remaining deeply technical and hands-on with architecture decisions.</p>\n<p><strong>Preferred Qualifications</strong></p>\n<p>-Experience building platforms or tools for Media &amp; Entertainment workflows such as VFX, animation, rendering, or post-production pipelines. -Familiarity with industry tools such as Maya, Houdini, Katana, Cinema 4D, V-Ray, Arnold, or RenderMan. -Experience designing APIs, developer platforms, or automation frameworks used by technical users. 
-Knowledge of GPU-accelerated compute workloads and distributed rendering systems. -Experience working with Kubernetes, infrastructure-as-code, and large-scale cloud environments.</p>\n<p><strong>What Success Looks Like</strong></p>\n<p>-Engineering teams delivering reliable, scalable platforms used by media studios and creative teams globally. -Clear alignment between product vision, customer workflows, and engineering execution. -Platforms capable of supporting large-scale production workloads with high performance and reliability. -Strong engineering culture focused on innovation, collaboration, and operational excellence.</p>\n<p>Wondering if you’re a good fit? We believe in investing in our people, and value candidates who can bring their own diversified experiences to our teams – even if you aren&#39;t a 100% skill or experience match.</p>\n<p><strong>Why CoreWeave?</strong></p>\n<p>At CoreWeave, we work hard, have fun, and move fast! We’re in an exciting stage of hyper-growth that you will not want to miss out on. We’re not afraid of a little chaos, and we’re constantly learning. Our team cares deeply about how we build our product and how we work together, which is represented through our core values:</p>\n<p>-Be Curious at Your Core -Act Like an Owner -Empower Employees -Deliver Best-in-Class Client Experiences -Achieve More Together</p>\n<p>We support and encourage an entrepreneurial outlook and independent thinking. We foster an environment that encourages collaboration and provides the opportunity to develop innovative solutions to complex problems. As we get set for take off, the growth opportunities within the organization are constantly expanding. You will be surrounded by some of the best talent in the industry, who will want to learn from you, too. Come join us!</p>\n<p>The base salary range for this role is $206,000 to $303,000. The starting salary will be determined based on job-related knowledge, skills, experience, and market location. 
We strive for both market alignment and internal equity when determining compensation.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_95061695-858","directApply":true,"hiringOrganization":{"@type":"Organization","name":"CoreWeave","sameAs":"https://www.coreweave.com","logo":"https://logos.yubhub.co/coreweave.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/coreweave/jobs/4666156006","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$206,000 - $303,000","x-skills-required":["Cloud infrastructure","Microservices architecture","Automation technologies","Enterprise SaaS or PaaS products","Leadership","Communication","Cross-functional collaboration","Strategic decision-making"],"x-skills-preferred":["Media & Entertainment workflows","VFX, animation, rendering, or post-production pipelines","Industry tools such as Maya, Houdini, Katana, Cinema 4D, V-Ray, Arnold, or RenderMan","APIs, developer platforms, or automation frameworks","GPU-accelerated compute workloads and distributed rendering systems","Kubernetes, infrastructure-as-code, and large-scale cloud environments"],"datePosted":"2026-04-18T15:49:14.916Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Livingston, NJ / New York, NY / San Francisco, CA / Sunnyvale, CA / Bellevue, WA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Cloud infrastructure, Microservices architecture, Automation technologies, Enterprise SaaS or PaaS products, Leadership, Communication, Cross-functional collaboration, Strategic decision-making, Media & Entertainment workflows, VFX, animation, rendering, or post-production pipelines, Industry tools such as Maya, Houdini, Katana, Cinema 4D, V-Ray, Arnold, or RenderMan, APIs, developer platforms, or automation 
frameworks, GPU-accelerated compute workloads and distributed rendering systems, Kubernetes, infrastructure-as-code, and large-scale cloud environments","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":206000,"maxValue":303000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_ece4c581-f94"},"title":"Senior Database Reliability Engineer (DBRE) ; postgreSQL","description":"<p>We are looking for a highly skilled Database Reliability Engineer (DBRE) with deep expertise in PostgreSQL at scale and solid experience with MySQL. In this role, you will design, operationalize, and optimize the data persistence layer that powers our large-scale, mission-critical systems.</p>\n<p>You will work closely with SRE, Platform, and Engineering teams to ensure performance, reliability, automation, and operational excellence across our database environment. This is a hands-on engineering role focused on building resilient data infrastructure, not just administering it.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Design, implement, and operate highly available PostgreSQL clusters (physical replication, logical replication, sharding/partitioning, failover automation).</li>\n<li>Optimize query performance, indexing strategies, schema design, and storage engines.</li>\n<li>Perform capacity planning, growth forecasting, and workload modeling.</li>\n<li>Own high-availability strategies including automatic failover, multi-AZ/multi-region setups, and disaster recovery.</li>\n</ul>\n<p>Automation &amp; Tooling:</p>\n<ul>\n<li>Develop automation for any and all tasks including but not limited to: provisioning, configuration, backups, failovers, vacuum tuning, and schema management using tools such as Terraform, Ansible, Kubernetes Operators, or custom tooling.</li>\n<li>Build monitoring, alerting, and self-healing systems for PostgreSQL and 
MySQL.</li>\n</ul>\n<p>Operations &amp; Incident Response:</p>\n<ul>\n<li>Lead response during database incidents, performance regressions, replication lag, deadlocks, bloat issues, storage failures, etc.</li>\n<li>Conduct root-cause analysis and implement permanent fixes.</li>\n</ul>\n<p>Cross-Functional Collaboration:</p>\n<ul>\n<li>Partner with software engineers to review SQL, optimize schemas, and ensure efficient use of PostgreSQL features.</li>\n<li>Provide guidance on database-related design patterns, migrations, version upgrades, and best practices.</li>\n</ul>\n<p>Required Qualifications:</p>\n<ul>\n<li>4+ years of hands-on PostgreSQL experience in high-volume, distributed, or large-scale production environments.</li>\n<li>Strong knowledge of PostgreSQL internals (WAL, MVCC, bloat/vacuum tuning, query planner, indexing, logical replication).</li>\n<li>Production experience with MySQL (InnoDB internals, replication, performance tuning).</li>\n<li>Advanced SQL and strong understanding of schema design and query optimization.</li>\n<li>Experience with Linux systems, networking fundamentals, and systems troubleshooting.</li>\n<li>Experience building automation with Go or Python.</li>\n<li>Production experience with monitoring tools (Prometheus, Grafana, Datadog, PMM, pg_stat_statements, etc.).</li>\n<li>Hands-on experience with cloud environments (AWS or GCP).</li>\n</ul>\n<p>Preferred/Bonus Qualifications:</p>\n<ul>\n<li>Experience with PgBouncer, HAProxy, or other connection-pooling/load-balancing layers.</li>\n<li>Exposure to event streaming (Kafka, Debezium) and change data capture.</li>\n<li>Experience supporting 24/7 production environments with on-call rotation.</li>\n<li>Contributions to open-source PostgreSQL ecosystem.</li>\n</ul>\n<p>This position requires the ability to access federal environments and/or have access to protected federal data. 
As a condition of employment for this position, the successful candidate must be able to submit documentation establishing U.S. Person status (e.g. a U.S. Citizen, National, Lawful Permanent Resident, Refugee, or Asylee. 22 CFR 120.15) upon hire.</p>\n<p>Requires in-person onboarding and travel to our San Francisco, CA HQ office or our Chicago office during the first week of employment.</p>\n<p>#LI-Hybrid #LI-LSS1 requisition ID- P5979_3307978</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_ece4c581-f94","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Okta","sameAs":"https://www.okta.com/","logo":"https://logos.yubhub.co/okta.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/okta/jobs/7774364","x-work-arrangement":"hybrid","x-experience-level":"mid-senior","x-job-type":"full-time","x-salary-range":"$152,000-$228,000 USD (San Francisco Bay area), $136,000-$204,000 USD (California, excluding San Francisco Bay Area, Colorado, Illinois, New York, and Washington)","x-skills-required":["PostgreSQL","MySQL","Linux systems","Networking fundamentals","Systems troubleshooting","Go","Python","Monitoring tools (Prometheus, Grafana, Datadog, PMM, pg_stat_statements, etc.)","Cloud environments (AWS or GCP)"],"x-skills-preferred":["PgBouncer","HAProxy","Event streaming (Kafka, Debezium)","Change data capture"],"datePosted":"2026-04-18T15:48:00.158Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"New York, New York"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"PostgreSQL, MySQL, Linux systems, Networking fundamentals, Systems troubleshooting, Go, Python, Monitoring tools (Prometheus, Grafana, Datadog, PMM, pg_stat_statements, etc.), Cloud environments (AWS or GCP), PgBouncer, HAProxy, Event streaming (Kafka, Debezium), Change data 
capture","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":136000,"maxValue":228000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_9aa81908-c43"},"title":"Senior Database Reliability Engineer (DBRE) - PostgreSQL","description":"<p>We are looking for a highly skilled Database Reliability Engineer (DBRE) with deep expertise in PostgreSQL at scale and solid experience with MySQL. In this role, you will design, operationalize, and optimize the data persistence layer that powers our large-scale, mission-critical systems.</p>\n<p>You will work closely with SRE, Platform, and Engineering teams to ensure performance, reliability, automation, and operational excellence across our database environment. This is a hands-on engineering role focused on building resilient data infrastructure, not just administering it.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Design, implement, and operate highly available PostgreSQL clusters (physical replication, logical replication, sharding/partitioning, failover automation).</li>\n<li>Optimize query performance, indexing strategies, schema design, and storage engines.</li>\n<li>Perform capacity planning, growth forecasting, and workload modeling.</li>\n<li>Own high-availability strategies including automatic failover, multi-AZ/multi-region setups, and disaster recovery.</li>\n</ul>\n<p>Automation &amp; Tooling:</p>\n<ul>\n<li>Develop automation for any and all tasks including but not limited to: provisioning, configuration, backups, failovers, vacuum tuning, and schema management using tools such as Terraform, Ansible, Kubernetes Operators, or custom tooling.</li>\n<li>Build monitoring, alerting, and self-healing systems for PostgreSQL and MySQL.</li>\n</ul>\n<p>Operations &amp; Incident Response:</p>\n<ul>\n<li>Lead response during database incidents: performance regressions, replication lag, 
deadlocks, bloat issues, storage failures, etc.</li>\n<li>Conduct root-cause analysis and implement permanent fixes.</li>\n</ul>\n<p>Cross-Functional Collaboration:</p>\n<ul>\n<li>Partner with software engineers to review SQL, optimize schemas, and ensure efficient use of PostgreSQL features.</li>\n<li>Provide guidance on database-related design patterns, migrations, version upgrades, and best practices.</li>\n</ul>\n<p>Required Qualifications:</p>\n<ul>\n<li>4+ years of hands-on PostgreSQL experience in high-volume, distributed, or large-scale production environments.</li>\n<li>Strong knowledge of PostgreSQL internals (WAL, MVCC, bloat/vacuum tuning, query planner, indexing, logical replication).</li>\n<li>Production experience with MySQL (InnoDB internals, replication, performance tuning).</li>\n<li>Advanced SQL and strong understanding of schema design and query optimization.</li>\n<li>Experience with Linux systems, networking fundamentals, and systems troubleshooting.</li>\n<li>Experience building automation with Go or Python.</li>\n<li>Production experience with monitoring tools (Prometheus, Grafana, Datadog, PMM, pg_stat_statements, etc.).</li>\n<li>Hands-on experience with cloud environments (AWS or GCP).</li>\n</ul>\n<p>Preferred/Bonus Qualifications:</p>\n<ul>\n<li>Experience with PgBouncer, HAProxy, or other connection-pooling/load-balancing layers.</li>\n<li>Exposure to event streaming (Kafka, Debezium) and change data capture.</li>\n<li>Experience supporting 24/7 production environments with on-call rotation.</li>\n<li>Contributions to open-source PostgreSQL ecosystem.</li>\n</ul>\n<p>This position requires the ability to access federal environments and/or have access to protected federal data. As a condition of employment for this position, the successful candidate must be able to submit documentation establishing U.S. Person status (e.g., a U.S. Citizen, National, Lawful Permanent Resident, Refugee, or Asylee; 
22 CFR 120.15) upon hire.</p>\n<p>Requires in-person onboarding and travel to our San Francisco, CA HQ office or our Chicago office during the first week of employment.</p>\n<p>#LI-Hybrid #LI-LSS1 requisition ID- P5979_3307978</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_9aa81908-c43","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Okta","sameAs":"https://www.okta.com/","logo":"https://logos.yubhub.co/okta.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/okta/jobs/7437974","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$152,000-$228,000 USD (San Francisco Bay area), $136,000-$204,000 USD (California, excluding San Francisco Bay Area, Colorado, Illinois, New York, and Washington)","x-skills-required":["PostgreSQL","MySQL","Linux","Networking fundamentals","Systems troubleshooting","Go","Python","Monitoring tools","Cloud environments"],"x-skills-preferred":["PgBouncer","HAProxy","Event streaming","Change data capture","Open-source PostgreSQL ecosystem"],"datePosted":"2026-04-18T15:47:27.094Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Washington, DC"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"PostgreSQL, MySQL, Linux, Networking fundamentals, Systems troubleshooting, Go, Python, Monitoring tools, Cloud environments, PgBouncer, HAProxy, Event streaming, Change data capture, Open-source PostgreSQL ecosystem","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":136000,"maxValue":228000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_f38b4fcf-88f"},"title":"Staff Software Engineer, Organization","description":"<p>We 
are looking for a Staff Software Engineer to join our Organizations team. As a Staff Software Engineer, you will help drive architectural vision and strategy on the team to design and deliver powerful new enterprise functionality for our SaaS customers. You will identify and implement strategic technical improvements to our codebase and architecture, orchestrate and lead major technical projects, and mentor and coach less experienced engineers on sound engineering practices and technical leadership.</p>\n<p>You will work closely with the Product Manager and Product Designer to define the look, feel, and functionality of new features and review customer feedback. You will also serve as a subject matter expert on building scalable, reliable, and maintainable distributed systems.</p>\n<p>To be successful in this role, you will need to have solid architectural and security knowledge, backed by experience in designing, implementing, and evolving complex distributed systems. You will also need to have worked on projects that required close collaboration with external teams and have experience making those a success.</p>\n<p>You will be a good mentor and communicator, able to explain complex concepts simply in person or in a document. You will know that while an engineer can write code, teams collaborate to ship successful products.</p>\n<p>You will have solid previous experience with Node.js (JavaScript or TypeScript) building scalable backend services and creating and maintaining public and internal APIs. You will also have built frontend and full-stack apps and know what approach to use when.</p>\n<p>You will have a good understanding of SQL databases and know how to debug and optimize table and query structure for performance under load. 
You will also have experience with Docker and cloud environments (AWS and Azure preferred).</p>\n<p>Bonus points for experience with Kubernetes, knowledge of authentication protocols such as OAuth2, OIDC, SAML, understanding of event-driven architectures, especially Apache Kafka, understanding and experience of DevOps culture, and knowledge of security engineering and application security.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_f38b4fcf-88f","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Okta","sameAs":"https://www.okta.com/","logo":"https://logos.yubhub.co/okta.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/okta/jobs/7560775","x-work-arrangement":"remote","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"€74.000-€102.000 EUR","x-skills-required":["Node.js","JavaScript","Typescript","SQL databases","Docker","cloud environments","AWS","Azure","Kubernetes","authentication protocols","OAuth2","OIDC","SAML","event-driven architectures","Apache Kafka","DevOps culture","security engineering","application security"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:47:13.279Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Barcelona, Spain"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Node.js, JavaScript, Typescript, SQL databases, Docker, cloud environments, AWS, Azure, Kubernetes, authentication protocols, OAuth2, OIDC, SAML, event-driven architectures, Apache Kafka, DevOps culture, security engineering, application security"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_83fb6b32-83e"},"title":"Senior OCI and Fusion Administrator","description":"<p>About Us</p>\n<p>At Cloudflare, 
we are on a mission to help build a better Internet. Today the company runs one of the world’s largest networks that powers millions of websites and other Internet properties for customers ranging from individual bloggers to SMBs to Fortune 500 companies.</p>\n<p>This role is responsible for the technical administration, environment management, and ongoing platform integrity of the Oracle Fusion ERP Cloud environment, operating as a pure technical administrator for Oracle Fusion Applications and the underlying Cloud Infrastructure.</p>\n<p>Key Responsibilities</p>\n<ul>\n<li>Environment Management &amp; Maintenance: Own the technical management of all Fusion environments, including executing scheduled environment refreshes, cloning instances, managing environment usage, and ensuring system configuration baselines.</li>\n</ul>\n<ul>\n<li>Cloud Update Execution: Participate in the technical preparation and execution of Oracle’s mandatory quarterly cloud updates, including performing pre-update checks and technical smoke testing post-update.</li>\n</ul>\n<ul>\n<li>Platform Stability &amp; Governance: Own the non-functional requirements for the Oracle Cloud environment, including security architecture, role design governance, and performance benchmarking. Enforce technical configuration control standards.</li>\n</ul>\n<ul>\n<li>Security Administration: Provide security administration and support for all Oracle Fusion Applications, PaaS, and DBaaS platforms, focusing on security key management, monitoring dashboards, and assisting with artifact deployment.</li>\n</ul>\n<ul>\n<li>Risk Management Cloud: Serve as the technical owner of the Oracle Fusion Risk Management Cloud service and provide support to the Compliance business teams.</li>\n</ul>\n<ul>\n<li>Technical Support &amp; Troubleshooting: Provide Level 2/3 technical support for environment-related issues, access problems, and deployment failures. 
Serve as an escalation point to conduct root cause analysis for platform-level incidents.</li>\n</ul>\n<p>Required Qualifications</p>\n<ul>\n<li>5+ years focusing on the technical administration/support within an Oracle Fusion environment.</li>\n</ul>\n<ul>\n<li>Expert-level knowledge of managing Oracle Fusion Cloud environments, including environment refresh and cloning processes.</li>\n</ul>\n<ul>\n<li>Deep technical familiarity with Oracle Cloud Infrastructure administration and monitoring.</li>\n</ul>\n<ul>\n<li>Strong understanding of security architecture, Oracle Fusion Risk Management Cloud, role design governance, and performance management within a cloud ERP.</li>\n</ul>\n<p>Preferred Qualifications</p>\n<ul>\n<li>Experience in an organization transitioning to Oracle Cloud Ecosystem.</li>\n</ul>\n<ul>\n<li>Hands-on experience with various OCI PaaS Toolsets.</li>\n</ul>\n<ul>\n<li>Exposure to the Data Center Infrastructure industry</li>\n</ul>\n<ul>\n<li>Relevant professional product/functional certifications (e.g., Oracle Cloud Infrastructure and Security certifications)</li>\n</ul>\n<ul>\n<li>Skilled in administering DevOps tools like Flexagon FlexDeploy and using Opal IGA tool</li>\n</ul>\n<p>What Makes Cloudflare Special?</p>\n<p>We’re not just a highly ambitious, large-scale technology company. We’re a highly ambitious, large-scale technology company with a soul. 
Fundamental to our mission to help build a better Internet is protecting the free and open Internet.</p>\n<p>Project Galileo: Since 2014, we&#39;ve equipped more than 2,400 journalism and civil society organizations in 111 countries with powerful tools to defend themselves against attacks that would otherwise censor their work, technology already used by Cloudflare’s enterprise customers, at no cost.</p>\n<p>Athenian Project: In 2017, we created the Athenian Project to ensure that state and local governments have the highest level of protection and reliability for free, so that their constituents have access to election information and voter registration. Since the project began, we&#39;ve provided services to more than 425 local government election websites in 33 states.</p>\n<p>1.1.1.1: We released 1.1.1.1 to help fix the foundation of the Internet by building a faster, more secure, and privacy-centric public DNS resolver. This is available publicly for everyone to use - it is the first consumer-focused service Cloudflare has ever released.</p>\n<p>Here’s the deal: we never, ever store client IP addresses. We will continue to abide by our privacy commitment and ensure that no user data is sold to advertisers or used to target consumers.</p>\n<p>Sound like something you’d like to be a part of? We’d love to hear from you!</p>\n<p>This position may require access to information protected under U.S. export control laws, including the U.S. Export Administration Regulations. Please note that any offer of employment may be conditioned on your authorization to receive software or technology controlled under these U.S. export laws without sponsorship for an export license.</p>\n<p>Cloudflare is proud to be an equal opportunity employer. We are committed to providing equal employment opportunity for all people and place great value in both diversity and inclusiveness. 
All qualified applicants will be considered for employment without regard to their, or any other person&#39;s, perceived or actual race, color, religion, sex, gender, gender identity, gender expression, sexual orientation, national origin, ancestry, citizenship, age, physical or mental disability, medical condition, family care status, or any other basis protected by law.</p>\n<p>We are an AA/Veterans/Disabled Employer. Cloudflare provides reasonable accommodations to qualified individuals with disabilities. Please tell us if you require a reasonable accommodation to apply for a job. Examples of reasonable accommodations include, but are not limited to, changing the application process, providing documents in an alternate format, using a sign language interpreter, or using specialized equipment. If you require a reasonable accommodation to apply for a job, please contact us via e-mail at hr@cloudflare.com or via mail at 101 Townsend St. San Francisco, CA 94107.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_83fb6b32-83e","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Cloudflare","sameAs":"https://www.cloudflare.com/","logo":"https://logos.yubhub.co/cloudflare.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/cloudflare/jobs/7609741","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Oracle Fusion Cloud environments","environment refresh and cloning processes","Oracle Cloud Infrastructure administration","security architecture","role design governance","performance management within a cloud ERP"],"x-skills-preferred":["Flexagon FlexDeploy","Opal IGA tool","DevOps tools","OCI PaaS Toolsets","Data Center Infrastructure 
industry"],"datePosted":"2026-04-18T15:47:03.186Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"In-Office"}},"employmentType":"FULL_TIME","occupationalCategory":"IT","industry":"Technology","skills":"Oracle Fusion Cloud environments, environment refresh and cloning processes, Oracle Cloud Infrastructure administration, security architecture, role design governance, performance management within a cloud ERP, Flexagon FlexDeploy, Opal IGA tool, DevOps tools, OCI PaaS Toolsets, Data Center Infrastructure industry"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_146ddf7d-edd"},"title":"Network Security Engineer","description":"<p><strong>About the Role</strong></p>\n<p>We are seeking a seasoned Senior Network Security Engineer to join our dynamic security team. The ideal candidate will possess deep expertise in network security technologies, focusing on switching and routing systems within cloud-native and AI-focused infrastructure.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Serve as a subject matter expert in network security, particularly firewalls, VPNs, IDS/IPS, routing protocols (e.g., BGP, OSPF), and switching technologies.</li>\n<li>Manage and update firewall configurations across our enterprise network to align with operational and security needs.</li>\n<li>Deploy new firewalls, switches, routers, and network security devices in response to evolving threats and demands.</li>\n<li>Develop and propose innovative network security solutions to address operational challenges in routing and switching environments.</li>\n<li>Enhance security processes through thorough documentation and change management.</li>\n<li>Act as the primary resolver for complex network security issues, including escalation support.</li>\n<li>Ensure network security systems, switches, and routers are up-to-date with patches, firmware, and 
maintenance.</li>\n<li>Monitor and respond to security events in cloud environments (e.g., AWS, GCP, Azure, Datacenter), with emphasis on network traffic analysis.</li>\n</ul>\n<p><strong>Basic Qualifications</strong></p>\n<ul>\n<li>Bachelor&#39;s degree in Computer Science, Cybersecurity, Information Systems, or a related field.</li>\n<li>4+ years of experience in network security engineering, with hands-on focus on switching and routing.</li>\n<li>Certifications like CISA, CRISC, CGEIT, Security+, CASP+, or similar preferred.</li>\n<li>Strong understanding of network security principles, protocols (e.g., TCP/IP, VLANs, ACLs), and best practices for secure routing and switching.</li>\n<li>Proficiency in at least one major cloud platform (AWS, GCP, or Azure) and its network security services (e.g., VPCs, Security Groups).</li>\n<li>Experience with network analysis tools such as Wireshark, tcpdump; and vendors including Cisco, Juniper, Palo Alto Networks.</li>\n<li>Familiarity with scripting languages (e.g., Python, Bash) for automation of network security tasks.</li>\n</ul>\n<p><strong>Preferred Skills and Experience</strong></p>\n<ul>\n<li>Relevant network-specific certifications (e.g., CCNP Security, CCIE Security, JNCIP-SEC, PCNSE).</li>\n<li>Experience in multi-cloud environments and Infrastructure as Code tools like Terraform for network provisioning.</li>\n<li>Knowledge of DevSecOps practices tailored to network security integration.</li>\n<li>Experience building custom tools or integrations for enhancing network security operations.</li>\n<li>Interest in leveraging AI for network threat detection and automation.</li>\n<li>Contributions to open-source projects in network security or related tools.</li>\n</ul>\n<p><strong>Compensation and Benefits</strong></p>\n<p>$180,000 - $440,000 USD</p>\n<p>Base salary is just one part of our total rewards package at xAI, which also includes equity, comprehensive medical, vision, and dental coverage, access to a 401(k) 
retirement plan, short &amp; long-term disability insurance, life insurance, and various other discounts and perks.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_146ddf7d-edd","directApply":true,"hiringOrganization":{"@type":"Organization","name":"xAI","sameAs":"https://www.xai.com","logo":"https://logos.yubhub.co/xai.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/xai/jobs/4800712007","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$180,000 - $440,000 USD","x-skills-required":["firewalls","VPNs","IDS/IPS","routing protocols","switching technologies","cloud platforms","network security services","network analysis tools","scripting languages"],"x-skills-preferred":["CCNP Security","CCIE Security","JNCIP-SEC","PCNSE","multi-cloud environments","Infrastructure as Code","DevSecOps","custom tools","AI for network threat detection"],"datePosted":"2026-04-18T15:46:53.978Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Palo Alto, CA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"firewalls, VPNs, IDS/IPS, routing protocols, switching technologies, cloud platforms, network security services, network analysis tools, scripting languages, CCNP Security, CCIE Security, JNCIP-SEC, PCNSE, multi-cloud environments, Infrastructure as Code, DevSecOps, custom tools, AI for network threat detection","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":180000,"maxValue":440000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_871e8461-cb8"},"title":"AI Native Account Executive","description":"<p>Job Title: AI Native Account Executive</p>\n<p>At CoreWeave, we&#39;re 
building the next generation public cloud for accelerated workloads, supporting cutting-edge Machine Learning and Batch Processing use cases. As an Account Executive, you will own the full sales cycle from prospecting through close and expansion. You will manage a pipeline of opportunities, forecast revenue accurately, and consistently meet or exceed quota targets.</p>\n<p>Key Responsibilities:</p>\n<ul>\n<li>Engage with both technical and business stakeholders to communicate CoreWeave&#39;s value proposition, tailoring solutions to customer needs</li>\n<li>Collaborate cross-functionally to ensure customer success and identify growth opportunities across accounts</li>\n<li>Manage a pipeline of opportunities, forecast revenue accurately, and consistently meet or exceed quota targets</li>\n</ul>\n<p>Requirements:</p>\n<ul>\n<li>7+ years of experience in B2B sales and/or account management</li>\n<li>Proven track record of consistently exceeding quota targets</li>\n<li>Experience managing and forecasting a sales pipeline using Salesforce.com</li>\n<li>Ability to communicate complex technical concepts (e.g., cloud infrastructure, ML workloads) to both technical and business audiences</li>\n</ul>\n<p>Preferred Qualifications:</p>\n<ul>\n<li>Experience selling cloud, infrastructure, or AI/ML-related solutions</li>\n<li>Familiarity with competitive cloud environments and positioning differentiated offerings</li>\n</ul>\n<p>Why CoreWeave?</p>\n<ul>\n<li>We work hard, have fun, and move fast!</li>\n<li>We&#39;re in an exciting stage of hyper-growth that you will not want to miss out on</li>\n<li>We&#39;re not afraid of a little chaos, and we&#39;re constantly learning</li>\n</ul>\n<p>Total Rewards Package:</p>\n<ul>\n<li>Base salary range: $165,000 to $200,000</li>\n<li>Uncapped commissions and On Target Earnings (OTE) of $330,000–$400,000</li>\n<li>Comprehensive benefits program, including medical, dental, and vision insurance, 401(k) with a generous employer match, and 
flexible PTO</li>\n</ul>\n<p>What We Offer:</p>\n<ul>\n<li>A competitive salary and benefits package</li>\n<li>Opportunities for professional growth and development</li>\n<li>A dynamic and supportive work environment</li>\n</ul>\n<p>If you&#39;re a motivated and results-driven individual who is passionate about sales and customer success, we encourage you to apply for this exciting opportunity!</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_871e8461-cb8","directApply":true,"hiringOrganization":{"@type":"Organization","name":"CoreWeave","sameAs":"https://www.coreweave.com","logo":"https://logos.yubhub.co/coreweave.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/coreweave/jobs/4647796006","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$165,000 to $200,000","x-skills-required":["sales","account management","cloud infrastructure","machine learning","batch processing","customer success","pipeline management","forecasting","quota targets"],"x-skills-preferred":["cloud sales","infrastructure sales","AI/ML sales","competitive cloud environments","differentiated offerings"],"datePosted":"2026-04-18T15:46:30.986Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA / Sunnyvale, CA"}},"employmentType":"FULL_TIME","occupationalCategory":"Sales","industry":"Technology","skills":"sales, account management, cloud infrastructure, machine learning, batch processing, customer success, pipeline management, forecasting, quota targets, cloud sales, infrastructure sales, AI/ML sales, competitive cloud environments, differentiated 
offerings","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":165000,"maxValue":200000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_5f7c499a-533"},"title":"Senior Software Engineer, Security","description":"<p>As a Senior Software Engineer in the Security organization at CoreWeave, you will design, build, and deploy services, platforms, and tools that provide common foundational capabilities that various security programs and initiatives rely on to keep CoreWeave secure.</p>\n<p>The charter is to build automation that eliminates the manual steps involved in understanding, remediating, and preventing security risks. The work sits at the intersection of engineering systems and regulatory requirements, translating those requirements into scalable, reliable, production-grade infrastructure. This often means building production infrastructure from scratch, with end-to-end ownership of systems: design, development, testing, and deployment, including effective CI/CD integration pipelines and a production service that is highly available and functions at scale.</p>\n<p>You will partner closely with various security teams including GRC, platform engineering, and security domain teams to translate business needs into durable technical requirements, while retaining full engineering ownership of how those systems are designed, built, and operated.</p>\n<p>In this role, you will:</p>\n<ul>\n<li>Design and build scalable systems.</li>\n<li>Develop control integrations and data pipelines to normalize security telemetry across IAM, logs, scanners, and CCM/GRC tools.</li>\n<li>Build metrics engines, dashboards, and insights pipelines that provide real-time visibility into compliance health and emerging risks.</li>\n</ul>\n<p>On this team, you will:</p>\n<ul>\n<li>Tackle security 
&amp; compliance puzzles at cutting-edge scale and complexity.</li>\n<li>Collaborate with brilliant engineers who are redefining compliance adherence for cloud infrastructure.</li>\n<li>Have the freedom and responsibility to innovate, experiment, and influence how we establish assurance pipelines.</li>\n</ul>\n<p>Investing in our people is one of our top priorities, and we value candidates who can bring their diversified experiences to our teams. Here are some qualities we’ve found compatible with our team. We&#39;d love to talk about whether this aligns with your experience and interests and what you’re excited to work on next.</p>\n<p>Who You Are:</p>\n<p>Minimum Qualifications</p>\n<ul>\n<li>A Bachelor’s degree in Information Security, Computer Science, or a related field or equivalent job experience.</li>\n<li>7+ years of hands-on experience in programming languages like Go.</li>\n<li>3+ years of hands-on experience deploying and managing Kubernetes clusters in a production environment.</li>\n<li>Experience building high-QPS, critical distributed systems.</li>\n<li>Familiarity with modern CI/CD practices and Infrastructure-as-Code tooling.</li>\n<li>Proven experience building and deploying containerized applications.</li>\n<li>Strong experience with technical architectures involving data flows, event-driven architecture, access controls, retention, and third-party integrations.</li>\n<li>Strong hands-on experience with cloud infrastructure (AWS, GCP).</li>\n</ul>\n<p>Preferred:</p>\n<ul>\n<li>Information Security Engineering experience.</li>\n<li>Expertise in major compliance and security frameworks (SOC 2, ISO 27001, PCI DSS, HIPAA, FedRAMP, NIST, CSF).</li>\n<li>Background in building automation for distributed cloud environments at scale.</li>\n<li>Experience with remote-access solutions like Teleport (real bonus points if you’ve submitted PRs on their product).</li>\n<li>Understanding of SSO protocols, specifically OIDC 
and SAML.</li>\n<li>Hands-on experience with PKI and mTLS.</li>\n</ul>\n<p>If you&#39;re eager to elevate compliance into a creative, strategic force within a fast-paced, forward-thinking company, we&#39;d love to hear from you!</p>\n<p>The base salary range for this role is $165,000 to $242,000. The starting salary will be determined based on job-related knowledge, skills, experience, and market location. We strive for both market alignment and internal equity when determining compensation. In addition to base salary, our total rewards package includes a discretionary bonus, equity awards, and a comprehensive benefits program (all based on eligibility).</p>\n<p>What We Offer</p>\n<p>The range we’ve posted represents the typical compensation range for this role. To determine actual compensation, we review the market rate for each candidate which can include a variety of factors. These include qualifications, experience, interview performance, and location. In addition to a competitive salary, we offer a variety of benefits to support your needs, including:</p>\n<ul>\n<li>Medical, dental, and vision insurance</li>\n<li>100% paid for by CoreWeave</li>\n<li>Company-paid Life Insurance</li>\n<li>Voluntary supplemental life insurance</li>\n<li>Short and long-term disability insurance</li>\n<li>Flexible Spending Account</li>\n<li>Health Savings Account</li>\n<li>Tuition Reimbursement</li>\n<li>Ability to Participate in Employee Stock Purchase Program (ESPP)</li>\n<li>Mental Wellness Benefits through Spring Health</li>\n<li>Family-Forming support provided by Carrot</li>\n<li>Paid Parental Leave</li>\n<li>Flexible, full-service childcare support with Kinside</li>\n<li>401(k) with a generous employer match</li>\n<li>Flexible PTO</li>\n<li>Catered lunch each day in our office and data center locations</li>\n<li>A casual work environment</li>\n<li>A work culture focused on innovative disruption</li>\n</ul>\n<p>Our Workplace</p>\n<p>While we prioritize a hybrid work 
environment, remote work may be considered for candidates located more than 30 miles from an office, based on role requirements for specialized skill sets. New hires will be invited to attend onboarding at one of our hubs within their first month. Teams also gather quarterly to support collaboration.</p>\n<p>California Consumer Privacy Act - California applicants only</p>\n<p>CoreWeave is an equal opportunity employer, committed to fostering an inclusive and supportive workplace. All qualified applicants and candidates will receive consideration for employment without regard to race, color, religion, sex, disability, age, sexual orientation, gender identity, national origin, veteran status, or genetic information. As part of this commitment and consistent with the Americans with Disabilities Act (ADA), CoreWeave will ensure that qualified applicants and candidates with disabilities are provided reasonable accommodations for the hiring process, unless such accommodation would cause an undue hardship. If reasonable accommodation is needed, please contact: careers@coreweave.com.</p>\n<p>Export Control Compliance</p>\n<p>This position requires access to export controlled information. To conform to U.S. Government export regulations applicable to that information, applicant must either be (A) a U.S. person, defined as a (i) U.S. citizen or national, (ii) U.S. lawful permanent resident (green card holder), (iii) refugee under 8 U.S.C. § 1157, or (iv) asylee under 8 U.S.C. § 1158, (B) eligible to access the export controlled information without a required export authorization, or (C) eligible and reasonably likely to obtain the required export authorization from the applicable U.S. government agency. 
CoreWeave may, for legitimate business reasons, decline to pursue any export licensing process.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_5f7c499a-533","directApply":true,"hiringOrganization":{"@type":"Organization","name":"CoreWeave","sameAs":"https://www.coreweave.com","logo":"https://logos.yubhub.co/coreweave.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/coreweave/jobs/4651859006","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$165,000 to $242,000","x-skills-required":["Go","Kubernetes","Cloud infrastructure","CI/CD practices","Infrastructure-as-Code tooling","Containerized applications","Technical architectures","Data flows","Event driven architecture","Access controls","Retention","Third-party integrations"],"x-skills-preferred":["Information Security Engineering","Compliance and security frameworks","Automation for distributed cloud environments","Remote-access solutions","SSO protocols","PKI and mTLS"],"datePosted":"2026-04-18T15:45:57.955Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Sunnyvale, CA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Go, Kubernetes, Cloud infrastructure, CI/CD practices, Infrastructure-as-Code tooling, Containerized applications, Technical architectures, Data flows, Event driven architecture, Access controls, Retention, Third-party integrations, Information Security Engineering, Compliance and security frameworks, Automation for distributed cloud environments, Remote-access solutions, SSO protocols, PKI and 
mTLS","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":165000,"maxValue":242000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_551a2878-21e"},"title":"Staff Program Manager, Federal Regulated Environments","description":"<p>Secure Every Identity, from AI to Human. Identity is the key to unlocking the potential of AI. Okta secures AI by building the trusted, neutral infrastructure that enables organisations to safely embrace this new era.</p>\n<p>We are looking for builders and owners who operate with speed and urgency and execute with excellence. This is an opportunity to do career-defining work. We&amp;#39;re all in on this mission. If you are too, let&amp;#39;s talk.</p>\n<p>About the Role</p>\n<p>We are seeking an experienced Staff Program Manager to lead and support a strategic initiative focused on expanding our operations into highly regulated federal environments. 
This critical role will serve as the primary driver for ensuring program success while navigating complex compliance requirements and maintaining strong relationships with federal stakeholders.</p>\n<p>Key Responsibilities</p>\n<ul>\n<li>Lead and manage program execution for initiatives entering highly regulated federal environments, ensuring adherence to compliance frameworks and regulatory requirements</li>\n<li>Serve as primary liaison between functional and technical teams, federal partners and sponsors, and internal stakeholders to ensure alignment on program objectives, deliverables, and timelines</li>\n<li>Develop and maintain comprehensive program plans, including risk management strategies, resource allocation, and milestone tracking</li>\n<li>Create and deliver executive-level status reports, briefings, and presentations for senior leadership</li>\n<li>Coordinate cross-functional teams to ensure seamless integration of compliance requirements into all program workstreams</li>\n<li>Identify and mitigate program risks while proactively developing contingency plans</li>\n<li>Manage stakeholder expectations and communications across all levels of the organisation and with relevant federal partners as needed</li>\n<li>Collaborate and align with Okta leaders, customer stakeholders, and regulators to ensure Okta is meeting and/or exceeding expectations and providing mission impact</li>\n</ul>\n<p>Required Qualifications</p>\n<ul>\n<li>Minimum 8 years of program or project management experience in federal regulated industries</li>\n<li>Active TS/SCI clearance (required at time of hire)</li>\n<li>Demonstrated experience understanding compliance requirements as part of major projects or programs in regulated environments</li>\n<li>Experience with federal compliance frameworks (CMMC) and Federal authorisations (FedRAMP High, IL4/5/6/7)</li>\n<li>Experience in bringing commercial software into air-gapped highly regulated Cloud Environments, including 
AWS.</li>\n<li>Proven track record of successful interaction and relationship management with federal government customers and partners</li>\n<li>Strong experience developing executive-level status reports, briefings, and communications for C-suite and senior government officials</li>\n<li>Must be located within commuting distance of Washington, DC</li>\n</ul>\n<p>Preferred Qualifications</p>\n<ul>\n<li>Active Project Management Professional (PMP) certification or equivalent (PMI-ACP, PRINCE2, etc.)</li>\n<li>Background in software/technology programs serving federal agencies</li>\n<li>Previous experience with DoD, Intelligence Community, or civilian federal agencies</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_551a2878-21e","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Okta","sameAs":"https://www.okta.com/","logo":"https://logos.yubhub.co/okta.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/okta/jobs/7707521","x-work-arrangement":"onsite","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"$162,000-$244,000 USD","x-skills-required":["program management","project management","federal regulated industries","TS/SCI clearance","compliance requirements","federal compliance frameworks","federal authorisations","commercial software","air-gapped highly regulated Cloud Environments","AWS"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:45:37.699Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Washington, DC"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"program management, project management, federal regulated industries, TS/SCI clearance, compliance requirements, federal compliance frameworks, federal authorisations, commercial software, air-gapped highly regulated Cloud Environments, 
AWS","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":162000,"maxValue":244000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_eda84ece-394"},"title":"Security Engineer, Detection & Response","description":"<p>At Anthropic, we are pioneering new frontiers in AI that have the potential to greatly benefit society. However, developing advanced AI also comes with risks if not properly safeguarded. That&#39;s why we are seeking an exceptional Detection and Response engineer that will be on the frontlines to build solutions to monitor for threats, rapidly investigate incidents, and coordinate response efforts with other teams.</p>\n<p>In this role, you will have the opportunity to shape our security capabilities from the ground up alongside our world-class research and security teams. You will lead cybersecurity Incident Response efforts covering diverse domains from external attacks to insider threats involving all layers of Anthropic&#39;s technology stack.</p>\n<p>Key responsibilities include:</p>\n<ul>\n<li>Developing and deploying novel tooling that may leverage Large Language Models to enhance detection, investigation, and response capabilities</li>\n<li>Creating and optimizing detections, playbooks, and workflows to quickly identify and respond to potential incidents</li>\n<li>Reviewing Incident Response metrics and procedures and driving continuous improvement</li>\n<li>Working cross-functionally with other security and engineering teams</li>\n</ul>\n<p>Note: This position will require participation in an on-call rotation.</p>\n<p>To be successful in this role, you will need:</p>\n<ul>\n<li>3+ years of software engineering experience, with security experience a plus</li>\n<li>5+ years of detection engineering, incident response, or threat hunting experience</li>\n<li>A solid understanding of cloud environments and 
operations</li>\n<li>Experience working with engineering teams in a SaaS environment</li>\n<li>Exceptional communication and collaboration skills</li>\n<li>An ability to lead projects with little guidance</li>\n<li>The ability to pick up new languages and technologies quickly</li>\n<li>Experience handling security incidents and investigating anomalies as part of a team</li>\n<li>Knowledge of EDR, SIEM, SOAR, or related security tools</li>\n</ul>\n<p>Strong candidates may also have experience with:</p>\n<ul>\n<li>Performing security operations or investigations involving large-scale Kubernetes environments</li>\n<li>A high level of proficiency in Python and query languages such as SQL</li>\n<li>Analyzing attack behavior and prototyping high-quality detections</li>\n<li>Threat intelligence, malware analysis, infrastructure as code, detection engineering, or forensics</li>\n<li>Contributing to a high-growth startup environment</li>\n</ul>\n<p>If you&#39;re interested in this role, please submit an application, even if you don&#39;t believe you meet every single qualification. 
We encourage diversity and inclusion in our hiring process.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_eda84ece-394","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com/","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/4982193008","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$300,000-$405,000 USD","x-skills-required":["software engineering","security experience","detection engineering","incident response","threat hunting","cloud environments","operations","EDR","SIEM","SOAR"],"x-skills-preferred":["Python","SQL","Kubernetes","Large Language Models","playbooks","workflows","continuous improvement","collaboration","leadership","new languages and technologies"],"datePosted":"2026-04-18T15:45:14.042Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA | New York City, NY | Seattle, WA; Washington, DC"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"software engineering, security experience, detection engineering, incident response, threat hunting, cloud environments, operations, EDR, SIEM, SOAR, Python, SQL, Kubernetes, Large Language Models, playbooks, workflows, continuous improvement, collaboration, leadership, new languages and technologies","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":300000,"maxValue":405000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_aae5c27d-20b"},"title":"Senior Database Reliability Engineer (DBRE) - PostgreSQL","description":"<p>We are looking for a highly skilled 
Database Reliability Engineer (DBRE) with deep expertise in PostgreSQL at scale and solid experience with MySQL. In this role, you will design, operationalize, and optimize the data persistence layer that powers our large-scale, mission-critical systems.</p>\n<p>You will work closely with SRE, Platform, and Engineering teams to ensure performance, reliability, automation, and operational excellence across our database environment. This is a hands-on engineering role focused on building resilient data infrastructure, not just administering it.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Design, implement, and operate highly available PostgreSQL clusters (physical replication, logical replication, sharding/partitioning, failover automation).</li>\n<li>Optimize query performance, indexing strategies, schema design, and storage engines.</li>\n<li>Perform capacity planning, growth forecasting, and workload modeling.</li>\n<li>Own high-availability strategies including automatic failover, multi-AZ/multi-region setups, and disaster recovery.</li>\n</ul>\n<p>Automation &amp;amp; Tooling:</p>\n<ul>\n<li>Develop automation for all routine tasks, including but not limited to: provisioning, configuration, backups, failovers, vacuum tuning, and schema management using tools such as Terraform, Ansible, Kubernetes Operators, or custom tooling.</li>\n<li>Build monitoring, alerting, and self-healing systems for PostgreSQL and MySQL.</li>\n</ul>\n<p>Operations &amp;amp; Incident Response:</p>\n<ul>\n<li>Lead response during database incidents: performance regressions, replication lag, deadlocks, bloat issues, storage failures, etc.</li>\n<li>Conduct root-cause analysis and implement permanent fixes.</li>\n</ul>\n<p>Cross-Functional Collaboration:</p>\n<ul>\n<li>Partner with software engineers to review SQL, optimize schemas, and ensure efficient use of PostgreSQL features.</li>\n<li>Provide guidance on database-related design patterns, migrations, version upgrades, and best 
practices.</li>\n</ul>\n<p>Required Qualifications:</p>\n<ul>\n<li>4+ years of hands-on PostgreSQL experience in high-volume, distributed, or large-scale production environments.</li>\n<li>Strong knowledge of PostgreSQL internals (WAL, MVCC, bloat/vacuum tuning, query planner, indexing, logical replication).</li>\n<li>Production experience with MySQL (InnoDB internals, replication, performance tuning).</li>\n<li>Advanced SQL and strong understanding of schema design and query optimization.</li>\n<li>Experience with Linux systems, networking fundamentals, and systems troubleshooting.</li>\n<li>Experience building automation with Go or Python.</li>\n<li>Production experience with monitoring tools (Prometheus, Grafana, Datadog, PMM, pg_stat_statements, etc.).</li>\n<li>Hands-on experience with cloud environments (AWS or GCP).</li>\n</ul>\n<p>Preferred/Bonus Qualifications:</p>\n<ul>\n<li>Experience with PgBouncer, HAProxy, or other connection-pooling/load-balancing layers.</li>\n<li>Exposure to event streaming (Kafka, Debezium) and change data capture.</li>\n<li>Experience supporting 24/7 production environments with on-call rotation.</li>\n<li>Contributions to the open-source PostgreSQL ecosystem.</li>\n</ul>\n<p>This position requires the ability to access federal environments and/or have access to protected federal data. As a condition of employment for this position, the successful candidate must be able to submit documentation establishing U.S. Person status (e.g. a U.S. Citizen, National, Lawful Permanent Resident, Refugee, or Asylee; 
22 CFR 120.15) upon hire.</p>\n<p>Requires in-person onboarding and travel to our San Francisco, CA HQ office or our Chicago office during the first week of employment.</p>\n<p>#LI-Hybrid #LI-LSS1 requisition ID- P5979_3307978</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_aae5c27d-20b","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Okta","sameAs":"https://www.okta.com/","logo":"https://logos.yubhub.co/okta.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/okta/jobs/7436028","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$152,000-$228,000 USD","x-skills-required":["PostgreSQL","MySQL","SQL","Linux","Go","Python","Monitoring tools","Cloud environments"],"x-skills-preferred":["PgBouncer","HAProxy","Event streaming","Change data capture"],"datePosted":"2026-04-18T15:44:37.885Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Bellevue, Washington"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"PostgreSQL, MySQL, SQL, Linux, Go, Python, Monitoring tools, Cloud environments, PgBouncer, HAProxy, Event streaming, Change data capture","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":152000,"maxValue":228000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_fd64db3e-49f"},"title":"Staff Software Engineer – Customer Experience Intelligence (CXI)","description":"<p>At Databricks, we&#39;re shaping the future of how customers experience support at scale. 
As the Staff Technical Lead for Customer Experience Intelligence, you&amp;#39;ll design intelligent, AI-powered systems that make support faster, smarter, and more effortless.</p>\n<p>In this role, you&amp;#39;ll have end-to-end ownership of the architecture and technical strategy behind automation and agentic workflows that reduce mean time to mitigate (MTTM), boost quality, and enable our Support organization to scale impact without scaling headcount. You&amp;#39;ll work hands-on with teams across Support, Product, and Platform Engineering to build seamless systems that anticipate customer needs before they arise.</p>\n<p>You&amp;#39;ll lead the technical foundation that transforms how customers experience support, where issues are auto-diagnosed, solutions are delivered instantly, and engineers focus their time on the toughest challenges. Your success will mean customers moving faster, trusting Databricks more deeply, and feeling the impact of your systems every day.</p>\n<p>Key responsibilities include:</p>\n<ul>\n<li>Owning the technical vision and architecture for Databricks&amp;#39; Support Automation and Tooling ecosystem</li>\n<li>Leading hands-on development of automation to improve customer experience and Support scalability</li>\n<li>Driving rapid, iterative development while upholding quality, safety, and reliability standards</li>\n<li>Designing agentic workflows that evolve from human-in-the-loop to fully automated systems</li>\n<li>Implementing observability, transparency, and rollback mechanisms for AI-driven decisions</li>\n<li>Acting as the primary technical interface between Support, Product, and Platform Engineering to align technical roadmaps and unblock dependencies</li>\n<li>Setting a high engineering bar for quality, reliability, and maintainability in line with Databricks standards</li>\n<li>Mentoring engineers and SMEs across Software and Support Engineering functions</li>\n</ul>\n<p>We&amp;#39;re looking for someone with:</p>\n<ul>\n<li>A BS or higher degree in 
Computer Science or a related field</li>\n<li>Technical leadership experience in large projects similar to those described, including automation tooling, distributed systems, and APIs</li>\n<li>Extensive full-stack development experience</li>\n<li>Proven success designing and deploying production-grade automation in complex technical environments</li>\n<li>Hands-on experience with ML-assisted systems, decision support, or agentic automation</li>\n<li>Deep familiarity with distributed data platforms, developer tooling, and large-scale infrastructure systems</li>\n<li>Understanding of multi-cloud environments (AWS, Azure, GCP), compliance, and security constraints</li>\n</ul>\n<p>Pay Range Transparency</p>\n<p>Databricks is committed to fair and equitable compensation practices. The pay range for this role is $190,000-$261,250 USD.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_fd64db3e-49f","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8416959002","x-work-arrangement":"onsite","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"$190,000-$261,250 USD","x-skills-required":["Automation tooling","Distributed systems","APIs","Full-stack development","ML-assisted systems","Decision support","Agentic automation","Distributed data platforms","Developer tooling","Large-scale infrastructure systems","Multi-cloud environments","Compliance","Security constraints"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:44:19.005Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Mountain View, California; San Francisco, 
California"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Automation tooling, Distributed systems, APIs, Full-stack development, ML-assisted systems, Decision support, Agentic automation, Distributed data platforms, Developer tooling, Large-scale infrastructure systems, Multi-cloud environments, Compliance, Security constraints","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":190000,"maxValue":261250,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_7203380b-a7c"},"title":"Software Engineer (L3) Infrastructure","description":"<p>We are seeking a Software Engineer (L3) Infrastructure to join our Developer Platform Experience team under Platform Engineering. As a key member of our team, you will help users interact with Twilio&#39;s internal developer platform, manage our software taxonomy and cloud infrastructure inventory, accelerate developer productivity via self-service tools, and drive adoption of engineering best practices throughout the company.</p>\n<p>In this role, you will develop, test, and deploy backend, frontend, and client-side applications for internal use at Twilio. You will collaborate with teammates and guest contributors via peer reviews, planning exercises, and pair programming. You will also mentor junior engineers as necessary, write tickets, testing plans, and runbooks for the team, as well as internal documentation for users.</p>\n<p>You will support internal users and ensure system uptime by participating in a 24x7 weekly on-call rotation. You will continuously improve Twilio&#39;s internal developer platform interfaces, local development tools, and platform onboarding processes. 
You will independently own medium-sized features, authoring specifications and designs for features of moderate complexity.</p>\n<p>Twilio values diverse experiences from all kinds of industries, and we encourage everyone who meets the required qualifications to apply. If your career is just starting or hasn&#39;t followed a traditional path, don&#39;t let that stop you from considering Twilio.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_7203380b-a7c","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Twilio","sameAs":"https://www.twilio.com/","logo":"https://logos.yubhub.co/twilio.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/twilio/jobs/7767260","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"CAD $132,640.00 - CAD $165,800.00","x-skills-required":["Typescript","Python","Go","Terraform","Bash","AWS cloud environment","Relational database concepts and operations","5+ years of full-time job experience in a software engineering role"],"x-skills-preferred":["Prior experience working with a platform engineering focus in a software engineering organization","Strong opinions on developer experience and local development best practices","Familiarity with front-end web application development and frameworks such as React, Angular, or Vue","Familiarity with internal developer platform frameworks such as Backstage, OpsLevel, Cortex, or Battlestar","Fluency with AI platforms such as Claude, ChatGPT, and/or Copilot to accelerate software development"],"datePosted":"2026-04-18T15:44:05.160Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Remote - Canada"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Typescript, Python, Go, Terraform, Bash, 
AWS cloud environment, Relational database concepts and operations, 5+ years of full-time job experience in a software engineering role, Prior experience working with a platform engineering focus in a software engineering organization, Strong opinions on developer experience and local development best practices, Familiarity with front-end web application development and frameworks such as React, Angular, or Vue, Familiarity with internal developer platform frameworks such as Backstage, OpsLevel, Cortex, or Battlestar, Fluency with AI platforms such as Claude, ChatGPT, and/or Copilot to accelerate software development","baseSalary":{"@type":"MonetaryAmount","currency":"CAD","value":{"@type":"QuantitativeValue","minValue":132640,"maxValue":165800,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_49214f94-4ba"},"title":"Senior Manager, Infrastructure Data Science","description":"<p>We are looking for a Senior Manager, Infrastructure Data Science to shape the future of Databricks infrastructure through data science. You will tackle some of the most complex challenges related to capacity planning, performance optimisation, reliability engineering, infrastructure efficiency, and customer experience.</p>\n<p>As a Senior Manager, you will lead a team of data scientists and work directly in partnership with engineering leaders to empower them with data-driven insights and solutions. 
You will promote a data-driven approach to infrastructure decisions, influencing stakeholders across engineering and support to leverage data science insights for high-impact, aligned strategies.</p>\n<p>Key responsibilities include:</p>\n<ul>\n<li>Provide thought leadership and strategic guidance on infrastructure planning, balancing current needs with future growth projections to ensure scalability and cost-effectiveness.</li>\n<li>Implement data-driven solutions to identify, predict, and mitigate infrastructure risks and failures, reducing downtime and improving system reliability and performance, directly impacting end-user satisfaction and operational continuity.</li>\n<li>Spearhead analyses to improve resource utilisation efficiency, identifying and eliminating inefficiencies across infrastructure usage, resulting in cost savings and optimised performance.</li>\n<li>Establish data frameworks that empower support teams to troubleshoot and resolve product issues faster, decreasing response times and enhancing customer experience and support quality.</li>\n<li>Mentor and manage a team of data scientists, instilling best practices in data science and engineering, and fostering a collaborative environment focused on innovative, scalable infrastructure solutions.</li>\n</ul>\n<p>Requirements include:</p>\n<ul>\n<li>10+ years of infrastructure data science, machine learning, and advanced analytics experience in high-velocity, high-growth companies.</li>\n<li>5+ years of management experience hiring and developing teams.</li>\n<li>Experience developing data science, analytics, and machine learning and AI products and capabilities in a cloud environment.</li>\n<li>Knowledge of statistics and rigorous analytical techniques.</li>\n<li>Experience with data visualisation tools, knowledge of data engineering, data modelling, and big data technologies.</li>\n<li>Leadership skills and experience to lead across functional and organisational lines.</li>\n<li>Strong communication skills to 
explain and evangelise analytics and data science to executives and the senior management team.</li>\n<li>Bias to action and passion for delivering high-quality data solutions.</li>\n<li>A passion for problem-solving and comfort with ambiguity.</li>\n<li>MS or Ph.D. in quantitative fields (Statistics, Math, CS or Engineering).</li>\n</ul>\n<p>Pay Range Transparency</p>\n<p>Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role is listed below and represents the expected salary range for non-commissionable roles or on-target earnings for commissionable roles. Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location. Based on the factors above, Databricks anticipates utilising the full width of the range. The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above. 
For more information regarding which range your location is in visit our page here.</p>\n<p>Zone 1 Pay Range $228,600-$314,250 USD</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_49214f94-4ba","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com/","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/7641390002","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$228,600-$314,250 USD","x-skills-required":["infrastructure data science","machine learning","advanced analytics","cloud environment","statistics","data visualisation tools","data engineering","data modelling","big data technologies","leadership skills","communication skills","bias to action","passion for problem-solving","comfort with ambiguity","MS or Ph.D. in quantitative fields"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:43:09.520Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Mountain View, California"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"infrastructure data science, machine learning, advanced analytics, cloud environment, statistics, data visualisation tools, data engineering, data modelling, big data technologies, leadership skills, communication skills, bias to action, passion for problem-solving, comfort with ambiguity, MS or Ph.D. 
in quantitative fields","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":228600,"maxValue":314250,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_8bf116df-95e"},"title":"Application Security Engineer","description":"<p>Job Title: Application Security Engineer</p>\n<p>About the Role: The Application Security team at Anthropic is at the forefront of building security into every phase of the software development lifecycle. As an Application Security Engineer, you will partner closely with software engineers and researchers to ensure that security is a core consideration from initial design through implementation. You will lead threat modeling and secure design reviews to proactively identify and mitigate risks early, and help with continuous risk assessment. You will build tools and systems to support developers shipping code securely, adhering to secure coding best practices.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Help secure AI products and internal tools that are introducing industry-novel security risks and pushing established security boundaries</li>\n<li>Lead “shift left” security efforts to build security into the software development lifecycle</li>\n<li>Conduct secure design reviews and threat modeling. Identify and prioritize risks, attack surfaces, and vulnerabilities</li>\n<li>Develop tooling to scale security code reviews and respond to developer questions, including advising developers on remediating vulnerabilities and following secure coding practices</li>\n<li>Manage Anthropic&#39;s vulnerability management program, including integrating data ingestion pipelines, coding logic to prioritize vulnerability fixes, supporting teams remediating vulnerabilities and developing automated systems at scale</li>\n<li>Oversee Anthropic&#39;s bug bounty program. 
Set scope, validate submissions, perform root cause analysis, coordinate remediation with engineering teams, and award bounties. Cultivate relationships with the ethical hacker community</li>\n<li>Collaborate closely with product engineers and researchers to instill security best practices. Advocate for secure architecture, design, and development</li>\n<li>Develop and document security policies, standards, and playbooks. Conduct security awareness training for engineers</li>\n</ul>\n<p>Requirements:</p>\n<ul>\n<li>5+ years of hands-on experience in application and infrastructure security, including securing cloud-based and containerized environments</li>\n<li>Strong proficiency in at least one programming language (e.g., Python, Rust, Go, Java)</li>\n<li>Lead with empathy, a collaborative spirit, and a learning mindset to work cross-functionally with engineers of all levels to build security into the software development life cycle</li>\n<li>Leverage creative and strategic thinking to reduce risk through secure design and simplicity, not just controls</li>\n<li>Possess broad security knowledge to connect the dots across domains and identify holistic ways to decrease the overall threat surface</li>\n<li>Are keen to distill complex security concepts into clear actions and drive consensus without direct authority</li>\n<li>Embody a proactive mindset to thread security throughout the product lifecycle through activities like threat modeling, secure code review, and education</li>\n<li>Have a strong grasp of offensive security to anticipate risks from an adversary&#39;s perspective, not just check compliance boxes</li>\n<li>Bring experience with modern application stacks, infrastructure, and security tools to implement pragmatic defenses</li>\n<li>Are practiced at collaborating cross-functionally and effectively balancing security requirements with business objectives</li>\n<li>Advocate for security fundamentals like least privilege, defense-in-depth, and eliminating 
complexity that could sub-linearly scale security through smart design</li>\n</ul>\n<p>Preferred Qualifications:</p>\n<ul>\n<li>Hands-on technical expertise securing complex cloud environments and microservices architectures leveraging technologies like Kubernetes, Docker, and AWS / GCP</li>\n<li>Exposure to offensive security techniques like vulnerability testing, bug bounty, pen testing, and red team exercises</li>\n<li>Familiarity with AI/ML security risks such as prompt injection, data poisoning, model extraction, etc. and mitigations</li>\n<li>Experience building security tools, applications, and automated tools</li>\n<li>Solid foundational knowledge of both software and security engineering principles and are keen to continue learning</li>\n<li>Excellent communication skills, able to distill complex security topics for broad audiences</li>\n<li>Worked and thrived in fast-paced environments, and comfortable navigating ambiguity</li>\n</ul>\n<p>Annual Compensation Range:</p>\n<p>$300,000-$405,000 USD</p>\n<p>Logistics:</p>\n<ul>\n<li>Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience</li>\n<li>Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience</li>\n<li>Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position</li>\n<li>Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</li>\n<li>Visa sponsorship: We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. 
But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</li>\n</ul>\n<p>How to Apply:</p>\n<p>If you&#39;re interested in this role, please submit your application through our website. We look forward to reviewing your application!</p>\n<p>Note:</p>\n<p>Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you&#39;re ever unsure about a communication, don&#39;t click any links; visit anthropic.com/careers directly for confirmed position openings.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_8bf116df-95e","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com/","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/4502508008","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$300,000-$405,000 USD","x-skills-required":["application security","infrastructure security","cloud-based security","containerized environments","programming languages","Python","Rust","Go","Java","threat modeling","secure design reviews","vulnerability management","bug bounty program","security policies","standards","playbooks","security awareness training"],"x-skills-preferred":["hands-on technical expertise","complex cloud environments","microservices architectures","Kubernetes","Docker","AWS","GCP","offensive security 
techniques","vulnerability testing","pen testing","red team exercises","AI/ML security risks","prompt injection","data poisoning","model extraction","security tools","applications","automated tools","software engineering principles","communication skills"],"datePosted":"2026-04-18T15:35:09.635Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Remote-Friendly (Travel-Required) | San Francisco, CA | Seattle, WA | New York City, NY"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"application security, infrastructure security, cloud-based security, containerized environments, programming languages, Python, Rust, Go, Java, threat modeling, secure design reviews, vulnerability management, bug bounty program, security policies, standards, playbooks, security awareness training, hands-on technical expertise, complex cloud environments, microservices architectures, Kubernetes, Docker, AWS, GCP, offensive security techniques, vulnerability testing, pen testing, red team exercises, AI/ML security risks, prompt injection, data poisoning, model extraction, security tools, applications, automated tools, software engineering principles, communication skills","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":300000,"maxValue":405000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_9bf55fe3-b2b"},"title":"Detection & Response Engineer","description":"<p>We are seeking a skilled and proactive Detection &amp; Response Engineer to join our security team. 
In this critical role, you will be responsible for detecting, investigating, and responding to security incidents across our cloud-native and AI-focused infrastructure.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Monitor and analyse security alerts and logs to identify potential threats and anomalies</li>\n<li>Develop, implement, and maintain detection rules and correlation logic in our SIEM platform</li>\n<li>Conduct thorough investigations of security incidents, performing root cause analysis and impact assessments</li>\n<li>Lead incident response efforts, coordinating with relevant teams to contain and mitigate threats</li>\n<li>Create and maintain incident response playbooks and runbooks</li>\n<li>Perform regular threat hunting activities to proactively identify potential security risks</li>\n<li>Develop and refine metrics and reporting to track the effectiveness of detection and response capabilities</li>\n<li>Collaborate with other security teams to improve overall security posture and incident handling processes</li>\n<li>Stay current with emerging threats, attack techniques, and defensive strategies in the cloud-native and AI domains</li>\n</ul>\n<p><strong>Basic Qualifications</strong></p>\n<ul>\n<li>Bachelor&#39;s degree in Computer Science, Cybersecurity, or a related field</li>\n<li>3-5 years of experience in security operations, incident response, or a similar role</li>\n<li>Strong understanding of cybersecurity principles, attack techniques, and defensive strategies</li>\n<li>Proficiency in at least one scripting language (e.g., Python, Rust) for automation and tool development</li>\n<li>Experience with SIEM platforms and log analysis tools</li>\n<li>Familiarity with cloud environments (e.g., AWS, GCP, Azure) and their security features</li>\n<li>Knowledge of network protocols, system administration, and common attack vectors</li>\n<li>Strong analytical and problem-solving skills with attention to detail</li>\n<li>Excellent communication 
skills and ability to work effectively under pressure</li>\n</ul>\n<p><strong>Preferred Skills and Experience</strong></p>\n<ul>\n<li>Relevant security certifications (e.g., GCIH, GCIA, SANS)</li>\n<li>Experience with threat intelligence platforms and their integration into detection processes</li>\n<li>Familiarity with AI/ML security implications, particularly those outlined in the OWASP LLM Top 10</li>\n<li>Knowledge of software supply chain security and SBOM analysis</li>\n<li>Experience with containerized environments and Kubernetes security</li>\n<li>Experience in building custom security tools or integrations to enhance detection and response capabilities</li>\n<li>Interest in leveraging AI to improve threat detection and automate response processes</li>\n<li>Contributions to open-source security projects or threat research</li>\n<li>Experience with digital forensics and malware analysis</li>\n</ul>\n<p><strong>Compensation and Benefits</strong></p>\n<p>$200,000 - $340,000 USD</p>\n<p>Base salary is just one part of our total rewards package at xAI, which also includes equity, comprehensive medical, vision, and dental coverage, access to a 401(k) retirement plan, short &amp; long-term disability insurance, life insurance, and various other discounts and perks.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_9bf55fe3-b2b","directApply":true,"hiringOrganization":{"@type":"Organization","name":"xAI","sameAs":"https://www.xai.com/","logo":"https://logos.yubhub.co/xai.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/xai/jobs/4559148007","x-work-arrangement":"onsite","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":"$200,000 - $340,000 USD","x-skills-required":["cybersecurity principles","attack techniques","defensive strategies","scripting language","SIEM platforms","log analysis tools","cloud 
environments","network protocols","system administration","common attack vectors"],"x-skills-preferred":["relevant security certifications","threat intelligence platforms","AI/ML security implications","software supply chain security","containerized environments","Kubernetes security","custom security tools","digital forensics","malware analysis"],"datePosted":"2026-04-18T15:23:47.430Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Palo Alto, CA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"cybersecurity principles, attack techniques, defensive strategies, scripting language, SIEM platforms, log analysis tools, cloud environments, network protocols, system administration, common attack vectors, relevant security certifications, threat intelligence platforms, AI/ML security implications, software supply chain security, containerized environments, Kubernetes security, custom security tools, digital forensics, malware analysis","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":200000,"maxValue":340000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_bdf9dc88-fbe"},"title":"Infrastructure Security Engineer","description":"<p>We are seeking a talented and motivated Cloud/Infrastructure Security Engineer to join our security team.</p>\n<p>In this role, you will design, implement, and maintain secure cloud infrastructure and ensure the integrity of our cloud-native applications.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Design and implement secure cloud architectures across multiple cloud platforms (e.g., AWS, GCP, Azure)</li>\n<li>Develop and maintain Infrastructure as Code (IaC) templates with embedded security controls</li>\n<li>Conduct regular security assessments and audits of cloud infrastructure and services</li>\n<li>Implement 
and manage cloud security tools and services (e.g., CSPM, CWPP, CASB)</li>\n<li>Collaborate with development teams to ensure security best practices are integrated into CI/CD pipelines</li>\n<li>Monitor and respond to security events and incidents in cloud environments</li>\n<li>Develop and maintain cloud security policies, standards, and procedures</li>\n<li>Stay current with emerging cloud security threats and mitigation strategies</li>\n</ul>\n<p>Basic Qualifications:</p>\n<ul>\n<li>Bachelor&#39;s degree in Computer Science, Cybersecurity, or a related field</li>\n<li>3-5 years of experience in cloud security or related roles</li>\n<li>Strong understanding of cloud security principles, compliance frameworks, and best practices</li>\n<li>Proficiency in at least one cloud platform (AWS, GCP, or Azure) and associated security services</li>\n<li>Experience with Infrastructure as Code tools (e.g., Terraform, CloudFormation)</li>\n<li>Familiarity with containerization technologies and their security implications</li>\n<li>Knowledge of network security concepts and protocols</li>\n<li>Experience with scripting languages (e.g., Python, Bash) for automation and tool development</li>\n</ul>\n<p>Preferred Skills and Experience:</p>\n<ul>\n<li>Relevant security certifications (e.g., CCSP, CCSK, AWS Security Specialty)</li>\n<li>Experience with multi-cloud environments and cloud-to-cloud security</li>\n<li>Knowledge of DevSecOps practices and tools</li>\n<li>Experience with Kubernetes and container security</li>\n<li>Experience in building custom cloud security tools or integrations</li>\n<li>Interest in leveraging AI for cloud security monitoring and automation</li>\n<li>Contributions to open-source cloud security projects</li>\n<li>Experience with securing AI/ML workloads in cloud environments</li>\n</ul>\n<p>Compensation and Benefits:</p>\n<p>$200,000 - $340,000 USD</p>\n<p>Base salary is just one part of our total rewards package at xAI, which also includes equity, 
comprehensive medical, vision, and dental coverage, access to a 401(k) retirement plan, short &amp; long-term disability insurance, life insurance, and various other discounts and perks.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_bdf9dc88-fbe","directApply":true,"hiringOrganization":{"@type":"Organization","name":"xAI","sameAs":"https://www.xai.com/","logo":"https://logos.yubhub.co/xai.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/xai/jobs/5090998007","x-work-arrangement":"onsite","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":"$200,000 - $340,000 USD","x-skills-required":["Cloud security principles","Compliance frameworks","Best practices","Cloud platform (AWS, GCP, or Azure)","Infrastructure as Code tools (Terraform, CloudFormation)"],"x-skills-preferred":["Relevant security certifications (CCSP, CCSK, AWS Security Specialty)","Multi-cloud environments and cloud-to-cloud security","DevSecOps practices and tools","Kubernetes and container security","Building custom cloud security tools or integrations"],"datePosted":"2026-04-18T15:23:29.833Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Palo Alto, CA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Cloud security principles, Compliance frameworks, Best practices, Cloud platform (AWS, GCP, or Azure), Infrastructure as Code tools (Terraform, CloudFormation), Relevant security certifications (CCSP, CCSK, AWS Security Specialty), Multi-cloud environments and cloud-to-cloud security, DevSecOps practices and tools, Kubernetes and container security, Building custom cloud security tools or 
integrations","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":200000,"maxValue":340000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_8a14470f-8ac"},"title":"Senior Software Engineer – Platform","description":"<p>As a Senior Software Engineer – Platform / Infrastructure, you will join the team responsible for the core infrastructure that enables dozens (and soon hundreds) of microservices to run safely, reliably, and at scale.</p>\n<p>This is not a traditional DevOps or infra-only role. This is a developer-first position, focused on building production-grade software that powers our internal platform, automation, and operational systems.</p>\n<p>You will design, build, and own critical platform services that abstract infrastructure complexity away from product teams while ensuring reliability, scalability, and performance across Yuno&#39;s ecosystem.</p>\n<p><strong>Software Engineering (Core Focus)</strong></p>\n<ul>\n<li>Design, build, and maintain internal platform services and tools using Python and Node.js</li>\n<li>Develop APIs, automation services, CLIs, background workers, and platform control components</li>\n<li>Build tooling that abstracts infrastructure complexity away from product teams</li>\n<li>Write clean, testable, production-grade code powering core platform systems</li>\n</ul>\n<p><strong>Platform &amp; Infrastructure Engineering</strong></p>\n<ul>\n<li>Operate and evolve AWS and Kubernetes environments running critical workloads</li>\n<li>Build and maintain GitOps workflows and deployment strategies (canary, blue/green, progressive delivery)</li>\n<li>Define and manage infrastructure using Terraform</li>\n<li>Contribute to deployment, provisioning, observability, reliability, and security automation systems</li>\n</ul>\n<p><strong>Ownership &amp; Reliability</strong></p>\n<ul>\n<li>Own systems 
end-to-end, including design, implementation, deployment, and operation</li>\n<li>Participate in production troubleshooting and incident analysis</li>\n<li>Continuously improve platform reliability, performance, and developer experience</li>\n<li>Help define platform standards, best practices, and engineering patterns</li>\n</ul>\n<p><strong>What This Role Is Not</strong></p>\n<ul>\n<li>Not a “click-ops” infrastructure role</li>\n<li>Not a pure YAML or Terraform-only position</li>\n<li>Not a role focused on maintaining existing systems</li>\n<li>This role is about building, coding, automating, and owning critical platform components.</li>\n</ul>\n<p><strong>Skills you need</strong></p>\n<ul>\n<li>Senior experience as a Software Engineer</li>\n<li>Strong experience with Python and Node.js</li>\n<li>Solid understanding of APIs, async systems, and distributed systems</li>\n<li>Experience with Linux and cloud environments (preferably AWS)</li>\n<li>Ability to read and reason about infrastructure code</li>\n<li>Strong debugging skills and production mindset</li>\n</ul>\n<p><strong>Strong Plus</strong></p>\n<ul>\n<li>GCP experience</li>\n<li>Production experience with Kubernetes</li>\n<li>Experience with Terraform or Infrastructure as Code</li>\n<li>Familiarity with CI/CD and GitOps</li>\n</ul>\n<p><strong>Nice to Have</strong></p>\n<ul>\n<li>Experience with advanced deployment or traffic strategies</li>\n<li>Observability tooling experience (logs, metrics, tracing)</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_8a14470f-8ac","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Yuno","sameAs":"https://www.yuno.com/","logo":"https://logos.yubhub.co/yuno.com.png"},"x-apply-url":"https://jobs.lever.co/yuno/690dd658-952d-414e-9476-a5e845b0c453","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Python","Node.js","APIs","async systems","distributed systems","Linux","cloud environments","AWS","infrastructure code","debugging skills"],"x-skills-preferred":["GCP","Kubernetes","Terraform","CI/CD","GitOps"],"datePosted":"2026-04-17T13:11:53.939Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Europe"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, Node.js, APIs, async systems, distributed systems, Linux, cloud environments, AWS, infrastructure code, debugging skills, GCP, Kubernetes, Terraform, CI/CD, GitOps"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_5c7e46c8-c5c"},"title":"Application Security Intern","description":"<p>We&#39;re looking for a curious and motivated Application Security Intern to help us build secure products and development practices at VGS. 
As an Application Security Intern, you will partner with security and engineering teams to evaluate application risk, improve secure software development workflows, and help developers ship software safely in an environment that handles highly sensitive payment and identity data.</p>\n<p>Your responsibilities will include:</p>\n<ul>\n<li>Supporting application security reviews for services, APIs, and new product features across the VGS platform.</li>\n<li>Helping identify, validate, and track security findings from static analysis, dependency scanning, container scanning, and other security testing tools.</li>\n<li>Participating in threat modeling and secure design discussions with engineering teams during feature development.</li>\n<li>Evaluating the security of AI-enabled development workflows, including internal AI systems integrated into the SDLC.</li>\n<li>Assisting with manual testing and validation of web application and API security issues.</li>\n<li>Helping improve secure SDLC processes by contributing to developer guidance, secure coding resources, and repeatable review checklists.</li>\n<li>Working with engineers to understand remediation options and clearly document security risks and recommendations.</li>\n<li>Contributing to improving security tooling and guardrails in CI/CD and development workflows.</li>\n</ul>\n<p>We&#39;re looking for someone with a strong interest in secure software design, cloud-native architectures, and automation. 
You should have a foundational understanding of application security concepts, such as the OWASP Top 10, API security, authentication and authorization, secure coding, and common software vulnerabilities.</p>\n<p>At VGS, we have a remote-first philosophy, and we&#39;re looking for someone who is comfortable working independently and collaboratively as part of a team.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_5c7e46c8-c5c","directApply":true,"hiringOrganization":{"@type":"Organization","name":"VGS","sameAs":"https://www.vgs.com","logo":"https://logos.yubhub.co/vgs.com.png"},"x-apply-url":"https://jobs.lever.co/verygoodsecurity/32fe92a6-13d5-4132-b77c-a7a5ed74f38b","x-work-arrangement":"remote","x-experience-level":"entry","x-job-type":"internship","x-salary-range":null,"x-skills-required":["application security","secure software development","cloud-native architectures","automation","OWASP Top 10","API security","authentication and authorization","secure coding","common software vulnerabilities"],"x-skills-preferred":["LLMs","threat modeling","Burp Suite","SAST/DAST tools","CI/CD pipelines","Docker/Kubernetes","cloud environments"],"datePosted":"2026-04-17T13:08:01.601Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco"}},"jobLocationType":"TELECOMMUTE","employmentType":"INTERN","occupationalCategory":"Engineering","industry":"Technology","skills":"application security, secure software development, cloud-native architectures, automation, OWASP Top 10, API security, authentication and authorization, secure coding, common software vulnerabilities, LLMs, threat modeling, Burp Suite, SAST/DAST tools, CI/CD pipelines, Docker/Kubernetes, cloud 
environments"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_6bed514e-c6d"},"title":"Physical Security Systems Engineer","description":"<p>We are seeking a highly skilled Physical Security Systems Engineer responsible for the design, implementation, integration, and lifecycle management of enterprise physical security technologies. This role will engage in architecture and engineering efforts across video surveillance, access control, intrusion detection, perimeter security, and identity integrations, with a strong emphasis on cloud-managed systems and identity integration.</p>\n<p>Key responsibilities include:</p>\n<ul>\n<li>Design and engineer enterprise-wide physical security systems including video surveillance, access control, intrusion detection, license plate recognition, and intercom and visitor management systems</li>\n<li>Architect and manage Security System environments (cameras, access control, sensors, intercoms)</li>\n<li>Support integrations for Single Sign-On, SCIM user provisioning, Role-based access control, and Conditional Access policies</li>\n<li>Collaborate with network engineers to implement QoS policies, ensure proper bandwidth planning, and maintain secure firewall rules</li>\n<li>Understand and support DNS, DHCP, NTP, TLS certificates, secure device enrollment, and troubleshoot Layer 1–Layer 3 network issues affecting security systems</li>\n</ul>\n<p>Qualifications include:</p>\n<ul>\n<li>High school diploma or equivalent required; associate or bachelor&#39;s degree in electrical engineering, computer networking, Information Systems or a related field preferred</li>\n<li>5+ years of experience in physical security systems engineering</li>\n<li>Hands-on experience with cameras and/or access control</li>\n<li>Experience integrating systems with Microsoft Entra ID (Azure AD) or similar environments</li>\n</ul>\n<p>Preferred skills include knowledge of network design 
and function, electrical systems, and cloud environments, as well as certifications in Lenel, Verkada, CCure, Genetec, and Avigilon.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_6bed514e-c6d","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Saronic Technologies","sameAs":"https://www.saronictech.com/","logo":"https://logos.yubhub.co/saronictech.com.png"},"x-apply-url":"https://jobs.lever.co/saronic/3b82518b-cd97-4aaf-87bb-e17cc5b760a2","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["physical security systems engineering","video surveillance","access control","intrusion detection","perimeter security","identity integrations","cloud-managed systems","identity integration","network design","electrical systems","cloud environments"],"x-skills-preferred":["Lenel","Verkada","CCure","Genetec","Avigilon"],"datePosted":"2026-04-17T12:57:48.399Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"physical security systems engineering, video surveillance, access control, intrusion detection, perimeter security, identity integrations, cloud-managed systems, identity integration, network design, electrical systems, cloud environments, Lenel, Verkada, CCure, Genetec, Avigilon"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_734a57ad-497"},"title":"Security Engineer","description":"<p>We&#39;re seeking a senior-level Security Engineer to own the design, implementation, and continuous improvement of security guardrails across our cloud infrastructure. 
In this role, you&#39;ll build the systems and patterns that enable every team at Saronic to move fast and ship with confidence, with security baked in from the start. You will be the technical authority on how we architect, govern, and defend our AWS environments across commercial and GovCloud.</p>\n<p><strong>Key Responsibilities</strong></p>\n<ul>\n<li>Own the security architecture for Saronic&#39;s AWS environments, including multi-account strategy, network segmentation, identity architecture, and data protection across commercial AWS and AWS GovCloud</li>\n</ul>\n<ul>\n<li>Design and maintain secure-by-default Terraform modules and IaC standards that teams adopt as the standard path, enforcing least privilege, secure defaults, and compliance requirements</li>\n</ul>\n<ul>\n<li>Implement preventive controls (SCPs, permission boundaries, policy-as-code) and detective controls (Config rules, CloudTrail analysis, GuardDuty) as a unified, layered security model</li>\n</ul>\n<ul>\n<li>Design and enforce IAM patterns across AWS accounts, services, and workloads including least-privilege policies, permission boundaries, cross-account access, federation, and service-to-service authentication</li>\n</ul>\n<ul>\n<li>Implement and govern secrets management using tools such as AWS Secrets Manager or Vault, integrated into CI/CD and runtime environments</li>\n</ul>\n<ul>\n<li>Partner with DevOps and Platform Engineering to embed security into CI/CD pipelines, infrastructure provisioning, and deployment workflows</li>\n</ul>\n<ul>\n<li>Build automated compliance validation into infrastructure pipelines and replace manual security gates with automated guardrails wherever possible</li>\n</ul>\n<ul>\n<li>Create self-service security tooling and patterns that allow teams to operate with speed and autonomy while maintaining compliance</li>\n</ul>\n<ul>\n<li>Integrate logging, monitoring, and alerting across cloud infrastructure to validate control effectiveness and detect 
misconfigurations or threats</li>\n</ul>\n<ul>\n<li>Build and tune cloud-native detections using CloudTrail, GuardDuty, Config, and SIEM integrations</li>\n</ul>\n<ul>\n<li>Support incident response for cloud security events, drive root-cause analysis, and translate findings into improved guardrails and controls</li>\n</ul>\n<p><strong>Required Qualifications:</strong></p>\n<ul>\n<li>6+ years of hands-on experience in cloud security engineering, infrastructure security, DevSecOps, or a closely related security engineering role</li>\n</ul>\n<ul>\n<li>Expert-level proficiency with Terraform, including module design, state management, policy-as-code, and managing complex multi-environment configurations</li>\n</ul>\n<ul>\n<li>Deep expertise in AWS security services and architecture, including IAM, Organizations, SCPs, Control Tower, CloudTrail, Config, GuardDuty, Security Hub, KMS, and VPC security</li>\n</ul>\n<ul>\n<li>Demonstrated experience building security guardrails and reusable infrastructure patterns that engineering teams adopt without friction</li>\n</ul>\n<ul>\n<li>Strong experience with CI/CD pipeline security, IaC review processes, and automated compliance validation</li>\n</ul>\n<ul>\n<li>Experience operating in AWS GovCloud or FedRAMP-regulated cloud environments</li>\n</ul>\n<ul>\n<li>Strong proficiency in Python, Go, Rust, or equivalent languages for building security automation and tooling</li>\n</ul>\n<ul>\n<li>Ability to obtain and maintain a security clearance</li>\n</ul>\n<p><strong>Preferred Qualifications:</strong></p>\n<ul>\n<li>Experience in defence, aerospace, robotics, autonomy, or other high-assurance environments</li>\n</ul>\n<ul>\n<li>Experience designing multi-account AWS landing zones and organisational security architectures from the ground up</li>\n</ul>\n<ul>\n<li>Hands-on experience with Kubernetes security, container security, and service mesh security in cloud-native environments</li>\n</ul>\n<ul>\n<li>Familiarity with NIST SP 
800-171, NIST SP 800-53, FedRAMP, or Cloud Computing SRG Impact Levels</li>\n</ul>\n<ul>\n<li>Experience with infrastructure drift detection, automated remediation, and continuous compliance monitoring</li>\n</ul>\n<ul>\n<li>Relevant certifications such as AWS Security Specialty, AWS Solutions Architect Professional, HashiCorp Terraform Associate/Engineer, CCSP, or CISSP</li>\n</ul>\n<p><strong>Additional Information</strong></p>\n<p>Benefits: Medical Insurance: Comprehensive health insurance plans covering a range of services. Saronic pays 100% of the premium for employees and 80% for dependents. Dental and Vision Insurance: Coverage for routine dental check-ups, orthodontics, and vision care. Saronic pays 100% of the premium under the basic plan for employees and 80% for dependents. Time Off: Generous PTO and Holidays. Parental Leave: Paid maternity and paternity leave to support new parents. Competitive Salary: Industry-standard salaries with opportunities for performance-based bonuses. Retirement Plan: 401(k) plan. Stock Options: Equity options to give employees a stake in the company’s success. Life and Disability Insurance: Basic life insurance and short- and long-term disability coverage. Pet Insurance: Discounted pet insurance options including 24/7 Telehealth helpline. Additional Perks: Free lunch benefit and unlimited free drinks and snacks in the office</p>\n<p>This role requires access to export-controlled information or items that require “U.S. Person” status. As defined by U.S. law, individuals who are any one of the following are considered to be a “U.S. Person”: (1) U.S. citizens, (2) legal permanent residents (a.k.a. green card holders), and (3) certain protected classes of asylees and refugees, as defined in 8 U.S.C. 
1324b(a)(3).</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_734a57ad-497","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Saronic Technologies","sameAs":"https://www.saronictechnologies.com/","logo":"https://logos.yubhub.co/saronictechnologies.com.png"},"x-apply-url":"https://jobs.lever.co/saronic/18310005-a24b-4f4c-9538-465df614c4fa","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Terraform","AWS security services","IAM","Organizations","SCPs","Control Tower","CloudTrail","Config","GuardDuty","Security Hub","KMS","VPC security","Python","Go","Rust","CI/CD pipeline security","IaC review processes","automated compliance validation","AWS GovCloud","FedRAMP-regulated cloud environments"],"x-skills-preferred":["Kubernetes security","container security","service mesh security","NIST SP 800-171","NIST SP 800-53","FedRAMP","Cloud Computing SRG Impact Levels","infrastructure drift detection","automated remediation","continuous compliance monitoring","AWS Security Specialty","AWS Solutions Architect Professional","HashiCorp Terraform Associate/Engineer","CCSP","CISSP"],"datePosted":"2026-04-17T12:56:38.157Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Terraform, AWS security services, IAM, Organizations, SCPs, Control Tower, CloudTrail, Config, GuardDuty, Security Hub, KMS, VPC security, Python, Go, Rust, CI/CD pipeline security, IaC review processes, automated compliance validation, AWS GovCloud, FedRAMP-regulated cloud environments, Kubernetes security, container security, service mesh security, NIST SP 800-171, NIST SP 800-53, FedRAMP, Cloud Computing SRG Impact Levels, infrastructure drift 
detection, automated remediation, continuous compliance monitoring, AWS Security Specialty, AWS Solutions Architect Professional, HashiCorp Terraform Associate/Engineer, CCSP, CISSP"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_baec12df-551"},"title":"Technical Marketing Engineer","description":"<p>About Mistral AI</p>\n<p>At Mistral AI, we believe in the power of AI to simplify tasks, save time, and enhance learning and creativity. Our technology is designed to integrate seamlessly into daily working life.</p>\n<p>We are a distributed team with offices in France, USA, UK, Germany, and Singapore. We are a low-ego and team-spirited organisation.</p>\n<p>About the Role</p>\n<p>As a Technical Marketing Engineer (TME), you will bridge the gap between Mistral AI&#39;s science/engineering organisations and our marketing teams. You will create technical content to educate enterprise decision-makers, align technical capabilities with business goals, and accelerate sales cycles.</p>\n<p>Responsibilities</p>\n<ul>\n<li>Create and Deliver Technical Content: Develop model/product technical launch materials, technical proof points, presentation decks, demo videos, webinars, workshops, blogs, whitepapers, and sales training materials.</li>\n<li>Enable Sales and Partners: Equip sales teams and partners with technical knowledge to engage in deeper, more credible conversations with technical stakeholders.</li>\n<li>Support Model Launches: Collaborate on model launches, ensuring technical messaging is clear and impactful.</li>\n<li>Engage with Analysts and Industry Leaders: Participate in analyst briefings, technical advisory boards (TABs), and industry standards discussions to position Mistral AI as a thought leader.</li>\n<li>Build Trust and Drive Adoption: Work with solutions architects and developer relations to ensure seamless integration and implementation of our solutions through targeted technical 
content.</li>\n<li>Interface with Science and Engineering: Act as the technical liaison between engineering, science, and marketing teams, translating complex LLM concepts (including pre-training and post-training techniques) into actionable insights for enterprise audiences.</li>\n</ul>\n<p>Who You Are</p>\n<ul>\n<li>Experience: 3+ years in technical marketing, solutions engineering, or a similar role, preferably in AI/ML or enterprise software.</li>\n<li>Technical Skills: Ability to understand and communicate complex technical concepts to both technical and non-technical audiences. Familiarity with enterprise AI solutions, cloud environments, and technical sales enablement.</li>\n<li>Mindset: Collaborative, creative, and low-ego. Passionate about AI and its potential to transform industries.</li>\n</ul>\n<p>What We Offer</p>\n<ul>\n<li>Competitive cash salary and equity</li>\n<li>Daily lunch vouchers: Swile meal vouchers with 10,83€ per worked day, incl 60% offered by company</li>\n<li>Sport: Enjoy discounted access to gyms and fitness studios through our Wellpass partnership</li>\n<li>Transportation: Monthly contribution to a mobility pass via Betterway</li>\n<li>Health: Full health insurance for you and your family</li>\n<li>Parental: Generous parental leave policy</li>\n<li>Visa sponsorship</li>\n<li>Coaching: we offer BetterUp coaching on a voluntary basis</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_baec12df-551","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Mistral AI","sameAs":"https://mistral.ai","logo":"https://logos.yubhub.co/mistral.ai.png"},"x-apply-url":"https://jobs.lever.co/mistral/942f8627-3079-416b-a2a7-bf651b336acb","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Ability to understand and communicate complex 
technical concepts to both technical and non-technical audiences","Familiarity with enterprise AI solutions, cloud environments, and technical sales enablement","Large language models (LLMs), including pre-training and post-training techniques","AI/ML or enterprise software","Technical marketing, solutions engineering, or a similar role"],"x-skills-preferred":[],"datePosted":"2026-04-17T12:48:12.511Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Paris"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Ability to understand and communicate complex technical concepts to both technical and non-technical audiences, Familiarity with enterprise AI solutions, cloud environments, and technical sales enablement, Large language models (LLMs), including pre-training and post-training techniques, AI/ML or enterprise software, Technical marketing, solutions engineering, or a similar role"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_5242ca9a-088"},"title":"Staff Automation Engineer","description":"<p>We are looking for a Staff Automation Engineer to have a huge impact on the Business Systems, Security, Production Engineering and IT functions. This role is for a seasoned engineer who thrives on solving complex operational challenges, enhancing system security and stability, and improving efficiency through automation and best practices using AI technologies.</p>\n<p>Your day-to-day will involve implementing Agentic AI and LLM-powered workflows using tools like Tines, AWS Agentcore, AWS Bedrock, Claude Code, etc. You will deploy systems with Infrastructure as Code (IaC) (i.e. Terraform) and build and maintain automation workflows across key enterprise platforms (i.e. Atlassian, Okta, Google Workspace, Slack, Zoom, knowledge management systems), cybersecurity systems (i.e. 
SIEM, GRC platforms, Data Security Platforms, etc.), and cloud environments (AWS, GCP).</p>\n<p>You will build AI-driven chatbots or intelligent agents that automate tasks, support conversational workflows, and integrate with enterprise applications. You will partner with IT, Security, GRC, Procurement, and business teams to automate operational tasks and processes to reduce toil, improve efficiency and enable business.</p>\n<p>You will develop integrations using REST APIs, JSON, webhooks, and scripting languages (JavaScript, Python). You will follow established automation and AI standards for quality, security, and governance; provide improvements where appropriate.</p>\n<p>You will troubleshoot, maintain, and optimize existing workflows to improve stability and performance. You will document designs, workflows, configurations, and operational procedures.</p>\n<p>You will participate in code reviews, technical discussions, and team-based learning to uplift engineering quality and consistency.</p>\n<p>You will work with various tooling in Security, IT, and Production Engineering.</p>\n<p>This role requires 10+ years of experience in automation engineering, systems integration, or workflow development. You should have experience with automation platforms such as Tines, Retool, Superblocks, n8n, etc. You should also have hands-on experience with Terraform and containerization technologies.</p>\n<p>You should have experience developing LLM-powered automations, conversational interfaces, or Agentic AI assistants. You should have knowledge of Git and modern version control practices.</p>\n<p>You should have strong skills in REST APIs, JSON, webhooks, JavaScript, and Python. 
You should also have familiarity with identity systems (Okta, SCIM) and RBAC concepts.</p>\n<p>You should have familiarity with cloud environments such as Google Cloud Platform (GCP) and Amazon Web Services (AWS).</p>\n<p>You should be able to break down problems, collaborate cross-functionally, and deliver solutions with moderate guidance.</p>\n<p>You should have strong communication skills and the ability to translate functional requirements into technical outputs.</p>\n<p>Preferred experience includes familiarity with data platform and database technologies (e.g., Snowflake, PostgreSQL, Cassandra, DynamoDB).</p>\n<p>Work perks at Greenlight include medical, dental, vision, and HSA match, paid life insurance, AD&amp;D, and disability benefits, traditional 401k with company match, unlimited PTO, paid company holidays and pop-up bonus holidays, professional development stipends, mental health resources, 1:1 financial planners, fertility healthcare, 100% paid parental and caregiving leave, plus cleaning service and meals during your leave, flexible WFH, both remote and in-office opportunities, fully stocked kitchen, catered lunches, and occasional in-office happy hours, employee resource groups.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_5242ca9a-088","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Greenlight","sameAs":"https://www.greenlight.com/","logo":"https://logos.yubhub.co/greenlight.com.png"},"x-apply-url":"https://jobs.lever.co/greenlight/d85a9c34-4434-4f6d-8f01-bccb9521c036","x-work-arrangement":"hybrid","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"$180,000-$225,000","x-skills-required":["Agentic AI","LLM-powered workflows","Tines","AWS Agentcore","AWS Bedrock","Claude Code","Infrastructure as Code (IaC)","Terraform","REST 
APIs","JSON","webhooks","JavaScript","Python","Git","modern version control practices","identity systems","RBAC concepts","cloud environments","Google Cloud Platform (GCP)","Amazon Web Services (AWS)"],"x-skills-preferred":["data platform and database technologies","Snowflake","PostgreSQL","Cassandra","DynamoDB"],"datePosted":"2026-04-17T12:35:33.366Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"New York"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Finance","skills":"Agentic AI, LLM-powered workflows, Tines, AWS Agentcore, AWS Bedrock, Claude Code, Infrastructure as Code (IaC), Terraform, REST APIs, JSON, webhooks, JavaScript, Python, Git, modern version control practices, identity systems, RBAC concepts, cloud environments, Google Cloud Platform (GCP), Amazon Web Services (AWS), data platform and database technologies, Snowflake, PostgreSQL, Cassandra, DynamoDB","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":180000,"maxValue":225000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_dd034e01-768"},"title":"Senior Software Engineer, Backend (AI Agent)","description":"<p>Join us on this thrilling journey to revolutionize the workforce with AI.\nThe future of work is here, and it&#39;s at Cresta.</p>\n<p>As a Senior Software Engineer, your goal will be to ensure that our AI Agents are backed by the most reliable and scalable server solutions. 
This includes designing and maintaining the server architecture that handles real-world, high-volume interactions and ensures high availability and performance.</p>\n<p>This is a unique opportunity to shape the future of AI at Cresta by solving complex problems and bringing breakthrough AI advancements into production environments.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Design, develop, and maintain scalable and robust backend architectures for Cresta&#39;s AI Agent solutions and proprietary models.</li>\n<li>Collaborate with cross-functional teams including frontend engineers, machine learning engineers to ensure seamless integration of AI Agents into Cresta&#39;s customer solutions.</li>\n<li>Lead initiatives to enhance system scalability and reliability in production environments, focusing on backend services that support AI functionalities.</li>\n<li>Drive efforts to optimize server response times, process large volumes of data efficiently, and maintain high system availability.</li>\n<li>Innovate and implement security measures, cost-reduction strategies, and performance improvements in backend systems supporting AI Agents.</li>\n</ul>\n<p>Qualifications We Value:</p>\n<ul>\n<li>Bachelor&#39;s degree in Computer Science or a related field.</li>\n<li>5+ years of experience in backend system architecture, cloud services, or related technology fields.</li>\n<li>Proficient in designing and maintaining clear and robust APIs with a strong understanding of protocols including gRPC, REST.</li>\n<li>Previous experience working with Virtual Agent or AI Agent systems.</li>\n<li>Experience in high-performance database schema design and query optimization, including knowledge of SQL and NoSQL databases.</li>\n<li>Experience in containerized application deployment using Kubernetes and Docker in microservices architectures.</li>\n<li>Experience with cloud environments such as AWS, Azure, or Google Cloud, with a strong understanding of cloud security and compliance 
standards.</li>\n</ul>\n<p>Perks &amp; Benefits:</p>\n<ul>\n<li>Comprehensive medical, dental, and vision coverage with plans to fit you and your family.</li>\n<li>Flexible PTO to take the time you need, when you need it.</li>\n<li>Paid parental leave for all new parents welcoming a new child.</li>\n<li>Retirement savings plan to help you plan for the future.</li>\n<li>Remote work setup budget to help you create a productive home office.</li>\n<li>Monthly wellness and communication stipend to keep you connected and balanced.</li>\n<li>In-office meal program and commuter benefits provided for onsite employees.</li>\n</ul>\n<p>Compensation at Cresta:</p>\n<ul>\n<li>Cresta&#39;s approach to compensation is simple: recognize impact, reward excellence, and invest in our people. We offer competitive, location-based pay that reflects the market and what each individual brings to the table.</li>\n<li>The posted base salary range represents what we expect to pay for this role in a given location. Final offers are shaped by factors like experience, skills, education, and geography. 
In addition to base pay, total compensation includes equity and a comprehensive benefits package for you and your family.</li>\n</ul>\n<p>Salary Range: $205,000–$270,000 + Offers Equity</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_dd034e01-768","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Cresta","sameAs":"https://www.cresta.ai/","logo":"https://logos.yubhub.co/cresta.ai.png"},"x-apply-url":"https://job-boards.greenhouse.io/cresta/jobs/5133464008","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$205,000–$270,000 + Offers Equity","x-skills-required":["backend system architecture","cloud services","gRPC","REST","Virtual Agent","AI Agent systems","high-performance database schema design","query optimization","SQL","NoSQL databases","containerized application deployment","Kubernetes","Docker","microservices architectures","cloud environments","AWS","Azure","Google Cloud","cloud security","compliance standards"],"x-skills-preferred":[],"datePosted":"2026-04-17T12:27:37.299Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"United States (Remote)"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"backend system architecture, cloud services, gRPC, REST, Virtual Agent, AI Agent systems, high-performance database schema design, query optimization, SQL, NoSQL databases, containerized application deployment, Kubernetes, Docker, microservices architectures, cloud environments, AWS, Azure, Google Cloud, cloud security, compliance 
standards","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":205000,"maxValue":270000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_52ba7bfb-60e"},"title":"Senior Software Engineer, Backend (AI Agent Quality)","description":"<p>Join us on a mission to revolutionize the workforce with AI.</p>\n<p>At Cresta, the AI Agent team is on a mission to create state-of-the-art AI Agents that solve practical problems for our customers. We are focused on leveraging the latest technologies in Large Language Models (LLMs) and AI Agent systems, while ensuring that the solutions we develop are cost-effective, secure, and reliable.</p>\n<p>As a Senior Software Engineer, your goal will be to ensure that our AI Agents are backed by the most reliable and scalable server solutions. This includes designing and maintaining the server architecture that handles real-world, high-volume interactions and ensures high availability and performance.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Design, develop, and maintain scalable and robust backend architectures for Cresta’s AI Agent solutions and proprietary models.</li>\n<li>Collaborate with cross-functional teams including frontend engineers, machine learning engineers to ensure seamless integration of AI Agents into Cresta’s customer solutions.</li>\n<li>Lead initiatives to enhance system scalability and reliability in production environments, focusing on backend services that support AI functionalities.</li>\n<li>Drive efforts to optimize server response times, process large volumes of data efficiently, and maintain high system availability.</li>\n<li>Innovate and implement security measures, cost-reduction strategies, and performance improvements in backend systems supporting AI Agents.</li>\n</ul>\n<p>Qualifications We Value:</p>\n<ul>\n<li>Bachelor’s degree in Computer Science or a related 
field.</li>\n<li>5+ years of experience in backend system architecture, cloud services, or related technology fields.</li>\n<li>Proficient in designing and maintaining clear and robust APIs with a strong understanding of protocols including gRPC, REST.</li>\n<li>Previous experience working with Virtual Agent or AI Agent systems.</li>\n<li>Experience in high-performance database schema design and query optimization, including knowledge of SQL and NoSQL databases.</li>\n<li>Experience in containerized application deployment using Kubernetes and Docker in microservices architectures.</li>\n<li>Experience with cloud environments such as AWS, Azure, or Google Cloud, with a strong understanding of cloud security and compliance standards.</li>\n</ul>\n<p>Perks &amp; Benefits:</p>\n<ul>\n<li>We offer Cresta employees a variety of medical, dental, and vision plans, designed to fit you and your family’s needs.</li>\n<li>Paid parental leave to support you and your family.</li>\n<li>Monthly Health &amp; Wellness allowance.</li>\n<li>Work from home office stipend to help you succeed in a remote environment.</li>\n<li>Lunch reimbursement for in-office employees.</li>\n<li>PTO: 3 weeks in Canada.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_52ba7bfb-60e","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Cresta","sameAs":"https://www.cresta.ai/","logo":"https://logos.yubhub.co/cresta.ai.png"},"x-apply-url":"https://job-boards.greenhouse.io/cresta/jobs/4062453008","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["backend system architecture","cloud services","APIs","gRPC","REST","Virtual Agent","AI Agent systems","high-performance database schema design","query optimization","SQL","NoSQL databases","containerized application 
deployment","Kubernetes","Docker","microservices architectures","cloud environments","AWS","Azure","Google Cloud","cloud security","compliance standards"],"x-skills-preferred":[],"datePosted":"2026-04-17T12:25:52.823Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Canada (Remote)"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"backend system architecture, cloud services, APIs, gRPC, REST, Virtual Agent, AI Agent systems, high-performance database schema design, query optimization, SQL, NoSQL databases, containerized application deployment, Kubernetes, Docker, microservices architectures, cloud environments, AWS, Azure, Google Cloud, cloud security, compliance standards"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_c3c253ad-38b"},"title":"Software Engineer, Backend (AI Agent)","description":"<p>Join us on this thrilling journey to revolutionize the workforce with AI. The AI Agent team at Cresta is on a mission to create state-of-the-art AI Agents that solve practical problems for our customers. We are focused on leveraging the latest technologies in Large Language Models (LLMs) and AI Agent systems, while ensuring that the solutions we develop are cost-effective, secure, and reliable.</p>\n<p><strong>About the Role:</strong> As a Software Engineer, your goal will be to ensure that our AI Agents are backed by the most reliable and scalable server solutions. 
This includes designing and maintaining the server architecture that handles real-world, high-volume interactions and ensures high availability and performance.</p>\n<p><strong>Responsibilities:</strong></p>\n<ul>\n<li>Design, develop, and maintain scalable and robust backend architectures for Cresta’s AI Agent solutions and proprietary models.</li>\n<li>Collaborate with cross-functional teams including frontend engineers, machine learning engineers to ensure seamless integration of AI Agents into Cresta’s customer solutions.</li>\n<li>Lead initiatives to enhance system scalability and reliability in production environments, focusing on backend services that support AI functionalities.</li>\n<li>Drive efforts to optimize server response times, process large volumes of data efficiently, and maintain high system availability.</li>\n<li>Innovate and implement security measures, cost-reduction strategies, and performance improvements in backend systems supporting AI Agents.</li>\n</ul>\n<p><strong>Qualifications We Value:</strong></p>\n<ul>\n<li>Bachelor’s degree in Computer Science or a related field.</li>\n<li>2+ years of experience in backend system architecture, cloud services, or related technology fields.</li>\n<li>Knowledge in designing and maintaining clear and robust APIs with a strong understanding of protocols including gRPC, REST.</li>\n<li>Experience in high-performance database schema design and query optimization, including knowledge of SQL and NoSQL databases.</li>\n<li>Experience in containerized application deployment using Kubernetes and Docker in microservices architectures.</li>\n<li>Experience with cloud environments such as AWS, Azure, or Google Cloud, with a strong understanding of cloud security and compliance standards.</li>\n<li>Bonus: experience working with Virtual Agent or AI Agent systems.</li>\n</ul>\n<p><strong>Perks &amp; Benefits:</strong></p>\n<ul>\n<li>We offer Cresta employees a variety of medical, dental, and vision plans, 
designed to fit you and your family’s needs.</li>\n<li>Paid parental leave to support you and your family.</li>\n<li>Monthly Health &amp; Wellness allowance.</li>\n<li>Work from home office stipend to help you succeed in a remote environment.</li>\n<li>Lunch reimbursement for in-office employees.</li>\n<li>PTO: 3 weeks in Canada.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_c3c253ad-38b","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Cresta","sameAs":"https://www.cresta.ai/","logo":"https://logos.yubhub.co/cresta.ai.png"},"x-apply-url":"https://job-boards.greenhouse.io/cresta/jobs/4325729008","x-work-arrangement":"remote","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["backend system architecture","cloud services","APIs","gRPC","REST","database schema design","query optimization","SQL","NoSQL databases","containerized application deployment","Kubernetes","Docker","microservices architectures","cloud environments","AWS","Azure","Google Cloud","cloud security","compliance standards"],"x-skills-preferred":[],"datePosted":"2026-04-17T12:25:22.648Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Canada (Remote)"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"backend system architecture, cloud services, APIs, gRPC, REST, database schema design, query optimization, SQL, NoSQL databases, containerized application deployment, Kubernetes, Docker, microservices architectures, cloud environments, AWS, Azure, Google Cloud, cloud security, compliance standards"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_47a98b2c-1df"},"title":"Jr. 
Payment Specialist Engineer","description":"<p>About Belong\nWe believe in a world where homes are owned by regular people, not corporations. Our mission is to provide authentic belonging experiences, empowering residents to become homeowners and homeowners to achieve financial freedom.</p>\n<p>The Role\nBelong is seeking a Junior Backend Engineer with a strong foundation in C# who is eager to grow, learn, and contribute to both backend development and day-to-day production operations. This role is ideal for someone early in their career who wants meaningful ownership, exposure to real production systems, and the opportunity to work across engineering and business operations.</p>\n<p>Responsibilities\nBackend Engineering\nDevelop and maintain backend services and APIs using C#/.NET.\nContribute to new features, enhancements, and bug fixes across our core systems.\nWrite clean, maintainable, tested code with guidance from senior engineers.\nParticipate in code reviews, design discussions, and sprint ceremonies.\nCollaborate with cross-functional partners to understand requirements and deliver improvements.</p>\n<p>Production Support &amp; Operations\nExecute operational workflows such as:\nInitiating and validating homeowner payouts\nSending security deposits\nInvestigating payment failures and resolving root causes\nWorking directly with our providers\nPerforming lease corrections and ensuring data accuracy\nMonitor system health and escalate issues when necessary\nHelp improve internal tools and automation to reduce manual work across teams.\nDocument recurring issues and contribute to long-term fixes.</p>\n<p>AI-Enabled Productivity\nUse AI-driven tools to accelerate development, debugging, testing, and repetitive operational tasks.\nIdentify opportunities to automate manual workflows in partnership with engineering and operations teams.</p>\n<p>What We’re Looking For\n1–3 years of software engineering experience, ideally in backend development.\nSolid 
understanding of C#, .NET, and RESTful APIs.\nInterest or experience in production operations, support tasks, or QA-like validation work.\nA proactive, detail-oriented mindset with a high sense of ownership.\nAbility to troubleshoot issues across systems and communicate findings clearly.\nWillingness to collaborate with both technical and non-technical teams.\nCuriosity, humility, eagerness to learn, and comfort asking questions.</p>\n<p>Why Belong\nWe’re transforming one of the most broken industries (housing) into something fundamentally better.\nWork with experienced, talented engineers and leaders who love mentoring and helping junior developers grow.\nAI isn’t a side project, it’s embedded across our engineering philosophy and roadmap.\nCompetitive compensation, equity, and benefits.\nA high-trust environment with real ownership, clear growth paths, and meaningful impact.\nIf you’re excited to grow your backend engineering skills while supporting high-impact operational systems that help people love where they live, we’d love to talk. 
Apply now.</p>","url":"https://yubhub.co/jobs/job_47a98b2c-1df","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Belong","sameAs":"https://www.belong.com/","logo":"https://logos.yubhub.co/belong.com.png"},"x-apply-url":"https://jobs.lever.co/belong/ac82cb72-46b8-4aca-ab83-2b896c515a69","x-work-arrangement":"hybrid","x-experience-level":"entry","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["C#",".NET","RESTful APIs","backend development","production operations","support tasks","QA-like validation work"],"x-skills-preferred":["payment systems","financial operations","SQL","distributed systems","cloud environments","Dwolla","AI tools"],"datePosted":"2026-04-17T12:23:34.059Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Argentina"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"C#, .NET, RESTful APIs, backend development, production operations, support tasks, QA-like validation work, payment systems, financial operations, SQL, distributed systems, cloud environments, Dwolla, AI tools"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_07ad01b5-1e5"},"title":"Member of Information & Security","description":"<p>At Anchorage Digital, we are looking for a highly skilled Member of Information &amp; Security to join our Global Information &amp; Security Team. 
As a key member of this team, you will be responsible for helping build and scale a forward-looking security program that ensures the security of our data and our client&#39;s digital assets, meets industry standards, and complies with regulatory requirements.</p>\n<p>Key responsibilities include:</p>\n<ul>\n<li>Conducting cybersecurity risk assessments and designing and implementing key internal controls</li>\n<li>Compiling reporting and metrics to ensure the effectiveness of our security program</li>\n<li>Identifying and evaluating risk to the company&#39;s Information Security Program and creating and improving controls to manage operational risks</li>\n<li>Ensuring these controls continue to perform as expected, without any issues or deviations</li>\n</ul>\n<p>We are looking for someone with expert knowledge and wide-ranging experience with regulatory and industry frameworks/standards/methodologies/technology, including NIST 800-53, NIST Cybersecurity Framework, ISO 27001, SOC 1/2, cloud environments, logical security, change management, and computer operations.</p>\n<p>The ideal candidate will have excellent project management skills, be able to lead and execute key team projects from start to finish, and have a deep understanding of the IT threat landscape for the industry and cloud environments.</p>\n<p>In addition to your technical skills, you should be able to communicate proactively, take ownership in assigned work/projects, and be comfortable asking questions when something is unclear or to further knowledge in a specific area.</p>\n<p>If you are a strong contributor with the ability to significantly contribute to medium-to-large projects and overall Anchorage Digital culture, we encourage you to apply for this exciting opportunity.</p>","url":"https://yubhub.co/jobs/job_07ad01b5-1e5","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anchorage Digital","sameAs":"https://anchorage.com","logo":"https://logos.yubhub.co/anchorage.com.png"},"x-apply-url":"https://jobs.lever.co/anchorage/dbc2739f-bbb4-4ae2-a162-2a4990481f15","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["NIST 800-53","NIST Cybersecurity Framework","ISO 27001","SOC 1/2","cloud environments","logical security","change management","computer operations"],"x-skills-preferred":[],"datePosted":"2026-04-17T12:17:37.018Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"United States"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"IT","industry":"Technology","skills":"NIST 800-53, NIST Cybersecurity Framework, ISO 27001, SOC 1/2, cloud environments, logical security, change management, computer operations"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_0c5a355c-fd9"},"title":"Technical Marketing Engineer","description":"<p>About the Role</p>\n<p>As a Technical Marketing Engineer (TME), you will bridge the gap between Mistral AI&#39;s science/engineering organisations and our marketing teams. 
You will create technical content to educate enterprise decision-makers, align technical capabilities with business goals, and accelerate sales cycles.</p>\n<p>Your role is critical in simplifying complex technical concepts, building trust, and driving adoption of our AI solutions.</p>\n<p>Responsibilities</p>\n<p>Create and Deliver Technical Content:</p>\n<ul>\n<li>Develop model/product technical launch materials, technical proof points, presentation decks, demo videos, webinars, workshops, blogs, whitepapers, and sales training materials.</li>\n</ul>\n<p>Enable Sales and Partners:</p>\n<ul>\n<li>Equip sales teams and partners with technical knowledge to engage in deeper, more credible conversations with technical stakeholders.</li>\n</ul>\n<p>Support Model Launches:</p>\n<ul>\n<li>Collaborate on model launches, ensuring technical messaging is clear and impactful.</li>\n</ul>\n<p>Engage with Analysts and Industry Leaders:</p>\n<ul>\n<li>Participate in analyst briefings, technical advisory boards (TABs), and industry standards discussions to position Mistral AI as a thought leader.</li>\n</ul>\n<p>Build Trust and Drive Adoption:</p>\n<ul>\n<li>Work with solutions architects and developer relations to ensure seamless integration and implementation of our solutions through targeted technical content.</li>\n</ul>\n<p>Interface with Science and Engineering:</p>\n<ul>\n<li>Act as the technical liaison between engineering, science, and marketing teams, translating complex LLM concepts (including pre-training and post-training techniques) into actionable insights for enterprise audiences.</li>\n</ul>\n<p>Who You Are</p>\n<p>Experience:</p>\n<ul>\n<li><p>3+ years in technical marketing, solutions engineering, or a similar role, preferably in AI/ML or enterprise software.</p>\n</li>\n<li><p>Experience with large language models (LLMs), including pre-training and post-training techniques, is a strong plus.</p>\n</li>\n</ul>\n<p>Technical Skills:</p>\n<ul>\n<li><p>Ability to 
understand and communicate complex technical concepts to both technical and non-technical audiences.</p>\n</li>\n<li><p>Familiarity with enterprise AI solutions, cloud environments, and technical sales enablement.</p>\n</li>\n<li><p>Strong written and verbal communication skills, with a knack for storytelling and simplifying complexity.</p>\n</li>\n</ul>\n<p>Mindset:</p>\n<ul>\n<li><p>Collaborative, creative, and low-ego.</p>\n</li>\n<li><p>Passionate about AI and its potential to transform industries.</p>\n</li>\n</ul>\n<p>Academic Background:</p>\n<ul>\n<li>Degree in Computer Science, Engineering, or a related technical field is preferred.</li>\n</ul>\n<p>It Would Be Ideal If You:</p>\n<ul>\n<li><p>Have hands-on experience with AI/ML models, especially in a technical marketing or solutions engineering capacity.</p>\n</li>\n<li><p>Are comfortable working in a fast-paced, innovative environment and thrive in cross-functional teams.</p>\n</li>\n<li><p>Have a track record of creating technical content that drives adoption and accelerates sales cycles.</p>\n</li>\n<li><p>Are based in or willing to relocate to Paris, EU.</p>\n</li>\n</ul>\n<p>What We Offer</p>\n<p>FRANCE</p>\n<ul>\n<li><p>Competitive cash salary and equity</p>\n</li>\n<li><p>Daily lunch vouchers: Swile meal vouchers with 10,83€ per worked day, incl 60% offered by company</p>\n</li>\n<li><p>Sport: Enjoy discounted access to gyms and fitness studios through our Wellpass partnership</p>\n</li>\n<li><p>Transportation: Monthly contribution to a mobility pass via Betterway</p>\n</li>\n<li><p>Health: Full health insurance for you and your family</p>\n</li>\n<li><p>Parental: Generous parental leave policy</p>\n</li>\n<li><p>Visa sponsorship</p>\n</li>\n</ul>\n<p>UK</p>\n<ul>\n<li><p>Competitive cash salary and equity</p>\n</li>\n<li><p>Health Insurance</p>\n</li>\n<li><p>Transportation: Reimburse office parking charges, or 90GBP/month for public transport</p>\n</li>\n<li><p>Sport: 90GBP/month allowance for gym 
membership</p>\n</li>\n<li><p>Food: £200 monthly allowance (solution might evolve as we grow bigger)</p>\n</li>\n<li><p>Pension plan: SmartPension (percentages are 5% Employee &amp; 3% Employer)</p>\n</li>\n<li><p>Parental: Generous parental leave policy</p>\n</li>\n<li><p>Visa sponsorship</p>\n</li>\n</ul>","url":"https://yubhub.co/jobs/job_0c5a355c-fd9","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Mistral AI","sameAs":"https://mistral.ai"},"x-apply-url":"https://jobs.lever.co/mistral/942f8627-3079-416b-a2a7-bf651b336acb","x-work-arrangement":"onsite","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Ability to understand and communicate complex technical concepts to both technical and non-technical audiences","Familiarity with enterprise AI solutions, cloud environments, and technical sales enablement","Strong written and verbal communication skills, with a knack for storytelling and simplifying complexity","Collaborative, creative, and low-ego","Passionate about AI and its potential to transform industries"],"x-skills-preferred":["Hands-on experience with AI/ML models, especially in a technical marketing or solutions engineering capacity","Comfortable working in a fast-paced, innovative environment and thriving in cross-functional teams","Track record of creating technical content that drives adoption and accelerates sales cycles"],"datePosted":"2026-03-10T11:35:14.840Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Paris"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Ability to understand and communicate complex technical concepts to both technical and non-technical audiences, Familiarity with enterprise AI solutions, cloud environments, and technical sales 
enablement, Strong written and verbal communication skills, with a knack for storytelling and simplifying complexity, Collaborative, creative, and low-ego, Passionate about AI and its potential to transform industries, Hands-on experience with AI/ML models, especially in a technical marketing or solutions engineering capacity, Comfortable working in a fast-paced, innovative environment and thriving in cross-functional teams, Track record of creating technical content that drives adoption and accelerates sales cycles"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_13998cbe-159"},"title":"Data Center Operations Manager","description":"<p>About the Role\nMistral AI is seeking a Data Center Operations Manager to lead the build and run operations of our new data center in Borlänge, Sweden. As the first hire for this site, you will be responsible for establishing operational excellence, managing local teams, and ensuring the reliability, security, and efficiency of our AI infrastructure.</p>\n<p>Key Responsibilities\n• Lead the operational management of Mistral’s data center in Borlänge, overseeing build-out, day-to-day operations, and scalability to support our AI infrastructure.\n• Hire and manage a local team of hardware engineers to support operations, maintenance, and troubleshooting.\n• Oversee hardware deployment, upgrades, and decommissioning, ensuring alignment with Mistral’s infrastructure goals.\n• Monitor and enforce Service Level Agreements (SLAs) with data center providers and subcontractors.\n• Manage incidents and request tickets, ensuring timely resolution and clear communication with stakeholders.\n• Ensure adherence to security protocols, contractual obligations, and regulatory requirements at the data center.\n• Provide regular updates to internal teams and external partners on operational status, risks, and improvements.\n• Establish processes and best practices for data 
center operations, ensuring high availability and performance.\n• Manage local contracts with DC providers and OEM.</p>\n<p>Qualifications &amp; Experience\n• Degree in Computer Science, Electrical/Mechanical Engineering, or related field, or equivalent experience, with a strong understanding of data center technical requirements and operations.\n• Proven track record in data center operations, hardware management, or infrastructure support, preferably in HPC or cloud environments.\n• Proven experience in recruiting, mentoring, and scaling a technical operations team from a greenfield deployment.\n• Experience managing large-scale infrastructure projects, including build-outs, migrations, or upgrades.\n• Strong ability to coordinate and lead vendors, contractors, and internal teams, including review, escalation, and contractual engagement.\n• Comfortable with contract negotiation and management.\n• Hands-on troubleshooting skills and ability to work and lead in critical situations and aggressive timelines.\n• Knowledge of data center security standards (physical and digital) and compliance requirements.\n• Language Skills: Fluency in English and Swedish is a plus.</p>\n<p>Why Join Mistral?\n• Impact: Play a pivotal role in scaling Mistral’s cutting-edge AI infrastructure.\n• Growth: Opportunity to shape data center operations from the ground up in a high-growth startup environment.\n• Collaboration: Work with a talented, cross-functional team passionate about AI and technology.\n• Flexibility: Competitive compensation, benefits, and the chance to contribute to revolutionary projects.</p>","url":"https://yubhub.co/jobs/job_13998cbe-159","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Mistral 
AI","sameAs":"https://mistral.ai"},"x-apply-url":"https://jobs.lever.co/mistral/fa170722-b93a-49f5-a649-3fc731c57a71","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["data center operations","hardware management","infrastructure support","HPC or cloud environments","recruiting","mentoring","scaling a technical operations team","large-scale infrastructure projects","contract negotiation","hands-on troubleshooting","data center security standards"],"x-skills-preferred":[],"datePosted":"2026-03-10T11:25:02.423Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Borlänge"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"data center operations, hardware management, infrastructure support, HPC or cloud environments, recruiting, mentoring, scaling a technical operations team, large-scale infrastructure projects, contract negotiation, hands-on troubleshooting, data center security standards"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_db36c2fb-68e"},"title":"FBS Infrastructure Service Delivery Specialist","description":"<p>FBS – Farmer Business Services is part of Farmers operations with the purpose of building a global approach to identifying, recruiting, hiring, and retaining top talent. We believe that the foundation of every successful business lies in having the right people with the right skills. That is where we come in—helping Farmers build a winning team that delivers consistent and sustainable results.</p>\n<p>We are looking for an FBS Infrastructure Service Delivery Specialist to join our team. 
As a key member of our infrastructure team, you will be responsible for implementing and enforcing IT policies and procedures, supporting overall governance functions across Farmers&#39; managed Cloud environments, and collaborating with other towers within Cloud Transformation to ensure compliance.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Implement and enforce IT policies and procedures.</li>\n<li>Support overall governance functions across Farmers&#39; managed Cloud environments.</li>\n<li>Collaborate with other towers within Cloud Transformation to ensure compliance.</li>\n<li>Organize Disaster Recovery Tests while also creating and maintaining DR documentation.</li>\n<li>Work alongside internal testers, auditors, and external parties in support of Audit and Compliance.</li>\n<li>Assist with remediation efforts for non-compliant infrastructure requirements.</li>\n<li>Perform other job-related duties as assigned.</li>\n</ul>\n<p>Requirements:</p>\n<ul>\n<li>3+ years of experience within IT, with a preference for Infrastructure, operations, audit, or compliance experience.</li>\n<li>General understanding of Cybersecurity Frameworks.</li>\n<li>Familiarity with Disaster Recovery concepts.</li>\n<li>Excellent project management and organizational skills.</li>\n<li>Data Visualization and Power App experience is a plus.</li>\n</ul>\n<p>Benefits:</p>\n<p>This position comes with a competitive compensation and benefits package, including a competitive salary and performance-based bonuses, comprehensive benefits package, flexible work arrangements, private health insurance, paid time off, and training &amp; development opportunities in partnership with renowned companies.</p>","url":"https://yubhub.co/jobs/job_db36c2fb-68e","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Capgemini","sameAs":"https://jobs.workable.com","logo":"https://logos.yubhub.co/view.com.png"},"x-apply-url":"https://jobs.workable.com/view/8fcbMVw1ywr5wqBAciKpgi/remote-fbs-infrastructure-service-delivery-specialist-in-india-at-capgemini","x-work-arrangement":"remote","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["IT policies and procedures","Cloud environments","Cybersecurity Frameworks","Disaster Recovery concepts","Project management","Data Visualization","Power App"],"x-skills-preferred":["Data Visualization","Power App"],"datePosted":"2026-03-09T17:01:05.029Z","jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"IT","industry":"Technology","skills":"IT policies and procedures, Cloud environments, Cybersecurity Frameworks, Disaster Recovery concepts, Project management, Data Visualization, Power App, Data Visualization, Power App"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_a29ae7fb-64f"},"title":"FBS Infrastructure Service Delivery Specialist","description":"<p>FBS – Farmer Business Services is part of Farmers operations with the purpose of building a global approach to identifying, recruiting, hiring, and retaining top talent. We believe that the foundation of every successful business lies in having the right people with the right skills. That is where we come in—helping Farmers build a winning team that delivers consistent and sustainable results.</p>\n<p>We are looking for an FBS Infrastructure Service Delivery Specialist to join our team. 
As a key member of our infrastructure team, you will be responsible for implementing and enforcing IT policies and procedures, supporting overall governance functions across Farmers&#39; managed Cloud environments, and collaborating with other towers within Cloud Transformation to ensure compliance.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Implement and enforce IT policies and procedures.</li>\n<li>Support overall governance functions across Farmers&#39; managed Cloud environments.</li>\n<li>Collaborate with other towers within Cloud Transformation to ensure compliance.</li>\n<li>Organize Disaster Recovery Tests while also creating and maintaining DR documentation.</li>\n<li>Work alongside internal testers, auditors, and external parties in support of Audit and Compliance.</li>\n<li>Assist with remediation efforts for non-compliant infrastructure requirements.</li>\n<li>Perform other job-related duties as assigned.</li>\n</ul>\n<p>Requirements:</p>\n<ul>\n<li>3+ years of experience within IT, with a preference for Infrastructure, operations, audit, or compliance experience.</li>\n<li>General understanding of Cybersecurity Frameworks.</li>\n<li>Familiarity with Disaster Recovery concepts.</li>\n<li>Excellent project management and organizational skills.</li>\n<li>Data Visualization and Power App experience is a plus.</li>\n</ul>\n<p>Benefits:</p>\n<p>This position comes with a competitive compensation and benefits package, including a competitive salary and performance-based bonuses, comprehensive benefits package, flexible work arrangements, private health insurance, paid time off, and training &amp; development opportunities in partnership with renowned companies.</p>","url":"https://yubhub.co/jobs/job_a29ae7fb-64f","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Capgemini","sameAs":"https://jobs.workable.com","logo":"https://logos.yubhub.co/view.com.png"},"x-apply-url":"https://jobs.workable.com/view/7Wvx8rf9EmbFu5L7n3Y9cU/remote-fbs-infrastructure-service-delivery-specialist-in-brazil-at-capgemini","x-work-arrangement":"remote","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["IT policies and procedures","Cloud environments","Cybersecurity Frameworks","Disaster Recovery concepts","Project management","Data Visualization","Power App"],"x-skills-preferred":["Data Visualization","Power App"],"datePosted":"2026-03-09T16:59:10.830Z","jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"IT","industry":"Technology","skills":"IT policies and procedures, Cloud environments, Cybersecurity Frameworks, Disaster Recovery concepts, Project management, Data Visualization, Power App, Data Visualization, Power App"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_2f30f7bb-777"},"title":"FBS Infrastructure Service Delivery Specialist","description":"<p>FBS – Farmer Business Services is part of Farmers operations with the purpose of building a global approach to identifying, recruiting, hiring, and retaining top talent. We believe that the foundation of every successful business lies in having the right people with the right skills. That is where we come in—helping Farmers build a winning team that delivers consistent and sustainable results. 
We&#39;ve partnered with Capgemini, which acts as the Employer of Record, managing local payroll and benefits.</p>\n<p>As an FBS Infrastructure Service Delivery Specialist, you will be responsible for implementing and enforcing IT policies and procedures, supporting overall governance functions across Farmers&#39; managed Cloud environments, and collaborating with other towers within Cloud Transformation to ensure compliance. You will also organize Disaster Recovery Tests, create and maintain DR documentation, and work alongside internal testers, auditors, and external parties in support of Audit and Compliance. Additionally, you will assist with remediation efforts for non-compliant infrastructure requirements and perform other job-related duties as assigned.</p>\n<p>We are looking for a candidate with 3+ years of experience within IT, preferably in Infrastructure, operations, audit, or compliance. You should have a general understanding of Cybersecurity Frameworks, familiarity with Disaster Recovery concepts, and excellent project management and organizational skills. 
Data Visualization and Power App experience is a plus.</p>","url":"https://yubhub.co/jobs/job_2f30f7bb-777","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Capgemini","sameAs":"https://jobs.workable.com","logo":"https://logos.yubhub.co/view.com.png"},"x-apply-url":"https://jobs.workable.com/view/tET76WcgajZKBGLCXhxTFj/remote-fbs-infrastructure-service-delivery-specialist-in-mexico-at-capgemini","x-work-arrangement":"remote","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["IT policies and procedures","Cloud environments","Cybersecurity Frameworks","Disaster Recovery concepts","Project management","Organizational skills"],"x-skills-preferred":["Data Visualization","Power App"],"datePosted":"2026-03-09T16:55:43.791Z","jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"IT","industry":"Technology","skills":"IT policies and procedures, Cloud environments, Cybersecurity Frameworks, Disaster Recovery concepts, Project management, Organizational skills, Data Visualization, Power App"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_d4a404fe-902"},"title":"SAP Security/GRC Senior Consultant","description":"<p>Do you want to boost your career and collaborate with expert, talented colleagues to solve and deliver against our clients&#39; most important challenges? We are growing and are looking for people to join our team. You&#39;ll be part of an entrepreneurial, high-growth environment of 300,000 employees. Our dynamic organization allows you to work across functional business pillars, contributing your ideas, experiences, diverse thinking, and a strong mindset. 
Are you ready?</p>\n<p>As a SAP Security/GRC Consultant, you will work closely with diverse clients to assess their SAP security risks, design and implement tailored SAP Security and Governance, Risk &amp; Compliance (GRC) solutions, and drive successful project delivery. You will act as a trusted advisor, helping clients align SAP security frameworks with business objectives and compliance mandates.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Lead SAP Security and GRC assessment, design, and implementation projects for clients across industries.</li>\n<li>Conduct client workshops and requirements gathering sessions to understand business and security needs.</li>\n<li>Design and configure SAP security roles, authorizations, and GRC Access Control components (Access Risk Analysis, Emergency Access Management, Access Request Management).</li>\n<li>Develop and enforce Segregation of Duties (SoD) policies to mitigate risks and ensure compliance.</li>\n<li>Deliver SAP Security and GRC gap analysis, risk assessments, and remediation plans.</li>\n<li>Support clients during audits by preparing documentation, reports, and facilitating access reviews.</li>\n<li>Collaborate with cross-functional teams including Basis, functional consultants, and IT auditors to implement secure SAP landscapes.</li>\n<li>Conduct end-user training sessions and knowledge transfer workshops.</li>\n<li>Stay abreast of SAP security trends, new releases, and regulatory changes to provide proactive consulting.</li>\n</ul>\n<p><strong>Requirements</strong></p>\n<ul>\n<li>5-8 years of consulting experience is necessary.</li>\n<li>3+ years of SAP Security and GRC consulting experience with multiple end-to-end implementations.</li>\n<li>Hands-on expertise with SAP ECC and/or S/4HANA Security.</li>\n<li>Strong experience configuring SAP GRC Access Control modules (Access Risk Analysis, Emergency Access Management, Access Request Management).</li>\n<li>Excellent client-facing and communication 
skills with the ability to explain technical concepts to non-technical stakeholders.</li>\n<li>Proven track record of managing multiple client engagements and delivering quality results on time.</li>\n</ul>\n<ul>\n<li>Functional / Content Skills</li>\n</ul>\n<ul>\n<li>Strong knowledge of Sarbanes-Oxley (SOX), Business Process controls, IT General Controls, and IT governance.</li>\n<li>Deep understanding of, and practical experience in, Analysis and Design/Re-Design of Business process and IT General controls in SAP and Non-SAP landscapes.</li>\n<li>Strong analytical skills and a deep understanding of the overall context of underlying business processes and technologies.</li>\n<li>Understanding of the purpose, procedures, and ways of working of internal/external audits.</li>\n<li>Ability to support audits and to provide the right information &amp; data, and to mitigate and/or solve identified deficiencies and gaps.</li>\n</ul>\n<ul>\n<li>Technical Skills (Data, Technology, Implementation)</li>\n</ul>\n<ul>\n<li>Ability to retrieve, analyze, and report/present data from various sources.</li>\n<li>Understanding of data structures, sources, flow, and integration across infrastructure platforms, functional domains, and application landscapes/services.</li>\n<li>Up-to-date understanding of Concepts &amp; Integration of Cloud Services, and multi-cloud environments.</li>\n</ul>\n<ul>\n<li>Tool Skill Requirements</li>\n</ul>\n<ul>\n<li>A variety of ERP systems (SAP &amp; Non-SAP), operating systems, databases, and financial applications</li>\n<li>Identity and Access Management solutions and monitoring solutions such as Splunk, Qualys, and Tripwire, but also in Authorization &amp; SoD</li>\n<li>Analytics &amp; reporting in the area of ITGC/GRC</li>\n<li>IT Service Management Tools, Market Leader (SNOW, BMC, JIRA, etc.)</li>\n</ul>\n<ul>\n<li>Experience with SAP Identity Management (IdM).</li>\n</ul>\n<ul>\n<li>Knowledge of cloud-based SAP security and hybrid environments.</li>\n<li>Experience working 
in Agile/Scrum environments.</li>\n<li>Experience in global delivery and working with offshore resources.</li>\n<li>Project-related mobility/willingness to travel.</li>\n</ul>\n<ul>\n<li>Qualifications and certifications</li>\n</ul>\n<ul>\n<li>Bachelor’s degree in Computer Science, Information Technology, or related field.</li>\n<li>More than 7 years of experience in Financial / IT compliance, risk management, IT audit and/or IT controls; strong experience in an audit firm (e.g. Big Four).</li>\n<li>SAP Security or GRC certifications are a plus (e.g., SAP Certified Technology Associate – SAP Access Control).</li>\n</ul>\n<p>Given that this is just a short snapshot of the role, we encourage you to apply even if you don&#39;t meet all the requirements listed above. We are looking for team members who strive to make an impact and are eager to learn. If this sounds like you and you feel you have the skills and experience required, then please <strong>apply now.</strong></p>","url":"https://yubhub.co/jobs/job_d4a404fe-902","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Infosys Consulting - Europe","sameAs":"https://jobs.workable.com","logo":"https://logos.yubhub.co/view.com.png"},"x-apply-url":"https://jobs.workable.com/view/caqAF5TaE7H7j3KrqmrMAp/remote-sap-security%2Fgrc-senior-consultant-role-in-united-kingdom-at-infosys-consulting---europe","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["SAP Security","SAP GRC","SAP ECC","SAP S/4HANA Security","SAP GRC Access Control","Sarbanes-Oxley (SOX)","Business Process controls","IT General Controls","IT governance","Analysis and Design/Re-Design of Business process and IT General controls","Strong analytical skills","Understanding of data structures","Understanding of data 
sources","Understanding of data flow and integration","Up-to-date understanding of Concepts & Integration of Cloud Services","Multi-cloud environments","Identity and Access Management solutions","Monitoring solutions","Authorization & SoD","Analytics & reporting in area of ITGC/GRC","IT Service Management Tools","SAP Identity Management (IdM)","Cloud-based SAP security and hybrid environments","Agile/Scrum environments","Global delivery and working with offshore resources","Project-related mobility/willingness to travel"],"x-skills-preferred":[],"datePosted":"2026-03-09T16:55:22.131Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"United Kingdom"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"IT","industry":"Consulting","skills":"SAP Security, SAP GRC, SAP ECC, SAP S/4HANA Security, SAP GRC Access Control, Sarbanes-Oxley (SOX), Business Process controls, IT General Controls, IT governance, Analysis and Design/Re-Design of Business process and IT General controls, Strong analytical skills, Understanding of data structures, Understanding of data sources, Understanding of data flow and integration, Up-to-date understanding of Concepts & Integration of Cloud Services, Multi-cloud environments, Identity and Access Management solutions, Monitoring solutions, Authorization & SoD, Analytics & reporting in area of ITGC/GRC, IT Service Management Tools, SAP Identity Management (IdM), Cloud-based SAP security and hybrid environments, Agile/Scrum environments, Global delivery and working with offshore resources, Project-related mobility/willingness to travel"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_817bc7b0-6a7"},"title":"Test Engineer (Product Integration)","description":"<p>Our Product Integration Test Team is looking for 2 Test Engineers (an Intermediate and a Senior) to expand the breadth and depth of testing 
Vista&#39;s cutting-edge software, across multiple technologies.</p>\n<p>This is a unique role that bridges the gap between QA and DevOps. You&#39;ll help:</p>\n<ul>\n<li>Coordinate and support teams performing system test functions</li>\n<li>Design and execute test solutions that cut across squads and technologies</li>\n<li>Create and execute Integration Test Plans</li>\n<li>Contribute to Integration test automation suites</li>\n<li>Execute cloud-based multi-server deployments using tools like Octopus</li>\n<li>Monitor product observability using tools such as DataDog</li>\n<li>Perform static analysis to identify risks in missions</li>\n</ul>\n<p>About you</p>\n<ul>\n<li>A quality champion with strong ownership</li>\n<li>Expertise in manual and automation testing across multiple technologies</li>\n<li>Strong communication skills</li>\n<li>A collaborative mindset, able to build and maintain good professional relationships across the company</li>\n<li>Proven experience with automation tools, including exposure to C#/.NET, Selenium, etc.</li>\n<li>Basic understanding of SQL</li>\n<li>Knowledge of defect tracking and test management systems</li>\n<li>Familiarity with observability tooling such as Prometheus, DataDog, etc.</li>\n<li>Basic project management skills</li>\n<li>Exposure to DevOps in a cloud environment will be advantageous</li>\n<li>Curiosity, passion, and energy</li>\n</ul>\n<p>This is a hybrid role with a home / office-based split, requiring regular (1-2 days per week) attendance in the Cape Town office. 
We are only considering applicants with an existing right to work in South Africa, without the need for employer sponsorship.</p>\n<p>Benefits</p>\n<ul>\n<li>Rest &amp; Relax Fridays</li>\n<li>Finish at lunch time on Friday but get paid for the full day</li>\n<li>Annual volunteer day</li>\n<li>Employee Rewards and Benefits with Perkbox</li>\n<li>Compulsory Defined Contribution Company Pension Scheme</li>\n<li>Medical Insurance / Medical Aid (after qualifying period)</li>\n<li>Employee Assistance Programme Service</li>\n<li>Paid Sick leave</li>\n<li>5 days bereavement leave per year</li>\n<li>On-Site Breakfast Bar</li>\n</ul>","url":"https://yubhub.co/jobs/job_817bc7b0-6a7","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Vista","sameAs":"https://apply.workable.com","logo":"https://logos.yubhub.co/j.com.png"},"x-apply-url":"https://apply.workable.com/j/532E13FABC","x-work-arrangement":"hybrid","x-experience-level":"mid|senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["manual and automation testing","C#/.NET","Selenium","SQL","defect tracking and test management systems","observability tooling","project management"],"x-skills-preferred":["DevOps in a cloud environment"],"datePosted":"2026-03-09T16:20:02.732Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Cape Town"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"manual and automation testing, C#/.NET, Selenium, SQL, defect tracking and test management systems, observability tooling, project management, DevOps in a cloud environment"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_37c2e2de-235"},"title":"Software Engineer- III","description":"<p><strong>Software Engineer- 
III</strong></p>\n<p><strong>Job Summary</strong></p>\n<p>As a Software Engineer- III at Electronic Arts, you will lead the end-to-end architecture, design, and implementation of scalable, high-throughput live service platform components that power multiple EA game studios. You will partner with cross-functional teams to streamline and evolve the live services workflow, evaluate and define how EA&#39;s live service platforms, studio technology stacks, and third-party/vendor solutions integrate to meet engineering and business objectives in a scalable and cost-effective manner.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Lead the end-to-end architecture, design, and implementation of scalable, high-throughput live service platform components that power multiple EA game studios.</li>\n<li>Partner with cross-functional teams including Content Management &amp; Delivery, Messaging, Segmentation, Recommendation, and Experimentation to streamline and evolve the live services workflow.</li>\n<li>Evaluate and define how EA&#39;s live service platforms, studio technology stacks, and third-party/vendor solutions integrate to meet engineering and business objectives in a scalable and cost-effective manner.</li>\n<li>Own technical design reviews and drive architectural decisions, ensuring solutions are resilient, extensible, secure, and aligned with long-term platform strategy.</li>\n<li>Use large-scale datasets across 20+ game studios to promote data-driven decision-making, experimentation, and continuous optimization.</li>\n<li>Engage with Game Studios, Experience, and Brand organizations to deeply understand use cases, translate business requirements into technical designs, and drive end-to-end solution delivery.</li>\n<li>Collaborate closely with Product Management to prioritize initiatives, define measurable outcomes, and deliver solutions with clear ROI.</li>\n<li>Partner with Program Management to define sprint goals, plan and prioritize work, and own the 
team&#39;s sprint commitments and delivery outcomes.</li>\n<li>Partner with Legal and Privacy teams to ensure compliance with global regulatory requirements and data governance standards.</li>\n<li>Lead and mentor engineers, providing technical direction, conducting design/code reviews, and fostering engineering excellence.</li>\n<li>Drive stakeholder alignment across multiple teams, locations, and time zones by communicating architecture, trade-offs, risks, and execution plans clearly and effectively.</li>\n<li>Ensure operational excellence for 24/7 live services through proactive monitoring, performance tuning, capacity planning, and incident management.</li>\n</ul>\n<p><strong>Requirements</strong></p>\n<ul>\n<li>Bachelor&#39;s or Master&#39;s degree in Computer Science, Engineering, or a related field.</li>\n<li>7-9 years of relevant industry experience in designing and building scalable distributed systems.</li>\n<li>Strong expertise in software design principles, algorithms, and data structures.</li>\n<li>Proven architectural and system design experience, including hands-on ownership of highly scalable, high-throughput, low-latency systems.</li>\n<li>Demonstrated experience leading high-performing engineering teams (2-3+ years), including mentoring, technical guidance, and driving delivery.</li>\n<li>Strong stakeholder management skills, with experience collaborating across product, engineering, legal, and business teams.</li>\n<li>Proficiency in Java and at least one scripting language (preferably Python).</li>\n<li>Hands-on experience with backend frameworks and technologies (e.g., Spring Boot).</li>\n<li>Experience designing and operating distributed systems using messaging and streaming platforms (e.g., Kafka).</li>\n<li>Strong experience with large-scale data pipelines, personalization platforms, analytics systems, and experimentation frameworks.</li>\n<li>Experience with relational, columnar, and/or document-oriented databases.</li>\n<li>Experience 
managing high-traffic, 24/7 production systems with complex dependencies in cloud environments, preferably AWS.</li>\n<li>Solid understanding of multi-cloud architectures and large-scale data processing systems.</li>\n<li>Working knowledge of containerization and orchestration technologies (Docker, Kubernetes).</li>\n<li>Experience with observability and monitoring tools (e.g., Prometheus, Grafana).</li>\n<li>Experience with CI/CD pipelines and version control systems (e.g., GitLab CI/CD).</li>\n<li>Familiarity with modern software development best practices, including clean code principles, automated testing, CI/CD, and DevOps practices.</li>\n<li>Exposure to frontend technologies (HTML, CSS, JavaScript frameworks such as React) is a plus.</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<p>Electronic Arts offers a comprehensive benefits package that includes healthcare coverage, mental well-being support, retirement savings, paid time off, family leaves, complimentary games, and more.</p>","url":"https://yubhub.co/jobs/job_37c2e2de-235","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Electronic Arts","sameAs":"https://jobs.ea.com","logo":"https://logos.yubhub.co/jobs.ea.com.png"},"x-apply-url":"https://jobs.ea.com/en_US/careers/JobDetail/Software-Engineer-III/212957","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Java","Python","Spring Boot","Kafka","Distributed systems","Software design principles","Algorithms","Data structures","Architectural and system design","Cloud environments","Multi-cloud architectures","Containerization and orchestration technologies","Observability and monitoring tools","CI/CD pipelines","Version control systems"],"x-skills-preferred":["Frontend technologies","JavaScript 
frameworks","React"],"datePosted":"2026-03-09T11:06:02.877Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Hyderabad, Telangana, India"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Java, Python, Spring Boot, Kafka, Distributed systems, Software design principles, Algorithms, Data structures, Architectural and system design, Cloud environments, Multi-cloud architectures, Containerization and orchestration technologies, Observability and monitoring tools, CI/CD pipelines, Version control systems, Frontend technologies, JavaScript frameworks, React"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_50359ecc-208"},"title":"Validation / Verification Engineer, Staff Engineer (Meshing, Python, AI)","description":"<p><strong>Engineer the Future with Us</strong></p>\n<p>We are seeking a skilled Validation / Verification Engineer to join our team at Synopsys. 
As a Staff Engineer, you will be responsible for conducting rigorous testing of Ansys applications, including Meshing in Workbench, Fluent-Meshing, and meshing capabilities for the Python ecosystem.</p>\n<p><strong>What You&#39;ll Be Doing:</strong></p>\n<ul>\n<li>Conduct thorough testing of Ansys applications, including Meshing in Workbench, Fluent-Meshing, and meshing capabilities for the Python ecosystem.</li>\n<li>Analyze product requirements and develop comprehensive test plans for new features.</li>\n<li>Design, specify, and write new test cases; modify and update existing tests and maintain test scripts for automation and manual environments.</li>\n<li>Perform functional, application, regression, and performance testing in both manual and automated test environments.</li>\n<li>Lead testing initiatives by providing coverage analysis, testing metrics, and participating actively in defect management processes.</li>\n<li>Identify and investigate product defects, collaborating with developers to resolve issues efficiently.</li>\n<li>Operate across diverse computing environments, including Windows, Linux, virtual machines, compute clusters, and cloud infrastructure.</li>\n</ul>\n<p><strong>The Impact You Will Have:</strong></p>\n<ul>\n<li>Enhance product reliability and performance, directly influencing customer satisfaction and trust.</li>\n<li>Drive continuous improvement in testing methodologies, ensuring robust coverage and high-quality releases.</li>\n<li>Accelerate validation workflows through automation and advanced scripting, contributing to faster and more efficient development cycles.</li>\n<li>Facilitate seamless integration of new features by proactively identifying and resolving issues.</li>\n<li>Support Synopsys&#39; commitment to innovation by ensuring our products consistently meet industry-leading standards.</li>\n<li>Empower cross-functional teams with actionable insights from testing metrics and defect analysis.</li>\n<li>Champion the voice of 
the customer, ensuring that usability and functionality align with real-world needs.</li>\n</ul>\n<p><strong>What You&#39;ll Need:</strong></p>\n<ul>\n<li>Bachelor&#39;s degree in Mechanical, Civil, or Aerospace Engineering and 5 years related experience, or Master&#39;s degree with 3 years related experience.</li>\n<li>Expertise in meshing technologies for implicit and explicit finite element analysis (FEA) and/or Computational Fluid Dynamics (CFD).</li>\n<li>Advanced proficiency in programming and scripting languages, especially Python.</li>\n<li>Experience with software development, testing processes, and automation tools; proven use of GitHub Copilot to streamline test script development.</li>\n<li>Strong familiarity with Windows and Linux operating systems, including virtualized and cloud environments.</li>\n<li>Commercial experience with meshing and solver products (e.g., Workbench Meshing, Fluent-Meshing, Fluent, CFX, Ansys Mechanical, Ansys LS-Dyna) is highly desirable.</li>\n</ul>\n<p><strong>Who You Are:</strong></p>\n<ul>\n<li>Meticulous and thorough, with a keen eye for detail and a passion for quality assurance.</li>\n<li>Excellent communicator and collaborator, able to build strong relationships across teams and disciplines.</li>\n<li>Quick learner, adaptable, and able to thrive in dynamic, geographically distributed teams.</li>\n<li>Problem solver with strong planning skills and a proactive approach to overcoming challenges.</li>\n<li>Genuine enthusiasm for delivering high-quality, reliable software to customers.</li>\n</ul>\n<p><strong>The Team You&#39;ll Be A Part Of:</strong></p>\n<p>You will join the Meshing Development Unit (MDU), a dynamic team dedicated to advancing the application capabilities. The team focuses on developing and validating cutting-edge meshing technologies, collaborating closely with product creation, development, and support specialists. 
Together, you&#39;ll ensure the products meet the evolving needs of customers in engineering and simulation fields, driving innovation and excellence in software quality.</p>\n<p><strong>Rewards and Benefits:</strong></p>\n<p>We offer a comprehensive range of health, wellness, and financial benefits to cater to your needs. Our total rewards include both monetary and non-monetary offerings. Your recruiter will provide more details about the salary range and benefits during the hiring process.</p>","url":"https://yubhub.co/jobs/job_50359ecc-208","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Synopsys","sameAs":"https://careers.synopsys.com","logo":"https://logos.yubhub.co/careers.synopsys.com.png"},"x-apply-url":"https://careers.synopsys.com/job/pune/validation-verification-eng-staff-engineer-meshing-python-ai/44408/91911523936","x-work-arrangement":"onsite","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["meshing technologies","implicit and explicit finite element analysis (FEA)","computational fluid dynamics (CFD)","python","software development","testing processes","automation tools","github copilot","windows","linux","virtualized and cloud environments","meshing and solver products"],"x-skills-preferred":["github copilot","windows","linux"],"datePosted":"2026-03-09T11:05:39.327Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Pune, Maharashtra, India"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"meshing technologies, implicit and explicit finite element analysis (FEA), computational fluid dynamics (CFD), python, software development, testing processes, automation tools, github copilot, windows, linux, virtualized and cloud environments, meshing and solver products, 
github copilot, windows, linux"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_d5c21d5d-a12"},"title":"Senior Data Scientist","description":"<p>Your job is to design, develop, and deploy end-to-end GenAI solutions, integrating AI into existing systems, applications, and business processes. You will implement LLMOps practices, including Docker containerization, CI/CD pipelines, and versioning strategies. Ensure monitoring, observability, cost optimization, and rollback mechanisms for production AI services. Define and execute evaluation frameworks, apply security, compliance, and governance guidelines for GenAI implementations. Collaborate with stakeholders and contribute to AI delivery standards and onboarding practices.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Design, develop, and deploy end-to-end GenAI solutions (RAG, AI agents, agentic workflows, prompt engineering).</li>\n<li>Integrate AI solutions into existing systems, applications, and business processes.</li>\n<li>Implement LLMOps practices, including Docker containerization, CI/CD pipelines, and versioning strategies.</li>\n<li>Ensure monitoring, observability, cost optimization, and rollback mechanisms for production AI services.</li>\n<li>Define and execute evaluation frameworks (hallucination metrics, A/B testing, offline/online validation).</li>\n<li>Apply security, compliance, and governance guidelines for GenAI implementations.</li>\n<li>Collaborate with stakeholders and contribute to AI delivery standards and onboarding practices.</li>\n</ul>\n<p>Requirements:</p>\n<ul>\n<li>Master’s degree in Computer Science, Software Engineering, Data Engineering, or a related field.</li>\n<li>Very strong expertise in Python and software engineering (APIs, testing, code reviews).</li>\n<li>Practical experience with RAG architectures, vector databases, and agentic AI workflows.</li>\n<li>Hands-on experience deploying production-grade AI 
services.</li>\n<li>Solid knowledge of Docker and CI/CD pipelines.</li>\n<li>Understanding of ML fundamentals, evaluation concepts, and LLM behavior.</li>\n<li>Familiarity with cloud environments (preferably Azure) and distributed systems.</li>\n<li>Strong analytical and problem-solving skills.</li>\n<li>Very good level of English.</li>\n<li>Autonomous, reliable, and team-oriented mindset.</li>\n</ul>\n<p>What you will get:</p>\n<ul>\n<li>A role with true technical ownership: architecture, scaling, and governance decisions that directly impact production AI solutions.</li>\n<li>Complex projects that go beyond “just pipelines” – covering big data processing and large-scale ML/DL deployment.</li>\n<li>Opportunities to deepen your expertise in Databricks, cloud-native ML, and MLOps.</li>\n<li>A team where your input and technical decisions truly matter.</li>\n<li>A competitive package and benefits.</li>\n</ul>","url":"https://yubhub.co/jobs/job_d5c21d5d-a12","directApply":true,"hiringOrganization":{"@type":"Organization","name":"AVL Maroc SARL AU","sameAs":"https://jobs.avl.com","logo":"https://logos.yubhub.co/jobs.avl.com.png"},"x-apply-url":"https://jobs.avl.com/job/Sala-Al-Jadida-Senior-Data-Scientist/1366650233/","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"permanent","x-salary-range":null,"x-skills-required":["Python","software engineering","RAG architectures","vector databases","agentic AI workflows","Docker","CI/CD pipelines","ML fundamentals","evaluation concepts","LLM behavior","cloud environments","distributed systems"],"x-skills-preferred":[],"datePosted":"2026-03-09T09:29:01.046Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Sala Al Jadida"}},"occupationalCategory":"Engineering","industry":"Automotive","skills":"Python, software engineering, RAG 
architectures, vector databases, agentic AI workflows, Docker, CI/CD pipelines, ML fundamentals, evaluation concepts, LLM behavior, cloud environments, distributed systems"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_723d3153-72d"},"title":"Security Engineer, Detection & Response","description":"<p><strong>About the role</strong></p>\n<p>At Anthropic, we are pioneering new frontiers in AI that have the potential to greatly benefit society. However, developing advanced AI also comes with risks if not properly safeguarded. That&#39;s why we are seeking an exceptional Detection and Response engineer that will be on the frontlines to build solutions to monitor for threats, rapidly investigate incidents, and coordinate response efforts with other teams. In this role, you will have the opportunity to shape our security capabilities from the ground up alongside our world-class research and security teams.</p>\n<p><strong>Responsibilities:</strong></p>\n<ul>\n<li>Lead cybersecurity Incident Response efforts covering diverse domains from external attacks to insider threats involving all layers of Anthropic’s technology stack</li>\n<li>Develop and deploy novel tooling that may leverage Large Language Models to enhance detection, investigation, and response capabilities</li>\n<li>Create and optimise detections, playbooks, and workflows to quickly identify and respond to potential incidents</li>\n<li>Review Incident Response metrics and procedures and drive continuous improvement</li>\n<li>Work cross functionally with other security and engineering teams</li>\n</ul>\n<p><strong>You may be a good fit if you:</strong></p>\n<ul>\n<li>3+ years of software engineering experience, with security experience a plus and/or</li>\n<li>5+ years of detection engineering, incident response, or threat hunting experience</li>\n<li>A solid understanding of cloud environments and operations</li>\n<li>Experience working 
with engineering teams in a SaaS environment</li>\n<li>Exceptional communication and collaboration skills</li>\n<li>An ability to lead projects with little guidance</li>\n<li>The ability to pick up new languages and technologies quickly</li>\n<li>Experience handling security incidents and investigating anomalies as part of a team</li>\n<li>Knowledge of EDR, SIEM, SOAR, or related security tools</li>\n</ul>\n<p><strong>Strong candidates may also have experience with:</strong></p>\n<ul>\n<li>Experience performing security operations or investigations involving large-scale Kubernetes environments</li>\n<li>A high level of proficiency in Python and query languages such as SQL</li>\n<li>Experience analysing attack behaviour and prototyping high-quality detections</li>\n<li>Experience with threat intelligence, malware analysis, infrastructure as code, detection engineering, or forensics</li>\n<li>Experience contributing to a high growth startup environment</li>\n</ul>\n<p><strong>Deadline to apply:</strong></p>\n<p>None. Applications will be reviewed on a rolling basis.</p>\n<p><strong>Logistics</strong></p>\n<ul>\n<li>Education requirements: We require at least a Bachelor&#39;s degree in a related field or equivalent experience.</li>\n<li>Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</li>\n<li>Visa sponsorship: We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</li>\n</ul>\n<p><strong>We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. 
Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you&#39;re interested in this work.</strong></p>\n<p><strong>Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you&#39;re ever unsure about a communication, don&#39;t click any links—visit anthropic.com/careers directly for confirmed position openings.</strong></p>\n<p><strong>How we&#39;re different</strong></p>\n<p>We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We&#39;re an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. 
As such, we greatly value communication skills.</p>","url":"https://yubhub.co/jobs/job_723d3153-72d","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://job-boards.greenhouse.io","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/4982193008","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$300,000 - $405,000 USD","x-skills-required":["software engineering","security experience","detection engineering","incident response","threat hunting","cloud environments","operations","engineering teams","SaaS environment","communication skills","project leadership","new languages and technologies","security incidents","anomalies","EDR","SIEM","SOAR","security tools"],"x-skills-preferred":["Python","SQL","threat intelligence","malware analysis","infrastructure as code","detection engineering","forensics","Kubernetes environments","high growth startup environment"],"datePosted":"2026-03-08T13:58:41.409Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA | New York City, NY | Seattle, WA; Washington, DC"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"software engineering, security experience, detection engineering, incident response, threat hunting, cloud environments, operations, engineering teams, SaaS environment, communication skills, project leadership, new languages and technologies, security incidents, anomalies, EDR, SIEM, SOAR, security tools, Python, SQL, threat intelligence, malware analysis, infrastructure as code, detection engineering, forensics, Kubernetes environments, high growth startup 
environment","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":300000,"maxValue":405000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_41528416-21c"},"title":"Staff+ Software Security Engineer","description":"<p><strong>About Anthropic</strong></p>\n<p>Anthropic&#39;s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</p>\n<p><strong>About the Team</strong></p>\n<p>The Security Engineering team protects Anthropic&#39;s AI systems and maintains the trust of our users and society. We define the authentication architecture for our training infrastructure, design the cryptographic foundations that protect model weights and training data, and drive the developer security program that shapes how engineers build and ship software.</p>\n<p><strong>About the role:</strong></p>\n<ul>\n<li>Scope, design, and build complex security systems end to end, maintaining them through production and driving through ambiguous technical challenges with minimal oversight</li>\n<li>Identify systematic risks through threat modeling and risk assessment, then build the controls and infrastructure that address them</li>\n<li>Mentor engineers across the security team and broader engineering organisation, contribute to hiring, and grow security engineering culture at Anthropic</li>\n<li>Enable other teams to build their own security solutions by providing design pattern guidance and expanding security ownership beyond the security team</li>\n</ul>\n<p><strong>Developer security and supply chain</strong></p>\n<ul>\n<li>Build and advance our developer security program by embedding security 
practices into the software development lifecycle and developer workflows</li>\n<li>Harden CI/CD pipelines against supply chain attacks through isolated build environments, signed attestations, dependency verification, and automated policy enforcement</li>\n</ul>\n<p><strong>Identity and secrets management</strong></p>\n<ul>\n<li>Architect systems that protect sensitive assets including model weights, customer data, and training datasets</li>\n<li>Build and operate credential issuance, rotation, and workload authentication across our multi-cloud environments</li>\n</ul>\n<p><strong>Infrastructure security</strong></p>\n<ul>\n<li>Implement and maintain cloud security controls including IAM, network segmentation, VPC architecture, and encryption across our multi-cloud and on-prem environments</li>\n<li>Contribute to cluster security controls including RBAC policies, namespace isolation, workload identity, and pod security</li>\n<li>Contribute to continuous cloud security posture management using infrastructure-as-code scanning, misconfiguration detection, and automated remediation</li>\n</ul>\n<p><strong>Secure frameworks</strong></p>\n<ul>\n<li>Build critical security foundations including cryptographic frameworks, mTLS infrastructure, secure serialization, and authorization systems, designed to prevent entire classes of vulnerabilities and empower engineering teams to work securely without becoming security experts themselves</li>\n<li>Partner with product, research, infrastructure, and other security teams to ensure frameworks integrate smoothly with lower-layer security controls</li>\n</ul>\n<p><strong>You may be a good fit if you have:</strong></p>\n<ul>\n<li>At least 8 years of software engineering experience with deep security expertise, including leading complex security initiatives independently</li>\n<li>Bachelor&#39;s degree in Computer Science or equivalent industry experience</li>\n<li>Strong programming skills in Python or at least one systems language 
such as Go, Rust, or C/C++</li>\n<li>Deep understanding of identity systems, cryptographic primitives, and secrets management</li>\n<li>Working knowledge of Kubernetes security primitives including RBAC, namespaces, network policies, and service accounts</li>\n<li>Experience leading cross-functional security initiatives and navigating complex organisational dynamics</li>\n<li>Outstanding communication skills, translating technical concepts effectively across all levels of the organisation</li>\n<li>A track record of bringing clarity and ownership to ambiguous technical problems and driving them to resolution</li>\n<li>Low ego and high empathy, with a history of growing the engineers around you and supporting diverse, inclusive teams</li>\n<li>Passion for AI safety and the role security engineering plays in building trustworthy AI systems</li>\n</ul>\n<p><strong>Strong candidates may also have:</strong></p>\n<ul>\n<li>Designed or operated identity and secrets management systems for large-scale AI or cloud infrastructure</li>\n<li>Built security frameworks or libraries adopted across an engineering organisation</li>\n<li>Led a developer security program including supply chain security, secure build infrastructure, and SDLC integrations</li>\n<li>Built or secured CI infrastructure using Nix, Bazel, or Kubernetes-based deploy systems, with depth in toolchain issues, CI/CD pipelines, and developer workflow optimisation</li>\n<li>Implemented machine identity or workload authentication systems using SPIFFE/SPIRE, mTLS, or equivalent</li>\n<li>Understanding of Linux systems internals including namespaces, cgroups, and seccomp, and how these underpin container and workload isolation</li>\n<li>Contributed to the security architecture of multi-cloud environments including network segmentation, data protection, and access governance</li>\n<li>Experience with network security controls including admission controllers, CNI-level policy, service mesh security, and east-west 
traffic enforcement</li>\n<li>Experience building runtime security monitoring using eBPF or kernel security policies</li>\n</ul>\n<p><strong>Deadline to apply:</strong></p>\n<p>None, applications will be received on a rolling basis.</p>\n<p><strong>The annual compensation range for this role is listed below.</strong></p>\n<p>For sales roles, the range provided is the role’s On Target Earnings (&quot;OTE&quot;) range, meaning the total amount of money an employee is expected to earn in a year, including bonuses and other forms of compensation.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_41528416-21c","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://job-boards.greenhouse.io","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5120512008","x-work-arrangement":"hybrid","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"The annual compensation range for this role is listed below.\n\nFor sales roles, the range provided is the role’s On Target Earnings (\"OTE\") range, meaning the total amount of money an employee is expected to earn in a year, including bonuses and other forms of compensation.","x-skills-required":["Python","Go","Rust","C/C++","Kubernetes","RBAC","namespaces","network policies","service accounts","identity systems","cryptographic primitives","secrets management"],"x-skills-preferred":["Nix","Bazel","Kubernetes-based deploy systems","SPIFFE/SPIRE","mTLS","Linux systems internals","namespaces","cgroups","seccomp","container and workload isolation","multi-cloud environments","network segmentation","data protection","access governance","admission controllers","CNI-level policy","service mesh security","east-west traffic enforcement","runtime security monitoring","eBPF","kernel security 
policies"],"datePosted":"2026-03-08T13:52:38.657Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA | New York City, NY | Seattle, WA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, Go, Rust, C/C++, Kubernetes, RBAC, namespaces, network policies, service accounts, identity systems, cryptographic primitives, secrets management, Nix, Bazel, Kubernetes-based deploy systems, SPIFFE/SPIRE, mTLS, Linux systems internals, namespaces, cgroups, seccomp, container and workload isolation, multi-cloud environments, network segmentation, data protection, access governance, admission controllers, CNI-level policy, service mesh security, east-west traffic enforcement, runtime security monitoring, eBPF, kernel security policies"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_25934fbc-c50"},"title":"Staff / Senior Software Engineer, Cloud Inference","description":"<p><strong>About the Role</strong></p>\n<p>The Cloud Inference team scales and optimizes Claude to serve the massive audiences of developers and enterprise companies across AWS, GCP, Azure, and future cloud service providers (CSPs). We own the end-to-end product of Claude on each cloud platform—from API integration and intelligent request routing to inference execution, capacity management, and day-to-day operations.</p>\n<p>Our engineers are extremely high leverage: we simultaneously drive multiple major revenue streams while optimizing one of Anthropic&#39;s most precious resources—compute. As we expand to more cloud platforms, the complexity of managing inference efficiently across providers with different hardware, networking stacks, and operational models grows significantly. 
We need engineers who can navigate these platform differences, build robust abstractions that work across providers, and make smart infrastructure decisions that keep us cost-effective at massive scale.</p>\n<p>Your work will increase the scale at which our services operate, accelerate our ability to reliably launch new frontier models and innovative features to customers across all platforms, and ensure our LLMs meet rigorous safety, performance, and security standards.</p>\n<p><strong>What You&#39;ll Do</strong></p>\n<ul>\n<li>Design and build infrastructure that serves Claude across multiple CSPs, accounting for differences in compute hardware, networking, APIs, and operational models</li>\n<li>Collaborate with CSP partner engineering teams to resolve operational issues, influence provider roadmaps, and stand up end-to-end serving on new cloud platforms</li>\n<li>Design and evolve CI/CD automation systems, including validation and deployment pipelines, that reliably ship new model versions to millions of users across cloud platforms without regressions</li>\n<li>Design interfaces and tooling abstractions across CSPs that enable cost-effective inference management, scale across providers, and reduce per-platform complexity</li>\n<li>Contribute to capacity planning and autoscaling strategies that dynamically match supply with demand across CSP validation and production workloads</li>\n<li>Optimize inference cost and performance across providers—designing workload placement and routing systems that direct requests to the most cost-effective accelerator and region</li>\n<li>Contribute to inference features that must work consistently across all platforms</li>\n<li>Analyze observability data across providers to identify performance bottlenecks, cost anomalies, and regressions, and drive remediation based on real-world production workloads</li>\n</ul>\n<p><strong>You May Be a Good Fit If You:</strong></p>\n<ul>\n<li>Have significant software engineering experience, 
with a strong background in high-performance, large-scale distributed systems serving millions of users</li>\n<li>Have experience building or operating services on at least one major cloud platform (AWS, GCP, or Azure), with exposure to Kubernetes, Infrastructure as Code or container orchestration</li>\n<li>Have strong interest in inference</li>\n<li>Thrive in cross-functional collaboration with both internal teams and external partners</li>\n<li>Are a fast learner who can quickly ramp up on new technologies, hardware platforms, and provider ecosystems</li>\n<li>Are highly autonomous and self-driven, taking ownership of problems end-to-end with a bias toward flexibility and high-impact work</li>\n<li>Pick up slack, even when it goes outside your job description</li>\n</ul>\n<p><strong>Strong Candidates May Also Have Experience With</strong></p>\n<ul>\n<li>Direct experience working with CSP partner teams to scale infrastructure or products across multiple platforms, navigating differences in networking, security, privacy, billing, and managed service offerings</li>\n<li>A background in building platform-agnostic tooling or abstraction layers that work across cloud providers</li>\n<li>Hands-on experience with capacity management, cost optimization, or resource planning at scale across heterogeneous environments</li>\n<li>Strong familiarity with LLM inference optimization, batching, caching, and serving strategies</li>\n<li>Experience with Machine learning infrastructure including GPUs, TPUs, Trainium, or other AI accelerators</li>\n<li>Background designing and building CI/CD systems that automate deployment and validation across cloud environments</li>\n<li>Solid understanding of multi-region deployments, geographic routing, and global traffic management</li>\n<li>Proficiency in Python or Rust</li>\n</ul>\n<p><strong>Logistics</strong></p>\n<p><strong>Education requirements:</strong> We require at least a Bachelor&#39;s degree in a related field or equivalent 
experience. <strong>Location-based hybrid policy:</strong> Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</p>\n<p><strong>Visa sponsorship:</strong> We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</p>","url":"https://yubhub.co/jobs/job_25934fbc-c50","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5107466008","x-work-arrangement":"hybrid","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"$300,000 - $485,000 USD","x-skills-required":["Software engineering","Cloud infrastructure","Kubernetes","Infrastructure as Code","Container orchestration","LLM inference optimization","Batching","Caching","Serving strategies","Machine learning infrastructure","GPUs","TPUs","Trainium","AI accelerators","CI/CD systems","Deployment and validation","Cloud environments","Multi-region deployments","Geographic routing","Global traffic management"],"x-skills-preferred":["Python","Rust","Cloud platforms","Networking","Security","Privacy","Billing","Managed service offerings","Platform-agnostic tooling","Abstraction layers","Capacity management","Cost optimization","Resource planning"],"datePosted":"2026-03-08T13:49:59.956Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA | Seattle, WA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Software 
engineering, Cloud infrastructure, Kubernetes, Infrastructure as Code, Container orchestration, LLM inference optimization, Batching, Caching, Serving strategies, Machine learning infrastructure, GPUs, TPUs, Trainium, AI accelerators, CI/CD systems, Deployment and validation, Cloud environments, Multi-region deployments, Geographic routing, Global traffic management, Python, Rust, Cloud platforms, Networking, Security, Privacy, Billing, Managed service offerings, Platform-agnostic tooling, Abstraction layers, Capacity management, Cost optimization, Resource planning","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":300000,"maxValue":485000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_76d0b73d-4cb"},"title":"Solutions Engineer, Security Specialist","description":"<p><strong>Solutions Engineer, Security Specialist</strong></p>\n<p><strong>Location</strong></p>\n<p>Tokyo, Japan</p>\n<p><strong>Employment Type</strong></p>\n<p>Full time</p>\n<p><strong>Location Type</strong></p>\n<p>Hybrid</p>\n<p><strong>Department</strong></p>\n<p><strong><strong>About the Team</strong></strong></p>\n<p>The Technical Success team is responsible for ensuring the safe and effective deployment of ChatGPT and OpenAI API applications for developers and enterprises, acting as a trusted advisor so customers maximize value from our models and products.</p>\n<p>As OpenAI’s enterprise footprint grows—especially across regulated industries—security and compliance diligence is increasingly happening live with CISOs, risk teams, privacy officers, and auditors.</p>\n<p><strong><strong>About the Role</strong></strong></p>\n<p>We are hiring a <strong>Security Solutions Engineer</strong> to serve as the <strong>customer-facing security and compliance pre-sales subject matter expert</strong> for priority customer accounts—especially in regulated 
industries. You will lead security deep dives, diligence workflows, and questionnaires, and help customers understand OpenAI’s security posture, controls, and architectural patterns.</p>\n<p>This role is designed to <strong>increase deal velocity and customer confidence</strong> while reducing the operational load on internal security teams by owning the customer-facing workstream and escalating selectively.</p>\n<p><strong><strong>In this role, you will</strong></strong></p>\n<ul>\n<li><strong>Lead customer security engagements end-to-end</strong>: discovery, security deep dives, live calls, follow-ups, and action tracking—especially for regulated customers.</li>\n</ul>\n<ul>\n<li><strong>Own security questionnaires/RFIs</strong> for priority customers: coordinate inputs, ensure accuracy, drive turnaround time, and manage escalations.</li>\n</ul>\n<ul>\n<li><strong>Translate security posture into customer-relevant narratives</strong>: data flows, tenant boundaries, identity and access controls, encryption, logging/monitoring, incident response, privacy controls, and risk mitigations.</li>\n</ul>\n<ul>\n<li><strong>Guide customers to standardized resources</strong> (e.g., trust collateral) and explain what is standard vs. 
what requires escalation or exceptions.</li>\n</ul>\n<ul>\n<li><strong>Partner closely with GRC and Security teams</strong> to escalate non-standard requirements, clarify control intent, and ensure customer-facing responses remain aligned with approved posture.</li>\n</ul>\n<ul>\n<li><strong>Create scalable enablement</strong>: playbooks, FAQs, response libraries, and training that reduce repeated work for Solutions Engineers and Sales.</li>\n</ul>\n<ul>\n<li><strong>Represent the voice of regulated customers internally</strong> by identifying themes and recurring blockers; propose improvements to packaging, documentation, and product readiness.</li>\n</ul>\n<p><strong><strong>You’ll thrive in this role if you</strong></strong></p>\n<ul>\n<li>Have <strong>5+ years (guideline)</strong> in a customer-facing security role such as security pre-sales/solutions engineering, security consulting, security architecture, or GRC-adjacent customer advisory in B2B SaaS or cloud environments.</li>\n</ul>\n<ul>\n<li>Can credibly engage and influence <strong>CISOs, security architects, privacy teams, and procurement/risk stakeholders</strong> in real-time discussions.</li>\n</ul>\n<ul>\n<li>Understand modern cloud/security fundamentals: IAM, network/security architecture, encryption/key management concepts, logging/monitoring, vulnerability management, incident response, and secure SDLC.</li>\n</ul>\n<ul>\n<li>Are strong in structured writing and can produce crisp, consistent answers under time pressure (questionnaires, RFIs, executive summaries).</li>\n</ul>\n<ul>\n<li>Can operate in ambiguity, own problems end-to-end, and create repeatable processes that scale beyond yourself.</li>\n</ul>\n<p><strong>About OpenAI</strong></p>\n<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. 
We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.</p>","url":"https://yubhub.co/jobs/job_76d0b73d-4cb","directApply":true,"hiringOrganization":{"@type":"Organization","name":"OpenAI","sameAs":"https://jobs.ashbyhq.com","logo":"https://logos.yubhub.co/openai.com.png"},"x-apply-url":"https://jobs.ashbyhq.com/openai/79f7dfb2-3dff-4411-afb2-f0aacb1fa641","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["security pre-sales/solutions engineering","security consulting","security architecture","GRC-adjacent customer advisory","B2B SaaS","cloud environments","IAM","network/security architecture","encryption/key management concepts","logging/monitoring","vulnerability management","incident response","secure SDLC"],"x-skills-preferred":[],"datePosted":"2026-03-06T18:41:37.318Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Tokyo, Japan"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"security pre-sales/solutions engineering, security consulting, security architecture, GRC-adjacent customer advisory, B2B SaaS, cloud environments, IAM, network/security architecture, encryption/key management concepts, logging/monitoring, vulnerability management, incident response, secure SDLC"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_7670f72a-ca5"},"title":"Security Solutions Engineer, Pre-Sales 
(Security Specialist) - APAC","description":"<p><strong>About the Team</strong></p>\n<p>The Technical Success team is responsible for ensuring the safe and effective deployment of ChatGPT and OpenAI API applications for developers and enterprises, acting as a trusted advisor so customers maximize value from our models and products.</p>\n<p>As OpenAI’s enterprise footprint grows—especially across regulated industries—security and compliance diligence is increasingly happening live with CISOs, risk teams, privacy officers, and auditors.</p>\n<p><strong>About the Role</strong></p>\n<p>We are hiring a <strong>Security Solutions Engineer</strong> to serve as the <strong>customer-facing security and compliance pre-sales subject matter expert</strong> for priority customer accounts—especially in regulated industries. You will lead security deep dives, diligence workflows, and questionnaires, and help customers understand OpenAI’s security posture, controls, and architectural patterns.</p>\n<p>This role is designed to <strong>increase deal velocity and customer confidence</strong> while reducing the operational load on internal security teams by owning the customer-facing workstream and escalating selectively.</p>\n<p>This role is based in Singapore. 
We use a hybrid work model of 3 days in the office per week and offer relocation assistance to new employees.</p>\n<p><strong>In this role, you will</strong></p>\n<ul>\n<li><strong>Lead customer security engagements end-to-end</strong>: discovery, security deep dives, live calls, follow-ups, and action tracking—especially for regulated customers.</li>\n</ul>\n<ul>\n<li><strong>Own security questionnaires/RFIs</strong> for priority customers: coordinate inputs, ensure accuracy, drive turnaround time, and manage escalations.</li>\n</ul>\n<ul>\n<li><strong>Translate security posture into customer-relevant narratives</strong>: data flows, tenant boundaries, identity and access controls, encryption, logging/monitoring, incident response, privacy controls, and risk mitigations.</li>\n</ul>\n<ul>\n<li><strong>Guide customers to standardized resources</strong> (e.g., trust collateral) and explain what is standard vs. what requires escalation or exceptions.</li>\n</ul>\n<ul>\n<li><strong>Partner closely with GRC and Security teams</strong> to escalate non-standard requirements, clarify control intent, and ensure customer-facing responses remain aligned with approved posture.</li>\n</ul>\n<ul>\n<li><strong>Create scalable enablement</strong>: playbooks, FAQs, response libraries, and training that reduce repeated work for Solutions Engineers and Sales.</li>\n</ul>\n<ul>\n<li><strong>Represent the voice of regulated customers internally</strong> by identifying themes and recurring blockers; propose improvements to packaging, documentation, and product readiness.</li>\n</ul>\n<p><strong>You’ll thrive in this role if you</strong></p>\n<ul>\n<li>Have <strong>5+ years (guideline)</strong> in a customer-facing security role such as security pre-sales/solutions engineering, security consulting, security architecture, or GRC-adjacent customer advisory in B2B SaaS or cloud environments.</li>\n</ul>\n<ul>\n<li>Can credibly engage and influence <strong>CISOs, security architects, 
privacy teams, and procurement/risk stakeholders</strong> in real-time discussions.</li>\n</ul>\n<ul>\n<li>Understand modern cloud/security fundamentals: IAM, network/security architecture, encryption/key management concepts, logging/monitoring, vulnerability management, incident response, and secure SDLC.</li>\n</ul>\n<ul>\n<li>Are strong in structured writing and can produce crisp, consistent answers under time pressure (questionnaires, RFIs, executive summaries).</li>\n</ul>\n<ul>\n<li>Can operate in ambiguity, own problems end-to-end, and create repeatable processes that scale beyond yourself.</li>\n</ul>\n<p><strong>About OpenAI</strong></p>\n<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.</p>","url":"https://yubhub.co/jobs/job_7670f72a-ca5","directApply":true,"hiringOrganization":{"@type":"Organization","name":"OpenAI","sameAs":"https://jobs.ashbyhq.com","logo":"https://logos.yubhub.co/openai.com.png"},"x-apply-url":"https://jobs.ashbyhq.com/openai/215b02db-1cbf-4f97-8866-7a460ddf7b35","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["security pre-sales/solutions engineering","security consulting","security architecture","GRC-adjacent customer advisory","B2B SaaS","cloud environments","IAM","network/security architecture","encryption/key management 
concepts","logging/monitoring","vulnerability management","incident response","secure SDLC"],"x-skills-preferred":[],"datePosted":"2026-03-06T18:37:25.183Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Singapore"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"security pre-sales/solutions engineering, security consulting, security architecture, GRC-adjacent customer advisory, B2B SaaS, cloud environments, IAM, network/security architecture, encryption/key management concepts, logging/monitoring, vulnerability management, incident response, secure SDLC"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_0e50f5ba-8b9"},"title":"Hardware Development Infrastructure Engineer","description":"<p><strong>Hardware Development Infrastructure Engineer</strong></p>\n<p><strong>About the Team:</strong></p>\n<p>OpenAI&#39;s Hardware organization develops silicon and system-level solutions designed for the unique demands of advanced AI workloads. The team is responsible for building the next generation of AI-native silicon while working closely with software and research partners to co-design hardware tightly integrated with AI models. In addition to delivering production-grade silicon for OpenAI&#39;s supercomputing infrastructure, the team also creates custom design tools and methodologies that accelerate innovation and enable hardware optimized specifically for AI.</p>\n<p><strong>About the Role</strong></p>\n<p>We&#39;re looking for a Hardware Development Infrastructure Engineer to build and run the infrastructure that powers OpenAI&#39;s hardware development lifecycle. 
You&#39;ll work closely with hardware teams to translate their workflows into scalable, observable, and automated systems, and then own the platforms that support them over time.</p>\n<p>This role sits at the intersection of hardware, cloud, HPC, DevOps, and data. You&#39;ll design regression systems, CI/CD pipelines, cloud and cluster platforms, and the data foundations that make development efficiency visible and measurable.</p>\n<p><strong>In this role, you will:</strong></p>\n<ul>\n<li>Partner with hardware teams on workflows and tooling: Embed with teams across DV, PD, emulation, formal, and software to understand development flows, identify failure modes, and deliver tooling (CLIs, services, APIs) that reduces manual work and accelerates iteration.</li>\n</ul>\n<ul>\n<li>Build and operate regression systems at scale: Own regressions end-to-end—from definition and scheduling to execution, results ingestion, triage, and reporting—while improving throughput, reproducibility, and flake reduction.</li>\n</ul>\n<ul>\n<li>Own CI/CD for infrastructure and tooling: Design and operate pipelines for infrastructure-as-code, services, images, and cluster configuration changes, including testing, gated deploys, staged rollouts, and safe rollback.</li>\n</ul>\n<ul>\n<li>Run cloud and HPC platforms: Design, provision, and operate cloud infrastructure (Azure preferred) and HPC/HTC clusters (e.g., Slurm), tuning scheduling policies, autoscaling, node lifecycles, and cost-performance tradeoffs.</li>\n</ul>\n<ul>\n<li>Build data foundations and visibility: Develop ETL pipelines to ingest metrics, logs, and results; operate databases for workflow metadata and outcomes; and build dashboards that surface efficiency, utilization, and reliability trends.</li>\n</ul>\n<ul>\n<li>Drive operational excellence: Establish monitoring and alerting, lead incident response and postmortems, maintain runbooks, and produce clear, durable documentation.</li>\n</ul>\n<p><strong>You might thrive in 
this role if you have:</strong></p>\n<ul>\n<li>Familiarity with chip development workflows and at least one deep EDA domain (e.g., DV, PD, emulation, or formal verification).</li>\n</ul>\n<ul>\n<li>Strong infrastructure fundamentals, including cloud platforms, networking, security, performance, and automation.</li>\n</ul>\n<ul>\n<li>Experience operating cloud environments (Azure preferred; AWS, GCP, or OCI acceptable) with strong infrastructure-as-code practices (e.g., Terraform, Bicep; configuration management tools a plus).</li>\n</ul>\n<ul>\n<li>Strong programming skills (Python preferred) and solid software engineering and scripting practices.</li>\n</ul>\n<ul>\n<li>Experience building and operating CI/CD systems (e.g., Jenkins, Buildkite, GitHub Actions), including testing and release workflows.</li>\n</ul>\n<ul>\n<li>Database experience (e.g., Postgres or MySQL), including schema design, migrations, indexing, and operational safety.</li>\n</ul>\n<ul>\n<li>Clear communicator with strong judgment—able to explain tradeoffs, propose pragmatic solutions, and articulate a realistic vision for scalable infrastructure</li>\n</ul>\n<p><strong>Preferred Qualifications</strong></p>\n<ul>\n<li>Experience operating Slurm or other large-scale cluster schedulers.</li>\n</ul>\n<ul>\n<li>Experience with enterprise authentication and directory services (e.g., Entra ID, LDAP, FreeIPA, SSSD).</li>\n</ul>\n<ul>\n<li>Experience building or operating backend and middleware systems</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<ul>\n<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>\n</ul>\n<ul>\n<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>\n</ul>\n<ul>\n<li>401(k) retirement plan with employer match</li>\n</ul>\n<ul>\n<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 
weeks)</li>\n</ul>\n<ul>\n<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>\n</ul>\n<ul>\n<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>\n</ul>\n<ul>\n<li>Mental health and wellness support</li>\n</ul>\n<ul>\n<li>Employer-paid basic life and disability coverage</li>\n</ul>\n<ul>\n<li>Annual learning and development stipend to fuel your professional growth</li>\n</ul>\n<ul>\n<li>Daily meals in our offices, and meal delivery credits as eligible</li>\n</ul>\n<ul>\n<li>Relocation support for eligible employees</li>\n</ul>\n<ul>\n<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>\n</ul>\n<p><strong>Compensation</strong></p>\n<ul>\n<li>$260K – $335K • Offers Equity</li>\n</ul>\n<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. 
If the role is non-exempt, overtime pay will be provided consistent with applicable laws.</p>","url":"https://yubhub.co/jobs/job_0e50f5ba-8b9","directApply":true,"hiringOrganization":{"@type":"Organization","name":"OpenAI","sameAs":"https://jobs.ashbyhq.com","logo":"https://logos.yubhub.co/openai.com.png"},"x-apply-url":"https://jobs.ashbyhq.com/openai/f2908f94-93a9-476b-ac83-b03392ae827d","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$260K – $335K • Offers Equity","x-skills-required":["chip development workflows","EDA domain","cloud platforms","networking","security","performance","automation","cloud environments","infrastructure-as-code","configuration management tools","programming skills","software engineering","scripting practices","CI/CD systems","testing","release workflows","database experience","schema design","migrations","indexing","operational safety"],"x-skills-preferred":["Slurm","enterprise authentication","directory services","backend and middleware systems"],"datePosted":"2026-03-06T18:28:58.829Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"chip development workflows, EDA domain, cloud platforms, networking, security, performance, automation, cloud environments, infrastructure-as-code, configuration management tools, programming skills, software engineering, scripting practices, CI/CD systems, testing, release workflows, database experience, schema design, migrations, indexing, operational safety, Slurm, enterprise authentication, directory services, backend and middleware 
systems","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":260000,"maxValue":335000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_9278e637-313"},"title":"Software Engineer, Core Services","description":"<p><strong>Job Posting</strong></p>\n<p><strong>Software Engineer, Core Services</strong></p>\n<p><strong>Location</strong></p>\n<p>San Francisco</p>\n<p><strong>Employment Type</strong></p>\n<p>Full time</p>\n<p><strong>Department</strong></p>\n<p>Applied AI</p>\n<p><strong>Compensation</strong></p>\n<ul>\n<li>$230K – $385K • Offers Equity</li>\n</ul>\n<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>\n<ul>\n<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>\n</ul>\n<ul>\n<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>\n</ul>\n<ul>\n<li>401(k) retirement plan with employer match</li>\n</ul>\n<ul>\n<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>\n</ul>\n<ul>\n<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>\n</ul>\n<ul>\n<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local 
law)</li>\n</ul>\n<ul>\n<li>Mental health and wellness support</li>\n</ul>\n<ul>\n<li>Employer-paid basic life and disability coverage</li>\n</ul>\n<ul>\n<li>Annual learning and development stipend to fuel your professional growth</li>\n</ul>\n<ul>\n<li>Daily meals in our offices, and meal delivery credits as eligible</li>\n</ul>\n<ul>\n<li>Relocation support for eligible employees</li>\n</ul>\n<ul>\n<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>\n</ul>\n<p>More details about our benefits are available to candidates during the hiring process.</p>\n<p>This role is at-will and OpenAI reserves the right to modify base pay and other compensation components at any time based on individual performance, team or company results, or market conditions.</p>\n<p><strong>About the Team</strong></p>\n<p>The Core Services team is responsible for building and managing foundational services. It acts as the bridge between core infrastructure (e.g. compute, storage, networking) and product engineering teams, and enables product teams to move fast, build reliably, and scale efficiently.</p>\n<p><strong>About the Role</strong></p>\n<p>As a software engineer in the core services team, you will design and operate critical backend platforms such as caching systems, workflow orchestration, metadata stores, and file services. You’ll focus on building highly reliable, scalable, and performant systems that serve as the backbone of our products.</p>\n<p>We’re looking for people who are passionate about building infrastructure that empowers product teams, love working on distributed systems challenges, and enjoy creating well-designed APIs and abstractions that accelerate development.</p>\n<p>This role is based in San Francisco, CA. 
We use a hybrid work model of 3 days in the office per week and offer relocation assistance to new employees.</p>\n<p><strong>In this role, you will:</strong></p>\n<ul>\n<li>Design, build, and maintain shared infrastructure services such as caching layers, workflow orchestration (Temporal), metadata stores, and file storage services.</li>\n</ul>\n<ul>\n<li>Collaborate with product teams to provide scalable, reliable primitives that abstract the complexities of distributed systems.</li>\n</ul>\n<ul>\n<li>Improve performance, resilience, and scalability of core services that power customer-facing applications.</li>\n</ul>\n<p><strong>You might thrive in this role if you:</strong></p>\n<ul>\n<li>Have experience with distributed systems, caching infrastructure (e.g., Redis, Memcached), metadata storage (e.g., FoundationDB), or workflow orchestration (e.g., Temporal, Cadence).</li>\n</ul>\n<ul>\n<li>Have experience running containerized services in cloud environments and integrating them into automated build/test/release (CI/CD) workflows.</li>\n</ul>\n<ul>\n<li>Understand trade-offs in consistency models, replication strategies, and performance optimization in multi-region systems.</li>\n</ul>\n<ul>\n<li>Excel at communication and collaboration with cross-functional teams, and are obsessed with delivering customer success.</li>\n</ul>\n<p><strong>About OpenAI</strong></p>\n<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. 
AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.</p>","url":"https://yubhub.co/jobs/job_9278e637-313","directApply":true,"hiringOrganization":{"@type":"Organization","name":"OpenAI","sameAs":"https://jobs.ashbyhq.com","logo":"https://logos.yubhub.co/openai.com.png"},"x-apply-url":"https://jobs.ashbyhq.com/openai/21bfde35-ffec-42d2-a2c6-8a03dad789d5","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":"$230K – $385K • Offers Equity","x-skills-required":["distributed systems","caching infrastructure","metadata storage","workflow orchestration","containerized services","cloud environments","automated build/test/release (CI/CD) workflows","consistency models","replication strategies","performance optimization"],"x-skills-preferred":["communication and collaboration","cross-functional teams","customer success"],"datePosted":"2026-03-06T18:24:27.006Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"distributed systems, caching infrastructure, metadata storage, workflow orchestration, containerized services, cloud environments, automated build/test/release (CI/CD) workflows, consistency models, replication strategies, performance optimization, communication and collaboration, cross-functional teams, customer 
success","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":230000,"maxValue":385000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_3de2c475-9ca"},"title":"Software Engineer, Database Systems","description":"<p><strong>Software Engineer, Database Systems</strong></p>\n<p><strong>Location</strong></p>\n<p>San Francisco</p>\n<p><strong>Employment Type</strong></p>\n<p>Full time</p>\n<p><strong>Department</strong></p>\n<p>Applied AI</p>\n<p><strong>Compensation</strong></p>\n<ul>\n<li>$230K – $385K • Offers Equity</li>\n</ul>\n<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>\n<p><strong>Benefits</strong></p>\n<ul>\n<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>\n</ul>\n<ul>\n<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>\n</ul>\n<ul>\n<li>401(k) retirement plan with employer match</li>\n</ul>\n<ul>\n<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>\n</ul>\n<ul>\n<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>\n</ul>\n<ul>\n<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or 
local law)</li>\n</ul>\n<ul>\n<li>Mental health and wellness support</li>\n</ul>\n<ul>\n<li>Employer-paid basic life and disability coverage</li>\n</ul>\n<ul>\n<li>Annual learning and development stipend to fuel your professional growth</li>\n</ul>\n<ul>\n<li>Daily meals in our offices, and meal delivery credits as eligible</li>\n</ul>\n<ul>\n<li>Relocation support for eligible employees</li>\n</ul>\n<ul>\n<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>\n</ul>\n<p><strong>About the Team:</strong></p>\n<p>The Database Systems team specializes in high-performance distributed databases. Our team built Rockset, the real-time search, analytics, and vector database that powers all vector search and retrieval augmented generation (RAG) at OpenAI. In addition to retrieval, as an online database, Rockset powers core functionality across all of OpenAI&#39;s product lines and many critical internal use cases.</p>\n<p><strong>About the Role:</strong></p>\n<p>We are looking for engineers passionate about distributed systems, close-to-the-metal performance optimization (our core engine is written in C++), and building scalable database infrastructure from the ground up. As an engineer on the Database Systems team, you&#39;ll contribute to the core database engine, driving improvements across ingestion, query execution, indexing, and storage. 
You&#39;ll partner with teams across OpenAI to unlock new product capabilities and help scale online database reliability and throughput as usage grows by orders of magnitude.</p>\n<p><strong>In this role you will:</strong></p>\n<ul>\n<li>Design, build, and operate high-performance distributed systems</li>\n</ul>\n<ul>\n<li>Identify and resolve performance bottlenecks to scale infrastructure to the next order of magnitude</li>\n</ul>\n<ul>\n<li>Define long-term technical direction and guide system evolution</li>\n</ul>\n<ul>\n<li>Collaborate with product, engineering, and research teams to deliver scalable and reliable infrastructure</li>\n</ul>\n<ul>\n<li>Dig deep into complex production issues across the stack</li>\n</ul>\n<ul>\n<li>Contribute to incident response, postmortems, and best practices for system reliability</li>\n</ul>\n<p><strong>You might thrive in this role if you:</strong></p>\n<ul>\n<li>Have significant experience building, scaling, and optimizing distributed systems at scale</li>\n</ul>\n<ul>\n<li>Are curious about database internals, storage engines, or low-latency query systems</li>\n</ul>\n<ul>\n<li>Enjoy debugging challenging performance issues in complex, high-throughput systems</li>\n</ul>\n<ul>\n<li>Have experience operating production clusters at scale (e.g., Kubernetes or other orchestration systems)</li>\n</ul>\n<ul>\n<li>Think rigorously about scalability, correctness, and reliability</li>\n</ul>\n<ul>\n<li>Thrive in fast-paced environments with high autonomy and impact</li>\n</ul>\n<p><strong>Qualifications:</strong></p>\n<ul>\n<li>4+ years of relevant industry experience, with 2+ years leading large scale, complex projects or teams as an engineer or tech lead</li>\n</ul>\n<ul>\n<li>Experience with distributed systems at scale, with a strong focus on performance, reliability, and scalability</li>\n</ul>\n<ul>\n<li>Strong communication skills and ability to collaborate across highly technical and cross-functional 
teams</li>\n</ul>\n<ul>\n<li>Proficiency in a systems programming language such as C++ (our core engine is written in C++) is strongly preferred</li>\n</ul>\n<ul>\n<li>Fluency in cloud environments (AWS, GCP, Azure) and IaC tools (Terraform or similar)</li>\n</ul>\n<ul>\n<li>Experience with Linux systems, CI/CD pipelines, and modern observability stacks (Prometheus, Grafana, etc.)</li>\n</ul>\n<ul>\n<li>Domain knowledge in areas such as databases, data systems, storage engines, indexing, and query processing is a plus but not required</li>\n</ul>\n<p><strong>About OpenAI</strong></p>\n<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.</p>","url":"https://yubhub.co/jobs/job_3de2c475-9ca","directApply":true,"hiringOrganization":{"@type":"Organization","name":"OpenAI","sameAs":"https://jobs.ashbyhq.com","logo":"https://logos.yubhub.co/openai.com.png"},"x-apply-url":"https://jobs.ashbyhq.com/openai/2b5e8e15-7952-4170-a927-2ad68e318ed6","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$230K – $385K • Offers Equity","x-skills-required":["distributed systems","C++","cloud environments","IaC tools","Linux systems","CI/CD pipelines","modern observability stacks"],"x-skills-preferred":["database internals","storage engines","low-latency query systems","Kubernetes","orchestration 
systems"],"datePosted":"2026-03-06T18:24:14.702Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"distributed systems, C++, cloud environments, IaC tools, Linux systems, CI/CD pipelines, modern observability stacks, database internals, storage engines, low-latency query systems, Kubernetes, orchestration systems","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":230000,"maxValue":385000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_67dcf42f-2dc"},"title":"Engineering Manager ChatGPT Infra","description":"<p><strong>Engineering Manager ChatGPT Infra</strong></p>\n<p><strong>Location</strong></p>\n<p>London, UK</p>\n<p><strong>Employment Type</strong></p>\n<p>Full time</p>\n<p><strong>Department</strong></p>\n<p>Applied AI</p>\n<p><strong><strong>About the Team:</strong></strong></p>\n<p>The ChatGPT Infrastructure team is responsible for the platform that powers ChatGPT, one of the fastest-growing consumer products in history. We build, scale, and operate the infrastructure that enables rapid experimentation, reliable deployment, and global delivery of AI-powered experiences. As we expand our global footprint, we’re investing in establishing a leadership presence in London to help shape our growing office and drive collaboration across OpenAI’s international teams.</p>\n<p><strong><strong>About the Role:</strong></strong></p>\n<p>We’re looking for an experienced Engineering Manager to lead the ChatGPT Infra team from our London office. In this dual role, you’ll be both a technical leader and the site lead for our London engineering hub. 
You’ll be responsible for building and mentoring a world-class infra team, helping to scale ChatGPT infrastructure, and fostering a strong, inclusive engineering culture at our growing international site.</p>\n<p>You will:</p>\n<ul>\n<li>Lead a team of infrastructure engineers focused on availability, scalability, and performance for ChatGPT.</li>\n</ul>\n<ul>\n<li>Collaborate closely with product and research teams to deliver a seamless and robust experience to millions of users.</li>\n</ul>\n<ul>\n<li>Define and drive technical strategy for key components such as deployment pipelines, service mesh, observability, and CI/CD systems.</li>\n</ul>\n<ul>\n<li>Partner with recruiting to grow the London engineering team and represent OpenAI in the local tech community.</li>\n</ul>\n<ul>\n<li>Serve as a cultural ambassador and people manager, supporting cross-functional collaboration and site operations.</li>\n</ul>\n<ul>\n<li>Operate with a high degree of autonomy and ownership, with support from global leaders and peers.</li>\n</ul>\n<p><strong><strong>Qualifications:</strong></strong></p>\n<ul>\n<li>7+ years of hands-on engineering experience, ideally in high-scale systems, distributed computing, or developer platforms.</li>\n</ul>\n<ul>\n<li>Demonstrated success in leading cross-functional projects and collaborating across product, infra, and research orgs.</li>\n</ul>\n<ul>\n<li>Passion for building strong, inclusive teams and mentoring engineers of all experience levels.</li>\n</ul>\n<ul>\n<li>Experience operating production services in cloud environments (e.g., AWS, GCP, Azure).</li>\n</ul>\n<ul>\n<li>Comfortable wearing multiple hats — from deep technical discussions to team planning and office leadership.</li>\n</ul>\n<ul>\n<li>Based in or willing to relocate to London.</li>\n</ul>\n<p><strong>About OpenAI</strong></p>\n<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. 
We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.</p>","url":"https://yubhub.co/jobs/job_67dcf42f-2dc","directApply":true,"hiringOrganization":{"@type":"Organization","name":"OpenAI","sameAs":"https://jobs.ashbyhq.com","logo":"https://logos.yubhub.co/openai.com.png"},"x-apply-url":"https://jobs.ashbyhq.com/openai/5a4ba7cb-4ba2-41d3-8e02-840617a0f571","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["high-scale systems","distributed computing","developer platforms","cloud environments","AWS","GCP","Azure","deployment pipelines","service mesh","observability","CI/CD systems"],"x-skills-preferred":["leadership","team management","cross-functional collaboration","site operations"],"datePosted":"2026-03-06T18:20:48.510Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"London, UK"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"high-scale systems, distributed computing, developer platforms, cloud environments, AWS, GCP, Azure, deployment pipelines, service mesh, observability, CI/CD systems, leadership, team management, cross-functional collaboration, site operations"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_c1864285-b9b"},"title":"Senior Applied Scientist","description":"<p><strong>Summary</strong></p>\n<p>Microsoft AI are looking for a talented Senior Applied Scientist at 
their Redmond office. This role sits at the heart of strategic decision-making, turning market data into actionable insights for a company that&#39;s revolutionising AI technology. You&#39;ll work directly with leadership to shape the company&#39;s direction in the AI market.</p>\n<p><strong>About the Role</strong></p>\n<p>As a Senior Applied Scientist, you will join the asset generation team in Microsoft AI focusing on image and video retrieval, recommendation and generation. You will build core generative AI solutions that power customer-facing AI solutions and services for Microsoft Advertising on Bing platforms. In this role, you&#39;ll combine solid computer vision skills with applied ML expertise to design, prototype, evaluate and ship production systems, using techniques like knowledge distillation, prompt engineering, reinforcement learning, image/video processing and rigorous evaluation/metrics to continuously improve image and video asset quality. You&#39;ll partner closely across product, research, and service engineering to deliver innovative and robust solutions for enterprise customers.</p>\n<p><strong>Accountabilities</strong></p>\n<ul>\n<li>Research, develop, and build effective and innovative production-grade generative AI and classical computer vision systems, with end-to-end ownership from concept through deployment and service operations.</li>\n<li>Lead technical design for core GenAI capabilities on image and video assets (e.g., image and video generation, super-resolution and summarization) and make data-driven tradeoffs across quality, latency, cost, and safety.</li>\n</ul>\n<p><strong>The Candidate we&#39;re looking for</strong></p>\n<p><strong>Experience:</strong></p>\n<ul>\n<li>Bachelor’s Degree in Statistics, Econometrics, Computer Science, Electrical or Computer Engineering, or related field AND 4+ years related experience (e.g., statistics, predictive analytics, research) OR Master’s Degree in Statistics, Econometrics, Computer 
Science, Electrical or Computer Engineering, or related field AND 3+ years related experience (e.g., statistics, predictive analytics, research) OR Doctorate in Statistics, Econometrics, Computer Science, Electrical or Computer Engineering, or related field AND 1+ year(s) related experience (e.g., statistics, predictive analytics, research) OR equivalent experience.</li>\n</ul>\n<p><strong>Technical skills:</strong></p>\n<ul>\n<li>Demonstrated modelling skills in training and inference for online and offline computer vision models, with related performance optimization for latency and artifacts.</li>\n<li>Experience with prompt engineering, knowledge distillation and post-training.</li>\n<li>Experience building and shipping generative AI systems (including image and video systems).</li>\n</ul>\n<p><strong>Personal attributes:</strong></p>\n<ul>\n<li>Track record of delivering enterprise-facing AI products at scale.</li>\n<li>Experience building and operating ML/AI systems in cloud environments; familiarity with MLOps practices (Azure a plus).</li>\n<li>Experience in publishing papers in top-tier computer vision and machine learning conferences such as CVPR, ICML, ICCV, NeurIPS and ICLR.</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<ul>\n<li>Competitive salary.</li>\n<li>Comprehensive benefits package.</li>\n<li>Opportunities for professional growth and development.</li>\n<li>Collaborative and dynamic work environment.</li>\n</ul>","url":"https://yubhub.co/jobs/job_c1864285-b9b","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Microsoft AI","sameAs":"https://microsoft.ai","logo":"https://logos.yubhub.co/microsoft.ai.png"},"x-apply-url":"https://microsoft.ai/job/senior-applied-scientist-3/","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"USD $119,800 – 
$234,700 per year","x-skills-required":["solid computer vision skills","applied ML expertise","knowledge distillation","prompt engineering","reinforcement learning","image/video processing"],"x-skills-preferred":["experience in publishing papers in top-tier computer vision and machine learning conferences","familiarity with MLOps practices","experience building and operating ML/AI systems in cloud environments"],"datePosted":"2026-03-06T07:23:44.042Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Redmond"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"solid computer vision skills, applied ML expertise, knowledge distillation, prompt engineering, reinforcement learning, image/video processing, experience in publishing papers in top-tier computer vision and machine learning conferences, familiarity with MLOps practices, experience building and operating ML/AI systems in cloud environments","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":119800,"maxValue":234700,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_49bcfb3f-03d"},"title":"Cloud Security Engineer","description":"<p>Perplexity is seeking a highly experienced and hands-on Cloud Security Engineer to join our dynamic security team. 
In this role, you&#39;ll lead efforts to build and maintain secure, scalable infrastructure that empowers engineers to innovate quickly and safely.</p>\n<p><strong>What you&#39;ll do</strong></p>\n<p>Partner with infrastructure and engineering teams to embed security into development workflows and promote secure-by-default patterns.</p>\n<ul>\n<li>Build Terraform modules with built-in security guardrails, such as logging, encryption, and automated threat detection enablement.</li>\n</ul>\n<ul>\n<li>Deploy cloud-native detection capabilities using AWS GuardDuty, Security Hub, and custom detection rules to identify credential compromise, crypto-mining, and lateral movement.</li>\n</ul>\n<p><strong>What you need</strong></p>\n<ul>\n<li>8+ years of experience in Cloud Infrastructure, Platform Engineering, or similar roles.</li>\n</ul>\n<ul>\n<li>Proven track record of building and scaling infrastructure at high-growth technology companies.</li>\n</ul>\n<ul>\n<li>Deep understanding of cloud-native architectures, microservices, and distributed systems.</li>\n</ul>","url":"https://yubhub.co/jobs/job_49bcfb3f-03d","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Perplexity","sameAs":"https://jobs.ashbyhq.com","logo":"https://logos.yubhub.co/perplexity.com.png"},"x-apply-url":"https://jobs.ashbyhq.com/perplexity/b932d73f-49f3-4367-8fa7-a22f760e55a3","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$220K – $405K","x-skills-required":["Cloud Infrastructure","Platform Engineering","Cloud-Native Architectures","Microservices","Distributed Systems"],"x-skills-preferred":["Python","Go","AI/ML Infrastructure","Multi-Cloud Environments"],"datePosted":"2026-03-04T12:27:52.028Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, 
London, New York City, Remote (United States), Serbia"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Cloud Infrastructure, Platform Engineering, Cloud-Native Architectures, Microservices, Distributed Systems, Python, Go, AI/ML Infrastructure, Multi-Cloud Environments","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":220000,"maxValue":405000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_5ebcdddc-764"},"title":"Software Full-Stack Developer","description":"<p>Porsche Engineering Romania is seeking a talented Software Full-Stack Developer to join our Digitalization &amp; Automation team, a core driver of the company’s digital transformation initiatives. Your technical expertise and problem-solving skills will be essential in building robust solutions that elevate our digital platforms and deliver an exceptional user experience.</p>\n<p><strong>What you&#39;ll do</strong></p>\n<p>Your responsibilities will include:</p>\n<ul>\n<li>You take ownership of the development and maintenance of web applications across frontend and backend, you integrate frontend applications with backend APIs and authentication mechanisms (JWT, Azure AD)</li>\n<li>You build and evolve frontend applications using React and TypeScript, with a focus on clarity, maintainability, and usability</li>\n</ul>\n<p><strong>What you need</strong></p>\n<p>To be successful in this role, you will need:</p>\n<ul>\n<li>A Bachelor’s or Master’s degree in Information Technology or equivalent practical experience</li>\n<li>3+ years of experience in Full-Stack web development integrating frontend applications with RESTful APIs and practical knowledge of backend development using Python (FastAPI or similar)</li>\n<li>Hands-on experience with React and TypeScript in production 
environments</li>\n</ul>","url":"https://yubhub.co/jobs/job_5ebcdddc-764","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Porsche Engineering Services GmbH","sameAs":"https://jobs.porsche.com","logo":"https://logos.yubhub.co/jobs.porsche.com.png"},"x-apply-url":"https://jobs.porsche.com/index.php?ac=jobad&id=19242","x-work-arrangement":"onsite","x-experience-level":"entry","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Full-Stack web development","React","TypeScript","Python","FastAPI"],"x-skills-preferred":["Docker","cloud environments (Azure)"],"datePosted":"2025-12-24T10:06:10.734Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Cluj"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Automotive","skills":"Full-Stack web development, React, TypeScript, Python, FastAPI, Docker, cloud environments (Azure)"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_dee38b1f-7ab"},"title":"Functional Expert for SAP PP","description":"<p><strong>What you&#39;ll do</strong></p>\n<p>Act as functional expert for SAP PP (and related modules: MM, QM, PM if relevant). Lead and support full lifecycle S/4HANA implementation and rollout projects. Conduct fit-gap analysis, define business requirements, and design process solutions. Configure and customize SAP PP processes (MRP, shop floor control, production orders, capacity planning, etc.). Collaborate closely with integration teams (EWM, TM, QM, and PM). Support testing (unit, integration, UAT) and training activities. Serve as subject matter expert for manufacturing clients in automotive or industrial sectors. 
Support pre-sales, knowledge sharing, and internal enablement activities.</p>\n<p><strong>What you need</strong></p>\n<p>Must have: Deep knowledge of SAP PP (Discrete Manufacturing, MRP, BOM/Routing, Order Management). Experience with SAP S/4HANA. Strong understanding of end-to-end manufacturing and logistics processes. Fluent in English (written &amp; spoken). Strong communication and stakeholder management skills. Experience leading or mentoring junior consultants. Nice-to-have: Integration knowledge with QM, PM, EWM, MES. German language skills. Automotive or manufacturing industry experience. SAP certification (PP or S/4HANA Manufacturing). Experience in public/private cloud environments.</p>","url":"https://yubhub.co/jobs/job_dee38b1f-7ab","directApply":true,"hiringOrganization":{"@type":"Organization","name":"MHP - A Porsche Company","sameAs":"https://jobs.porsche.com","logo":"https://logos.yubhub.co/jobs.porsche.com.png"},"x-apply-url":"https://jobs.porsche.com/index.php?ac=jobad&id=18618","x-work-arrangement":"onsite","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["SAP PP","SAP S/4HANA","Manufacturing and logistics processes","English (written & spoken)","Communication and stakeholder management skills","Experience leading or mentoring junior consultants"],"x-skills-preferred":["Integration knowledge with QM, PM, EWM, MES","German language skills","Automotive or manufacturing industry experience","SAP certification (PP or S/4HANA Manufacturing)","Experience in public/private cloud environments"],"datePosted":"2025-12-08T16:27:24.661Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Bucharest, Cluj, Timisoara"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"SAP PP, SAP S/4HANA, Manufacturing and logistics processes, English (written & spoken), Communication and stakeholder management skills, Experience leading or mentoring junior consultants, Integration knowledge with QM, PM, EWM, MES, German language skills, Automotive or manufacturing industry experience, SAP certification (PP or S/4HANA Manufacturing), Experience in public/private cloud environments"}]}