{"version":"0.1","company":{"name":"YubHub","url":"https://yubhub.co","jobsUrl":"https://yubhub.co/jobs/skill/performance-tuning"},"x-facet":{"type":"skill","slug":"performance-tuning","display":"Performance Tuning","count":37},"x-feed-size-limit":100,"x-feed-sort":"enriched_at desc","x-feed-notice":"This feed contains at most 100 jobs (the most recently enriched). For the full corpus, use the paginated /stats/by-facet endpoint or /search.","x-generator":"yubhub-xml-generator","x-rights":"Free to redistribute with attribution: \"Data by YubHub (https://yubhub.co)\"","x-schema":"Each entry in `jobs` follows https://schema.org/JobPosting. YubHub-native raw fields carry `x-` prefix.","jobs":[{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_d05f2d69-fce"},"title":"AI Product Engineer - Agentic AI Platforms (Financial Services)","description":"<p>We are seeking an experienced and innovative AI Product Engineer – Agentic Platforms to join our Financial Services Artificial Intelligence &amp; Business Lines (FS-ABL) practice. This role is ideal for a consulting technologist with deep expertise in modern GenAI tooling, agentic system design, and enterprise SDLC, who can partner directly with clients to envision, design, develop, and deploy Agentic AI platforms in regulated environments.</p>\n<p>In this role, you will work at the intersection of client advisory, AI product engineering, and delivery execution, helping banks, insurers, and capital markets firms transition from GenAI pilots to production-grade, governed, multi-agent systems. You will apply leading GenAI frameworks and LLM platforms, including Anthropic, OpenAI, LangChain, LangGraph, DSPy, and vector databases, while operating across the full Agentic SDLC.</p>\n<p>P&amp;C Insurance knowledge and experience is a significant plus. 
Additionally, familiarity with core insurance platforms like Guidewire, Duck Creek, or Majesco will be extremely helpful to succeed in this role.</p>\n<p>We are looking for candidates across all levels of experience and expertise - junior through senior-level AI Product Engineers.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Partner directly with Financial Services clients to identify, prioritize, and shape Agentic AI use cases across customer operations, underwriting, claims, risk, compliance, finance, and technology.</li>\n<li>Lead client workshops to define agent personas, responsibilities, autonomy boundaries, human-in-the-loop checkpoints, and escalation logic.</li>\n<li>Translate evolving business needs into agentic product backlogs, roadmaps, and MVP definitions.</li>\n<li>Support executive conversations around GenAI platform strategy, operating models, vendor selection, and scale-out approaches.</li>\n</ul>\n<p><strong>Agentic Platform &amp; Architecture Design</strong></p>\n<ul>\n<li>Design and implement multi-agent architectures using modern GenAI tooling, including:</li>\n<li>Planner, executor, reviewer/critic, and supervisor agents</li>\n<li>Tool-calling and function-calling agents</li>\n<li>Memory-enabled agents (conversation, semantic, episodic, and structured memory)</li>\n<li>Leverage LangChain and LangGraph for agent orchestration, workflows, and control flow.</li>\n<li>Apply DSPy and declarative prompt optimization techniques for repeatability, performance tuning, and regression control.</li>\n<li>Design agent interaction patterns such as hierarchical agents, collaborating agents, and event-driven agent workflows.</li>\n<li>Define standardized agent contracts, interfaces, and schemas to enable reuse and scale.</li>\n</ul>\n<p><strong>Agentic SDLC &amp; Engineering Delivery</strong></p>\n<ul>\n<li>Own delivery across the full Software Development Lifecycle (SDLC), extending it into a formal Agentic SDLC, including:</li>\n<li>Agent design 
specifications and behavior contracts</li>\n<li>Prompt, policy, and tool versioning</li>\n<li>Simulation environments and offline evaluation</li>\n<li>Automated testing of agent flows and guardrails</li>\n<li>Controlled rollout, telemetry-driven optimization, and continuous learning</li>\n<li>Build production-grade AI services primarily using Python, integrating:</li>\n<li>LLM providers such as Anthropic (Claude), OpenAI, and open-source models</li>\n<li>Retrieval-Augmented Generation (RAG) using vector databases (e.g., Pinecone, FAISS, Milvus, Weaviate)</li>\n<li>Implement CI/CD pipelines for agent code, prompts, and policies.</li>\n<li>Integrate GenAI agents with client systems via APIs, workflow engines, event streams, and data platforms.</li>\n</ul>\n<p><strong>Observability, Evaluation &amp; Optimization</strong></p>\n<ul>\n<li>Implement agent observability including tracing, decision logging, tool usage, and failure analysis.</li>\n<li>Apply evaluation frameworks for hallucination detection, consistency checks, and fitness scoring.</li>\n<li>Design feedback loops incorporating human-in-the-loop review and reinforcement.</li>\n<li>Monitor cost, latency, throughput, and behavioral drift across deployed agents.</li>\n</ul>\n<p><strong>Governance, Risk &amp; Financial Services Compliance</strong></p>\n<ul>\n<li>Design Agentic AI platforms aligned with Financial Services regulatory expectations, including:</li>\n<li>Auditability and traceability of agent decisions</li>\n<li>Model and prompt explainability</li>\n<li>Data privacy and security controls</li>\n<li>Resilience and fail-safe mechanisms</li>\n<li>Embed guardrails and policies addressing hallucination risk, bias, unauthorized actions, and escalation failures.</li>\n<li>Produce documentation supporting risk, compliance, internal audit, and regulator engagement.</li>\n</ul>\n<p><strong>Team Leadership &amp; Firm Contribution</strong></p>\n<ul>\n<li>Provide technical leadership and mentorship to consulting 
delivery teams.</li>\n<li>Contribute to internal GenAI accelerators, agent frameworks, and reusable assets.</li>\n<li>Support RFPs, proposals, and client solution designs with credible GenAI and agentic architectures.</li>\n<li>Participate in thought leadership on Agentic SDLC, GenAI engineering, and responsible autonomy.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_d05f2d69-fce","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Capgemini","sameAs":"https://www.capgemini.com/","logo":"https://logos.yubhub.co/capgemini.com.png"},"x-apply-url":"https://jobs.workable.com/view/nNAFrJUQSrP1dcSBxRDpM5/hybrid-ai-product-engineer---agentic-ai-platforms-(financial-services)-in-new-york-at-capgemini","x-work-arrangement":"hybrid","x-experience-level":null,"x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Python","GenAI","LLM","LangChain","LangGraph","DSPy","vector databases","APIs","workflow engines","event streams","data platforms","Agentic SDLC","agent design","agent architecture","agent interaction","agent contracts","interfaces","schemas","prompt optimization","performance tuning","regression control","CI/CD pipelines","agent code","prompts","policies","GenAI agents","client systems","traceability","decision logging","tool usage","failure analysis","hallucination detection","consistency checks","fitness scoring","human-in-the-loop review","reinforcement","cost","latency","throughput","behavioral drift","auditability","model explainability","data privacy","security controls","resilience","fail-safe mechanisms","guardrails","risk management","compliance","internal audit","regulator engagement"],"x-skills-preferred":[],"datePosted":"2026-04-24T14:20:10.866Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"New 
York"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, GenAI, LLM, LangChain, LangGraph, DSPy, vector databases, APIs, workflow engines, event streams, data platforms, Agentic SDLC, agent design, agent architecture, agent interaction, agent contracts, interfaces, schemas, prompt optimization, performance tuning, regression control, CI/CD pipelines, agent code, prompts, policies, GenAI agents, client systems, traceability, decision logging, tool usage, failure analysis, hallucination detection, consistency checks, fitness scoring, human-in-the-loop review, reinforcement, cost, latency, throughput, behavioral drift, auditability, model explainability, data privacy, security controls, resilience, fail-safe mechanisms, guardrails, risk management, compliance, internal audit, regulator engagement"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_376da89d-421"},"title":"HPC Manager","description":"<p>We are currently looking for an experienced HPC Manager to be responsible for the management, performance, and continuous evolution of the High Performance Computing (HPC) environment supporting CFD workloads and all related services at our UK site in Milton Keynes.</p>\n<p>The role ensures maximum availability, performance, and scalability of the CFD compute cluster and its ecosystem, enabling engineering teams to run complex simulations efficiently in a highly competitive, performance-driven environment.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>HPC &amp; CFD Infrastructure Management: Own and manage the CFD HPC cluster, including compute, storage, and high-performance networking; ensure optimal performance and availability of CFD workloads; manage job scheduling, resource allocation, and workload prioritization; oversee performance tuning, benchmarking, and system optimization; maintain and evolve parallel file systems and data pipelines 
supporting CFD; drive capacity planning and future HPC architecture evolution; willingness to travel occasionally to our UK branch in Milton Keynes (DC site); availability to respond to critical issues affecting the computing cluster, including during weekends when necessary.</li>\n</ul>\n<ul>\n<li>Collaboration with Engineering: Work closely with CFD and engineering teams to optimize simulation workflows; support users in maximizing efficiency of HPC resources; act as primary point of contact for HPC-related topics in the UK site.</li>\n</ul>\n<ul>\n<li>Operations &amp; Reliability: Ensure 24/7 reliability of HPC services supporting CFD activities; implement monitoring, alerting, and automation; lead troubleshooting of complex system and performance issues; manage software stack, compilers, libraries, and tools used in CFD environments.</li>\n</ul>\n<ul>\n<li>Leadership &amp; Continuous Improvement: Lead and develop a team of HPC engineers/administrators; define best practices, documentation, and operational procedures; continuously evaluate new technologies (GPU, cloud, hybrid HPC); drive efficiency, scalability, and innovation across HPC services.</li>\n</ul>\n<p>What We Offer:</p>\n<ul>\n<li>Working in a young, collaborative and international environment.</li>\n<li>Tailored training.</li>\n<li>Company Events / Briefings.</li>\n<li>On site Gym.</li>\n<li>Bonus scheme.</li>\n<li>Annual salary review process.</li>\n<li>Meal Tickets.</li>\n<li>Free additional health insurance.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_376da89d-421","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Visa Cash App Racing Bulls Formula 1 
Team","sameAs":"https://jobs.redbull.com","logo":"https://logos.yubhub.co/jobs.redbull.com.png"},"x-apply-url":"https://jobs.redbull.com/gb-en/milton-keynes-vcarb-f1-team-hpc-manager-prv-ref30239o","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Linux","cluster management","HPC schedulers","InfiniBand","low-latency networking","parallel file systems","CFD workloads","simulation environments","performance tuning","optimization","leadership","stakeholder management","English"],"x-skills-preferred":["GPU computing","container technologies","automation","scripting"],"datePosted":"2026-04-24T13:17:06.413Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Milton Keynes"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Automotive","skills":"Linux, cluster management, HPC schedulers, InfiniBand, low-latency networking, parallel file systems, CFD workloads, simulation environments, performance tuning, optimization, leadership, stakeholder management, English, GPU computing, container technologies, automation, scripting"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_88313c8a-9fa"},"title":"Software Engineer Full Stack","description":"<p>Electronic Arts creates next-level entertainment experiences that inspire players and fans around the world. As a Software Engineer II - Full Stack for Gameplay Services, you will work on providing systems and tooling enabling game teams to leverage our matchmaking system, integrated in EA&#39;s biggest titles and enjoyed by millions of players worldwide.</p>\n<p>Our platform powers online features for EA&#39;s games, serving millions of users each day. We live, breathe, and dream about how we can make every player&#39;s multiplayer experience memorable. 
We develop services and SDKs in collaboration with EA&#39;s game studios for matchmaking, stats and leaderboards, achievements, game replays, VOIP, and game networking.</p>\n<p>Your focus will be on providing systems and tooling enabling game teams to leverage our matchmaking system. You will collaborate closely with your team and partner studios to maintain, enhance, and extend our core services.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Design brand new services covering all aspects from storage to application logic to management console</li>\n<li>Enhance and add features to existing systems</li>\n<li>Communicate with engineers from across the company to deliver the next generation of online features for both established and not-yet-released games</li>\n<li>Be a part of the full product cycle for our products, from design and testing to deployment and supporting our LIVE environments and our game team customers</li>\n<li>Maintain a suite of automated tests that validate the correctness of backend services</li>\n<li>Advocate for best practices within the engineering team</li>\n<li>Work with product managers to improve new features to support EA&#39;s business</li>\n</ul>\n<p>Qualifications:</p>\n<ul>\n<li>Bachelor&#39;s or Master&#39;s degree in Computer Science, Computer Engineering or related field</li>\n<li>2+ years professional programming experience</li>\n<li>Experience with various programming languages and frameworks (React, Typescript, NodeJS, Golang)</li>\n<li>Deep understanding of HTML, CSS and DOM</li>\n<li>Experience with cloud computing products such as AWS EC2, ElastiCache, and ELB</li>\n<li>Experience with technologies such as Docker, Kubernetes, and Terraform</li>\n<li>Experience with relational or NoSQL databases</li>\n<li>Experience with all phases of product development lifecycle, including requirement definition, development, test, and product release</li>\n<li>Adept at solving complex technical problems</li>\n<li>Strong sense of 
collaboration</li>\n<li>Excellent written and verbal communication skills</li>\n<li>Motivated self-starter and able to operate with autonomy</li>\n</ul>\n<p>Bonus Qualifications:</p>\n<ul>\n<li>Experience with Jenkins and Groovy</li>\n<li>Experience with Ansible</li>\n<li>Knowledge of Google gRPC and protobuf</li>\n<li>Experience with high traffic services and highly scalable, distributed systems</li>\n<li>Knowledge of scalable data storage and processing technologies such as Cassandra, Apache Spark, and AWS S3</li>\n<li>Experience with stress testing, performance tuning, and optimization</li>\n<li>Experience working within the games industry</li>\n</ul>\n<p>We thought you might also want to know</p>\n<p>The benefits and perks of working for EA</p>\n<p>We&#39;re proud to have an extensive portfolio of games and experiences, locations around the world, and opportunities across EA. We value adaptability, resilience, creativity, and curiosity. From leadership that brings out your potential, to creating space for learning and experimenting, we empower you to do great work and pursue opportunities for growth.</p>\n<p>We adopt a holistic approach to our benefits programs, emphasizing physical, emotional, financial, career, and community wellness to support a balanced life. Our packages are tailored to meet local needs and may include healthcare coverage, mental well-being support, retirement savings, paid time off, family leaves, complimentary games, and more. 
We nurture environments where our teams can always bring their best to what they do.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_88313c8a-9fa","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Electronic Arts","sameAs":"https://jobs.ea.com","logo":"https://logos.yubhub.co/jobs.ea.com.png"},"x-apply-url":"https://jobs.ea.com/en_US/careers/JobDetail/Software-Engineer-II-Full-Stack/211085","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["React","Typescript","NodeJS","Golang","HTML","CSS","DOM","AWS EC2","ElastiCache","ELB","Docker","Kubernetes","Terraform","relational database","NoSQL database","product development lifecycle"],"x-skills-preferred":["Jenkins","Groovy","Ansible","Google gRPC","protobuf","high traffic services","distributed systems","scalable data storage","Apache Spark","AWS S3","stress testing","performance tuning","games industries"],"datePosted":"2026-04-24T13:15:39.091Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Hyderabad"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"React, Typescript, NodeJS, Golang, HTML, CSS, DOM, AWS EC2, ElastiCache, ELB, Docker, Kubernetes, Terraform, relational database, NoSQL database, product development lifecycle, Jenkins, Groovy, Ansible, Google gRPC, protobuf, high traffic services, distributed systems, scalable data storage, Apache Spark, AWS S3, stress testing, performance tuning, games industries"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_c6b05875-c34"},"title":"Workday Absence and Time Lead","description":"<p>At Keywords Studios, we are looking for a Workday Absence and Time Lead to serve as the functional expert, 
leading the solutioning for all time-related needs within the Workday system. This role is crucial in ensuring the business can effectively manage time-off, absence, and employee time-tracking processes in line with Keywords Studios&#39; P&amp;C (People and Culture) and operational strategies.</p>\n<p>The Workday Absence and Time Lead will play a pivotal role in the design, configuration, and ongoing maintenance of the Workday Time Tracking, Time Off, and Absence Management modules. They will collaborate closely with P&amp;C, Finance, and Operations teams to build and support scalable technology solutions for a global employee population.</p>\n<p>The focus is on driving configuration, maintaining system stability, and ensuring queries are managed effectively through the Cloud Support Model&#39;s tiered system. The position will support in value realisation and migration from legacy solutions to a Workday Enabled centralised service.</p>\n<p>This position requires deep knowledge of the Workday platform, specifically its core HCM capabilities related to Time and Absence and its overlay into Time tracking, Time Sheeting and Projects.</p>\n<p>Main Responsibilities:</p>\n<ul>\n<li>Serve as the technical subject matter expert on Workday Absence, Time-Off and Time Tracking capabilities, leveraging analytical capabilities and process orientation.</li>\n</ul>\n<ul>\n<li>Provide expertise on Workday best practices and partner with Human Resources and other stakeholders to design cohesive solutions, mitigating cross-functional and technical impacts.</li>\n</ul>\n<ul>\n<li>Ensure implemented solutions meet the transactional, reporting, and analytical needs of the business for time and absences processes.</li>\n</ul>\n<ul>\n<li>Supporting the mapping of Time and Attendance information to external payroll partners with support of the integrations lead.</li>\n</ul>\n<ul>\n<li>Conduct regular system audits and performance tuning for the talent modules to ensure optimal performance 
and reliability.</li>\n</ul>\n<ul>\n<li>Stay current with Workday releases and updates, evaluating new features and functionality for potential adoption in the talent space.</li>\n</ul>\n<ul>\n<li>Manage the Workday Releases by evaluating updates and discussing the impacts and benefits with the business.</li>\n</ul>\n<ul>\n<li>Support P&amp;C and the Business in the education of end-users and ensure effective change management for new functionality and process changes.</li>\n</ul>\n<ul>\n<li>Break down business needs into actionable steps toward solutions and refine conceptual approaches into concrete, specific investments.</li>\n</ul>\n<ul>\n<li>Partner with P&amp;C, IT and the Business to define the multi-year Workday roadmap, focusing on achieving tangible near-term results for Absence, Time-Off and Time Tracking.</li>\n</ul>\n<ul>\n<li>Support P&amp;C and the business with Absence, Time-Off and Time Tracking related projects, ensuring technical feasibility and timely delivery.</li>\n</ul>\n<ul>\n<li>Within the Support Model, this role provides Tier 3 specialist functional knowledge and configuration support for their domain.</li>\n</ul>\n<p>Requirements:</p>\n<ul>\n<li>Proven experience in a technology product management or functional lead role, focusing on Human Capital Management (HCM) systems.</li>\n</ul>\n<ul>\n<li>Minimum of 5 years of Workday experience with specific expertise in configuring and managing the Absence, Time-Off and Time Tracking module.</li>\n</ul>\n<ul>\n<li>Workday Qualification for Absence plans and Time Tracking.</li>\n</ul>\n<ul>\n<li>Knowledge of Workday architecture, configuration, business processes, and security models as they apply, including experience managing and supporting the Workday bi-annual release cycle.</li>\n</ul>\n<ul>\n<li>Strong communication and presentation skills, with the ability to articulate complex technical concepts to diverse audiences.</li>\n</ul>\n<ul>\n<li>Experience translating end-user stated 
needs into scalable, system-minded solutions.</li>\n</ul>\n<ul>\n<li>Solid organizational skills, attention to detail, and a strong inclination for planning strategy and tactics.</li>\n</ul>\n<p>Benefits:</p>\n<p>This position is remote and open globally, compensation will vary depending on the region you are applying from.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_c6b05875-c34","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Keywords Studios","sameAs":"https://www.keywordsstudios.com","logo":"https://logos.yubhub.co/keywordsstudios.com.png"},"x-apply-url":"https://apply.workable.com/j/67B717C994","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Workday","Absence and Time Tracking","Human Capital Management (HCM)","Cloud Support Model","Talent modules","System audits","Performance tuning","Workday releases","Updates","New features","Functionality","Configuration","Security models","Business processes","Architecture"],"x-skills-preferred":[],"datePosted":"2026-04-24T13:09:32.168Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Philippines"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Workday, Absence and Time Tracking, Human Capital Management (HCM), Cloud Support Model, Talent modules, System audits, Performance tuning, Workday releases, Updates, New features, Functionality, Configuration, Security models, Business processes, Architecture"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_7d57ab2d-f3b"},"title":"Cloud Solution Architect","description":"<p>At Ford Motor Company, we believe freedom of movement drives human progress. 
We also believe in providing you with the freedom to define and realize your dreams. With our incredible plans for the future of mobility, we have a wide variety of opportunities for you to accelerate your career potential as you help us define tomorrow&#39;s transportation.</p>\n<p>If you&#39;re looking for the chance to leverage advanced technology to redefine the transportation landscape, enhance the customer experience, and improve people&#39;s lives: this is the opportunity for you. Join us and challenge your IT expertise and analytical skills to help create vehicles that are as smart as you are.</p>\n<p>To meet the growing needs of the Customer analytics business, the team is looking for a self-motivated, technically proficient individual to craft and shepherd coherent solutions. This will require collaboration with a range of stakeholders to clarify requirements, establish pragmatic approaches, and support and articulate decisions over time. You will join a cloud architecture team that works closely with engineering teams and other architects across the organisation.</p>\n<p><strong>Responsibilities</strong></p>\n<p><strong>Technical Requirements</strong></p>\n<ul>\n<li>Extensive experience with Google Cloud Platform (GCP), specifically BigQuery, Vertex AI, Dataflow, Dataproc, Cloud Run, CloudSQL, Spanner and Apigee.</li>\n</ul>\n<ul>\n<li>Security &amp; Networking: Strong understanding of cloud security protocols, IAM, encryption, and complex network topologies.</li>\n</ul>\n<ul>\n<li>Data Management: Proficiency in Enterprise Data Platforms, Data mesh architecture and data-driven architectural patterns.</li>\n</ul>\n<ul>\n<li>DevOps Tooling: Hands-on experience with GitHub, SonarQube, Checkmarx, and FOSSA.</li>\n</ul>\n<ul>\n<li>Software Engineering: Strong background in building Web Services and maintaining Clean Code standards.</li>\n</ul>\n<p><strong>Technical Leadership &amp; Strategy</strong></p>\n<ul>\n<li>System Design: Work with engineering teams 
to refine system designs, evangelising for horizontal scalability, resilience, and Clean Code compliance.</li>\n</ul>\n<ul>\n<li>Product Collaboration: Partner with Product Managers to decompose complex business needs into incremental, production-ready user stories within an Agile/Sprint methodology.</li>\n</ul>\n<ul>\n<li>Architectural Governance: Assess and document the rationale and tradeoffs for technical decisions; contribute to the broader Cloud Architecture team to improve global practices.</li>\n</ul>\n<ul>\n<li>DevOps Excellence: Utilise and improve CI/CD pipelines using GitHub and automated testing/security tools to maximise deployment efficiency and minimise risk.</li>\n</ul>\n<p><strong>Cloud, Networking &amp; Security</strong></p>\n<ul>\n<li>Secure Infrastructure: Serve as the primary architect for cloud solutions, ensuring &#39;Secure-by-Design&#39; principles are applied across Google Cloud services (Dataflow, Dataproc, CloudRun, CloudSQL, Spanner).</li>\n</ul>\n<ul>\n<li>Advanced Networking: Design and optimise cloud networking configurations, including VPCs, Service Controls, Load Balancing, and Private Service Connect to ensure high availability and low latency.</li>\n</ul>\n<ul>\n<li>Cyber Security Oversight: Integrate security scanning and compliance into the architecture (utilising Checkmarx, SonarQube, and FOSSA). 
Proactively address vulnerabilities in distributed systems and AI models (e.g., OWASP Top 10 for LLMs).</li>\n</ul>\n<ul>\n<li>API &amp; Data Contracts: Bolster &#39;Data as a Product&#39; practices by enforcing strict API standards and data contracts to ensure seamless, secure interoperability between services.</li>\n</ul>\n<ul>\n<li>FinOps &amp; Cost Optimisation: Drive fiscal responsibility by right-sizing GCP resources and optimising Generative AI architectures (token management/model selection) to maximise ROI.</li>\n</ul>\n<ul>\n<li>SRE &amp; Performance Tuning: Apply Site Reliability Engineering principles to ensure high availability, minimise system latency, and lead root-cause analysis for complex, distributed system failures.</li>\n</ul>\n<ul>\n<li>DevSecOps &amp; Problem Solving: Integrate security automation into CI/CD pipelines to ensure &#39;Secure-by-Design&#39; deployments while solving complex architectural trade-offs between speed, scale, and risk.</li>\n</ul>\n<ul>\n<li>Continuous Learning: Stay at the forefront of AI research, specifically regarding autonomous agents and prompt engineering.</li>\n</ul>\n<p><strong>Nice to Have</strong></p>\n<ul>\n<li>Experience with AI development tools and frameworks (e.g., LangChain, LangGraph, or Agent Dev Kit) to accelerate the delivery of intelligent applications.</li>\n</ul>\n<ul>\n<li>Agentic &amp; GenAI Design: Lead the architectural design of Agentic AI systems (multi-agent orchestration) and Generative AI solutions, including Retrieval-Augmented Generation (RAG) patterns and LLM integration.</li>\n</ul>\n<ul>\n<li>Kubernetes (GKE): Experience managing containerised workloads at scale.</li>\n</ul>\n<ul>\n<li>Kafka/Event-Driven Design: Experience with high-throughput messaging and event-driven architectures.</li>\n</ul>\n<ul>\n<li>MLOps: Familiarity with the end-to-end lifecycle of machine learning models in production.</li>\n</ul>\n<p><strong>Qualifications</strong></p>\n<p><strong>You&#39;ll 
have...</strong></p>\n<ul>\n<li>A bachelor&#39;s or foreign equivalent degree in computer science, information technology, or a technology-related field</li>\n</ul>\n<ul>\n<li>5+ years of software engineering experience using Java or Python developing services (APIs, REST, etc.)</li>\n</ul>\n<ul>\n<li>2+ years of experience with Google Cloud Platform or other cloud service provider (AWS, Azure, etc.) and associated cloud components.</li>\n</ul>\n<ul>\n<li>Experience designing/architecting and running distributed systems in a production environment</li>\n</ul>\n<ul>\n<li>Strong communication skills and cognitive agility: the ability to engage in deep technical discussions with customers and peers, become a trusted technical advisor, and maintain good documentation</li>\n</ul>\n<p><strong>Even better, you may have...</strong></p>\n<ul>\n<li>Master&#39;s degree in computer science, electrical engineering or a closely related field of study</li>\n</ul>\n<ul>\n<li>Familiarity with a breadth of programming languages, platforms, and systems</li>\n</ul>\n<ul>\n<li>Experience with asynchronous messaging and eventually consistent system design</li>\n</ul>\n<ul>\n<li>An agile, pragmatic, and empirical mindset</li>\n</ul>\n<ul>\n<li>Critical thinking, decision-making and leadership aptitudes</li>\n</ul>\n<ul>\n<li>Good organisational and problem-solving abilities</li>\n</ul>\n<ul>\n<li>MDM, Entity Resolution, Customer Analytics and Marketing Analytics experience is a huge plus.</li>\n</ul>\n<p>You may not check every box, or your experience may look a little different from what we&#39;ve outlined, but if you think you can bring value to Ford Motor Company, we encourage you to apply!</p>\n<p><strong>As an established global company, we offer the benefit of choice. You can choose what your Ford future will look like: will your story span the globe, or keep you close to home? Will your career be a deep dive into what you love, or a series of new teams and new skills? 
Will you be a leader, a changemaker, a technical expert, a culture builder…or all of the above? No matter what you choose, we offer a work life that works for you, including:</strong></p>\n<ul>\n<li>Immediate medical, dental, and prescription drug coverage</li>\n</ul>\n<ul>\n<li>Flexible family care, parental leave, new parent ramp-up programs, subsidised back-up child care and more</li>\n</ul>\n<ul>\n<li>Vehicle discount programme for employees and family members, and management leases</li>\n</ul>\n<ul>\n<li>Tuition assistance</li>\n</ul>\n<ul>\n<li>Established and active employee resource groups</li>\n</ul>\n<ul>\n<li>Paid time off for individual and team community service</li>\n</ul>\n<ul>\n<li>A generous schedule of paid holidays, including the week between Christmas and New Year&#39;s Day</li>\n</ul>\n<ul>\n<li>Paid time off and the option to purchase additional vacation time.</li>\n</ul>\n<p><strong>For a detailed look at our benefits, click here:</strong> Benefit Summary</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_7d57ab2d-f3b","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Ford Motor Company","sameAs":"https://corporate.ford.com/","logo":"https://logos.yubhub.co/corporate.ford.com.png"},"x-apply-url":"https://efds.fa.em5.oraclecloud.com/hcmUI/CandidateExperience/en/sites/CX_1/job/62370","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$115,000-$192,900","x-skills-required":["Google Cloud Platform","BigQuery","Vertex AI","Dataflow","Dataproc","Cloud Run","CloudSQL","Spanner","Apigee","Security & Networking","IAM","Encryption","Complex Network Topologies","Data Management","Enterprise Data Platforms","Data Mesh Architecture","Data-Driven Architectural Patterns","DevOps Tooling","GitHub","SonarQube","Checkmarx","FOSSA","Software Engineering","Web 
Services","Clean Code Standards","System Design","Horizontal Scalability","Resilience","Clean Code Compliance","Product Collaboration","Agile/Sprint Methodology","Architectural Governance","Cloud Architecture","DevOps Excellence","CI/CD Pipelines","Automated Testing/Security Tools","Secure Infrastructure","Secure-by-Design Principles","Cloud Services","Advanced Networking","VPCs","Service Controls","Load Balancing","Private Service Connect","Cyber Security Oversight","Security Scanning","Compliance","Distributed Systems","AI Models","API & Data Contracts","Data as a Product","API Standards","Data Contracts","Seamless Interoperability","FinOps & Cost Optimisation","Fiscal Responsibility","GCP Resources","Generative AI Architectures","Token Management","Model Selection","ROI Maximisation","SRE & Performance Tuning","High Availability","System Latency","Root-Cause Analysis","DevSecOps & Problem Solving","Security Automation","Continuous Learning","AI Research","Autonomous Agents","Prompt Engineering","Kubernetes","Containerised Workloads","Kafka/Event-Driven Design","High-Throughput Messaging","Event-Driven Architectures","MLOps","Machine Learning Models","End-to-End Lifecycle"],"x-skills-preferred":["AI Development Tools","Frameworks","LangChain","LangGraph","Agent Dev Kit","Agentic & GenAI Design","Multi-Agent Orchestration","Generative AI Solutions","Retrieval-Augmented Generation","LLM Integration","Kubernetes (GKE)"],"datePosted":"2026-04-24T12:22:00.195Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Dearborn"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Automotive","skills":"Google Cloud Platform, BigQuery, Vertex AI, Dataflow, Dataproc, Cloud Run, CloudSQL, Spanner, Apigee, Security & Networking, IAM, Encryption, Complex Network Topologies, Data Management, Enterprise Data Platforms, Data Mesh Architecture, Data-Driven Architectural Patterns, DevOps 
Tooling, GitHub, SonarQube, Checkmarx, FOSSA, Software Engineering, Web Services, Clean Code Standards, System Design, Horizontal Scalability, Resilience, Clean Code Compliance, Product Collaboration, Agile/Sprint Methodology, Architectural Governance, Cloud Architecture, DevOps Excellence, CI/CD Pipelines, Automated Testing/Security Tools, Secure Infrastructure, Secure-by-Design Principles, Cloud Services, Advanced Networking, VPCs, Service Controls, Load Balancing, Private Service Connect, Cyber Security Oversight, Security Scanning, Compliance, Distributed Systems, AI Models, API & Data Contracts, Data as a Product, API Standards, Data Contracts, Seamless Interoperability, FinOps & Cost Optimisation, Fiscal Responsibility, GCP Resources, Generative AI Architectures, Token Management, Model Selection, ROI Maximisation, SRE & Performance Tuning, High Availability, System Latency, Root-Cause Analysis, DevSecOps & Problem Solving, Security Automation, Continuous Learning, AI Research, Autonomous Agents, Prompt Engineering, Kubernetes, Containerised Workloads, Kafka/Event-Driven Design, High-Throughput Messaging, Event-Driven Architectures, MLOps, Machine Learning Models, End-to-End Lifecycle, AI Development Tools, Frameworks, LangChain, LangGraph, Agent Dev Kit, Agentic & GenAI Design, Multi-Agent Orchestration, Generative AI Solutions, Retrieval-Augmented Generation, LLM Integration, Kubernetes (GKE)","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":115000,"maxValue":192900,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_0090f2b1-e91"},"title":"Intermediate Backend Engineer, Database Automation (Ruby)","description":"<p>As an Intermediate Backend Engineer in the Database Automation team, you&#39;ll develop and enhance the frameworks, patterns, and tooling that keep GitLab&#39;s application datastores scalable, healthy, 
and safe across GitLab.com and thousands of self-managed instances.</p>\n<p>You&#39;ll work closely with experienced engineers and cross-functional teams to build reliable backend features, learn best practices in data architecture and lifecycle management, and contribute to identifying and delivering performance improvements in our infrastructure.</p>\n<p>Some examples of our projects:</p>\n<ul>\n<li>SQL Traffic Replay Tooling</li>\n<li>Background Operations Framework</li>\n</ul>\n<p>In this role, you&#39;ll develop and iterate backend features and data frameworks that make it safe and efficient to work with data at scale across GitLab.com and self-managed deployments.</p>\n<p>You&#39;ll work with product management, UX, frontend, infrastructure, software delivery, and analytics teams to design and ship high-performing, reliable solutions.</p>\n<p>You&#39;ll review and improve database-related changes from other engineers and external contributors to ensure data integrity, safety, and performance across diverse environments.</p>\n<p>You&#39;ll design, build, and maintain tooling and guardrails such as SQL traffic replay and background operations frameworks to proactively detect and remediate scalability, performance, and data health issues.</p>\n<p>You&#39;ll research, design, and implement improvements to database performance, scalability, and data health, including areas like soft delete strategies and database migration testing.</p>\n<p>You&#39;ll document database best practices, anti-patterns, and data architecture guidance so developers can make informed, consistent choices.</p>\n<p>You&#39;ll develop solutions for database upgrade paths and migration strategies that maintain backwards compatibility while reducing downtime and operational friction for self-managed customers with diverse deployment configurations.</p>\n<p>In this role, you&#39;ll succeed by shipping incremental improvements and, over time, building the capability to fully own larger pieces of 
work with shorter revision cycles.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_0090f2b1-e91","directApply":true,"hiringOrganization":{"@type":"Organization","name":"GitLab","sameAs":"https://about.gitlab.com/","logo":"https://logos.yubhub.co/about.gitlab.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/gitlab/jobs/8481029002","x-work-arrangement":"remote","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["PostgreSQL","Ruby on Rails","Database performance tuning","Troubleshooting","Software design","Algorithms","Performance trade-offs"],"x-skills-preferred":[],"datePosted":"2026-04-24T12:12:43.966Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Remote, India"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"PostgreSQL, Ruby on Rails, Database performance tuning, Troubleshooting, Software design, Algorithms, Performance trade-offs"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_4ac418cd-5dc"},"title":"After Sales Strategy and Process Improvement Specialist","description":"<p>This role is responsible for developing and implementing strategic concepts and initiatives to ensure attainment of all After Sales objectives. The position involves developing and administering continuous improvement processes to increase efficiency and optimize effectiveness across business processes from/to Porsche AG &amp; Porsche Cars North America. 
Key responsibilities include assisting in the development and execution of multiple projects, participating in the gathering of requirements from business partners for the development of data analytic tools and reports, and providing live phone support to PCNA dealers when issues or questions arise.</p>\n<p>The ideal candidate will have a strong understanding of data warehousing fundamentals, SQL Server, and relational/dimensional database design. They will also possess excellent oral and written communication, presentation, and problem-solving skills. Experience working with large-scale data to build reporting solutions and knowledge of optimizing and performance tuning SQL and reports are highly desirable.</p>\n<p>In addition to the above, the successful candidate will be a junior or senior in undergraduate studies, with a minimum of a bachelor&#39;s degree in a relevant field. They will be organized, positive, proactive, results-oriented, and able to work effectively in an open office/noisy environment.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_4ac418cd-5dc","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Porsche Cars North America","sameAs":"https://jobs.porsche.com","logo":"https://logos.yubhub.co/jobs.porsche.com.png"},"x-apply-url":"https://jobs.porsche.com/index.php?ac=jobad&id=20149","x-work-arrangement":"onsite","x-experience-level":"entry","x-job-type":"full-time","x-salary-range":"$18-$20 per hour","x-skills-required":["data warehousing fundamentals","SQL Server","relational/dimensional database design","large-scale data","reporting solutions","optimizing and performance tuning SQL and reports"],"x-skills-preferred":[],"datePosted":"2026-04-22T17:26:00.341Z","employmentType":"FULL_TIME","occupationalCategory":"Operations","industry":"Automotive","skills":"data warehousing fundamentals, 
SQL Server, relational/dimensional database design, large-scale data, reporting solutions, optimizing and performance tuning SQL and reports"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_88132c81-446"},"title":"Staff Software Engineer, Data Platform","description":"<p>We&#39;re looking for a Staff Software Engineer to lead the design and development of core data storage, streaming, caching, and indexing platforms and underlying systems. As a key member of the Platform Engineering team, you&#39;ll drive the architecture, design, implementation, and reliability of our foundational data platforms and systems, working closely with stakeholders and internal customers to understand and refine requirements.</p>\n<p>In this role, you&#39;ll collaborate with cross-functional teams to define, design, and deliver new features, proactively identifying opportunities for, and driving improvements to, current programming practices, including process enhancements and tool upgrades. You&#39;ll present technical information to teams and stakeholders, providing guidance and insight on development processes and technologies.</p>\n<p>Ideally, you&#39;d have 8+ years of full-time engineering experience, post-graduation, with specialties in back-end systems, specifically related to building large-scale data storage, streaming, and warehousing systems. You&#39;ll need extensive experience in various database technologies, streaming/processing solutions, indexing/caching, and various data query engines.</p>\n<p>As a Staff Software Engineer, you&#39;ll provide technical leadership, including upholding and upleveling engineering standards across the organization and mentoring junior engineers. 
You&#39;ll possess excellent communication and collaboration skills, and the ability to translate complex technical concepts to non-technical stakeholders.</p>\n<p>Experience working fluently with standard containerization &amp; deployment technologies like Kubernetes and various public cloud offerings is essential. You&#39;ll also need extensive experience in software development and a deep understanding of distributed systems, cloud platforms, and data systems.</p>\n<p>You&#39;ll drive cross-functional collaboration and communication at an organizational or broader level, and be excited to work with AI technologies.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_88132c81-446","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Scale","sameAs":"https://scale.com","logo":"https://logos.yubhub.co/scale.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/scaleai/jobs/4649903005","x-work-arrangement":"onsite","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"$252,000-$315,000 USD","x-skills-required":["database technologies","streaming/processing solutions","indexing/caching","data query engines","containerization & deployment technologies","public cloud offerings","software development","distributed systems","cloud platforms","data systems"],"x-skills-preferred":["performance tuning","cost optimizations","data lifecycle strategy","data privacy","hyper-growth startups","AI technologies"],"datePosted":"2026-04-18T16:00:04.417Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA; New York, NY"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"database technologies, streaming/processing solutions, indexing/caching, data query engines, containerization & deployment technologies, public cloud 
offerings, software development, distributed systems, cloud platforms, data systems, performance tuning, cost optimizations, data lifecycle strategy, data privacy, hyper-growth startups, AI technologies","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":252000,"maxValue":315000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_4920db00-eb9"},"title":"Senior Backend Engineer (RoR), SSCS: Authorization","description":"<p>As a Senior Backend Engineer on the Authorization team at GitLab, you&#39;ll build and evolve the core systems that decide who can access what across the entire GitLab platform, directly impacting millions of users from startups to large enterprises.</p>\n<p>You&#39;ll architect and implement our next-generation authorization infrastructure, including policy-as-code approaches, fine-grained permissions, and performance optimizations at massive scale, enabling GitLab&#39;s move toward zero-trust architecture while keeping authorization fast, secure, and correct.</p>\n<p>You&#39;ll work closely with Security, Database, Platform, and authentication-focused teams to design and ship authorization capabilities that span GitLab&#39;s various deployment models and multi-tenant environments.</p>\n<p>Some examples of our projects:</p>\n<ul>\n<li>Implementing fine-grained permissions for Job Tokens, Personal Access Tokens, and the GitLab Duo agent platform</li>\n</ul>\n<ul>\n<li>Collaborating on Auth stack initiatives that evolve how authorization works across GitLab</li>\n</ul>\n<ul>\n<li>Implement fine-grained permission systems for Job Tokens, Personal Access Tokens, the GitLab Duo Agent Platform, and other authentication mechanisms across the GitLab platform.</li>\n</ul>\n<ul>\n<li>Collaborate with Security, Authentication, Database, and Platform teams on authorization stack initiatives, aligning designs and 
implementation plans.</li>\n</ul>\n<ul>\n<li>Solve complex performance challenges in authorization, including query optimization, caching strategies, and database decomposition, with a focus on PostgreSQL.</li>\n</ul>\n<ul>\n<li>Design and evolve authorization systems that work across multiple deployment models and multi-tenant architectures while maintaining security and reliability.</li>\n</ul>\n<ul>\n<li>Drive improvements to authorization security, maintainability, and developer experience through code review, documentation, and technical leadership.</li>\n</ul>\n<ul>\n<li>Contribute to architectural decisions for authorization features with a long-term strategic view, balancing immediate needs with future scalability.</li>\n</ul>\n<ul>\n<li>Mentor and support other engineers in authorization patterns, policy-based access control, and secure coding practices in a fully remote, asynchronous environment.</li>\n</ul>\n<ul>\n<li>Professional experience building and maintaining production applications with Ruby on Rails or similar backend frameworks.</li>\n</ul>\n<ul>\n<li>Strong understanding of authorization models, including role-based access control, attribute-based access control, and fine-grained permission patterns.</li>\n</ul>\n<ul>\n<li>Experience designing and optimizing high-scale backend systems, including PostgreSQL performance tuning, query optimization, and effective caching strategies.</li>\n</ul>\n<ul>\n<li>Familiarity with or interest in policy-based authorization systems and modern policy languages such as Cedar or Rego.</li>\n</ul>\n<ul>\n<li>Understanding of core security principles, including threat modeling, least-privilege access, and zero-trust architectures.</li>\n</ul>\n<ul>\n<li>Experience working with distributed systems and service-to-service communication in a cloud or multi-tenant environment.</li>\n</ul>\n<ul>\n<li>Demonstrated ability to own complex technical initiatives from design through production deployment in an asynchronous, 
remote setting.</li>\n</ul>\n<ul>\n<li>Strong collaboration and communication skills, with openness to learning and applying transferable skills from adjacent domains or technologies.</li>\n</ul>\n<p>We on the Authorization team at GitLab design, build, and maintain the permission systems that control access across the GitLab platform, ensuring they are secure, scalable, and flexible for customers of all sizes.</p>\n<p>We lead the ongoing evolution of our authorization architecture, with a focus on modern policy-as-code approaches, fine-grained access control, and support for initiatives like the evolving Auth stack.</p>\n<p>We collaborate asynchronously across time zones and partner closely with Authentication, Product Security, Database, and Security teams to align on identity, data modeling, and threat modeling needs while iterating safely on core platform capabilities.</p>\n<p>How GitLab Supports Full-Time Employees:</p>\n<ul>\n<li>Benefits to support your health, finances, and well-being</li>\n</ul>\n<ul>\n<li>Flexible Paid Time Off</li>\n</ul>\n<ul>\n<li>Team Member Resource Groups</li>\n</ul>\n<ul>\n<li>Equity Compensation &amp; Employee Stock Purchase Plan</li>\n</ul>\n<ul>\n<li>Growth and Development Fund</li>\n</ul>\n<ul>\n<li>Parental leave</li>\n</ul>\n<ul>\n<li>Home office support</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_4920db00-eb9","directApply":true,"hiringOrganization":{"@type":"Organization","name":"GitLab","sameAs":"https://about.gitlab.com/","logo":"https://logos.yubhub.co/about.gitlab.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/gitlab/jobs/8457315002","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Ruby on Rails","PostgreSQL","Authorization models","Policy-based access control","Fine-grained permission 
patterns","Distributed systems","Service-to-service communication","Cloud or multi-tenant environment"],"x-skills-preferred":["Cedar or Rego policy languages","PostgreSQL performance tuning","Query optimization","Effective caching strategies"],"datePosted":"2026-04-18T15:52:52.909Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Remote, Canada; Remote, Ireland; Remote, Netherlands; Remote, United Kingdom; Remote, US"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Ruby on Rails, PostgreSQL, Authorization models, Policy-based access control, Fine-grained permission patterns, Distributed systems, Service-to-service communication, Cloud or multi-tenant environment, Cedar or Rego policy languages, PostgreSQL performance tuning, Query optimization, Effective caching strategies"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_be766cd7-8e2"},"title":"Staff Software Engineer, Backend (Iasi)","description":"<p>We are excited to expand our operations to Romania and build a tech hub in the region. As a Staff full-stack engineer, with a backend focus, you will be at the forefront of shaping the future of customer engagement! You&#39;ll be instrumental in delivering timely, actionable insights that drive business growth from day one.</p>\n<p>We&#39;re building a state-of-the-art Customer Data Platform, visualizing relevant insights for businesses post-onboarding and guiding customer engagement across all touch-points. 
Be part of the team that&#39;s redefining the way businesses connect with their customers!</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Design, implement, and maintain backend services and APIs to support applications.</li>\n<li>Build and optimize data storage solutions using Postgres, ClickHouse and Elasticsearch to ensure high performance and scalability.</li>\n<li>Collaborate with cross-functional teams, including frontend engineers, data scientists, and machine learning engineers, to deliver end-to-end solutions.</li>\n<li>Monitor and troubleshoot performance issues in distributed systems and databases.</li>\n<li>Write clean, maintainable, and efficient code following best practices for backend development.</li>\n<li>Participate in code reviews, testing, and continuous integration efforts.</li>\n<li>Ensure security, scalability, and reliability of backend services.</li>\n<li>Analyze and improve system architecture, focusing on performance bottlenecks, scaling, and security.</li>\n</ul>\n<p>Qualifications We Value:</p>\n<ul>\n<li>Proven experience as a Backend Engineer with a focus on database design and system architecture.</li>\n<li>Strong expertise in ClickHouse or similar columnar databases for managing large-scale, real-time analytical queries.</li>\n<li>Hands-on experience with Elasticsearch for indexing and searching large datasets.</li>\n<li>Proficient in backend programming languages such as Python, Go.</li>\n<li>Experience with RESTful API design and development.</li>\n<li>Solid understanding of distributed systems, microservices architecture, and cloud infrastructure.</li>\n<li>Experience with performance tuning, data modeling, and query optimization.</li>\n<li>Strong problem-solving skills and attention to detail.</li>\n<li>Excellent communication and teamwork abilities.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_be766cd7-8e2","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Cresta","sameAs":"https://www.cresta.ai/","logo":"https://logos.yubhub.co/cresta.ai.png"},"x-apply-url":"https://job-boards.greenhouse.io/cresta/jobs/5030292008","x-work-arrangement":"hybrid","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Backend Engineer","Database design","System architecture","ClickHouse","Elasticsearch","Python","Go","RESTful API design","Distributed systems","Microservices architecture","Cloud infrastructure","Performance tuning","Data modeling","Query optimization"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:52:36.898Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Iasi, Romania (Hybrid)"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Backend Engineer, Database design, System architecture, ClickHouse, Elasticsearch, Python, Go, RESTful API design, Distributed systems, Microservices architecture, Cloud infrastructure, Performance tuning, Data modeling, Query optimization"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_e1c6866e-f9e"},"title":"Staff Software Engineer, Backend (Cluj)","description":"<p>We are excited to expand our operations to Romania and build a tech hub in the region. As a Staff full-stack engineer, with a backend focus, you will be at the forefront of shaping the future of customer engagement! You&#39;ll be instrumental in delivering timely, actionable insights that drive business growth from day one. 
We&#39;re building a state-of-the-art Customer Data Platform, visualizing relevant insights for businesses post-onboarding and guiding customer engagement across all touch-points.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Design, implement, and maintain backend services and APIs to support applications.</li>\n<li>Build and optimize data storage solutions using Postgres, ClickHouse and Elasticsearch to ensure high performance and scalability.</li>\n<li>Collaborate with cross-functional teams, including frontend engineers, data scientists, and machine learning engineers, to deliver end-to-end solutions.</li>\n<li>Monitor and troubleshoot performance issues in distributed systems and databases.</li>\n<li>Write clean, maintainable, and efficient code following best practices for backend development.</li>\n<li>Participate in code reviews, testing, and continuous integration efforts.</li>\n<li>Ensure security, scalability, and reliability of backend services.</li>\n<li>Analyze and improve system architecture, focusing on performance bottlenecks, scaling, and security.</li>\n</ul>\n<p>Qualifications We Value:</p>\n<ul>\n<li>Proven experience as a Backend Engineer with a focus on database design and system architecture.</li>\n<li>Strong expertise in ClickHouse or similar columnar databases for managing large-scale, real-time analytical queries.</li>\n<li>Hands-on experience with Elasticsearch for indexing and searching large datasets.</li>\n<li>Proficient in backend programming languages such as Python, Go.</li>\n<li>Experience with RESTful API design and development.</li>\n<li>Solid understanding of distributed systems, microservices architecture, and cloud infrastructure.</li>\n<li>Experience with performance tuning, data modeling, and query optimization.</li>\n<li>Strong problem-solving skills and attention to detail.</li>\n<li>Excellent communication and teamwork abilities.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation 
by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_e1c6866e-f9e","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Cresta","sameAs":"https://www.cresta.ai/","logo":"https://logos.yubhub.co/cresta.ai.png"},"x-apply-url":"https://job-boards.greenhouse.io/cresta/jobs/5102480008","x-work-arrangement":"hybrid","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Postgres","ClickHouse","Elasticsearch","Python","Go","RESTful API design and development","Distributed systems","Microservices architecture","Cloud infrastructure","Performance tuning","Data modeling","Query optimization"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:52:06.437Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Cluj, Romania (Hybrid)"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Postgres, ClickHouse, Elasticsearch, Python, Go, RESTful API design and development, Distributed systems, Microservices architecture, Cloud infrastructure, Performance tuning, Data modeling, Query optimization"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_fa9a54d7-549"},"title":"Senior Site Reliability Engineer, Data Infrastructure","description":"<p>As a Senior Site Reliability Engineer, you will own the reliability and performance of our Kubernetes-based data platform. You will design and operate highly available, multi-region systems, ensuring our services meet strict uptime and latency targets.</p>\n<p>Day-to-day, you’ll work on scaling infrastructure, improving deployment pipelines, and hardening our security posture. 
You’ll play a key role in evolving our DevSecOps practices while partnering closely with engineering teams to ensure services are built for reliability from day one.</p>\n<p>We operate with production-grade discipline, supporting mission-critical services with stringent uptime requirements and a focus on automation, observability, and resilience.</p>\n<p>The Platform &amp; Infrastructure Engineering team in the Data Infrastructure organization is responsible for the reliability, scalability, and security of the company’s data platform. The team builds and operates the foundational systems that power data ingestion, transformation, analytics, and internal AI workloads at scale.</p>\n<p>About the role:</p>\n<ul>\n<li>5+ years of experience in Site Reliability Engineering, Platform Engineering, or Infrastructure Engineering roles</li>\n<li>Deep expertise in Kubernetes and containerized software services, including cluster design, operations, and troubleshooting in production environments</li>\n<li>Strong experience building and operating CI/CD systems, including tools such as Argo CD and GitHub Actions</li>\n<li>Proven experience owning production systems with high availability requirements (≥99.99% uptime), including incident response, SLI/SLO/SLA definition, error budgets, and postmortems</li>\n<li>Hands-on experience designing and operating geo-replicated, multi-region, active-active systems, including traffic routing, failover strategies, and data consistency tradeoffs</li>\n<li>Strong experience building and owning observability components, including metrics, logging, and tracing (e.g., Prometheus, Grafana, OpenTelemetry).</li>\n<li>Experience with infrastructure as code (e.g., Helm, Terraform, Pulumi) and automated environment provisioning</li>\n<li>Strong understanding of system performance tuning, capacity planning, and resource optimization in distributed systems</li>\n<li>Experience implementing and operating security best practices in cloud-native 
environments (e.g., secrets management, network policies, vulnerability scanning)</li>\n</ul>\n<p>Preferred:</p>\n<ul>\n<li>Experience operating data platforms or data-intensive workloads (e.g., Spark, Airflow, Kafka, Flink)</li>\n<li>Familiarity with service mesh technologies (e.g., Istio, Linkerd)</li>\n<li>Experience working in regulated environments with compliance frameworks such as GDPR, SOC 2, HIPAA, or SOX</li>\n<li>Background in building internal developer platforms or self-service infrastructure</li>\n</ul>\n<p>Wondering if you’re a good fit?</p>\n<p>We believe in investing in our people, and value candidates who can bring their own diversified experiences to our teams – even if you aren’t a 100% skill or experience match.</p>\n<p>Here are a few qualities we’ve found compatible with our team. If some of this describes you, we’d love to talk.</p>\n<ul>\n<li>You love building highly reliable systems that operate at scale</li>\n<li>You’re curious about how to continuously improve system resilience, security, and operations</li>\n<li>You’re an expert in diagnosing and solving complex distributed systems problems</li>\n</ul>\n<p>Why CoreWeave?</p>\n<p>At CoreWeave, we work hard, have fun, and move fast! We’re in an exciting stage of hyper-growth that you will not want to miss out on. We’re not afraid of a little chaos, and we’re constantly learning.</p>\n<p>Our team cares deeply about how we build our product and how we work together, which is represented through our core values:</p>\n<ul>\n<li>Be Curious at Your Core</li>\n<li>Act Like an Owner</li>\n<li>Empower Employees</li>\n<li>Deliver Best-in-Class Client Experiences</li>\n<li>Achieve More Together</li>\n</ul>\n<p>We support and encourage an entrepreneurial outlook and independent thinking. 
We foster an environment that encourages collaboration and provides the opportunity to develop innovative solutions to complex problems.</p>\n<p>As we get set for takeoff, the growth opportunities within the organization are constantly expanding. You will be surrounded by some of the best talent in the industry, who will want to learn from you, too.</p>\n<p>Come join us!</p>\n<p>The base salary range for this role is $165,000 to $242,000. The starting salary will be determined based on job-related knowledge, skills, experience, and market location. We strive for both market alignment and internal equity when determining compensation.</p>\n<p>In addition to base salary, our total rewards package includes a discretionary bonus, equity awards, and a comprehensive benefits program (all based on eligibility).</p>\n<p>What We Offer</p>\n<p>The range we’ve posted represents the typical compensation range for this role. To determine actual compensation, we review the market rate for each candidate, which can include a variety of factors. 
These include qualifications, experience, interview performance, and location.</p>\n<p>In addition to a competitive salary, we offer a variety of benefits to support your needs, including:</p>\n<ul>\n<li>Medical, dental, and vision insurance</li>\n<li>100% paid for by CoreWeave</li>\n<li>Company-paid Life Insurance</li>\n<li>Voluntary supplemental life insurance</li>\n<li>Short and long-term disability insurance</li>\n<li>Flexible Spending Account</li>\n<li>Health Savings Account</li>\n<li>Tuition Reimbursement</li>\n<li>Ability to Participate in Employee Stock Purchase Program (ESPP)</li>\n<li>Mental Wellness Benefits through Spring Health</li>\n<li>Family-Forming support provided by Carrot</li>\n<li>Paid Parental Leave</li>\n<li>Flexible, full-service childcare support with Kinside</li>\n<li>401(k) with a generous employer match</li>\n<li>Flexible PTO</li>\n<li>Catered lunch each day in our office and data center locations</li>\n<li>A casual work environment</li>\n<li>A work culture focused on innovative disruption</li>\n</ul>\n<p>Our Workplace</p>\n<p>While we prioritize a hybrid work environment, remote work may be considered for candidates located more than 30 miles from an office, based on role requirements for specialized skill sets.</p>\n<p>New hires will be invited to attend onboarding at one of our hubs within their first month.</p>\n<p>Teams also gather quarterly to support collaboration.</p>\n<p>California Consumer Privacy Act - California applicants only</p>\n<p>CoreWeave is an equal opportunity employer, committed to fostering an inclusive and supportive workplace.</p>\n<p>All qualified applicants and candidates will receive consideration for employment without regard to race, color, religion, sex, disability, age, sexual orientation, gender identity, national origin, veteran status, or genetic information.</p>\n<p>As part of this commitment and consistent with the Americans with Disabilities Act (ADA), CoreWeave will ensure that qualified applicants 
and candidates with disabilities are provided reasonable accommodations for the hiring process, unless such accommodation would cause an undue hardship.</p>\n<p>If reasonable accommodation is needed, please contact: careers@coreweave.com.</p>\n<p>Export Control Compliance</p>\n<p>This position requires access to export controlled information.</p>\n<p>To conform to U.S. Government export regulations applicable to that information, applicant must either be (A) a U.S. person, defined as a (i) U.S. citizen or national, (ii) U.S. lawful permanent resident (green card holder), (iii) refugee under 8 U.S.C. § 1157, or (iv) asylee under 8 U.S.C. § 1158, (B) eligible to access the export controlled information without restrictions, or (C) otherwise exempt from the export regulations.</p>\n<p>If you are not a U.S. person, you will be required to provide documentation of your eligibility to access the export controlled information before being considered for this position.</p>\n<p>Please note that CoreWeave is subject to the requirements of the U.S. Department of Commerce&#39;s Export Administration Regulations (EAR) and the U.S. 
Department of State&#39;s International Traffic in Arms Regulations (ITAR).</p>\n<p>By applying for this position, you acknowledge that you have read and understood the export control requirements and that you will comply with them.</p>\n<p>If you have any questions or concerns regarding the export control requirements, please contact: careers@coreweave.com.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_fa9a54d7-549","directApply":true,"hiringOrganization":{"@type":"Organization","name":"CoreWeave","sameAs":"https://www.coreweave.com","logo":"https://logos.yubhub.co/coreweave.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/coreweave/jobs/4671535006","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$165,000 to $242,000","x-skills-required":["Kubernetes","containerized software services","cluster design","operations","troubleshooting","CI/CD systems","Argo CD","GitHub Actions","production systems","high availability","incident response","SLI/SLO/SLA definition","error budgets","postmortems","geo-replicated","multi-region","active-active systems","traffic routing","failover strategies","data consistency tradeoffs","observability components","metrics","logging","tracing","Prometheus","Grafana","OpenTelemetry","infrastructure as code","Helm","Terraform","Pulumi","automated environment provisioning","system performance tuning","capacity planning","resource optimization","distributed systems","security best practices","cloud-native environments","secrets management","network policies","vulnerability scanning"],"x-skills-preferred":["Spark","Airflow","Kafka","Flink","service mesh technologies","Istio","Linkerd","regulated environments","compliance frameworks","GDPR","SOC 2","HIPAA","SOX","internal developer platforms","self-service 
infrastructure"],"datePosted":"2026-04-18T15:51:59.035Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"New York, NY / Bellevue, WA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Kubernetes, containerized software services, cluster design, operations, troubleshooting, CI/CD systems, Argo CD, GitHub Actions, production systems, high availability, incident response, SLI/SLO/SLA definition, error budgets, postmortems, geo-replicated, multi-region, active-active systems, traffic routing, failover strategies, data consistency tradeoffs, observability components, metrics, logging, tracing, Prometheus, Grafana, OpenTelemetry, infrastructure as code, Helm, Terraform, Pulumi, automated environment provisioning, system performance tuning, capacity planning, resource optimization, distributed systems, security best practices, cloud-native environments, secrets management, network policies, vulnerability scanning, Spark, Airflow, Kafka, Flink, service mesh technologies, Istio, Linkerd, regulated environments, compliance frameworks, GDPR, SOC 2, HIPAA, SOX, internal developer platforms, self-service infrastructure","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":165000,"maxValue":242000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_f94dea6d-70a"},"title":"Distributed Systems Engineer - Data Platform - Analytical Database Platform","description":"<p>About Us</p>\n<p>At Cloudflare, we are on a mission to help build a better Internet. 
Today the company runs one of the world&#39;s largest networks that powers millions of websites and other Internet properties for customers ranging from individual bloggers to SMBs to Fortune 500 companies.</p>\n<p>We protect and accelerate any Internet application online without adding hardware, installing software, or changing a line of code. Internet properties powered by Cloudflare all have web traffic routed through its intelligent global network, which gets smarter with every request. As a result, they see significant improvement in performance and a decrease in spam and other attacks.</p>\n<p>About Role</p>\n<p>We are looking for an experienced and highly motivated engineer to join our team and contribute to our analytical database platform. The platform is a critical component of Cloudflare Analytics which provides real-time visibility into the health and performance of Cloudflare customers&#39; online properties.</p>\n<p>The team builds and maintains a high-performance, scalable database platform powered by ClickHouse, optimized for analytical workloads. 
We help our customers, both internal and external, to gain a deeper understanding of their online properties, identify trends and patterns, and make informed decisions about how to optimize their web performance, security, and other key metrics.</p>\n<p>Our mission is to empower customers to leverage their data to drive better outcomes for their business.</p>\n<p>As a Distributed Systems Engineer - Analytical Database Platform, you will:</p>\n<ul>\n<li>Develop and implement new platform components for the Cloudflare Analytical Database Platform to improve functionality and performance.</li>\n<li>Add more database clusters to accommodate the growing volume of data generated by Cloudflare products and services.</li>\n<li>Monitor and maintain the performance and reliability of existing database platform clusters, and identify and troubleshoot any issues that may arise.</li>\n<li>Work to identify and remove bottlenecks within the analytics database platform, including optimizing query performance and streamlining data ingestion processes.</li>\n<li>Collaborate with the ClickHouse open-source community to add new features and functionality to the database, as well as contribute to the development of the upstream codebase.</li>\n<li>Collaborate with other teams across Cloudflare to understand their data needs and build solutions that empower them to make data-driven decisions.</li>\n<li>Participate in the development of the next generation of the database platform engine, including researching and evaluating new technologies and approaches that can improve the database&#39;s performance and scalability.</li>\n</ul>\n<p>Key qualifications:</p>\n<ul>\n<li>3+ years of experience working in software development covering distributed systems and databases.</li>\n<li>Strong programming skills (Golang, Python, and C++ preferred), as well as a deep understanding of software development best practices and principles.</li>\n<li>Strong knowledge of SQL and database internals, 
including experience with database design, optimization, and performance tuning.</li>\n<li>A solid foundation in computer science, including algorithms, data structures, distributed systems, and concurrency.</li>\n<li>Ability to work collaboratively in a team environment, as well as communicate effectively with other teams across Cloudflare.</li>\n<li>Strong analytical and problem-solving skills, as well as the ability to work independently and proactively identify and solve issues.</li>\n<li>Experience with ClickHouse is a plus.</li>\n<li>Experience with SALT or Terraform is a plus.</li>\n<li>Experience with Linux container technologies, such as Docker and Kubernetes, is a plus.</li>\n</ul>\n<p>If you&#39;re passionate about building scalable and performant databases using cutting-edge technologies, and want to work with a world-class team of engineers, then we want to hear from you!</p>\n<p>Join us in our mission to help build a better internet for everyone!</p>\n<p>This role may require flexibility to be on-call outside of standard working hours to address technical issues as needed.</p>\n<p>What Makes Cloudflare Special?</p>\n<p>We&#39;re not just a highly ambitious, large-scale technology company. We&#39;re a highly ambitious, large-scale technology company with a soul. Fundamental to our mission to help build a better Internet is protecting the free and open Internet.</p>\n<p>Project Galileo: Since 2014, we&#39;ve equipped more than 2,400 journalism and civil society organizations in 111 countries with powerful tools to defend themselves against attacks that would otherwise censor their work, technology already used by Cloudflare’s enterprise customers--at no cost.</p>\n<p>Athenian Project: In 2017, we created the Athenian Project to ensure that state and local governments have the highest level of protection and reliability for free, so that their constituents have access to election information and voter registration. 
Since the project&#39;s launch, we&#39;ve provided services to more than 425 local government election websites in 33 states.</p>\n<p>1.1.1.1: We released 1.1.1.1 to help fix the foundation of the Internet by building a faster, more secure and privacy-centric public DNS resolver. This is available publicly for everyone to use - it is the first consumer-focused service Cloudflare has ever released.</p>\n<p>Here’s the deal - we don’t store client IP addresses. Never, ever. We will continue to abide by our privacy commitment and ensure that no user data is sold to advertisers or used to target consumers.</p>\n<p>Sound like something you’d like to be a part of? We’d love to hear from you!</p>\n<p>This position may require access to information protected under U.S. export control laws, including the U.S. Export Administration Regulations. Please note that any offer of employment may be conditioned on your authorization to receive software or technology controlled under these U.S. export laws without sponsorship for an export license.</p>\n<p>Cloudflare is proud to be an equal opportunity employer. We are committed to providing equal employment opportunity for all people and place great value in both diversity and inclusiveness. All qualified applicants will be considered for employment without regard to their, or any other person&#39;s, perceived or actual race, color, religion, sex, gender, gender identity, gender expression, sexual orientation, national origin, ancestry, citizenship, age, physical or mental disability, medical condition, family care status, or any other basis protected by law. We are an AA/Veterans/Disabled Employer. Cloudflare provides reasonable accommodations to qualified individuals with disabilities. Please tell us if you require a reasonable accommodation to apply for a job. 
Examples of reasonable accommodations include, but are not limited to, changing the application process, providing documents in an alternate format, using a sign language interpreter, or using specialized equipment. If you require a reasonable accommodation to apply for a job, please contact us via e-mail at hr@cloudflare.com or via mail at</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_f94dea6d-70a","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Cloudflare","sameAs":"https://www.cloudflare.com/","logo":"https://logos.yubhub.co/cloudflare.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/cloudflare/jobs/4886734","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["distributed systems","databases","software development","Golang","python","C++","SQL","database design","optimization","performance tuning","algorithms","data structures","concurrency","ClickHouse","SALT","Terraform","Linux container technologies","Docker","Kubernetes"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:51:34.743Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Hybrid"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"distributed systems, databases, software development, Golang, python, C++, SQL, database design, optimization, performance tuning, algorithms, data structures, concurrency, ClickHouse, SALT, Terraform, Linux container technologies, Docker, Kubernetes"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_4b2edfb8-1c2"},"title":"Senior Software Engineer, Client Platform","description":"<p>We&#39;re looking for a Senior Software Engineer to join our Builder Experience (BIX) team. 
As a key member of our platform team, you&#39;ll be responsible for designing and implementing the foundations that every product engineer builds on top of. This includes the design system, core UI frameworks, client performance, state management patterns, continuous integration, and the libraries and tooling that keep our codebase healthy and our engineers productive.</p>\n<p>You&#39;ll be working closely with our Design team to evolve and scale our component library, ensuring it&#39;s accessible, composable, and well-documented. You&#39;ll also be responsible for profiling, diagnosing, and fixing client-side performance bottlenecks, establishing performance budgets, and building dashboards to keep the team honest.</p>\n<p>As a force multiplier, you&#39;ll act as a coach and enablement specialist, helping product teams adopt improvements and level up their craft. You&#39;ll write playbooks and docs, deliver tech talks, pair with product engineers, and create local tooling to improve developer speed and quality.</p>\n<p>In this role, you&#39;ll have the opportunity to work on a wide range of challenging projects, from performance optimization to design system evolution. You&#39;ll be part of a flat organizational structure, where everyone is valued and empowered to contribute. 
And, as a remote-friendly company, you&#39;ll have the flexibility to work from anywhere, with opportunities for in-person collaboration when needed.</p>\n<p>If you&#39;re passionate about frontend platform work, enjoy making an entire engineering organization faster and more effective, and are excited about the prospect of joining a dynamic and growing company, we&#39;d love to hear from you!</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_4b2edfb8-1c2","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Descript","sameAs":"https://descript.com/","logo":"https://logos.yubhub.co/descript.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/descript/jobs/7668317003","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$195,000–$250,000/year","x-skills-required":["React","Modern React ecosystem (hooks, concurrent features, Suspense)","Client-side performance (profiling tools, rendering optimization, bundle analysis, runtime performance tuning)","TypeScript","Modern frontend build tooling","State management approaches in large React applications","Mentoring and guiding other engineers"],"x-skills-preferred":["Experience working on tooling in a monorepo","Background in accessibility (WCAG, ARIA patterns) and inclusive component design","Familiarity with CI/CD optimization for frontend builds and test pipelines","Experience with Electron or desktop web-hybrid applications","Contributions to open-source design systems, React libraries, or developer tooling"],"datePosted":"2026-04-18T15:51:03.743Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA | Remote"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"React, Modern React ecosystem 
(hooks, concurrent features, Suspense), Client-side performance (profiling tools, rendering optimization, bundle analysis, runtime performance tuning), TypeScript, Modern frontend build tooling, State management approaches in large React applications, Mentoring and guiding other engineers, Experience working on tooling in a monorepo, Background in accessibility (WCAG, ARIA patterns) and inclusive component design, Familiarity with CI/CD optimization for frontend builds and test pipelines, Experience with Electron or desktop web-hybrid applications, Contributions to open-source design systems, React libraries, or developer tooling","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":195000,"maxValue":250000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_a14533c3-732"},"title":"Senior Engineer, Cilium CNI & Cloud Networking","description":"<p>Network Services Team</p>\n<p>The Network Services team builds and operates the foundational networking that powers CoreWeave&#39;s Kubernetes platforms at cloud scale. The team is responsible for container networking, connectivity, and network services that support large-scale, GPU-driven workloads across regions and environments. They focus on scalability, reliability, security, and performance while delivering intuitive platforms for internal teams and customers.</p>\n<p>About the Role</p>\n<p>As a Senior Engineer focused on our Cilium-based CNI, you will design, build, and operate the container networking layer that underpins CoreWeave&#39;s Kubernetes platforms. Day to day, you will work on evolving our CNI stack to support large, high-density GPU clusters with demanding throughput and latency requirements. You will partner closely with Kubernetes, Infrastructure, and Network Services engineers to ensure the platform is highly available, observable, and secure. 
This role spans architecture, implementation, and operations, with ownership from prototype through production. You will also help shape how our networking platform scales for future growth.</p>\n<p>Who You Are</p>\n<ul>\n<li>5+ years of experience as a Software Engineer or Systems Engineer working on cloud infrastructure or large-scale distributed systems.</li>\n<li>Hands-on production experience with Cilium CNI (or equivalent advanced CNIs), including cluster configuration and lifecycle management.</li>\n<li>Strong understanding of Cilium&#39;s eBPF datapath, policy model, and load-balancing mechanisms.</li>\n<li>Deep knowledge of cloud networking concepts, including VPCs, subnets, routing, security groups/ACLs, NAT, and ingress/egress architectures.</li>\n<li>Experience designing multi-tenant network architectures with strong isolation and security.</li>\n<li>Solid grounding in TCP/IP, dynamic routing (e.g., BGP), ECMP, MTU/fragmentation, and overlay/underlay networking (VXLAN, Geneve, encapsulation).</li>\n<li>Experience with network observability and troubleshooting across L3–L7.</li>\n<li>Proficiency in at least one systems language such as Golang or C/C++.</li>\n<li>Experience working in modern CI/CD environments.</li>\n<li>Experience operating Kubernetes at scale, including cluster lifecycle management and debugging networking issues across pods, nodes, and external services.</li>\n<li>Demonstrated ownership of complex systems end-to-end.</li>\n</ul>\n<p>Preferred</p>\n<ul>\n<li>Experience operating cloud-scale network services across tens of thousands of nodes and multiple regions.</li>\n<li>Contributions to Cilium, Kubernetes, or related open-source networking projects.</li>\n<li>Experience with eBPF development and performance tuning.</li>\n<li>Experience building Kubernetes operators or controllers.</li>\n<li>Familiarity with service meshes, multi-cluster networking, or cluster mesh solutions.</li>\n<li>Experience in GPU-heavy, HPC, or other 
performance-sensitive environments.</li>\n</ul>\n<p>Wondering if you’re a good fit?</p>\n<p>We believe in investing in our people and value candidates who bring diverse experiences, even if you’re not a 100% match on paper. If some of this sounds like you, we’d love to talk.</p>\n<ul>\n<li>You love solving complex distributed systems and networking challenges at scale.</li>\n<li>You’re curious about cloud-native networking, eBPF, and Kubernetes internals.</li>\n<li>You’re an expert in building reliable, scalable infrastructure that runs in production.</li>\n</ul>\n<p>Why CoreWeave?</p>\n<p>At CoreWeave, we work hard, have fun, and move fast! We’re in an exciting stage of hyper-growth that you will not want to miss out on. We’re not afraid of a little chaos, and we’re constantly learning. Our team cares deeply about how we build our product and how we work together, which is represented through our core values:</p>\n<ul>\n<li>Be Curious at Your Core</li>\n<li>Act Like an Owner</li>\n<li>Empower Employees</li>\n<li>Deliver Best-in-Class Client Experiences</li>\n<li>Achieve More Together</li>\n</ul>\n<p>The base salary range for this role is $165,000 to $242,000. The starting salary will be determined based on job-related knowledge, skills, experience, and market location. We strive for both market alignment and internal equity when determining compensation. In addition to base salary, our total rewards package includes a discretionary bonus, equity awards, and a comprehensive benefits program (all based on eligibility).</p>\n<p>What We Offer</p>\n<p>The range we’ve posted represents the typical compensation range for this role. To determine actual compensation, we review the market rate for each candidate, which can include a variety of factors. These include qualifications, experience, interview performance, and location. 
In addition to a competitive salary, we offer a variety of benefits to support your needs, including:</p>\n<ul>\n<li>Medical, dental, and vision insurance</li>\n<li>100% paid for by CoreWeave</li>\n<li>Company-paid Life Insurance</li>\n<li>Voluntary supplemental life insurance</li>\n<li>Short and long-term disability insurance</li>\n<li>Flexible Spending Account</li>\n<li>Health Savings Account</li>\n<li>Tuition Reimbursement</li>\n<li>Ability to Participate in Employee Stock Purchase Program (ESPP)</li>\n<li>Mental Wellness Benefits through Spring Health</li>\n<li>Family-Forming support provided by Carrot</li>\n<li>Paid Parental Leave</li>\n<li>Flexible, full-service childcare support with Kinside</li>\n<li>401(k) with a generous employer match</li>\n<li>Flexible PTO</li>\n<li>Catered lunch each day in our office and data center locations</li>\n<li>A casual work environment</li>\n<li>A work culture focused on innovative disruption</li>\n</ul>\n<p>Our Workplace</p>\n<p>While we prioritize a hybrid work environment, remote work may be considered for candidates located more than 30 miles from an office, based on role requirements for specialized skill sets. New hires will be invited to attend onboarding at one of our hubs within their first month. Teams also gather quarterly to support collaboration.</p>\n<p>California Consumer Privacy Act - California applicants only</p>\n<p>CoreWeave is an equal opportunity employer, committed to fostering an inclusive and supportive workplace. All qualified applicants and candidates will receive consideration for employment without regard to race, color, religion, sex, disability, age, sexual orientation, gender identity, national origin, veteran status, or genetic information. 
As part of this commitment and consistent with the Americans with Disabilities Act (ADA), CoreWeave will ensure that qualified applicants and candidates with disabilities are provided reasonable accommodations for the hiring process, unless such accommodation would cause an undue hardship. If reasonable accommodation is needed, please contact: careers@coreweave.com.</p>\n<p>Export Control Compliance</p>\n<p>This position requires access to export controlled information. To conform to U.S. Government export regulations applicable to that information, applicant must either be (A) a U.S. person, defined as a (i) U.S. citizen or national, (ii) U.S. lawful permanent resident (green card holder), (iii) refugee under 8 U.S.C. § 1157, or (iv) asylee under 8 U.S.C. § 1158, (B) eligible to access the export controlled information without a required export authorization, or (C) eligible and reasonably likely to obtain the required export authorization from the applicable U.S. government agency. CoreWeave may, for legitimate business reasons, decline to pursue any export licensing process.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_a14533c3-732","directApply":true,"hiringOrganization":{"@type":"Organization","name":"CoreWeave","sameAs":"https://www.coreweave.com","logo":"https://logos.yubhub.co/coreweave.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/coreweave/jobs/4653971006","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$165,000 to $242,000","x-skills-required":["Cilium CNI","cloud infrastructure","large-scale distributed systems","container networking","connectivity","network services","Kubernetes","eBPF datapath","policy model","load-balancing mechanisms","cloud networking concepts","VPCs","subnets","routing","security groups/ACLs","NAT","ingress/egress 
architectures","TCP/IP","dynamic routing","ECMP","MTU/fragmentation","overlay/underlay networking","Golang","C/C++","CI/CD environments","Kubernetes at scale","cluster lifecycle management","debugging networking issues"],"x-skills-preferred":["cloud-scale network services","Cilium","eBPF development","performance tuning","Kubernetes operators","controllers","service meshes","multi-cluster networking","cluster mesh solutions","GPU-heavy","HPC","performance-sensitive environments"],"datePosted":"2026-04-18T15:47:58.336Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Livingston, NJ / New York, NY / Sunnyvale, CA / Bellevue, WA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Cilium CNI, cloud infrastructure, large-scale distributed systems, container networking, connectivity, network services, Kubernetes, eBPF datapath, policy model, load-balancing mechanisms, cloud networking concepts, VPCs, subnets, routing, security groups/ACLs, NAT, ingress/egress architectures, TCP/IP, dynamic routing, ECMP, MTU/fragmentation, overlay/underlay networking, Golang, C/C++, CI/CD environments, Kubernetes at scale, cluster lifecycle management, debugging networking issues, cloud-scale network services, Cilium, eBPF development, performance tuning, Kubernetes operators, controllers, service meshes, multi-cluster networking, cluster mesh solutions, GPU-heavy, HPC, performance-sensitive environments","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":165000,"maxValue":242000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_456f029f-2e2"},"title":"Principal Software Engineer","description":"<p>As a Principal Software Engineer on our Go To Market Store (GTM Store) and ZoomInfo Data Platform (ZDP) team, you&#39;ll play a pivotal role in developing 
ZoomInfo&#39;s next-generation unified data platform.</p>\n<p>You&#39;ll architect and implement infrastructure that powers our GraphQL-based federated query system for seamless data access across platforms including BigTable, BigQuery, and Solr+.</p>\n<p>This is a unique opportunity to influence the technical direction of ZoomInfo&#39;s core data infrastructure, addressing complex challenges such as data freshness, multi-tenant isolation, and real-time data processing at scale.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Design and build scalable infrastructure for GTM Store and ZDP with sub-second query latency.</li>\n<li>Architect and implement metadata-driven GraphQL APIs for dynamic schema generation and query federation.</li>\n<li>Develop asynchronous secondary indexing systems for scaling capacity and reducing primary data store load.</li>\n<li>Design real-time analytics streaming data pipelines from BigTable to BigQuery.</li>\n<li>Develop data mutation and deletion frameworks supporting GDPR compliance and schema evolution.</li>\n<li>Implement CDC pipelines and calculated field processing for derived data views.</li>\n<li>Build observability and monitoring solutions for real-time issue diagnosis across distributed data systems.</li>\n<li>Create batch and streaming data processing workflows for complex relationships at scale.</li>\n<li>Collaborate with engineering leaders and product managers to define the technical roadmap.</li>\n<li>Mentor engineers and establish best practices for cloud-native data infrastructure development.</li>\n<li>Partner with cross-functional teams to address data platform requirements and challenges.</li>\n<li>Drive solutions for data freshness, query performance, and system reliability challenges.</li>\n</ul>\n<p>Qualifications:</p>\n<ul>\n<li>Bachelor&#39;s degree in Computer Science, Software Engineering, or related field (or equivalent experience).</li>\n<li>10+ years of software engineering experience building large-scale data 
platforms.</li>\n<li>Expertise with distributed NoSQL databases and data warehousing systems.</li>\n<li>Strong experience with Java 8+, Scala, Kotlin, GoLang for data systems development.</li>\n<li>Proven experience with GCP or AWS and cloud-native architectures.</li>\n<li>Experience with streaming/real-time data processing technologies.</li>\n<li>Strong system design skills for architecting multi-tenant, distributed systems.</li>\n<li>Hands-on experience with Google Cloud Platform services.</li>\n<li>Knowledge of CDC patterns, event sourcing, and streaming architectures.</li>\n<li>Experience solving data freshness and consistency challenges in distributed systems.</li>\n<li>Background in building observability and monitoring solutions for data platforms.</li>\n<li>Familiarity with metadata management and schema evolution.</li>\n<li>Experience with Kubernetes for deploying data services.</li>\n<li>SQL query optimization and performance tuning expertise.</li>\n<li>Experience building GraphQL APIs with federated or metadata-driven schema generation.</li>\n<li>Strong problem-solving skills and the ability to debug complex distributed systems issues.</li>\n<li>Excellent communication skills for explaining technical decisions to diverse audiences.</li>\n<li>Self-directed with the ability to drive initiatives independently while collaborating with teams.</li>\n<li>Passion for building reliable, observable, and maintainable systems.</li>\n<li>Experience promoting diverse, inclusive work environments.</li>\n</ul>\n<p>Actual compensation offered will be based on factors such as the candidate’s work location, qualifications, skills, experience and/or training. Your recruiter can share more information about the specific salary range for your desired work location during the hiring process.</p>\n<p>We want our employees and their families to thrive. In addition to comprehensive benefits we offer holistic mind, body and lifestyle programs designed for overall well-being. 
Learn more about ZoomInfo benefits here.</p>\n<p>Below is the US base salary for this position. Additional compensation such as Bonus, Commission, Equity and other benefits may also apply.</p>\n<p>$163,800-$257,400 USD</p>","url":"https://yubhub.co/jobs/job_456f029f-2e2","directApply":true,"hiringOrganization":{"@type":"Organization","name":"ZoomInfo","sameAs":"https://www.zoominfo.com/","logo":"https://logos.yubhub.co/zoominfo.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/zoominfo/jobs/8243004002","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$163,800-$257,400 USD","x-skills-required":["Java 8+","Scala","Kotlin","GoLang","GCP","AWS","cloud-native architectures","streaming/real-time data processing technologies","distributed NoSQL databases","data warehousing systems","metadata management","schema evolution","Kubernetes","SQL query optimization","performance tuning","GraphQL APIs","federated or metadata-driven schema generation"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:44:17.604Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Remote-US-CA"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Java 8+, Scala, Kotlin, GoLang, GCP, AWS, cloud-native architectures, streaming/real-time data processing technologies, distributed NoSQL databases, data warehousing systems, metadata management, schema evolution, Kubernetes, SQL query optimization, performance tuning, GraphQL APIs, federated or metadata-driven schema 
generation","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":163800,"maxValue":257400,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_3ac0b2f4-6c9"},"title":"Member of Technical Staff - Imagine Product","description":"<p><strong>About the Role</strong></p>\n<p>The Imagine Product team is redefining AI-driven media experiences for Grok users worldwide. You&#39;ll build and scale robust, high-performance systems that power immersive, multi-modal media interactions, leveraging cutting-edge AI to enable seamless generation, processing, and delivery of images, video, audio, and beyond.</p>\n<p>Your work will drive engaging, real-time user experiences that captivate and delight millions, turning advanced multimodal models into production-grade features. If you&#39;re a driven problem-solver passionate about AI, media technologies, and creating scalable solutions that shape the future of consumer AI, this is your opportunity to make a lasting impact.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Design and implement scalable systems to support Grok&#39;s AI-driven media experiences, ensuring high performance, reliability, and low-latency at global scale.</li>\n<li>Architect robust infrastructure for real-time multi-modal interactions, including handling generation requests, media processing, and seamless integration with frontend and model serving layers.</li>\n<li>Build and optimise large-scale data pipelines to ingest, process, and analyse multi-modal data (images, video, audio), fueling continuous improvement and personalisation of Grok&#39;s media capabilities.</li>\n<li>Collaborate closely with frontend engineers, AI researchers, and product teams to deliver captivating, media-rich features and end-to-end user experiences.</li>\n<li>Own full-cycle development of solutions: from system design and prototyping to 
deployment, monitoring, observability, and iterative refinement.</li>\n<li>Deliver production-ready, maintainable code that powers features reaching hundreds of millions of users.</li>\n</ul>\n<p><strong>Basic Qualifications</strong></p>\n<ul>\n<li>Proficiency in Python or Rust, with a strong track record of writing clean, efficient, maintainable, and scalable code.</li>\n<li>Experience designing and building systems for consumer-facing products, with emphasis on performance, reliability, and handling high-throughput workloads.</li>\n<li>Hands-on expertise in large-scale data infrastructure and pipelines, particularly for multi-modal or media-heavy AI applications.</li>\n<li>Proven ability to deliver robust, production-grade solutions to millions of users while maintaining high standards of quality and uptime.</li>\n<li>Strong problem-solving skills and a passion for turning innovative ideas into high-impact, scalable realities.</li>\n<li>Deep enthusiasm for AI and media technologies, with a commitment to building user-focused products that inspire and engage.</li>\n</ul>\n<p><strong>Preferred Skills and Experience</strong></p>\n<ul>\n<li>Experience with real-time systems, inference serving, or multi-modal data processing at scale.</li>\n<li>Familiarity with distributed systems, containerisation (e.g., Kubernetes), observability tools, or performance tuning for AI workloads.</li>\n<li>Background in AI-driven consumer products or media generation technologies.</li>\n<li>Track record collaborating across engineering, research, and product teams to ship delightful features quickly.</li>\n</ul>\n<p><strong>Compensation and Benefits</strong></p>\n<p>$180,000 - $440,000 USD</p>\n<p>Base salary is just one part of our total rewards package at xAI, which also includes equity, comprehensive medical, vision, and dental coverage, access to a 401(k) retirement plan, short &amp; long-term disability insurance, life insurance, and various other discounts and perks.</p>","url":"https://yubhub.co/jobs/job_3ac0b2f4-6c9","directApply":true,"hiringOrganization":{"@type":"Organization","name":"xAI","sameAs":"https://xAI.com","logo":"https://logos.yubhub.co/xai.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/xai/jobs/5052027007","x-work-arrangement":"onsite","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"$180,000 - $440,000 USD","x-skills-required":["Python","Rust","clean, efficient, maintainable, and scalable code","large-scale data infrastructure and pipelines","multi-modal or media-heavy AI applications","production-grade solutions","quality and uptime"],"x-skills-preferred":["real-time systems","inference serving","multi-modal data processing at scale","distributed systems","containerisation","observability tools","performance tuning for AI workloads","AI-driven consumer products","media generation technologies"],"datePosted":"2026-04-18T15:41:51.975Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Palo Alto, CA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, Rust, clean, efficient, maintainable, and scalable code, large-scale data infrastructure and pipelines, multi-modal or media-heavy AI applications, production-grade solutions, quality and uptime, real-time systems, inference serving, multi-modal data processing at scale, distributed systems, containerisation, observability tools, performance tuning for AI workloads, AI-driven consumer products, media generation 
technologies","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":180000,"maxValue":440000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_51758515-c12"},"title":"Member of Technical Staff","description":"<p>We are seeking a highly skilled Member of Technical Staff to join our team in managing and enhancing reliability across a multi-data center environment.</p>\n<p>This role focuses on automating processes, building and implementing robust observability solutions, and ensuring seamless operations for mission-critical AI infrastructure.</p>\n<p>The ideal candidate will combine strong coding abilities with hands-on data center experience to build scalable reliability services, optimize system performance, and minimize downtime, including close partnership with facility operations to address physical infrastructure impacts.</p>\n<p>In an era where AI workloads demand near-zero downtime, this position plays a pivotal role in bridging software engineering principles with physical data center realities.</p>\n<p>By prioritizing automation and observability, team members in this role can reduce mean time to recovery (MTTR) by up to 50% through proactive monitoring and automated remediation, based on industry benchmarks from high-scale environments like those at hyperscale cloud providers.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Design, develop, and deploy scalable code and services (primarily in Python and Rust, with flexibility for emerging languages) to automate reliability workflows, including monitoring, alerting, incident response, and infrastructure provisioning.</li>\n</ul>\n<ul>\n<li>Implement and maintain observability tools and practices, such as metrics collection, logging, tracing, and dashboards, to provide real-time insights into system health across multiple data centers, open to innovative stacks beyond 
traditional ones like ELK.</li>\n</ul>\n<ul>\n<li>Collaborate with cross-functional teams, including software development, network engineering, site operations, and facility operations (critical facilities, mechanical/electrical teams, and data center infrastructure management), to identify reliability bottlenecks, automate solutions for fault tolerance, disaster recovery, capacity planning, and physical/environmental risk mitigation (e.g., power redundancy, cooling efficiency, and environmental monitoring integration).</li>\n</ul>\n<ul>\n<li>Troubleshoot and resolve complex issues in data center environments, including hardware failures, environmental anomalies, software bugs, and network-related problems, while adhering to reliability principles like error budgets and SLAs.</li>\n</ul>\n<ul>\n<li>Optimize Linux-based systems for performance, security, and reliability, including kernel tuning, container orchestration (e.g., Kubernetes or emerging alternatives), and scripting for automation.</li>\n</ul>\n<ul>\n<li>Understand network topologies and concepts in large-scale, multi-data center environments to effectively troubleshoot connectivity, routing, redundancy, and performance issues; integrate observability into data center interconnects and facility-level controls for rapid diagnosis and automation.</li>\n</ul>\n<ul>\n<li>Participate in on-call rotations, post-incident reviews (blameless postmortems), and continuous improvement initiatives to enhance overall site reliability, including joint exercises with facility teams for physical failover and recovery scenarios.</li>\n</ul>\n<ul>\n<li>Mentor junior team members and document processes to foster a culture of automation, knowledge sharing, and adaptability to new technologies.</li>\n</ul>\n<p>Basic Qualifications:</p>\n<ul>\n<li>Bachelor&#39;s degree in Computer Science, Computer Engineering, Electrical Engineering, or a closely related technical field (or equivalent professional 
experience).</li>\n</ul>\n<ul>\n<li>5+ years of hands-on experience in site reliability engineering (SRE), infrastructure engineering, DevOps, or systems engineering, preferably supporting large-scale, distributed, or production environments.</li>\n</ul>\n<ul>\n<li>Strong programming skills with proven production experience in Python (required for automation and tooling); experience with Rust or willingness to work in Rust is a plus, but strong coding fundamentals in at least one systems-level language (e.g., Python, Go, C++) are essential.</li>\n</ul>\n<ul>\n<li>Solid experience with Linux systems administration, performance tuning, kernel-level understanding, and scripting/automation in production environments.</li>\n</ul>\n<ul>\n<li>Practical knowledge of containerization and orchestration technologies, such as Docker and Kubernetes (or similar systems).</li>\n</ul>\n<ul>\n<li>Experience implementing observability solutions, including metrics, logging, tracing, monitoring tools (e.g., Prometheus, Grafana, or alternatives), alerting, and dashboards.</li>\n</ul>\n<ul>\n<li>Familiarity with troubleshooting complex issues in distributed systems, including software bugs, hardware failures, network problems, and environmental factors.</li>\n</ul>\n<ul>\n<li>Understanding of networking fundamentals (TCP/IP, routing, redundancy, DNS) in large-scale or multi-site environments.</li>\n</ul>\n<ul>\n<li>Experience participating in on-call rotations, incident response, post-incident reviews (blameless postmortems), and reliability practices such as error budgets or SLAs.</li>\n</ul>\n<ul>\n<li>Ability to collaborate effectively with cross-functional teams (software engineers, network teams, site/facility operations, mechanical/electrical teams).</li>\n</ul>\n<p>Preferred Skills and Experience:</p>\n<ul>\n<li>7+ years of experience in SRE or infrastructure roles, ideally in hyperscale, cloud, or AI/ML training infrastructure environments with multi-data center 
setups.</li>\n</ul>\n<ul>\n<li>Hands-on experience operating or scaling Kubernetes clusters (or equivalent orchestration) at large scale, including automation for provisioning, lifecycle management, and high-availability.</li>\n</ul>\n<ul>\n<li>Proficiency in Rust for systems programming and performance-critical components.</li>\n</ul>\n<ul>\n<li>Direct experience integrating software reliability tools with physical data center infrastructure.</li>\n</ul>\n<ul>\n<li>Experience with observability tools and practices, such as metrics collection, logging, tracing, and dashboards.</li>\n</ul>\n<ul>\n<li>Familiarity with containerization and orchestration technologies, such as Docker and Kubernetes (or similar systems).</li>\n</ul>\n<ul>\n<li>Experience with Linux systems administration, performance tuning, kernel-level understanding, and scripting/automation in production environments.</li>\n</ul>\n<ul>\n<li>Understanding of networking fundamentals (TCP/IP, routing, redundancy, DNS) in large-scale or multi-site environments.</li>\n</ul>\n<ul>\n<li>Experience participating in on-call rotations, incident response, post-incident reviews (blameless postmortems), and reliability practices such as error budgets or SLAs.</li>\n</ul>\n<ul>\n<li>Ability to collaborate effectively with cross-functional teams (software engineers, network teams, site/facility operations, mechanical/electrical teams).</li>\n</ul>","url":"https://yubhub.co/jobs/job_51758515-c12","directApply":true,"hiringOrganization":{"@type":"Organization","name":"xAI","sameAs":"https://www.xai.com/","logo":"https://logos.yubhub.co/xai.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/xai/jobs/5044403007","x-work-arrangement":"onsite","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Python","Rust","Linux systems 
administration","performance tuning","kernel-level understanding","scripting/automation","containerization","orchestration","observability","metrics collection","logging","tracing","dashboards","networking fundamentals","TCP/IP","routing","redundancy","DNS"],"x-skills-preferred":["Kubernetes","Docker","Grafana","Prometheus","ELK","DevOps","SRE","infrastructure engineering","systems engineering"],"datePosted":"2026-04-18T15:39:31.440Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Memphis, TN"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, Rust, Linux systems administration, performance tuning, kernel-level understanding, scripting/automation, containerization, orchestration, observability, metrics collection, logging, tracing, dashboards, networking fundamentals, TCP/IP, routing, redundancy, DNS, Kubernetes, Docker, Grafana, Prometheus, ELK, DevOps, SRE, infrastructure engineering, systems engineering"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_42187d42-78e"},"title":"Staff Engineer (Backend, DevOps, Infrastructure)","description":"<p>About Zuma</p>\n<p>Zuma is pioneering the future of agentic AI and our focus is to transform the rental market experience for consumers and property managers alike. Our innovative platform is engineered from the ground up to boost operations efficiency and enhance support capabilities for property management business across the US and Canada, a ~$200B market.</p>\n<p>Off the back of our Series-A in early 2024, Zuma is scaling rapidly. Achieving our vision requires a team of passionate, innovative individuals eager to leverage technology to redefine customer-business interactions. 
We&#39;re on the hunt for exceptional talent ready to join our mission and contribute to building a groundbreaking technology that reshapes how businesses engage with customers.</p>\n<p>As a Staff Engineer, you will:</p>\n<p>Help define how humans collaborate with intelligent systems in one of the largest and most underserved industries in the world: property management. You’ll shape the technical foundation of a platform that is not just supporting human workflows, but executing them autonomously through AI agents. This is a rare opportunity to influence how an entire industry evolves, building tools that transform repetitive operational tasks into seamless, intelligent experiences.</p>\n<p>Your work will directly contribute to how trust is built between humans and machines, how operations scale without added headcount, and how residents and staff experience a new, AI-powered standard of service. We’re not just building software; we’re designing AI that people want to work with. Delightful, trustworthy, and deeply effective.</p>\n<p>Join us to help lead the AI revolution in multifamily, drive meaningful real-world impact, and be part of reimagining what work can feel like when done side-by-side with intelligent agents.</p>\n<p>You will be a cornerstone of our engineering organization, reporting to the VPE. This is a pivotal role where you&#39;ll lead critical system rewrites, architect scalable foundations for our AI platform, and establish the technical standards that will shape our engineering culture for years to come.</p>\n<p>You&#39;ll work at the intersection of cutting-edge LLM technology and practical business applications, creating sophisticated systems that power our AI leasing agent while building self-serve experiences that enable rapid customer onboarding.</p>\n<p>As our first US-based engineer, you&#39;ll bridge the gap between our product vision and technical implementation. 
This role offers a rare opportunity to directly influence how we architect the next generation of our platform.</p>\n<p>You&#39;ll tackle projects like rebuilding our onboarding/configuration system to be self-serve, creating robust analytics infrastructure to measure AI performance, and reimagining our integration framework to connect seamlessly with customer systems.</p>\n<p>Your work will significantly reduce manual engineering overhead while enabling rapid scaling of our customer base.</p>\n<p>We&#39;re looking for a Staff Engineer to help us bring that future to life. This is not just another dev role. You&#39;ll be hands-on shaping the technical DNA of Zuma. You&#39;ll architect critical systems, tame legacy code, build net-new AI-powered experiences, and lay down the patterns future engineers will inherit.</p>\n<p>If you&#39;re obsessed with building real products people use, especially products powered by LLMs, this might be your playground.</p>\n<p><strong>Why This Could Be Your Dream Role</strong></p>\n<ul>\n<li>You&#39;ll work directly with cutting-edge LLM technology in a real-world application</li>\n<li>You want to work at a company where customers feel your impact every day</li>\n<li>You&#39;ll architect AI-powered systems that are transforming the real estate industry</li>\n<li>You&#39;ll have autonomy to design and implement innovative technical solutions</li>\n<li>Your work will directly impact thousands of apartment communities and millions of renters</li>\n<li>You&#39;ll receive significant equity in a venture-backed company with strong traction</li>\n<li>As we scale, your role and influence will grow with the company</li>\n</ul>\n<p><strong>Why You Might Want to Think Twice</strong></p>\n<ul>\n<li>This is a demanding role that will often require extended hours and deep commitment</li>\n<li>As a founding team member, you&#39;ll need to wear multiple hats and step outside your comfort zone</li>\n<li>You&#39;ll need to make thoughtful tradeoffs 
between innovation and immediate needs</li>\n<li>You&#39;ll interact directly with customers to understand their needs and occasionally travel to their offices</li>\n<li>We&#39;re a startup - priorities can shift rapidly as we respond to market opportunities and customer needs</li>\n<li>If you&#39;re not comfortable getting your hands dirty with legacy code or speaking directly with customers, this isn&#39;t the job for you</li>\n</ul>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Lead critical system rewrites to transform our architecture into a highly scalable, resilient foundation</li>\n<li>Own the design and performance optimization of our data storage systems, ensuring they scale with customer and AI demands</li>\n<li>Build and evolve our deployment pipelines, enabling reliable, automated releases for AI-first products</li>\n<li>Set up and manage modern cloud infrastructure from scratch, leveraging Infrastructure as Code (IaC) to ensure consistency, security, and scalability</li>\n<li>Establish engineering best practices, including observability, incident response processes, and system hardening for an AI-first platform</li>\n<li>Drive robust analytics and monitoring to track performance, reliability, and the effectiveness of our AI solutions</li>\n<li>Mentor engineers and elevate the team&#39;s capabilities across infrastructure, scalability, and AI product development</li>\n</ul>\n<p><strong>Your Experience Looks Like</strong></p>\n<ul>\n<li>Bachelor’s or Master’s degree in Computer Science, Engineering, or a related technical field</li>\n<li>5+ years of experience building production-grade software systems, with a focus on scalability, performance, and reliability</li>\n<li>Proven expertise in backend development with Node.js, including API design, system architecture, and cloud-based services</li>\n<li>Experience with cloud infrastructure (AWS, GCP, or similar) and deploying production systems using Infrastructure as Code (e.g., Terraform, 
Pulumi)</li>\n<li>Hands-on experience with database design, performance tuning, and scaling high-throughput data systems</li>\n<li>Familiarity with building and maintaining CI/CD pipelines, automated testing, and modern DevOps practices</li>\n<li>Strong communication skills and ability to work effectively in a distributed, fast-paced environment</li>\n<li>Comfortable operating in early-stage, high-ownership environments with evolving requirements</li>\n<li>Bonus: Experience with React and TypeScript on the frontend, though this role leans backend/infrastructure</li>\n<li>Bonus: Exposure to LLM-based systems, AI infrastructure, or agentic AI workflows</li>\n</ul>\n<p><strong>Guiding Principles</strong></p>\n<ul>\n<li>Customer‑First Outcomes</li>\n</ul>\n<p>Every commit should trace back to resident or operator value. Whether it’s a new feature, infra investment, or AI capability, if it doesn’t solve a real problem, it doesn’t ship.</p>\n<ul>\n<li>Bias for Simplicity</li>\n</ul>\n<p>We favor composable primitives over clever abstractions. Open standards, clean APIs, and clear contracts win over custom complexity, even if the custom version is cooler.</p>\n<ul>\n<li>Quality Is a Gate, Not an After‑Thought</li>\n</ul>\n<p>Quality is built-in from day one. Our definition of done includes: test coverage, performance checks, basic observability, and internal docs. Shipping fast doesn’t mean skipping craftsmanship.</p>\n<ul>\n<li>Data‑Driven Choices</li>\n</ul>\n<p>We use data to guide, not paralyze, our decision-making. We track leading indicators (cycle time, defect rate, NPS) and lagging signals (retention, revenue impact). We keep instrumentation lightweight but meaningful: signal over spreadsheets.</p>\n<ul>\n<li>Transparency &amp; Written Culture</li>\n</ul>\n<p>Good ideas don’t expire in Zoom. 
We operate in public i</p>","url":"https://yubhub.co/jobs/job_42187d42-78e","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Zuma","sameAs":"https://www.zuma.com/","logo":"https://logos.yubhub.co/zuma.com.png"},"x-apply-url":"https://jobs.lever.co/getzuma/800b8d69-b1e0-4524-a0a7-a5cec8b337b5","x-work-arrangement":"hybrid","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Node.js","API design","system architecture","cloud-based services","cloud infrastructure","Infrastructure as Code","database design","performance tuning","scaling high-throughput data systems","CI/CD pipelines","automated testing","modern DevOps practices"],"x-skills-preferred":["React","TypeScript","LLM-based systems","AI infrastructure","agentic AI workflows"],"datePosted":"2026-04-17T13:12:22.878Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco Bay Area"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Node.js, API design, system architecture, cloud-based services, cloud infrastructure, Infrastructure as Code, database design, performance tuning, scaling high-throughput data systems, CI/CD pipelines, automated testing, modern DevOps practices, React, TypeScript, LLM-based systems, AI infrastructure, agentic AI workflows"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_c20d7221-4b5"},"title":"Support Engineer","description":"<p>As a Support Engineer at Zuma, you&#39;ll be a bridge between our customers, engineering team, and product vision. You&#39;ll ensure new customers onboard smoothly, integrations run reliably, and support operations scale as we grow. 
This is a hands-on role for someone who loves problem-solving, can dive into APIs and databases, and takes pride in clear documentation and communication.</p>\n<p>You&#39;ll help property managers succeed with our AI platform while also driving continuous improvements in our internal tools and processes.</p>\n<p>Responsibilities:\nLead critical system rewrites to transform our architecture into a highly scalable, resilient foundation\nOwn the design and performance optimization of our data storage systems, ensuring they scale with customer and AI demands\nBuild and evolve our deployment pipelines, enabling reliable, automated releases for AI-first products\nSet up and manage modern cloud infrastructure from scratch, leveraging Infrastructure as Code (IaC) to ensure consistency, security, and scalability\nEstablish engineering best practices, including observability, incident response processes, and system hardening for an AI-first platform\nDrive robust analytics and monitoring to track performance, reliability, and the effectiveness of our AI solutions\nMentor engineers and elevate the team&#39;s capabilities across infrastructure, scalability, and AI product development</p>\n<p>Your Experience Looks Like:\nBachelor’s or Master’s degree in Computer Science, Engineering, or a related technical field\n3+ years of experience building production-grade software systems, with a focus on scalability, performance, and reliability\nProven expertise in backend development with Node.js, including API design, system architecture, and cloud-based services\nExperience with cloud infrastructure (AWS, GCP, or similar) and deploying production systems using Infrastructure as Code (e.g., Terraform, Pulumi)\nHands-on experience with database design, performance tuning, and scaling high-throughput data systems\nFamiliarity with building and maintaining CI/CD pipelines, automated testing, and modern DevOps practices\nStrong communication skills and ability to work effectively in a 
distributed, fast-paced environment\nComfortable operating in early-stage, high-ownership environments with evolving requirements\nBonus: Experience with React and TypeScript on the frontend, though this role leans backend/infrastructure\nBonus: Exposure to LLM-based systems, AI infrastructure, or agentic AI workflows</p>\n<p>Guiding Principles:\nCustomer‑First Outcomes\nBias for Simplicity\nQuality Is a Gate, Not an After‑Thought\nData‑Driven Choices\nTransparency &amp; Written Culture</p>\n<p>Other Benefits:\nGreat health insurance, dental, and vision\nGym and workspace stipends\nComputer and workspace enhancements\nUnlimited PTO\nOpportunity to play a critical role in building the foundations of the company and Engineering culture</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_c20d7221-4b5","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Zuma","sameAs":"https://www.zuma.com","logo":"https://logos.yubhub.co/zuma.com.png"},"x-apply-url":"https://jobs.lever.co/getzuma/da4d2130-954e-4b29-a9ef-3926b9bedba6","x-work-arrangement":"remote","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Node.js","API design","system architecture","cloud-based services","cloud infrastructure","Infrastructure as Code","database design","performance tuning","scaling high-throughput data systems","CI/CD pipelines","automated testing","modern DevOps practices"],"x-skills-preferred":["React","TypeScript","LLM-based systems","AI infrastructure","agentic AI workflows"],"datePosted":"2026-04-17T13:11:53.316Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"US and Canada"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Node.js, API design, system architecture, cloud-based services, 
cloud infrastructure, Infrastructure as Code, database design, performance tuning, scaling high-throughput data systems, CI/CD pipelines, automated testing, modern DevOps practices, React, TypeScript, LLM-based systems, AI infrastructure, agentic AI workflows"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_a40d099b-db6"},"title":"Solutions Engineer","description":"<p>We&#39;re looking for early members of our Sales team that can form deep partnerships with our prospects and customers to help them adopt and succeed on the next generation of database infrastructure.</p>\n<p>As a Solutions Engineer, you will partner with Sales and Customer Engineering throughout the pre-sales and post-sales journey as the technical expert helping customers solve their most challenging database problems. You will lead technical discovery to match customers&#39; business and technical objectives with PlanetScale&#39;s offerings. You will design and execute proof of value timelines that deliver on agreed-upon business outcomes and success criteria. You will design database migration strategies and work hands-on with customers to execute migrations to PlanetScale&#39;s PostgreSQL and Vitess platforms. You will assess workloads, analyze performance requirements, and recommend architecture, sizing, and optimization strategies. You will build tools, scripts, and automation that accelerate migrations and improve customer onboarding. You will create educational content including documentation, guides, blog posts, workshops, and videos. You will collaborate with Product and Engineering teams to advocate for customer needs and shape the platform.</p>\n<p>You have deep expertise in database systems including replication, high availability, sharding, performance tuning, and migration strategies. You are equally comfortable presenting architecture designs to executives and writing scripts to automate migration tasks. 
You thrive in customer-facing situations and translate technical concepts into business value for diverse audiences. You are self-motivated and can manage multiple engagements simultaneously with minimal oversight. You enjoy creating content and sharing knowledge through various formats. You are comfortable with occasional travel (&lt; 20%).</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_a40d099b-db6","directApply":true,"hiringOrganization":{"@type":"Organization","name":"PlanetScale","sameAs":"https://www.planetscale.com/","logo":"https://logos.yubhub.co/planetscale.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/planetscale/jobs/4052805009","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$160,000 - $250,000 USD","x-skills-required":["MySQL","PostgreSQL","Vitess","database migration","performance tuning","troubleshooting","cloud computing","scripting","automation"],"x-skills-preferred":["AWS Database Migration Service","logical replication tools","Kubernetes","cloud-native architectures","infrastructure-as-code tools","open-source projects","public speaking"],"datePosted":"2026-04-17T12:52:31.090Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Remote - EMEA, Remote - NA"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"MySQL, PostgreSQL, Vitess, database migration, performance tuning, troubleshooting, cloud computing, scripting, automation, AWS Database Migration Service, logical replication tools, Kubernetes, cloud-native architectures, infrastructure-as-code tools, open-source projects, public 
speaking","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":160000,"maxValue":250000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_bf2f7e1a-d9d"},"title":"Enterprise Support Engineer","description":"<p>Job Title: Enterprise Support Engineer</p>\n<p>We are seeking an experienced Enterprise Support Engineer to join our core engineering team. As an Enterprise Support Engineer, you will advise and handle support requests from enterprise customers on the PlanetScale platform.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Advise and handle support requests from enterprise customers on the PlanetScale platform.</li>\n<li>Become a customer-facing subject-matter expert for enterprise customers on the PlanetScale platform.</li>\n<li>Identify product gaps in a customer-specific context and work with Technical Account Management, Engineering and Sales Engineering teams to prioritize and escalate them.</li>\n<li>Be part of an on-call rotation for high-priority issues.</li>\n</ul>\n<p>Requirements:</p>\n<ul>\n<li>Experience supporting production databases and applications, preferably at scale.</li>\n<li>Experience with database internals and performance tuning, specifically for PostgreSQL and MySQL databases.</li>\n<li>Working knowledge of Kubernetes.</li>\n<li>Strong ability to communicate and deal directly with customers, whether in email, Slack, video conference, or in person.</li>\n</ul>\n<p>Nice to Have:</p>\n<ul>\n<li>Knowledge of common application deployment platforms and frameworks, such as Python, Go, Node, PHP.</li>\n<li>Experience with cloud platforms (AWS, GCP, Azure).</li>\n<li>Knowledge of monitoring, observability, and debugging tools.</li>\n<li>Contributions to open-source projects, especially in the database or infrastructure space.</li>\n</ul>\n<p>Why PlanetScale?</p>\n<p>We&#39;re redefining how high-growth companies 
manage data at scale, and we work with some of the most exciting brands in gaming, consumer tech, and B2B SaaS. As a Software Engineer, you&#39;ll be at the core of building the platform that powers world-class apps used by hundreds of millions of users worldwide. PlanetScale is a profitable company with a philosophy centered around building small teams of p99 individuals and is recognized as one of the fastest-growing companies in America.</p>\n<p>Total Compensation and Pay Transparency</p>\n<p>An employee&#39;s total compensation consists of base salary + variable comp where appropriate + benefits + equity. A member of our Talent Acquisition team will be happy to answer any further questions when we engage with you to begin the interview process.</p>\n<p>Salary Range: US $120,000 - $200,000</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_bf2f7e1a-d9d","directApply":true,"hiringOrganization":{"@type":"Organization","name":"PlanetScale","sameAs":"https://www.planetscale.com/","logo":"https://logos.yubhub.co/planetscale.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/planetscale/jobs/4009926009","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"US $120,000 - $200,000","x-skills-required":["PostgreSQL","MySQL","Kubernetes","database internals","performance tuning"],"x-skills-preferred":["Python","Go","Node","PHP","cloud platforms","monitoring","observability","debugging tools"],"datePosted":"2026-04-17T12:52:08.948Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Remote - NA, APAC, EMEA"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"PostgreSQL, MySQL, Kubernetes, database internals, performance tuning, Python, Go, Node, PHP, cloud platforms, monitoring, 
observability, debugging tools","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":120000,"maxValue":200000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_59d754a0-050"},"title":"Full Stack Software Engineer","description":"<p>About Cyngn</p>\n<p>Based in Mountain View, CA, Cyngn is a publicly-traded autonomous technology company that deploys self-driving industrial vehicles to factories, warehouses, and other facilities throughout North America.</p>\n<p>We are looking for innovative, motivated, and experienced leaders to join us and move this field forward. If you like to build, tinker, and create with a team of trusted and passionate colleagues, then Cyngn is the place for you.</p>\n<p>Key reasons to join Cyngn:</p>\n<ul>\n<li><p>We are small and big. With under 100 employees, Cyngn operates with the energy of a startup. On the other hand, we’re publicly traded. This means our employees not only work in close-knit teams with mentorship from company leaders; they also get access to the liquidity of our publicly-traded equity.</p>\n</li>\n<li><p>We build today and deploy tomorrow. Our autonomous vehicles aren’t just test concepts; they’re deployed to real clients right now. That means your work will have a tangible, visible impact.</p>\n</li>\n<li><p>We aren’t robots. We just develop them. We’re a welcoming, diverse team of sharp thinkers and kind humans. Collaboration and trust drive our creative environment. At Cyngn, everyone’s perspective matters, and that’s what powers our innovation.</p>\n</li>\n</ul>\n<p>About this role:</p>\n<p>Cyngn is building a cloud platform that helps customers monitor, manage, and optimize fleets of autonomous industrial vehicles in real time. 
As a Full Stack Engineer (Mid–Senior), you’ll ship features end-to-end, from Python backend services to TypeScript/JavaScript frontends, on a small, high-impact team.</p>\n<p>Responsibilities</p>\n<ul>\n<li><p>Build customer-facing web experiences: fleet dashboards, live views/maps, alerts, admin tools, and reporting using TypeScript/JavaScript.</p>\n</li>\n<li><p>Build and evolve backend services in Python that power fleet operations, integrations, data ingestion, and analytics.</p>\n</li>\n<li><p>Design and implement reliable APIs (REST and/or gRPC) that are well-documented and easy to integrate with customer systems.</p>\n</li>\n<li><p>Deliver real-time features (live vehicle state, events, notifications, operator workflows) using WebSockets and event-driven patterns.</p>\n</li>\n<li><p>Support “physical AI” workflows by connecting cloud software to autonomy/robotics systems: telemetry pipelines, command-and-control surfaces, and operational tooling that interacts with vehicles in the real world.</p>\n</li>\n<li><p>Use modern AI tools and agents to move faster and raise quality (and help build customer-facing copilot experiences where it makes sense).</p>\n</li>\n<li><p>Contribute to digital-twin simulation + validation loops (where applicable): support workflows that use simulation to test behaviors, validate releases, and reproduce field issues.</p>\n</li>\n<li><p>Raise engineering quality through testing, code reviews, observability, and pragmatic reliability/performance improvements.</p>\n</li>\n<li><p>Own meaningful chunks of the product: shape solutions, make tradeoffs, and drive work to completion.</p>\n</li>\n</ul>\n<p>Qualifications</p>\n<ul>\n<li><p>2–4+ years of professional software engineering experience.</p>\n</li>\n<li><p>Strong production experience with:</p>\n<ul>\n<li><p>Python (backend services, APIs, data workflows)</p>\n</li>\n<li><p>TypeScript or JavaScript (frontend)</p>\n</li>\n<li><p>You’ve shipped and supported user-facing web applications 
(not just internal tools).</p>\n</li>\n<li><p>You’re comfortable building APIs and working with databases (SQL preferred; NoSQL is a plus).</p>\n</li>\n<li><p>You communicate clearly, take ownership, and bring a low-ego, collaborative approach.</p>\n</li>\n<li><p>You care about software that’s reliable in production, not just “works locally.”</p>\n</li>\n</ul>\n</li>\n</ul>\n<p>Bonus Qualifications</p>\n<ul>\n<li><p>Real-time systems experience: WebSockets, SSE, streaming updates, pub/sub.</p>\n</li>\n<li><p>Event-driven systems / messaging: Kafka, RabbitMQ, Pulsar, etc.</p>\n</li>\n<li><p>Experience with telemetry-heavy or operational products: IoT, robotics, autonomy, fleet/dispatch, industrial software.</p>\n</li>\n<li><p>Experience building analytics features: reporting, aggregations, operational metrics, customer-facing insights.</p>\n</li>\n<li><p>Familiarity with scaling patterns: caching, background jobs, rate limiting, performance tuning.</p>\n</li>\n<li><p>Strong habits using AI coding assistants/agents responsibly (verification, testing, high-signal reviews).</p>\n</li>\n<li><p>Exposure to physical AI simulation tooling (e.g., NVIDIA Omniverse / Isaac Sim) or similar environments.</p>\n</li>\n</ul>\n<p>Benefits &amp; Perks</p>\n<ul>\n<li><p>Health benefits (Medical, Dental, Vision, HSA and FSA (Health &amp; Dependent Daycare), Employee Assistance Program, 1:1 Health Concierge)</p>\n</li>\n<li><p>Life, Short-term and long-term disability insurance (Cyngn funds 100% of premiums)</p>\n</li>\n<li><p>Company 401(k)</p>\n</li>\n<li><p>Commuter Benefits</p>\n</li>\n<li><p>Flexible vacation policy</p>\n</li>\n<li><p>Remote or hybrid work opportunities</p>\n</li>\n<li><p>Sabbatical leave opportunity after 5 years with the company</p>\n</li>\n<li><p>Paid Parental Leave</p>\n</li>\n<li><p>Daily lunches for in-office employees</p>\n</li>\n<li><p>Monthly meal and tech allowances for remote employees</p>\n</li>\n</ul>\n<p 
style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_59d754a0-050","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Cyngn","sameAs":"https://www.cyngn.com/","logo":"https://logos.yubhub.co/cyngn.com.png"},"x-apply-url":"https://jobs.lever.co/cyngn/ee7518e1-7f77-4655-b07d-ea968ec82127","x-work-arrangement":"onsite","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":"USD 153,000-171,000 per-year-salary","x-skills-required":["Python","TypeScript","JavaScript","WebSockets","gRPC","SQL","NoSQL","APIs","web development","real-time systems","event-driven systems","messaging","IoT","robotics","autonomy","fleet/dispatch","industrial software","analytics features","reporting","aggregations","operational metrics","customer-facing insights","scaling patterns","caching","background jobs","rate limiting","performance tuning","AI coding assistants/agents","physical AI simulation tooling"],"x-skills-preferred":[],"datePosted":"2026-04-17T12:29:08.151Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Mountain View"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, TypeScript, JavaScript, WebSockets, gRPC, SQL, NoSQL, APIs, web development, real-time systems, event-driven systems, messaging, IoT, robotics, autonomy, fleet/dispatch, industrial software, analytics features, reporting, aggregations, operational metrics, customer-facing insights, scaling patterns, caching, background jobs, rate limiting, performance tuning, AI coding assistants/agents, physical AI simulation 
tooling","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":153000,"maxValue":171000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_e231d72c-b82"},"title":"Senior Software Engineer, Backend (Berlin)","description":"<p>Join us on this thrilling journey to revolutionize the contact center workforce with AI. As a Senior full-stack engineer, with a backend focus, you will be at the forefront of shaping the future of customer engagement! You&#39;ll be instrumental in delivering timely, actionable insights that drive business growth from day one.</p>\n<p>We&#39;re building a state-of-the-art Customer Data Platform, visualizing relevant insights for businesses post-onboarding and guiding customer engagement across all touch-points. Be part of the team that&#39;s redefining the way businesses connect with their customers!</p>\n<p><strong>Responsibilities:</strong></p>\n<ul>\n<li>Design, implement, and maintain backend services and APIs to support applications.</li>\n<li>Build and optimize data storage solutions using Postgres, ClickHouse, and Elasticsearch to ensure high performance and scalability.</li>\n<li>Collaborate with cross-functional teams, including frontend engineers, data scientists, and machine learning engineers, to deliver end-to-end solutions.</li>\n<li>Monitor and troubleshoot performance issues in distributed systems and databases.</li>\n<li>Write clean, maintainable, and efficient code following best practices for backend development.</li>\n<li>Participate in code reviews, testing, and continuous integration efforts.</li>\n<li>Ensure security, scalability, and reliability of backend services.</li>\n<li>Analyze and improve system architecture, focusing on performance bottlenecks, scaling, and security.</li>\n</ul>\n<p><strong>Qualifications We Value:</strong></p>\n<ul>\n<li>Proven experience as a Backend 
Engineer with a focus on database design and system architecture.</li>\n<li>Strong expertise in ClickHouse or similar columnar databases for managing large-scale, real-time analytical queries.</li>\n<li>Hands-on experience with Elasticsearch for indexing and searching large datasets.</li>\n<li>Proficient in backend programming languages such as Python, Go.</li>\n<li>Experience with RESTful API design and development.</li>\n<li>Solid understanding of distributed systems, microservices architecture, and cloud infrastructure.</li>\n<li>Experience with performance tuning, data modeling, and query optimization.</li>\n<li>Strong problem-solving skills and attention to detail.</li>\n<li>Excellent communication and teamwork abilities.</li>\n</ul>\n<p><strong>Perks &amp; Benefits:</strong></p>\n<ul>\n<li>Paid parental leave to support you and your family</li>\n<li>Monthly Health &amp; Wellness allowance</li>\n<li>Work from home office stipend to help you succeed in a remote environment</li>\n<li>Lunch reimbursement for in-office employees</li>\n<li>PTO: 28 days in Germany</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_e231d72c-b82","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Cresta","sameAs":"https://www.cresta.ai/","logo":"https://logos.yubhub.co/cresta.ai.png"},"x-apply-url":"https://job-boards.greenhouse.io/cresta/jobs/4668107008","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Postgres","ClickHouse","Elasticsearch","Python","Go","RESTful API design and development","Distributed systems","Microservices architecture","Cloud infrastructure","Performance tuning","Data modeling","Query 
optimization"],"x-skills-preferred":[],"datePosted":"2026-04-17T12:26:29.315Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Berlin, Germany (Hybrid)"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Postgres, ClickHouse, Elasticsearch, Python, Go, RESTful API design and development, Distributed systems, Microservices architecture, Cloud infrastructure, Performance tuning, Data modeling, Query optimization"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_4075c787-328"},"title":"Member of Technical Staff - Large Scale Data Infrastructure","description":"<p>We&#39;re looking for infrastructure engineers to work at peta-to-exabyte scale. You&#39;ll build data systems behind the largest training runs on thousands of GPUs, where fixing one bottleneck lets researchers train the next breakthrough model.</p>\n<p><strong>What You&#39;ll Work On:</strong></p>\n<ul>\n<li>Scalable data loaders for training runs across thousands of GPUs</li>\n<li>Efficient storage and retrieval systems for petabyte-scale datasets</li>\n<li>Multi-cloud object storage abstraction</li>\n<li>Execute large-scale data migrations across storage systems and providers</li>\n<li>Debug and resolve performance bottlenecks in distributed data loading</li>\n</ul>\n<p><strong>Technical Focus:</strong></p>\n<ul>\n<li>Python, PyTorch DataLoader internals</li>\n<li>Object storage (e.g. 
S3, Azure Blob, GCS)</li>\n<li>Parquet for metadata</li>\n<li>Video: ffmpeg, PyAV, codec fundamentals</li>\n</ul>\n<p><strong>What We&#39;re Looking For:</strong></p>\n<ul>\n<li>Built and operated data pipelines at petabyte scale</li>\n<li>Optimized data loading</li>\n<li>Worked with petabyte-scale video and image datasets</li>\n<li>Written processing jobs operating on millions of files</li>\n<li>Debugged distributed system bottlenecks across large fleets of machines</li>\n</ul>\n<p><strong>Nice to Have:</strong></p>\n<ul>\n<li>Experience streaming dataset formats (e.g. WebDataset)</li>\n<li>Video codec internals and frame-accurate seeking</li>\n<li>Distributed systems experience</li>\n<li>Slurm and Kubernetes for job orchestration</li>\n<li>Experience with object storage performance tuning across providers</li>\n</ul>\n<p><strong>How We Work Together:</strong></p>\n<ul>\n<li>We&#39;re a distributed team with real offices that people actually use. Depending on your role, you&#39;ll either join us in Freiburg or SF at least 2 days a week (or one full week every other week), or work remotely with a monthly in-person week to stay connected. We&#39;ll cover reasonable travel costs to make this possible. We think in-person time matters, and we&#39;ve structured things to make it accessible to all. We&#39;ll discuss what this will look like for the role during our interview process.</li>\n</ul>\n<p><strong>Everything we do is grounded in four values:</strong></p>\n<ul>\n<li>Obsessed. We are a frontier research lab. The science has to be right, the understanding deep, the product beautiful.</li>\n<li>Low Ego. The work speaks. The best idea wins, no matter who said it. Credit is shared. Nobody is above any task.</li>\n<li>Bold. We take the ambitious bet. We ship, we do not wait for conditions to be perfect.</li>\n<li>Kind. People over politics. We treat each other with genuine warmth. 
Agency without empathy creates chaos.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_4075c787-328","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Black Forest Labs","sameAs":"https://www.blackforestlabs.com/","logo":"https://logos.yubhub.co/blackforestlabs.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/blackforestlabs/jobs/5019171008","x-work-arrangement":"hybrid","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"$180,000–$300,000 USD + Equity","x-skills-required":["Python","PyTorch","Data Loader Internals","Object Storage","Parquet","Video","ffmpeg","PyAV","Codec Fundamentals"],"x-skills-preferred":["WebDataset","Distributed Systems","Slurm","Kubernetes","Object Storage Performance Tuning"],"datePosted":"2026-04-17T12:26:28.781Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Freiburg (Germany), San Francisco (USA)"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, PyTorch, Data Loader Internals, Object Storage, Parquet, Video, ffmpeg, PyAV, Codec Fundamentals, WebDataset, Distributed Systems, Slurm, Kubernetes, Object Storage Performance Tuning","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":180000,"maxValue":300000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_c80b6ac1-620"},"title":"Senior Software Engineer","description":"<p>We&#39;re looking for experienced distributed-systems engineers to join our Core Product team and advance the next generation of Alluxio&#39;s data-orchestration engine - the foundation for AI and analytics at global scale.</p>\n<p>As a Senior Software Engineer, you&#39;ll work on high-impact systems 
problems such as:</p>\n<ul>\n<li>Optimizing metadata management, caching, and replication across thousands of nodes.</li>\n<li>Designing concurrent, fault-tolerant services for multi-region and multi-cloud environments.</li>\n<li>Evolving Alluxio&#39;s storage abstraction and scheduling layer to support large-scale AI/ML data pipelines.</li>\n<li>Collaborating with internal product teams to push the limits of distributed I/O performance.</li>\n</ul>\n<p>This is a hands-on, architecture-plus-implementation role for engineers who love deep systems work and want visible impact in a small, senior, highly technical team.</p>\n<p><strong>What You&#39;ll Own</strong></p>\n<ul>\n<li>Cache and metadata enhancements - design and implement improvements to caching policies, eviction logic, and metadata scalability to increase performance and reliability.</li>\n<li>Data path optimization - refine I/O pipelines for S3/GCS/HDFS/Posix to reduce latency and improve throughput using concurrency and scheduling techniques.</li>\n<li>Distributed systems reliability - strengthen consistency, replication, and fault-tolerance mechanisms across large-scale clusters.</li>\n<li>Feature development and integration - collaborate with product and solution-engineering teams to deliver features that support AI and analytics workloads.</li>\n<li>Code quality and peer collaboration - participate in design reviews, provide constructive feedback, and ensure robust testing and observability in production systems.</li>\n</ul>\n<p><strong>What We&#39;re Looking For</strong></p>\n<ul>\n<li>Strong computer-science fundamentals and a passion for large-scale distributed systems.</li>\n<li>Professional experience developing in Java, C++, or Go.</li>\n<li>Practical knowledge of concurrency, replication, distributed coordination, and performance tuning.</li>\n<li>Experience with distributed storage, caching, or data-access layers (e.g., Spark, Presto, Hadoop, Kubernetes).</li>\n<li>Bachelor&#39;s or advanced 
degree in Computer Science or related technical field (or equivalent experience).</li>\n</ul>\n<p><strong>Why Alluxio?</strong></p>\n<ul>\n<li>Build infrastructure trusted by the world&#39;s largest AI and data-driven companies.</li>\n<li>Join a small, senior engineering team where your designs shape the product&#39;s evolution.</li>\n<li>Work directly with the original creators of open-source Alluxio.</li>\n<li>A culture of empathy, curiosity, and ownership - where engineers collaborate closely to solve hard problems.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_c80b6ac1-620","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Alluxio","sameAs":"https://alluxio.io","logo":"https://logos.yubhub.co/alluxio.io.png"},"x-apply-url":"https://jobs.lever.co/alluxio/1f58cf1a-9182-4f86-b51f-c5e7f3b9f938","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Java","C++","Go","Concurrency","Replication","Distributed Coordination","Performance Tuning","Distributed Storage","Caching","Data-Access Layers"],"x-skills-preferred":[],"datePosted":"2026-04-17T12:22:59.144Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Berkeley"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Java, C++, Go, Concurrency, Replication, Distributed Coordination, Performance Tuning, Distributed Storage, Caching, Data-Access Layers"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_3999ca5d-6fc"},"title":"Engineering Manager, Privy","description":"<p>About Privy</p>\n<p>Privy is a developer tooling company that empowers users to take control of their online presence. 
We&#39;re looking for an experienced Engineering Manager to lead and grow a team of Infrastructure engineers.</p>\n<p>Responsibilities</p>\n<ul>\n<li>Lead and grow a high-performing team of Infrastructure engineers</li>\n<li>Drive the future vision of infrastructure alongside talented infrastructure engineers</li>\n<li>Hold the team accountable to excellence in quality, throughput, and performance</li>\n<li>Ensure the team is working on the right scope of work and projects, align decisions with business impact</li>\n<li>Fill gaps as a player-coach; review PRs, write and review design docs, investigate incidents</li>\n<li>Coach engineers towards growth and their career goals</li>\n</ul>\n<p>Benefits</p>\n<ul>\n<li>Competitive salary and benefits package</li>\n<li>Opportunity to work with a talented team of engineers</li>\n<li>Collaborative and dynamic work environment</li>\n</ul>\n<p>Requirements</p>\n<ul>\n<li>Deep ownership and a high-level perspective on driving overall business impact</li>\n<li>Performance-oriented mindset, with a high bar for quality and excellence</li>\n<li>Technical excellence to be able to independently evaluate quality and technical feedback</li>\n<li>High emotional maturity, insightfulness, and care</li>\n<li>Strong past experience as a manager and leader</li>\n</ul>\n<p>Preferred Qualifications</p>\n<ul>\n<li>Experience with designing and operating systems supporting hundreds of millions of users</li>\n<li>Secure enclave platforms, like AWS Nitro Enclaves</li>\n<li>Observability, incident response, capacity planning, performance tuning, and infrastructure automation (IaC, CI/CD for infra)</li>\n<li>Background in building low-latency, high-throughput systems for trading or payment processing</li>\n<li>Any blend of public cloud/BYOC architectures</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_3999ca5d-6fc","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Privy","sameAs":"https://privy.com","logo":"https://logos.yubhub.co/privy.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/stripe/jobs/7729216","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["secure enclave platforms","observability","incident response","capacity planning","performance tuning","infrastructure automation","IaC","CI/CD for infra"],"x-skills-preferred":["designing and operating systems","low-latency, high-throughput systems","public cloud/BYOC architectures"],"datePosted":"2026-03-31T18:18:40.254Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"NYC-Privy"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"secure enclave platforms, observability, incident response, capacity planning, performance tuning, infrastructure automation, IaC, CI/CD for infra, designing and operating systems, low-latency, high-throughput systems, public cloud/BYOC architectures"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_2f5765c8-227"},"title":"SAP Senior Transportation Management (TM) Technical Consultant","description":"<p>We are seeking an experienced SAP Senior Transportation Management (TM) Technical Consultant to lead technical design, development, and integration efforts within SAP TM implementations and support projects. 
The ideal candidate will possess strong expertise in SAP TM architecture, custom development, enhancements, integrations, and performance optimization across complex logistics environments.</p>\n<p>This role requires deep technical knowledge of SAP Transportation Management (TM), integration with SAP ERP/S4HANA systems, and strong experience in ABAP and SAP technical frameworks.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Develop, enhance and lead technical design and development activities for SAP TM implementations</li>\n<li>Act as a central sparring partner within the technical team</li>\n<li>Provide technical leadership and mentor junior consultants</li>\n<li>Control/improve quality of technical solutions within the technical team</li>\n<li>Develop custom objects using ABAP, OO-ABAP, BOPF, BADIs, Enhancements, User Exits</li>\n<li>Build and enhance Fiori/UI5 applications for TM processes</li>\n<li>Develop custom reports, forms (Adobe Forms/SmartForms), interfaces, and workflows</li>\n<li>Implement integration/interfaces of SAP TM with SAP ECC / SAP S/4HANA, SAP eWM, SAP Event Management, external carrier systems via IDocs, RFC, Proxy, OData, REST, PI/PO, CPI</li>\n<li>Work with middleware technologies including SAP CPI and PI/PO</li>\n<li>Ensure seamless data exchange across logistics systems</li>\n<li>Perform performance optimization and troubleshooting</li>\n<li>Support cutover activities and hypercare</li>\n<li>Prepare technical documentation (FS/TS, design documents, test scripts)</li>\n<li>Participate in client workshops and requirement gathering sessions in close alignment with functional consultant</li>\n</ul>\n<p><strong>Qualifications</strong></p>\n<ul>\n<li>8+ years of SAP technical experience with at least 5+ years of hands-on experience in SAP TM technical development</li>\n<li>Minimum 2-3 full lifecycle SAP TM implementation projects</li>\n<li>Strong expertise in:</li>\n<li>ABAP, OO-ABAP</li>\n<li>BOPF
framework</li>\n<li>BRF+</li>\n<li>PPF</li>\n<li>Enhancements &amp; BADIs</li>\n<li>Web Services / OData</li>\n<li>Strong debugging and performance tuning skills</li>\n<li>Strong communication and stakeholder management skills</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<ul>\n<li>Start: by arrangement - always on the 1st and 15th of the month</li>\n<li>Working hours: full-time (40h); 27 vacation days</li>\n<li>Employment contract: Unlimited</li>\n<li>Line of work: Consulting</li>\n<li>Language skills: Fluency in written and spoken English, and German nice to have</li>\n<li>Flexibility &amp; willingness to travel</li>\n<li>Other: a valid work permit</li>\n</ul>\n<p><strong>About MHP</strong></p>\n<p>MHP is a technology and business partner that digitizes its customers&#39; processes and products, supporting them in their IT transformations along the entire value chain. The company serves more than 300 customers worldwide, including leading corporations and innovative medium-sized companies.</p>\n<p><strong>Culture</strong></p>\n<p>We are an ambitious IT consulting company with a strong and clear mission. We create digital futures with sustainable impact for the world. Our community consists of like-minded innovators, change-seekers, and passionate entrepreneurial thinkers. Our fully committed attitude towards our goals makes us the perfect sparring partner for your career, fueling your growth as an expert in your field while expanding your business network.</p>\n<p>MHP is the place for:</p>\n<ul>\n<li>Entrepreneurial thinking. We encourage you to tap into your entrepreneurial flair. Our entrepreneurship creates capacity for development and freedom. This is how we promote growth and achieve ambitious goals.</li>\n<li>Co-creation. We look forward to new impulses, creativity, and drive. See every day as a chance to shape the future alongside other passionate, like-minded colleagues.</li>\n<li>Impact. 
We encourage you to showcase your authenticity and let your expertise be at the heart of change.</li>\n<li>Growth mindset. Together, we will develop a tailored career path that serves your development as an expert, a leader, and a visionary.</li>\n</ul>\n<p><strong>Experience Level</strong></p>\n<p>Senior</p>\n<p><strong>Employment Type</strong></p>\n<p>Full-time</p>\n<p><strong>Workplace Type</strong></p>\n<p>Onsite</p>\n<p><strong>Category</strong></p>\n<p>IT</p>\n<p><strong>Industry</strong></p>\n<p>Consulting</p>\n<p><strong>Salary Range</strong></p>\n<p>Not stated</p>\n<p><strong>Required Skills</strong></p>\n<ul>\n<li>SAP TM</li>\n<li>ABAP</li>\n<li>OO-ABAP</li>\n<li>BOPF framework</li>\n<li>BRF+</li>\n<li>PPF</li>\n<li>Enhancements &amp; BADIs</li>\n<li>Web Services / OData</li>\n<li>SAP CPI</li>\n<li>PI/PO</li>\n<li>IDocs</li>\n<li>RFC</li>\n<li>Proxy</li>\n<li>OData</li>\n<li>REST</li>\n<li>Fiori/UI5</li>\n<li>Adobe Forms/SmartForms</li>\n<li>Interfaces</li>\n<li>Workflows</li>\n<li>SAP ECC</li>\n<li>SAP/S4HANA</li>\n<li>SAP eWM</li>\n<li>SAP Event Management</li>\n<li>External carrier systems</li>\n<li>Debugging</li>\n<li>Performance tuning</li>\n<li>Communication</li>\n<li>Stakeholder management</li>\n</ul>\n<p><strong>Preferred Skills</strong></p>\n<ul>\n<li>German</li>\n<li>Fluency in written and spoken English</li>\n<li>Flexibility &amp; willingness to travel</li>\n<li>Entrepreneurial thinking</li>\n<li>Co-creation</li>\n<li>Impact</li>\n<li>Growth mindset</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_2f5765c8-227","directApply":true,"hiringOrganization":{"@type":"Organization","name":"MHP","sameAs":"https://jobs.porsche.com","logo":"https://logos.yubhub.co/jobs.porsche.com.png"},"x-apply-url":"https://jobs.porsche.com/index.php?ac=jobad&id=19968","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"Full-time","x-salary-range":null,"x-skills-required":["SAP TM","ABAP","OO-ABAP","BOPF framework","BRF+","PPF","Enhancements & BADIs","Web Services / OData","SAP CPI","PI/PO","IDocs","RFC","Proxy","OData","REST","Fiori/UI5","Adobe Forms/SmartForms","Interfaces","Workflows","SAP ECC","SAP/S4HANA","SAP eWM","SAP Event Management","External carrier systems","Debugging","Performance tuning","Communication","Stakeholder management"],"x-skills-preferred":["German","Fluency in written and spoken English","Flexibility & willingness to travel","Entrepreneurial thinking","Co-creation","Impact","Growth mindset"],"datePosted":"2026-03-08T22:15:13.541Z","employmentType":"FULL_TIME","occupationalCategory":"IT","industry":"Consulting","skills":"SAP TM, ABAP, OO-ABAP, BOPF framework, BRF+, PPF, Enhancements & BADIs, Web Services / OData, SAP CPI, PI/PO, IDocs, RFC, Proxy, OData, REST, Fiori/UI5, Adobe Forms/SmartForms, Interfaces, Workflows, SAP ECC, SAP/S4HANA, SAP eWM, SAP Event Management, External carrier systems, Debugging, Performance tuning, Communication, Stakeholder management, German, Fluency in written and spoken English, Flexibility & willingness to travel, Entrepreneurial thinking, Co-creation, Impact, Growth mindset"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_325c968b-d59"},"title":"Inference Technical Lead, Sora","description":"<p><strong>Inference Technical Lead, Sora</strong></p>\n<p><strong>Location</strong></p>\n<p>San Francisco</p>\n<p><strong>Employment Type</strong></p>\n<p>Full 
time</p>\n<p><strong>Location Type</strong></p>\n<p>Hybrid</p>\n<p><strong>Department</strong></p>\n<p>Research</p>\n<p><strong>Compensation</strong></p>\n<ul>\n<li>$380K • Offers Equity</li>\n</ul>\n<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>\n<p><strong>Benefits</strong></p>\n<ul>\n<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>\n</ul>\n<ul>\n<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>\n</ul>\n<ul>\n<li>401(k) retirement plan with employer match</li>\n</ul>\n<ul>\n<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>\n</ul>\n<ul>\n<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>\n</ul>\n<ul>\n<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>\n</ul>\n<ul>\n<li>Mental health and wellness support</li>\n</ul>\n<ul>\n<li>Employer-paid basic life and disability coverage</li>\n</ul>\n<ul>\n<li>Annual learning and development stipend to fuel your professional growth</li>\n</ul>\n<ul>\n<li>Daily meals in our offices, and meal delivery credits as eligible</li>\n</ul>\n<ul>\n<li>Relocation support for eligible employees</li>\n</ul>\n<ul>\n<li>Additional taxable fringe benefits, such as charitable donation 
matching and wellness stipends, may also be provided.</li>\n</ul>\n<p><strong>About the Team</strong></p>\n<p>The Sora team is pioneering multimodal capabilities for OpenAI’s foundation models. We’re a hybrid research and product team focused on integrating multimodal functionalities into our AI products, ensuring they are reliable, user-friendly, and aligned with our mission of broad societal benefit.</p>\n<p><strong>About the Role</strong></p>\n<p>We’re looking for a GPU Inference Engineer to contribute to improvements in model serving efficiency for Sora. This is a high-impact role where you’ll drive initiatives to optimize inference performance and scalability. You’ll also be engaged in model design, helping our researchers develop inference-friendly models.</p>\n<p><strong>This role is critical to scaling the team’s broader goals - it will directly enable leadership to focus on higher-leverage initiatives by building a stronger technical foundation.</strong></p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Perform engineering efforts focused on improving model serving, inference performance, and system efficiency</li>\n</ul>\n<ul>\n<li>Drive optimizations from a kernel and data movement perspective to improve system throughput and reliability</li>\n</ul>\n<ul>\n<li>Partner closely with research and product teams to ensure our models perform effectively at scale</li>\n</ul>\n<ul>\n<li>Design, build, and improve critical serving infrastructure to support Sora’s growth and reliability needs</li>\n</ul>\n<p><strong>Requirements</strong></p>\n<ul>\n<li>Have deep expertise in model performance optimization, particularly at the inference layer</li>\n</ul>\n<ul>\n<li>Have a strong background in kernel-level systems, data movement, and low-level performance tuning</li>\n</ul>\n<ul>\n<li>Are excited about scaling high-performing AI systems that serve real-world, multimodal workloads</li>\n</ul>\n<ul>\n<li>Can navigate ambiguity, set technical
direction, and drive complex initiatives to completion</li>\n</ul>\n<p><strong>This role is based in San Francisco, CA. We use a hybrid work model of 3 days in the office per week and offer relocation assistance to new employees.</strong></p>\n<p><strong>About OpenAI</strong></p>\n<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_325c968b-d59","directApply":true,"hiringOrganization":{"@type":"Organization","name":"OpenAI","sameAs":"https://jobs.ashbyhq.com","logo":"https://logos.yubhub.co/openai.com.png"},"x-apply-url":"https://jobs.ashbyhq.com/openai/3c2d1178-777f-4613-a084-75a3d37cd1af","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$380K • Offers Equity","x-skills-required":["GPU Inference Engineer","Model Performance Optimization","Kernel-Level Systems","Data Movement","Low-Level Performance Tuning"],"x-skills-preferred":["AI Systems","Multimodal Workloads","Complex Initiatives","Technical Direction"],"datePosted":"2026-03-06T18:42:26.117Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"GPU Inference Engineer, Model Performance Optimization, Kernel-Level Systems, Data Movement, Low-Level Performance Tuning, AI Systems,
Multimodal Workloads, Complex Initiatives, Technical Direction","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":380000,"maxValue":380000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_bde5fe7e-c59"},"title":"Backend Engineer, Consumer Devices","description":"<p><strong>Location</strong></p>\n<p>San Francisco</p>\n<p><strong>Employment Type</strong></p>\n<p>Full time</p>\n<p><strong>Department</strong></p>\n<p>Consumer Products</p>\n<p><strong>Compensation</strong></p>\n<ul>\n<li>$293K – $325K • Offers Equity</li>\n</ul>\n<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>\n<ul>\n<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>\n</ul>\n<ul>\n<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>\n</ul>\n<ul>\n<li>401(k) retirement plan with employer match</li>\n</ul>\n<ul>\n<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>\n</ul>\n<ul>\n<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>\n</ul>\n<ul>\n<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local 
law)</li>\n</ul>\n<ul>\n<li>Mental health and wellness support</li>\n</ul>\n<ul>\n<li>Employer-paid basic life and disability coverage</li>\n</ul>\n<ul>\n<li>Annual learning and development stipend to fuel your professional growth</li>\n</ul>\n<ul>\n<li>Daily meals in our offices, and meal delivery credits as eligible</li>\n</ul>\n<ul>\n<li>Relocation support for eligible employees</li>\n</ul>\n<ul>\n<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>\n</ul>\n<p>More details about our benefits are available to candidates during the hiring process.</p>\n<p>This role is at-will and OpenAI reserves the right to modify base pay and other compensation components at any time based on individual performance, team or company results, or market conditions.</p>\n<p><strong>About the Team</strong></p>\n<p>The <strong>Software Engineering</strong> team is responsible for designing and building the scalable, performant, and secure backend systems that power our products—from early prototypes to large-scale deployments. We collaborate closely with product, hardware, and full-stack teams to ensure our infrastructure enables fast iteration while setting a strong foundation for long-term growth.</p>\n<p><strong>About the Role</strong></p>\n<p>As a <strong>Backend Engineer</strong>, you will design and build services, APIs, and infrastructure that support evolving product needs. You’ll apply a deep understanding of backend systems and maintain enough end-to-end context—from hardware to cloud—to guide technical decisions that best serve the product and team.</p>\n<p>We’re looking for engineers who thrive in fast-paced, collaborative environments and care deeply about building robust systems that scale.</p>\n<p>This role is based in <strong>San Francisco, CA</strong>. 
We use a <strong>hybrid work model</strong> of four days in the office per week and offer <strong>relocation assistance</strong> to new employees.</p>\n<p><strong>In this role, you will:</strong></p>\n<ul>\n<li>Architect, build, and maintain high-performance, secure backend systems.</li>\n<li>Design APIs, data models, and infrastructure to support evolving product needs.</li>\n<li>Balance near-term development velocity with long-term maintainability and scalability.</li>\n<li>Collaborate with cross-functional teams to ensure cohesive, end-to-end solutions.</li>\n</ul>\n<p><strong>You might thrive in this role if you:</strong></p>\n<ul>\n<li>Have 7+ years of professional software engineering experience, with a focus on backend systems.</li>\n<li>Have a proven track record of building and scaling systems from early stage to large scale.</li>\n<li>Are proficient with Python and Go, and familiar with a range of server-side technologies.</li>\n<li>Have a strong grasp of system design, performance optimization, and security best practices.</li>\n<li>Can reason about full-stack tradeoffs from hardware through cloud infrastructure.</li>\n<li><em>(Nice to have)</em> Have experience with distributed systems and cloud architectures.</li>\n<li><em>(Nice to have)</em> Bring a background in instrumentation, analytics, and performance tuning.</li>\n<li><em>(Nice to have)</em> Are familiar with hardware-cloud integrations or applied AI services.</li>\n</ul>\n<p><strong>About OpenAI</strong></p>\n<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products.
AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_bde5fe7e-c59","directApply":true,"hiringOrganization":{"@type":"Organization","name":"OpenAI","sameAs":"https://jobs.ashbyhq.com","logo":"https://logos.yubhub.co/openai.com.png"},"x-apply-url":"https://jobs.ashbyhq.com/openai/8e301350-62fb-4251-bc34-c7036498f08c","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$293K – $325K • Offers Equity","x-skills-required":["Python","Go","server-side technologies","system design","performance optimization","security best practices"],"x-skills-preferred":["distributed systems","cloud architectures","instrumentation","analytics","performance tuning","hardware-cloud integrations","applied AI services"],"datePosted":"2026-03-06T18:30:44.602Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, Go, server-side technologies, system design, performance optimization, security best practices, distributed systems, cloud architectures, instrumentation, analytics, performance tuning, hardware-cloud integrations, applied AI services","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":293000,"maxValue":325000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_32d33889-c44"},"title":"Software Engineer, Caching Infrastructure","description":"<p><strong>Software Engineer, Caching 
Infrastructure</strong></p>\n<p><strong>Location</strong></p>\n<p>San Francisco</p>\n<p><strong>Employment Type</strong></p>\n<p>Full time</p>\n<p><strong>Department</strong></p>\n<p>Applied AI</p>\n<p><strong>Compensation</strong></p>\n<ul>\n<li>$230K – $385K • Offers Equity</li>\n</ul>\n<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>\n<ul>\n<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>\n</ul>\n<ul>\n<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>\n</ul>\n<ul>\n<li>401(k) retirement plan with employer match</li>\n</ul>\n<ul>\n<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>\n</ul>\n<ul>\n<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>\n</ul>\n<ul>\n<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>\n</ul>\n<ul>\n<li>Mental health and wellness support</li>\n</ul>\n<ul>\n<li>Employer-paid basic life and disability coverage</li>\n</ul>\n<ul>\n<li>Annual learning and development stipend to fuel your professional growth</li>\n</ul>\n<ul>\n<li>Daily meals in our offices, and meal delivery credits as eligible</li>\n</ul>\n<ul>\n<li>Relocation support for eligible employees</li>\n</ul>\n<ul>\n<li>Additional 
taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>\n</ul>\n<p>More details about our benefits are available to candidates during the hiring process.</p>\n<p>This role is at-will and OpenAI reserves the right to modify base pay and other compensation components at any time based on individual performance, team or company results, or market conditions.</p>\n<p><strong>About the Team</strong></p>\n<p>At OpenAI, we’re building safe and beneficial artificial general intelligence. We deploy our models through ChatGPT, our APIs, and other cutting-edge products. Behind the scenes, making these systems fast, reliable, and cost-efficient requires world-class infrastructure.</p>\n<p>The Caching Infrastructure team is responsible for building a caching layer that powers many critical use cases at OpenAI. We aim to provide a high-availability, multi-tenant cache platform that scales automatically with workload, minimizes tail latency, and supports a diverse range of use cases.</p>\n<p>We’re looking for an experienced engineer to help design and scale this critical infrastructure.
The ideal candidate has deep experience in distributed caching systems (e.g., Redis, Memcached), networking fundamentals, and Kubernetes-based service orchestration.</p>\n<p><strong>In This Role, You Will:</strong></p>\n<ul>\n<li>Design, build, and operate OpenAI’s multi-tenant caching platform used across inference, identity, quota, and product experiences.</li>\n</ul>\n<ul>\n<li>Define the long-term vision and roadmap for caching as a core infra capability, balancing performance, durability, and cost.</li>\n</ul>\n<ul>\n<li>Collaborate with other infra teams (e.g., networking, observability, databases) and product teams to ensure our caching platform meets their needs.</li>\n</ul>\n<p><strong>You Might Thrive In This Role If You:</strong></p>\n<ul>\n<li>Have 5+ years of experience building and scaling distributed systems, with a strong focus on caching, load balancing, or storage systems.</li>\n</ul>\n<ul>\n<li>Have deep expertise with Redis, Memcached, or similar solutions, including clustering, durability configurations, client-side connection patterns, and performance tuning.</li>\n</ul>\n<ul>\n<li>Have production experience with Kubernetes, service meshes (e.g., Envoy), and autoscaling systems.</li>\n</ul>\n<ul>\n<li>Think rigorously about latency, reliability, throughput, and cost in designing platform capabilities.</li>\n</ul>\n<ul>\n<li>Thrive in a fast-paced environment and enjoy balancing pragmatic engineering with long-term technical excellence.</li>\n</ul>\n<p><strong>About OpenAI</strong></p>\n<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products.
AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_32d33889-c44","directApply":true,"hiringOrganization":{"@type":"Organization","name":"OpenAI","sameAs":"https://jobs.ashbyhq.com","logo":"https://logos.yubhub.co/openai.com.png"},"x-apply-url":"https://jobs.ashbyhq.com/openai/a20b7fc6-6f01-4618-ba35-37b40083f93e","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$230K – $385K • Offers Equity","x-skills-required":["distributed caching systems","Redis","Memcached","Kubernetes","service meshes","autoscaling systems"],"x-skills-preferred":["clustering","durability configurations","client-side connection patterns","performance tuning"],"datePosted":"2026-03-06T18:24:00.812Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"distributed caching systems, Redis, Memcached, Kubernetes, service meshes, autoscaling systems, clustering, durability configurations, client-side connection patterns, performance tuning","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":230000,"maxValue":385000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_a4115e45-d99"},"title":"Senior Software Engineer","description":"<p><strong>Summary</strong></p>\n<p>Microsoft AI are looking for a talented Senior Software Engineer at their Vancouver office. 
This role sits at the heart of strategic decision-making, turning market data into actionable insights for a company that&#39;s revolutionising digital content technology. You&#39;ll work directly with leadership to shape the company&#39;s direction in the content market.</p>\n<p><strong>About the Role</strong></p>\n<p>As a Senior Software Engineer in the TSI team, you will directly impact billions of users by delivering safe, high-quality, and engaging content across products like Windows, Edge, and Outlook. You’ll apply advanced AI and LLM-based techniques to optimize content delivery and user experience. This opportunity will allow you to accelerate your career growth, deepen your understanding of large-scale content systems, and sharpen your skills in AI-driven engineering.</p>\n<p><strong>Accountabilities</strong></p>\n<ul>\n<li>Independently uses appropriate artificial intelligence (AI) tools and practices across the software development lifecycle (SDLC) in a disciplined manner.</li>\n<li>Collaborates with and guides appropriate internal (e.g., product manager, privacy/security subject matter expert, technical lead) and external (e.g. 
customer escalation team, public forums) stakeholders to determine and confirm customer/user requirements for a project/sub-section of a product/solution.</li>\n</ul>\n<p><strong>The Candidate we&#39;re looking for</strong></p>\n<p><strong>Experience:</strong></p>\n<ul>\n<li>4+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python.</li>\n</ul>\n<p><strong>Technical skills:</strong></p>\n<ul>\n<li>Experience in large scale system architecture, design, development, testing, and release, including but not limited to web applications, microservices in layers, database design, API design, performance tuning, telemetry design and analysis.</li>\n</ul>\n<p><strong>Personal attributes:</strong></p>\n<ul>\n<li>Demonstrable history of excellent analytical and problem-solving skills.</li>\n<li>Demonstrated programming skills and knowledge of architectural patterns for large, high-scale applications.</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<ul>\n<li>Software Engineering IC4 – The typical base pay range for this role across Canada is CAD $114,400 – CAD $203,900 per year.</li>\n<li>Microsoft Cloud Background Check: This position will be required to pass the Microsoft Cloud background check upon hire/transfer and every two years thereafter.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_a4115e45-d99","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Microsoft AI","sameAs":"https://microsoft.ai","logo":"https://logos.yubhub.co/microsoft.ai.png"},"x-apply-url":"https://microsoft.ai/job/senior-software-engineer-10/","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"CAD $114,400 – CAD $203,900 per year","x-skills-required":["C","C++","C#","Java","JavaScript","Python","large scale system 
architecture","design","development","testing","release","web applications","microservices","database design","API design","performance tuning","telemetry design and analysis"],"x-skills-preferred":["data-driven mindset","ability to analyze data and persuade your team using effective analysis"],"datePosted":"2026-03-06T07:26:15.502Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Vancouver"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"C, C++, C#, Java, JavaScript, Python, large scale system architecture, design, development, testing, release, web applications, microservices, database design, API design, performance tuning, telemetry design and analysis, data-driven mindset, ability to analyze data and persuade your team using effective analysis","baseSalary":{"@type":"MonetaryAmount","currency":"CAD","value":{"@type":"QuantitativeValue","minValue":114400,"maxValue":203900,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_7ca8bc69-3ec"},"title":"Senior Software Engineer","description":"<p><strong>Summary</strong></p>\n<p>Microsoft AI are looking for a talented Senior Software Engineer at their Vancouver office. This role sits at the heart of strategic decision-making, turning market data into actionable insights for a company that&#39;s revolutionising digital content technology. You&#39;ll work directly with leadership to shape the company&#39;s direction in the content market.</p>\n<p><strong>About the Role</strong></p>\n<p>As a Senior Software Engineer in the TSI team, you will directly impact billions of users by delivering safe, high-quality, and engaging content across products like Windows, Edge, and Outlook. You’ll apply advanced AI and LLM-based techniques to optimize content delivery and user experience. 
This opportunity will allow you to accelerate your career growth, deepen your understanding of large-scale content systems, and sharpen your skills in AI-driven engineering.</p>\n<p><strong>Accountabilities</strong></p>\n<ul>\n<li>Independently uses appropriate artificial intelligence (AI) tools and practices across the software development lifecycle (SDLC) in a disciplined manner.</li>\n<li>Collaborates with and guides appropriate internal (e.g., product manager, privacy/security subject matter expert, technical lead) and external (e.g. customer escalation team, public forums) stakeholders to determine and confirm customer/user requirements for a project/sub-section of a product/solution.</li>\n</ul>\n<p><strong>The Candidate we&#39;re looking for</strong></p>\n<p><strong>Experience:</strong></p>\n<ul>\n<li>4+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python.</li>\n</ul>\n<p><strong>Technical skills:</strong></p>\n<ul>\n<li>Experience in large scale system architecture, design, development, testing, and release, including but not limited to web applications, microservices in layers, database design, API design, performance tuning, telemetry design and analysis.</li>\n</ul>\n<p><strong>Personal attributes:</strong></p>\n<ul>\n<li>Demonstrable history of excellent analytical and problem-solving skills.</li>\n<li>Demonstrated programming skills and knowledge of architectural patterns for large, high-scale applications.</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<ul>\n<li>Microsoft Cloud Background Check: This position will be required to pass the Microsoft Cloud background check upon hire/transfer and every two years thereafter.</li>\n<li>Software Engineering IC4 – The typical base pay range for this role across Canada is CAD $114,400 – CAD $203,900 per year.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_7ca8bc69-3ec","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Microsoft AI","sameAs":"https://microsoft.ai","logo":"https://logos.yubhub.co/microsoft.ai.png"},"x-apply-url":"https://microsoft.ai/job/senior-software-engineer-9/","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"CAD $114,400 – CAD $203,900 per year","x-skills-required":["C","C++","C#","Java","JavaScript","Python","large scale system architecture","design","development","testing","release","web applications","microservices","database design","API design","performance tuning","telemetry design and analysis"],"x-skills-preferred":["data-driven mindset","ability to analyze data and persuade your team using effective analysis"],"datePosted":"2026-03-06T07:25:44.351Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Vancouver"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"C, C++, C#, Java, JavaScript, Python, large scale system architecture, design, development, testing, release, web applications, microservices, database design, API design, performance tuning, telemetry design and analysis, data-driven mindset, ability to analyze data and persuade your team using effective analysis","baseSalary":{"@type":"MonetaryAmount","currency":"CAD","value":{"@type":"QuantitativeValue","minValue":114400,"maxValue":203900,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_22fe1e4f-57a"},"title":"Search Golang Engineer","description":"<p>We are seeking a highly skilled Search Golang Engineer to join our team and help architect the next generation of massively scalable, AI-powered search infrastructure. 
In this role, you will be responsible for designing, implementing, and operating backend systems that handle millions of queries with uncompromising reliability and efficiency.</p>\n<p><strong>What you&#39;ll do</strong></p>\n<p>As a Search Golang Engineer, you will be responsible for building highly scalable, distributed backend services using Golang. You will design, develop, and maintain search infrastructure that supports exponential traffic growth, engineer cloud-native solutions, and implement robust monitoring, autoscaling, and incident recovery strategies.</p>\n<ul>\n<li>Build highly scalable, distributed backend services using Golang</li>\n<li>Design, develop, and maintain search infrastructure that supports exponential traffic growth</li>\n<li>Engineer cloud-native solutions, optimising for horizontal scale and rapid failover</li>\n<li>Implement robust monitoring, autoscaling, and incident recovery strategies</li>\n</ul>\n<p><strong>What you need</strong></p>\n<p>To be successful in this role, you will need significant experience developing scalable Golang services for production environments. 
You will also need a deep understanding of distributed systems, microservices, and cloud infrastructure (AWS preferred).</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_22fe1e4f-57a","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Perplexity AI","sameAs":"https://jobs.ashbyhq.com","logo":"https://logos.yubhub.co/perplexity.com.png"},"x-apply-url":"https://jobs.ashbyhq.com/perplexity/30a09c0f-8715-447d-92b7-9f0adb772fd6","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Golang","distributed systems","microservices","cloud infrastructure"],"x-skills-preferred":["Linux performance tuning","monitoring","debugging","CI/CD pipelines","containerization","automation"],"datePosted":"2026-03-04T12:28:06.704Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Belgrade, Berlin, London"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Golang, distributed systems, microservices, cloud infrastructure, Linux performance tuning, monitoring, debugging, CI/CD pipelines, containerization, automation"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_0dd63a6e-d63"},"title":"Search Senior Backend/Infrastructure Engineer","description":"<p>Perplexity is looking for a Senior Infrastructure Engineer to join their small team. The role will involve building and maintaining robust, scalable infrastructure to support high-performance search systems.</p>\n<p><strong>What you&#39;ll do</strong></p>\n<p>As a Senior Infrastructure Engineer, you will be responsible for building and maintaining core systems that power Perplexity&#39;s products and development workflows. 
This will involve developing internal tools and automation to streamline developer workflows and operational efficiency.</p>\n<ul>\n<li>Build and maintain robust, scalable infrastructure to support high-performance search systems</li>\n<li>Develop internal tools and automation to streamline developer workflows and operational efficiency</li>\n</ul>\n<p><strong>What you need</strong></p>\n<p>To be successful in this role, you will need to have a strong background in cloud infrastructure, systems design, and automation. You will also need to have a deep understanding of Linux internals, performance tuning, and debugging.</p>\n<ul>\n<li>Strong background in cloud infrastructure (AWS preferred), systems design, and automation</li>\n<li>Deep understanding of Linux internals, performance tuning, and debugging</li>\n</ul>\n<p><strong>Why this matters</strong></p>\n<p>This role is critical to ensuring the high quality of the products that Perplexity delivers. Your passion and diligence will be essential in making sure that the company&#39;s products meet the highest standards.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_0dd63a6e-d63","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Perplexity","sameAs":"https://jobs.ashbyhq.com","logo":"https://logos.yubhub.co/perplexity.com.png"},"x-apply-url":"https://jobs.ashbyhq.com/perplexity/dd80ab52-34bd-42af-aa5e-6283b7e6c194","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["cloud infrastructure","systems design","automation","Linux internals","performance tuning","debugging"],"x-skills-preferred":["Python","Go","Rust","C/C++","Java"],"datePosted":"2026-03-04T12:27:34.017Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Belgrade, Berlin, 
London"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"cloud infrastructure, systems design, automation, Linux internals, performance tuning, debugging, Python, Go, Rust, C/C++, Java"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_0d2198a9-b0a"},"title":"Senior IT Consultant - Commvault","description":"<p>As a Senior IT Consultant - Commvault, you will be responsible for administering, configuring, and optimizing the Commvault platform, including CommServe, Media Agents, Index Servers, and Command Center. You will design and implement scalable backup and recovery solutions across on-prem, hybrid, and cloud environments.</p>\n<p><strong>What you&#39;ll do</strong></p>\n<ul>\n<li>Administer, configure, and optimize the Commvault platform.</li>\n<li>Design and implement scalable backup and recovery solutions.</li>\n</ul>\n<p><strong>What you need</strong></p>\n<ul>\n<li>At least 5 years hands-on experience with Commvault Complete Backup &amp; Recovery in enterprise environments.</li>\n<li>Strong expertise in Storage Policies, Subclients, Schedules, Performance Tuning, Deduplication Database (DDB) maintenance and troubleshooting, VMware VADP backups, Hyper-V, and virtualized environments, Cloud storage (Azure, AWS, or GCP), Enterprise storage systems (NetApp, Dell EMC, HPE, etc.).</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_0d2198a9-b0a","directApply":true,"hiringOrganization":{"@type":"Organization","name":"MHP - A Porsche 
Company","sameAs":"https://jobs.porsche.com","logo":"https://logos.yubhub.co/jobs.porsche.com.png"},"x-apply-url":"https://jobs.porsche.com/index.php?ac=jobad&id=19662","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Commvault Complete Backup & Recovery","Storage Policies","Subclients","Schedules","Performance Tuning","Deduplication Database (DDB) maintenance and troubleshooting","VMware VADP backups","Hyper-V","Cloud storage (Azure, AWS, or GCP)","Enterprise storage systems (NetApp, Dell EMC, HPE, etc.)"],"x-skills-preferred":["Windows Server","Linux (RHEL/CentOS/Ubuntu)","PowerShell","Bash","Python"],"datePosted":"2026-02-18T13:06:27.895Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Bucharest, Cluj, Timisoara"}},"employmentType":"FULL_TIME","occupationalCategory":"IT","industry":"Technology","skills":"Commvault Complete Backup & Recovery, Storage Policies, Subclients, Schedules, Performance Tuning, Deduplication Database (DDB) maintenance and troubleshooting, VMware VADP backups, Hyper-V, Cloud storage (Azure, AWS, or GCP), Enterprise storage systems (NetApp, Dell EMC, HPE, etc.), Windows Server, Linux (RHEL/CentOS/Ubuntu), PowerShell, Bash, Python"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_901a6402-db5"},"title":"Data Engineer","description":"<p>Join Razer to help build and optimize data pipelines and data platforms that support analytics, product improvements, and foundational AI/ML data needs. Collaborate with cross-functional teams to ensure data is reliable, accessible, and governed. Tech stack includes Redshift, Airflow, and DBT.</p>\n<p><strong>What you need</strong></p>\n<ul>\n<li>Strong Python and SQL</li>\n<li>Hands-on experience with Redshift, Airflow, DBT</li>\n<li>Mandatory hands-on experience with Apache Spark (batch and/or structured processing)</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_901a6402-db5","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Razer","sameAs":"https://razer.wd3.myworkdayjobs.com","logo":"https://logos.yubhub.co/razer.com.png"},"x-apply-url":"https://razer.wd3.myworkdayjobs.com/en-US/Careers/job/Chengdu/Data-Engineer_JR2025006594","x-work-arrangement":"onsite","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Python","SQL","Redshift","Airflow","DBT","Apache Spark"],"x-skills-preferred":["Apache Flink","Apache Kafka","Hadoop ecosystem components","ETL design patterns","performance tuning"],"datePosted":"2025-12-26T10:57:30.602Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Chengdu"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, SQL, Redshift, Airflow, DBT, Apache Spark, Apache Flink, Apache Kafka, Hadoop ecosystem components, ETL design patterns, performance tuning"}]}