<?xml version="1.0" encoding="UTF-8"?>
<source>
  <jobs>
    <job>
      <externalid>770c5fe8-cce</externalid>
      <Title>Staff Security Engineer, Vulnerability Management</Title>
      <Description><![CDATA[<p>We are seeking a Staff Security Engineer to lead the most complex technical work in CoreWeave&#39;s Vulnerability Management program.</p>
<p>As a Staff Security Engineer, you will design and implement scalable triage, prioritization, and remediation-tracking systems across application, infrastructure, and hardware domains. You will set technical standards, drive high-impact initiatives, and mentor engineers through technical leadership, while partnering with leadership on priorities and execution risks.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Lead high-complexity VM technical initiatives and deliver architecture decisions for assigned program areas</li>
<li>Design and build scalable triage automation, including integrations, decision logic, and production hardening</li>
<li>Implement end-to-end workflow components from assessment and detection to ticket routing and remediation tracking</li>
<li>Provide deep technical leadership on hardware-adjacent vulnerabilities (GPU firmware, DPU firmware/BlueField, and BMC surfaces)</li>
<li>Act as senior technical responder for embargoed disclosures and zero-day events, coordinating with owner teams that deploy fixes</li>
<li>Improve prioritization logic, severity models, and exception workflows through code, design reviews, and technical proposals</li>
<li>Produce actionable technical metrics and risk insights for leadership consumption</li>
<li>Lead root-cause analysis for high-impact vulnerability incidents and implement durable technical improvements</li>
<li>Mentor IC3/IC4/IC5 engineers through design guidance, code review, and incident coaching</li>
<li>Partner with security, engineering, and operational stakeholders to improve workflow reliability and accelerate remediation outcomes</li>
</ul>
<p>Requirements:</p>
<ul>
<li>9+ years of relevant experience with demonstrated strategic impact in vulnerability management, application security, platform security, or cloud security engineering</li>
<li>Proven track record building and scaling security automation (SOAR workflows, AI/ML systems, detection pipelines) in production environments</li>
<li>Deep subject matter expertise with vulnerability management best practices: CVSS, EPSS, CISA KEV, threat intelligence integration, and risk-based prioritization frameworks</li>
<li>Excellent development background with strong coding skills in Python, Go, or similar languages for building scalable, production-grade security systems</li>
<li>Significant experience with modern vulnerability management tooling (for example Wiz, Semgrep, Rapid7, Tenable, or equivalent)</li>
<li>Experience with specialized infrastructure: GPU/DPU environments, firmware security, hardware vulnerabilities, or high-performance computing</li>
<li>Demonstrated track record mentoring engineers across levels and driving cross-functional technical initiatives at organizational scale</li>
<li>Strong business acumen and understanding of how security decisions impact engineering velocity, customer trust, and business outcomes</li>
</ul>
<p>Preferred Qualifications:</p>
<ul>
<li>Practical experience building AI/ML-powered security systems (LLM integration, automated decision-making, human-in-the-loop validation) in production</li>
<li>Experience managing hardware vendor security partnerships (embargoed disclosures and pre-release collaboration)</li>
<li>Production experience with security automation platforms such as TINES and serverless frameworks (AWS Lambda, GCP Cloud Functions)</li>
<li>Strong DevOps, DevSecOps, or SRE background with deep experience in AWS/GCP/Azure cloud services and Infrastructure as Code (Terraform, CloudFormation)</li>
<li>Deep understanding of Kubernetes security (container scanning, admission controllers, supply chain security, runtime protection)</li>
<li>Experience leading security programs through rapid hypergrowth (10x+ infrastructure scaling) in startup or cloud-native environments</li>
<li>Practical experience managing vulnerabilities within a FedRAMP-certified environment or similar regulatory frameworks</li>
</ul>
<p>Salary and Benefits: The base salary range for this role is $188,000 to $275,000. The starting salary will be determined based on job-related knowledge, skills, experience, and market location. We strive for both market alignment and internal equity when determining compensation. In addition to base salary, our total rewards package includes a discretionary bonus, equity awards, and a comprehensive benefits program (all based on eligibility).</p>
<p>Work Environment:</p>
<p>While we prioritize a hybrid work environment, remote work may be considered for candidates located more than 30 miles from an office, based on role requirements for specialized skill sets. New hires will be invited to attend onboarding at one of our hubs within their first month. Teams also gather quarterly to support collaboration.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$188,000 to $275,000</Salaryrange>
      <Skills>vulnerability management, application security, platform security, cloud security engineering, security automation, AI/ML systems, detection pipelines, Python, Go, modern vulnerability management tooling, GPU/DPU environments, firmware security, hardware vulnerabilities, high-performance computing, AI/ML-powered security systems, LLM integration, automated decision-making, human-in-the-loop validation, security automation platforms, TINES, serverless frameworks, AWS Lambda, GCP Cloud Functions, DevOps, DevSecOps, SRE, Kubernetes security, container scanning, admission controllers, supply chain security, runtime protection</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave is a cloud computing company that provides a platform for building and scaling AI applications.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>188000</Compensationmin>
      <Compensationmax>275000</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4653130006</Applyto>
      <Location>Livingston, NJ / New York, NY / Sunnyvale, CA / Bellevue, WA</Location>
      <Country>United States</Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>0540dd96-198</externalid>
      <Title>Senior Software Engineer - Query Engine, Database Internals - Elasticsearch</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Senior Software Engineer to join the Elasticsearch - Analytical Engine team. This globally-distributed, completely remote team of senior engineers is responsible for building new analytics capabilities in Elasticsearch&#39;s latest aggregation framework based on a completely new compute engine, and accessed via our new piped query language called ES|QL.</p>
<p>This is a senior software engineering role that covers the design and implementation of new features, enhancements to existing features, and resolving bugs.</p>
<p>Our company is distributed by intention. We hire the best engineers we can find wherever they are, whoever they are. We collaborate across continents every day over email, GitHub, Zoom, and Slack. At our best, we write fast, scalable, and intuitive software. We believe that the best way to do that is to empower individual engineers, code review every change, decide big things by consensus, and strive for incremental improvements.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>You&#39;ll be a full-time Elasticsearch contributor, building data-intensive new features and fixing intriguing bugs, all while making the code easier to understand. You are able to research what available data structures and algorithms work best to implement a new functionality or enhancement. Sometimes you&#39;ll need to implement a data structure or algorithm in the code base. And there will be times when you&#39;ll need to get close to the operating system and hardware.</li>
<li>You&#39;ll work with a globally distributed team of experienced engineers focused on the search and query (ES|QL) analytics capabilities of Elasticsearch. You&#39;ll get to work with the teams that build the UI to ensure a good user experience, and you&#39;ll get to work with the teams building solutions on top of these APIs</li>
<li>You&#39;ll be an expert in several areas of Elasticsearch, and everyone will turn to you when they have a question about them. You&#39;ll improve those areas based on your questions and your instincts.</li>
<li>You&#39;ll work with community members from all over the world on issues and pull requests, sometimes triaging them and handing them off to other experts, and sometimes handling them yourself.</li>
<li>You&#39;ll write idiomatic modern Java -- Elasticsearch is 99.8% Java!</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>You have strong skills in core Java and are conversant in the standard library of data structures and concurrency constructs, as well as newer features like lambdas.</li>
<li>You have experience with software systems engineering</li>
<li>You have a strong desire to optimize and make use of the most efficient data structures and algorithms.</li>
<li>You work with a high level of autonomy, and are able to take on projects and guide them from beginning to end. This covers both technical design and working with other engineers to develop needed components.</li>
<li>You&#39;re comfortable developing collaboratively. Giving and receiving feedback on code, approaches, and APIs is hard! Bonus points if you&#39;ve collaborated over the internet because that&#39;s harder. Double bonus points for asynchronous collaboration over the internet. That&#39;s even harder, but we do it anyway because it&#39;s the best way we know how to build software.</li>
<li>You&#39;ve used several data storage technologies like Elasticsearch, Solr, PostgreSQL, MongoDB, or Cassandra and have some idea how they work and why they work that way.</li>
<li>You have excellent verbal and written communication skills. Like we said, collaborating on the internet is hard. We try to be respectful, empathetic, and trusting in all of our interactions. And we&#39;d expect that from you too.</li>
</ul>
<p><strong>Bonus Points</strong></p>
<ul>
<li>You&#39;ve built things with Elasticsearch before.</li>
<li>You’ve worked in the search and information retrieval space. You’re familiar with the data structures and algorithms associated with information retrieval.</li>
<li>You’ve worked on data storage technology or have experience building data analytics capabilities.</li>
<li>You have experience designing, leading and owning cross-functional initiatives.</li>
<li>You&#39;ve worked with open source projects and are familiar with different styles of source control workflow and continuous integration</li>
</ul>
<p><strong>Compensation</strong></p>
<p>Compensation for this role is in the form of base salary. This role does not have a variable compensation component.</p>
<p>The typical starting salary range for new hires in this role is listed below. In select locations (including Seattle WA, Los Angeles CA, the San Francisco Bay Area CA, and the New York City Metro Area), an alternate range may apply as specified below.</p>
<p>These ranges represent the lowest to highest salary we reasonably and in good faith believe we would pay for this role at the time of this posting. We may ultimately pay more or less than the posted range, and the ranges may be modified in the future.</p>
<p>An employee&#39;s position within the salary range will be based on several factors including, but not limited to, relevant education, qualifications, certifications, experience, skills, geographic location, performance, and business or organizational needs.</p>
<p>Elastic believes that employees should have the opportunity to share in the value that we create together for our shareholders. Therefore, in addition to cash compensation, this role is currently eligible to participate in Elastic&#39;s stock program. Our total rewards package also includes a company-matched 401k with dollar-for-dollar matching up to 6% of eligible earnings, along with a range of other benefits offered with a holistic emphasis on employee well-being.</p>
<p>The typical starting salary range for this role is: $133,100-$210,600 USD</p>
<p>The typical starting salary range for this role in the select locations listed above is: $159,900-$252,900 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$133,100-$210,600 USD</Salaryrange>
      <Skills>core Java, data structures, concurrency constructs, lambdas, software systems engineering, Elasticsearch, Solr, PostgreSQL, MongoDB, Cassandra</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Elastic</Employername>
      <Employerlogo>https://logos.yubhub.co/elastic.co.png</Employerlogo>
      <Employerdescription>Elastic is a search AI company that enables everyone to find the answers they need in real time, using all their data, at scale. The Elastic Search AI Platform is used by more than 50% of the Fortune 500.</Employerdescription>
      <Employerwebsite>https://www.elastic.co/</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>133100</Compensationmin>
      <Compensationmax>210600</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/elastic/jobs/7723819</Applyto>
      <Location>United States</Location>
      <Country>United States</Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>d7e1a365-9dd</externalid>
      <Title>Principal Software Engineer II - Search Management - Elasticsearch</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Principal Software Engineer to join the Elasticsearch - Search Management team. This globally-distributed team of experienced engineers focuses on delivering a robust and feature-rich search experience, including contributing to improving the search experience in Lucene.</p>
<p>As a Principal Software Engineer, you will be a full-time Elasticsearch contributor, building data-intensive new features and fixing intriguing bugs, all while making the code easier to understand. You&#39;ll work with a globally distributed team of experienced engineers focused on the search capabilities of Elasticsearch.</p>
<p>You&#39;ll be an expert in several areas of Elasticsearch and everyone will turn to you when they have a question about them. You&#39;ll improve those areas based on your questions and your instincts.</p>
<p>You&#39;ll help us create the future of search within Elasticsearch - for example, building a scalable search tier for our Serverless platform and writing search functionality in ES|QL, our new piped query language.</p>
<p>You&#39;ll work with community members from all over the world on issues and pull requests, sometimes triaging them and handing them off to other experts and sometimes handling them yourself.</p>
<p>You&#39;ll write idiomatic modern Java -- Elasticsearch is 99.8% Java!</p>
<p>We&#39;re looking for someone with strong skills in core Java who is conversant in the standard library of data structures and concurrency constructs, as well as newer features like lambdas. You should be comfortable developing collaboratively, giving and receiving feedback on code, approaches, and APIs.</p>
<p>You&#39;ve used several data storage technologies like Elasticsearch, Solr, PostgreSQL, MongoDB, or Cassandra and have some idea how they work and why they work that way.</p>
<p>You have excellent verbal and written communication skills. Like we said, collaborating on the internet is hard. We try to be respectful, empathetic, and trusting in all of our interactions. And we&#39;d expect that from you too.</p>
<p>Bonus points if you&#39;ve built things with Elasticsearch before, worked in the search and information retrieval space, or have experience writing code for software-as-a-service or platforms-as-a-service.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$154,000-$243,600 CAD</Salaryrange>
      <Skills>core Java, data structures, concurrency constructs, lambdas, idiomatic modern Java, Elasticsearch, Solr, PostgreSQL, MongoDB, Cassandra, search and information retrieval, software-as-a-service, platform-as-a-service, collaborative development, code review, API design</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Elastic</Employername>
      <Employerlogo>https://logos.yubhub.co/elastic.co.png</Employerlogo>
      <Employerdescription>Elastic is a search AI company that enables everyone to find the answers they need in real time, using all their data, at scale. They provide a cloud-based solution for search, security, and observability.</Employerdescription>
      <Employerwebsite>https://www.elastic.co/</Employerwebsite>
      <Compensationcurrency>CAD</Compensationcurrency>
      <Compensationmin>154000</Compensationmin>
      <Compensationmax>243600</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/elastic/jobs/7699084</Applyto>
      <Location>Canada</Location>
      <Country>Canada</Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>bdf949b3-c66</externalid>
      <Title>Databricks Enterprise Lead Security Architect - Principal IT Software Engineer</Title>
      <Description><![CDATA[<p>We are seeking a highly skilled Lead Security Architect to join our team within Databricks IT. As a Lead Security Architect, you will be responsible for designing and implementing a secure and scalable architecture to protect our corporate assets. You will focus on key areas of IT security, including Identity and Access Management, Zero Trust architecture, and endpoint security, while also working to secure critical business applications and sensitive data.</p>
<p>Your expertise will be crucial in building proactive security strategies that align with our business goals and protect the company from an ever-evolving threat landscape. This position demands deep expertise in security principles and a comprehensive understanding of the entire infrastructure stack and IAM systems to design robust, future-ready security solutions.</p>
<p>You will be instrumental in safeguarding our systems&#39; resilience and integrity against ever-evolving cyber threats. You will play a critical role in shaping our security strategy for modern platforms across AWS, Azure, GCP, network infrastructure, storage, and SaaS solutions, helping establish a strong principle-of-least-privilege (PoLP) model, providing specialized IAM expertise, and securely supporting SaaS platforms that handle sensitive information and non-human identities (NHI).</p>
<p>You will also be a key contributor in building our internal strategy for secure AI development. Additionally, you will support the secure integration of SaaS platforms such as Google Workspace, collaboration tools, and GTM systems, maintaining alignment with enterprise security standards.</p>
<p>Close collaboration with cross-functional teams is essential to embed security throughout the technology stack.</p>
<p>The impact you will have:</p>
<ul>
<li>Design and implement secure, scalable reference architectures for the Databricks IT across Cloud Infra (Compute, DBs, Network, Storage), SaaS, Custom Built Applications, Data &amp; AI systems.</li>
<li>Establish and enforce security controls for the following core security areas:</li>
<li>Databricks Workspace Management: workspace isolation, Unity Catalog for data governance.</li>
<li>Secure Networking: VPC configs, PrivateLink, IP Allow Lists.</li>
<li>Identity and Access Management (IAM): SSO, SCIM user provisioning, RBAC via Unity Catalog, and strong MFA best practices for enterprise identities and customers.</li>
<li>Data Encryption: At rest and in transit, customer-managed keys for critical assets.</li>
<li>Data Exfiltration Prevention: Admin console settings, VPC endpoint controls.</li>
<li>Cluster Security: User isolation, compliance with enhanced security monitoring/Compliance Security Profiles (HIPAA, PCI-DSS, FedRAMP).</li>
<li>Offensive Security: Test and challenge the effectiveness of the organization’s security defenses by mimicking the tactics, techniques, and procedures used by actual attackers.</li>
<li>Provide the following specialized security functions:</li>
<li>Non-human Identity Management: Design and implement secure authentication and authorization for automated systems (service accounts, API keys, machine identities), focusing on automation and integration with existing identity management systems.</li>
<li>IAM Best Practices: Develop and document comprehensive Identity and Access Management policies, including user provisioning, de-provisioning, access reviews, privileged access management, and multi-factor authentication, ensuring security and compliance.</li>
<li>Data Loss Prevention (DLP): Implement DLP solutions to identify, monitor, and protect sensitive data across endpoints, networks, and cloud environments, preventing unauthorized access, use, or transmission.</li>
<li>SaaS Proxy Design and Implementation: Design and implement cloud-based proxies for SaaS applications (SASE solutions) to provide secure access, enforce security policies, monitor user activity, and protect against threats.</li>
<li>Cloud Infrastructure Best Practices: Establish and document best practices for VPC configurations, cloud networking, and infrastructure as code using Terraform, ensuring secure network segmentation, routing, firewalls, and VPNs for consistent, automated, and secure deployments.</li>
<li>Least Privilege Access for Data Security: Design and implement data security controls based on the principle of least privilege, ensuring users and systems have only the minimum necessary access through fine-grained controls, data classification, and regular access reviews.</li>
<li>Guide internal IT on Databricks’ security and compliance certifications (SOC 2, ISO 27001/27017/27018, HIPAA, PCI-DSS, FedRAMP), and support security reviews/audits.</li>
<li>Support incident response, vulnerability management, threat modeling, and red teaming using audit logs, cluster policies, and enhanced monitoring.</li>
<li>Stay current on industry trends and emerging threats in GenAI, agentic AI flows, and MCPs to enhance our security posture.</li>
<li>Advise executive leadership on security architecture, risks, and mitigation.</li>
<li>Mentor security engineers and developers on secure design and best practices.</li>
</ul>
<p>What we look for:</p>
<ul>
<li>Bachelor’s degree in Computer Science, Information Security, Engineering, or a related field</li>
<li>A Master’s degree in Computer Science with a focus on Information Security, or a related discipline, is strongly preferred</li>
<li>Minimum 12 years in cybersecurity, with 5+ in security architecture or senior technical roles.</li>
<li>Experience in FedRAMP High systems/GovCloud preferred.</li>
<li>Must have direct experience designing and securing enterprise platforms in complex multi-cloud environments, deep knowledge of enterprise architecture and security features (control plane/data plane separation, network infrastructure, workspace hardening, network segmentation/isolation), and hands-on experience automating security controls with Terraform and scripting.</li>
<li>Proven expertise securing data analytics pipelines, SaaS integrations, and workload isolation in enterprise ecosystems.</li>
<li>Experience with Enterprise Security Analysis Tools and monitoring/security policy optimization.</li>
<li>Deep experience in threat modeling, design, PoC, and implementing large-scale enterprise solutions.</li>
<li>Extensive hands-on experience in AWS cloud security, network security, with knowledge of Zero Trust, Data Protection, and Appsec.</li>
<li>Strong understanding of enterprise IAM systems (Okta, SailPoint, VDI, Entra ID) and Data Protection.</li>
<li>Expert experience with SIEM platforms, XDR, and cloud-native threat detection tools.</li>
<li>Expert in web application security, OWASP, API security, and secure design and testing.</li>
<li>Hands-on experience with security automation is required, with proficiency in AI-assisted development, Python, Cursor, Lambda, Terraform, or comparable scripting/IaC tools for operational efficiency.</li>
<li>Industry certifications like CISSP, CCSP, CEH, AWS Certified Security – Specialty, AWS Certified Solutions Architect – Professional, or AWS Certified Advanced Networking – Specialty (or equivalent) are preferred.</li>
<li>Ability to influence stakeholders and drive alignment.</li>
<li>Strategic thinker with a passion for security innovation, continuous improvement, and building scalable defenses.</li>
</ul>
<p>Pay Range Transparency</p>
<p>Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role is listed below and represents the expected salary range for non-commissionable roles or on-target earnings for commissionable roles. Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location. Based on the factors above, Databricks anticipates utilizing the full width of the range. The total compensation package for this position may also include eligibility for annual performance bonus, equity, and benefits.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Security Architecture, Identity and Access Management, Zero Trust, Endpoint Security, Data Encryption, Data Exfiltration Prevention, Cluster Security, Offensive Security, Non-human Identity Management, IAM Best Practices, Data Loss Prevention, SaaS Proxy Design, SASE, Cloud Infrastructure Best Practices, Least Privilege Access, Security and Compliance Certifications (SOC 2, ISO 27001/27017/27018, HIPAA, PCI-DSS, FedRAMP), Incident Response, Vulnerability Management, Threat Modeling, Red Teaming, GenAI Security, Terraform, Python, AWS Cloud Security, Network Security, Data Protection, AppSec, SIEM Platforms, XDR, Cloud-native Threat Detection, Web Application Security, OWASP, API Security, Secure Design and Testing, AI-assisted Development, Security Automation, Scripting/IaC Tools, CISSP, CCSP, CEH, AWS Certified Security – Specialty, AWS Certified Solutions Architect – Professional, AWS Certified Advanced Networking – Specialty</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a technology company that provides a cloud-based platform for data analytics and artificial intelligence.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8207910002</Applyto>
      <Location>Mountain View, California; San Francisco, California</Location>
      <Country>United States</Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>b00b5e68-8bf</externalid>
      <Title>Principal Software Developer I / II - Storage Engine - Elasticsearch</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Principal Software Developer I or II to join the Elasticsearch - Storage Engine team. This globally-distributed, completely remote team of senior engineers is responsible for delivering the latest innovations in logs and metrics management.</p>
<p>This role includes providing technical vision and direction for building solutions that provide optimized storage and efficient data querying and indexing. It requires relevant past technical experience as well as the ability to work across the organisation.</p>
<p>Our company is distributed by intention. We hire the best developers we can find wherever they are, whoever they are. We collaborate across continents every day over email, GitHub, Zoom, and Slack. At our best, we write fast, scalable and intuitive software. We believe that the best way to do that is to empower individual engineers, code review every change, decide big things by consensus, and strive for incremental improvements.</p>
<p>As a Principal Software Developer, you will lead cross-organisational initiatives to produce an industry-leading Timeseries solution offering. You will contribute to Elasticsearch full time, building data-intensive new features and fixing intriguing bugs, all while making the code easier to understand. Sometimes you&#39;ll need to implement a data structure or algorithm in the code base. And there will be times when you&#39;ll need to get close to the operating system and hardware.</p>
<p>You will work with a globally distributed team of experienced engineers focused on the logs and metrics capabilities of Elasticsearch. You will be an expert in several areas of Elasticsearch and everyone will turn to you when they have a question about them. You&#39;ll improve those areas based on your questions and your instincts.</p>
<p>You will work with community members from all over the world on issues and pull requests, sometimes triaging them and handing them off to other experts and sometimes handling them yourself. You will write idiomatic modern Java -- Elasticsearch is 99.8% Java!</p>
<p>We&#39;re looking for someone who has implemented novel techniques to efficiently index, store and query Timeseries data. You should have strong technical leadership skills, work with a high level of autonomy, and be able to take on projects and guide them from beginning to end. This covers both technical design and working with other engineers to develop needed components.</p>
<p>You should have strong skills in core Java and be conversant in the standard library of data structures and concurrency constructs, as well as newer features like lambdas. You should have a strong desire to optimise and make use of the most efficient data structures and algorithms.</p>
<p>You&#39;re comfortable developing collaboratively. Giving and receiving feedback on code and approaches and APIs is hard! Bonus points if you&#39;ve collaborated over the internet because that&#39;s harder. Double bonus points for asynchronous collaboration over the internet. That&#39;s even harder but we do it anyway because it&#39;s the best way we know how to build software.</p>
<p>You&#39;ve used several data storage technologies like Elasticsearch, Solr, PostgreSQL, MongoDB, or Cassandra and have some idea how they work and why they work that way. You have excellent verbal and written communication skills. Like we said, collaborating on the internet is hard. We try to be respectful, empathetic, and trusting in all of our interactions. And we&#39;d expect that from you too.</p>
<p>Bonus points if you&#39;ve built things with Elasticsearch before. Bonus points if you&#39;ve worked with open source projects and are familiar with different styles of source control workflow and continuous integration.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$192,500-$304,500 CAD</Salaryrange>
      <Skills>Java, Elasticsearch, Timeseries data, Data structures, Concurrency constructs, Lambdas, Data storage technologies, Open source projects, Source control workflow, Continuous integration</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Elastic</Employername>
      <Employerlogo>https://logos.yubhub.co/elastic.co.png</Employerlogo>
      <Employerdescription>Elastic is a search AI company that enables everyone to find the answers they need in real time, using all their data, at scale. The Elastic Search AI Platform is used by more than 50% of the Fortune 500.</Employerdescription>
      <Employerwebsite>https://www.elastic.co/</Employerwebsite>
      <Compensationcurrency>CAD</Compensationcurrency>
      <Compensationmin>192500</Compensationmin>
      <Compensationmax>304500</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/elastic/jobs/7348825</Applyto>
      <Location>Canada</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>b11628aa-8d2</externalid>
      <Title>Principal Software Engineer - Search Relevance - Elasticsearch</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Principal Software Engineer to join the Elasticsearch - Search team. This globally-distributed team of expert engineers focuses on delivering a robust and feature-rich search experience, including contributing to improving the search experience in Lucene.</p>
<p>This is a principal software engineering role that focuses on enhancing the vector and keyword search functionality within Elasticsearch, covering the design and implementation of new search features, enhancements to existing search functionality, and resolving bugs.</p>
<p>As a Principal Software Engineer, you will lead initiatives within Elasticsearch to produce an industry-leading search engine offering, supplying unparalleled speed and relevance in search. You will contribute to Elasticsearch full time, building new search features and fixing intriguing bugs, all while making the code easier to understand. Sometimes you&#39;ll need to invent a new algorithm or data structure. Or find one and implement it. Sometimes you&#39;ll need to get close to the operating system and hardware.</p>
<p>You will work with a globally distributed team of experienced engineers focused on the search capabilities of Elasticsearch. You will be an expert on Elasticsearch search relevance. You&#39;ll identify and drive improvements in this area based on your questions and your instincts.</p>
<p>You will work with community members from all over the world on issues and pull requests, sometimes triaging them and handing them off to other experts and sometimes handling them yourself. You will write idiomatic modern Java -- Elasticsearch is 99.8% Java!</p>
<p>You have professional experience with search and vector databases, and you have used HNSW, IVF, or other relevant algorithms and libraries on search platforms at scale. You have strong skills in core Java and are conversant in the standard library of data structures and concurrency constructs, as well as other features like lambdas. You work with a high level of autonomy, and are able to take on projects and guide them from beginning to end. This covers both technical design and working with other engineers to develop needed components.</p>
<p>You&#39;re comfortable developing collaboratively. Giving and receiving feedback on code and approaches and APIs is hard! Bonus points if you&#39;ve collaborated over the internet because that&#39;s harder. Double bonus points for asynchronous collaboration over the internet. That&#39;s even harder, but we do it anyway because it&#39;s the best way we know how to build software.</p>
<p>You&#39;ve used several data storage technologies like Elasticsearch, Solr, PostgreSQL, MongoDB, or Cassandra and have some idea how they work and why they work that way. You have excellent verbal and written communication skills. Like we said, collaborating on the internet is hard. We try to be respectful, empathetic, and trusting in all of our interactions. And we&#39;d expect that from you too.</p>
<p>Bonus points if you&#39;ve built things with Elasticsearch before. You&#39;ve worked with open source projects and are familiar with different styles of source control workflow and continuous integration. You have experience designing, leading and owning cross-functional initiatives.</p>
<p>Compensation for this role is in the form of base salary. This role does not have a variable compensation component. The typical starting salary range for new hires in this role is $159,800-$252,800 USD. In select locations (including Seattle WA, Los Angeles CA, the San Francisco Bay Area CA, and the New York City Metro Area), an alternate range may apply.</p>
<p>Elastic believes that employees should have the opportunity to share in the value that we create together for our shareholders. Therefore, in addition to cash compensation, this role is currently eligible to participate in Elastic&#39;s stock program. Our total rewards package also includes a company-matched 401k with dollar-for-dollar matching up to 6% of eligible earnings, along with a range of other benefits offered with a holistic emphasis on employee well-being.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$159,800-$252,800 USD</Salaryrange>
      <Skills>search and vector databases, HNSW, IVF, or other relevant algorithms and libraries, core Java, data structures and concurrency constructs, lambdas, collaboration, source control workflow and continuous integration, cross-functional initiatives</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Elastic</Employername>
      <Employerlogo>https://logos.yubhub.co/elastic.co.png</Employerlogo>
      <Employerdescription>Elastic is a commercial company that provides a search and analytics platform called Elasticsearch. The company has a large customer base, including over 50% of the Fortune 500 companies.</Employerdescription>
      <Employerwebsite>https://www.elastic.co/</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>159800</Compensationmin>
      <Compensationmax>252800</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/elastic/jobs/7699665</Applyto>
      <Location>United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>8f2aab07-b31</externalid>
      <Title>Principal Software Engineer - Search Relevance - Elasticsearch</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Principal Software Engineer to join the Elasticsearch - Search team. This globally-distributed team of expert engineers focuses on delivering a robust and feature-rich search experience, including contributing to improving the search experience in Lucene.</p>
<p>This is a principal software engineering role that focuses on enhancing the vector and keyword search functionality within Elasticsearch, covering the design and implementation of new search features, enhancements to existing search functionality, and resolving bugs.</p>
<p>Our company is distributed by intention. We hire the best engineers we can find wherever they are, whoever they are. We collaborate across continents every day over email, GitHub, Zoom, and Slack. At our best, we write fast, scalable and intuitive software. We believe that the best way to do that is to empower individual engineers, code review every change, decide big things by consensus, and strive for incremental improvements.</p>
<p>As a Principal Software Engineer, you will lead initiatives within Elasticsearch to produce an industry-leading search engine offering, supplying unparalleled speed and relevance in search. You will contribute to Elasticsearch full time, building new search features and fixing intriguing bugs, all while making the code easier to understand. Sometimes you&#39;ll need to invent a new algorithm or data structure. Or find one and implement it. Sometimes you&#39;ll need to get close to the operating system and hardware.</p>
<p>You will work with a globally distributed team of experienced engineers focused on the search capabilities of Elasticsearch. You will be an expert on Elasticsearch search relevance. You&#39;ll identify and drive improvements in this area based on your questions and your instincts.</p>
<p>You will work with community members from all over the world on issues and pull requests, sometimes triaging them and handing them off to other experts and sometimes handling them yourself. You will write idiomatic modern Java -- Elasticsearch is 99.8% Java!</p>
<p>We&#39;re looking for someone with professional experience with search and vector databases, and you used HNSW, IVF, or other relevant algorithms and libraries on search platforms at scale. You have strong skills in core Java and are conversant in the standard library of data structures and concurrency constructs, as well as other features like lambdas.</p>
<p>You work with a high level of autonomy, and are able to take on projects and guide them from beginning to end. This covers both technical design and working with other engineers to develop needed components. You&#39;re comfortable developing collaboratively. Giving and receiving feedback on code and approaches and APIs is hard! Bonus points if you&#39;ve collaborated over the internet because that&#39;s harder. Double bonus points for asynchronous collaboration over the internet. That&#39;s even harder, but we do it anyway because it&#39;s the best way we know how to build software.</p>
<p>You&#39;ve used several data storage technologies like Elasticsearch, Solr, PostgreSQL, MongoDB, or Cassandra and have some idea how they work and why they work that way. You have excellent verbal and written communication skills. Like we said, collaborating on the internet is hard. We try to be respectful, empathetic, and trusting in all of our interactions. And we&#39;d expect that from you too.</p>
<p>Bonus points if you&#39;ve built things with Elasticsearch before. You&#39;ve worked with open source projects and are familiar with different styles of source control workflow and continuous integration. You have experience designing, leading and owning cross-functional initiatives.</p>
<p>At Elastic, we strive to have parity of benefits across regions, and while regulations differ from place to place, we believe taking care of our people is the right thing to do.</p>
<ul>
<li>Competitive pay based on the work you do here and not your previous salary</li>
<li>Health coverage for you and your family in many locations</li>
<li>Ability to craft your calendar with flexible locations and schedules for many roles</li>
<li>Generous number of vacation days each year</li>
<li>Increased impact - we match up to $2000 (or local currency equivalent) for financial donations and service</li>
<li>Up to 40 hours each year to use toward volunteer projects you love</li>
<li>Embracing parenthood with a minimum of 16 weeks of parental leave</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>search and vector databases, HNSW, IVF, or other relevant algorithms and libraries, core Java, standard library of data structures and concurrency constructs, lambdas</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Elastic, the Search AI Company</Employername>
      <Employerlogo>https://logos.yubhub.co/elastic.co.png</Employerlogo>
      <Employerdescription>Elastic enables everyone to find the answers they need in real time, using all their data, at scale. The Elastic Search AI Platform is used by more than 50% of the Fortune 500.</Employerdescription>
      <Employerwebsite>https://www.elastic.co/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/elastic/jobs/7544081</Applyto>
      <Location>Spain</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>695657b2-bfc</externalid>
      <Title>Senior Software Engineer, Data Acquisition</Title>
      <Description><![CDATA[<p>We are seeking a senior engineer to join our Data Acquisition (DA) team. Engineers at Zus have the opportunity to collaborate with our founding product and engineering leaders to bring our vision to the nation’s healthcare entrepreneurs.</p>
<p>The engineer joining this team will help build tools that interact with external health data networks to collect information about our patients and load it into the Zus data stores at high volume, as well as services used by customers and internal stakeholders to request that data.</p>
<p>You will work on data pipelines that operate on large scale data using a variety of AWS services (Step Functions, Lambda, DynamoDB, S3, etc.). You will also work on RESTful services that are used both internally and externally. Go is our language of choice, although we also have some components written in NodeJS.</p>
<p>The team is responsible for deploying, maintaining, and operating its pipelines and services. Our Zus engineering teams are all US-based, and we hire only in the US.</p>
<p>In Data Acquisition, we work across a collection of US timezones and also collaborate with our development partners in Central European Time.</p>
<p>Zus supports both remote work and hybrid work in the Boston area with an office near South Station, and our teams are a mix of both styles of work.</p>
<p>We actively work to make sure all voices are heard and information is shared regardless of your work location.</p>
<p><strong>You&#39;re a good fit because you...</strong></p>
<ul>
<li>Are scrappy and you move fast</li>
<li>Have experience with operationally stable and cost efficient data pipelines</li>
<li>Enjoy owning your work and seeing it deploy safely in production</li>
<li>Have experience building backend software in any language (we use mostly Go with a bit of Node)</li>
<li>Have some experience with at least one of the following: deployment technologies (GitHub Actions, CodeDeploy, CircleCI), cloud providers (AWS, Azure, GCP), and Infrastructure as Code (Terraform, CloudFormation, Chef)</li>
<li>Are excited to ~ finally! ~ enable a true digital revolution in healthcare</li>
<li>Thrive amid the changing landscape of a growing and evolving startup</li>
<li>Enjoy collaboration and solving unique problems</li>
<li>Are comfortable working remotely (EST/CST preferred as that is where our team is located) and are willing to travel for in person collaboration occasionally</li>
</ul>
<p><strong>It would be awesome if you were...</strong></p>
<ul>
<li>Experienced in building and running large-scale systems in the cloud</li>
<li>Experienced in building services and APIs used by third-party developers</li>
<li>Knowledgeable about application security</li>
<li>Experienced in working with healthcare data and APIs</li>
<li>Familiar with the FHIR and/or TEFCA standards</li>
</ul>
<p><strong>Additional Information</strong></p>
<p>This role can be hybrid in Boston or mostly remote. We’re flexible, because we trust our people to do great work wherever they’re most productive. We’re proudly remote-first, but not strangers by any means. We get together a few times a year to build real rapport, align on strategy, and connect as people.</p>
<p>We believe strong culture is built on trust, transparency, and showing up online or in person. So yes, work from where you thrive… and plan on the occasional gathering where the strategy is sharp, the conversations are candid, and the snacks are usually excellent.</p>
<p>We will offer you…</p>
<ul>
<li>Competitive compensation that reflects the value you bring to the team: a combination of cash and equity</li>
<li>Robust benefits that include health insurance, wellness benefits, 401k with a match, unlimited PTO</li>
<li>Opportunity to work alongside a passionate team that is determined to help change the world (and have fun doing it)</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$150,000-$180,000 per year</Salaryrange>
      <Skills>Go, NodeJS, AWS services (Step Functions, Lambda, DynamoDB, S3, etc), RESTful services, deployment technologies (Github actions, CodeDeploy, CircleCI), cloud providers (AWS, Azure, GCP), Infrastructure as Code (Terraform, CloudFormation, Chef), building and running large-scale systems in the cloud, building services and APIs used by third-party developers, application security, working with healthcare data and APIs, FHIR and/or TEFCA standards</Skills>
      <Category>Engineering</Category>
      <Industry>Healthcare</Industry>
      <Employername>Zus</Employername>
      <Employerlogo>https://logos.yubhub.co/zus.com.png</Employerlogo>
      <Employerdescription>Zus is a shared health data platform designed to accelerate healthcare data interoperability by providing easy-to-use patient data via API, embedded components, and direct EHR integrations. Founded in 2021.</Employerdescription>
      <Employerwebsite>https://zus.com/</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>150000</Compensationmin>
      <Compensationmax>180000</Compensationmax>
      <Applyto>https://jobs.lever.co/zushealth/775b2ba8-80ee-4d7b-8bfb-0bab2b094793</Applyto>
      <Location>United States</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>608305cb-5a6</externalid>
      <Title>Software Engineer (SWE I / SWE II)</Title>
      <Description><![CDATA[<p>We are looking for a Software Engineer to join our Lab Systems team. As a Software Engineer, you will work closely with engineers, product partners, and laboratory scientists to build and evolve internal software systems that support the design, build, and testing of therapeutic antibodies at scale.</p>
<p>This role is designed for engineers with several years of experience building and supporting production software who are excited to grow their technical scope and domain impact. Depending on experience and demonstrated impact, this role may be leveled as Software Engineer I (SWE I) or Software Engineer II (SWE II).</p>
<p>Key Responsibilities:</p>
<ul>
<li>Design, implement, test, and maintain features in BigHat&#39;s internally developed, cloud-based LIMS+ platform.</li>
<li>Work independently on well-scoped features and improvements, following work through implementation, testing, and release.</li>
<li>Collaborate closely with cross-functional partners (scientists, product owners, and other engineers) to translate real-world lab workflows into reliable software.</li>
<li>Participate actively in engineering ceremonies, technical discussions, and code reviews.</li>
<li>Own the quality and outcomes of your work, including debugging, test failures, and production issues.</li>
</ul>
<p>This role reports to the Lab Systems Lead and works closely with the Lab Systems Product Owner, with responsibilities that impact teams across BigHat.</p>
<p>About You:</p>
<ul>
<li>You have experience contributing to and owning work in a production software environment.</li>
<li>You are comfortable working independently on small to medium features and improvements.</li>
<li>You communicate clearly about progress, risks, and tradeoffs, and collaborate effectively with peers and partners.</li>
<li>You take ownership of your work and follow issues through to resolution.</li>
<li>You are curious and motivated by building software that supports real users doing complex work.</li>
</ul>
<p>Experience:</p>
<ul>
<li>3–5 years of professional software engineering experience building production systems OR 2+ years of professional software engineering experience with prior experience in biotech, life sciences, laboratory environments, or scientific software, where domain knowledge meaningfully accelerates impact.</li>
</ul>
<p>Relevant Tech / Skills:</p>
<ul>
<li>Experience with some (not necessarily all) of the following: TypeScript, React, Material-UI, Vega, Python 3, SQLAlchemy, RESTful API design, AWS (CDK, Lambda, Step Functions, ECS/Batch, Fargate, API Gateway, Athena), relational databases (e.g., PostgreSQL), Pandas, PyTorch or other ML frameworks (nice to have, not required).</li>
</ul>
<p>Benefits:</p>
<ul>
<li>The salary estimated for this position is $135,000 - $175,000 + bonus + options + benefits. Compensation will vary depending on job-related knowledge, skills, and experience. Actual compensation will be confirmed in writing at the time of the offer.</li>
<li>Range of health insurance plan options through Anthem and Kaiser (monthly credit if benefit waived)</li>
<li>Dental and vision coverage through Guardian</li>
<li>Additional well-being benefits through Nayya, OneMedical, Wagmo, Rula, and more</li>
<li>401(k) with company match</li>
<li>DTO, two weeks of company-wide shutdown, and 12 company holidays</li>
<li>Paid parental leave</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$135,000 - $175,000 + bonus + options + benefits</Salaryrange>
      <Skills>TypeScript, React, Material-UI, Vega, Python 3, SQLAlchemy, RESTful API design, AWS (CDK, Lambda, Step Functions, ECS/Batch, Fargate, API Gateway, Athena), relational databases (e.g., PostgreSQL), Pandas, PyTorch or other ML frameworks</Skills>
      <Category>Engineering</Category>
      <Industry>Biotechnology</Industry>
      <Employername>BigHat Biosciences</Employername>
      <Employerlogo>https://logos.yubhub.co/bighat.bio.png</Employerlogo>
      <Employerdescription>BigHat Biosciences is a biotechnology company that develops and manufactures therapeutic antibodies.</Employerdescription>
      <Employerwebsite>https://bighat.bio/</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>135000</Compensationmin>
      <Compensationmax>175000</Compensationmax>
      <Applyto>https://bighatbiosciences.pinpointhq.com/en/postings/9c33a0d3-782d-4e9e-9b3c-6609cb47f704</Applyto>
      <Location>San Mateo, CA</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>b2fcfe0b-0dd</externalid>
      <Title>FBS AWS Data Engineer</Title>
      <Description><![CDATA[<p>FBS – Farmer Business Services is part of Farmers operations with the purpose of building a global approach to identifying, recruiting, hiring, and retaining top talent. We believe that the foundation of every successful business lies in having the right people with the right skills. This position works on data projects of intermediate complexity to lead in the design, development, and implementation of data products.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Prep and cleanse data to optimize for downstream reporting via Farmers standard visualization or AI/ML tools, with coaching and feedback</li>
<li>Translate business data stories into a technical story breakdown structure and work estimates for a schedule or planned agile sprint</li>
<li>Develop and maintain moderately complex, scalable data pipelines for both streaming and batch requirements, and build out new API integrations to support increased demands of data volume and complexity</li>
<li>Produce data building blocks, data models, and data flows for varying client requests such as dimensional data, standard and ad hoc reporting, data feeds, dashboard reporting, and data science research and exploration</li>
<li>Create business user access methods to structured and unstructured data, using techniques such as mapping data to a common data model, natural language processing, transforming data as necessary to satisfy business rules, AI, statistical computations, and validation</li>
<li>Acquire, curate, and publish data both on premises and in the cloud for analytical or operational uses, for basic to moderate scenarios</li>
<li>Ensure the data is in a ready-to-use form that creates a single version of the truth across all data consumers, including business/technology users, reporting and visualization specialists, and data scientists, with coaching and support</li>
<li>Translate business analytic requests and requirements into design, development, testing, deployment, and production maintenance tasks</li>
<li>Work with a range of technologies spanning big data, relational and non-relational databases, cloud environments, multiple programming languages, and various reporting tools; familiarity with some is expected, with training available for others</li>
</ul>
<p>Requirements:</p>
<ul>
<li>4-6 years of experience in a similar role as a Data Engineer with AWS tools</li>
<li>BS in Computer Science or similar</li>
<li>Full English fluency</li>
<li>Experience in insurance within the finance area (a plus)</li>
</ul>
<p>Technical Experience:</p>
<ul>
<li>Python and SQL - Intermediate (must)</li>
<li>AWS tools such as AWS Glue, S3, AWS Lambda, Iceberg, and Lake Formation (must)</li>
<li>Snowflake - Intermediate (4-6 years) (must)</li>
<li>DBT - Entry level (1-3 years) (must)</li>
<li>AWS Cloud Data - Intermediate (4-6 years) (must)</li>
<li>MSSQL - Entry level (1-3 years) (desirable)</li>
<li>Communications - Intermediate</li>
<li>Office Suite - Intermediate</li>
<li>Rally or similar - Entry level</li>
<li>Agile - Entry-level knowledge</li>
</ul>
<p>Benefits: This position comes with a competitive compensation and benefits package.</p>
<ul>
<li>A competitive salary and performance-based bonuses</li>
<li>Comprehensive benefits package</li>
<li>Flexible work arrangements (remote and/or office-based)</li>
<li>A dynamic and inclusive work culture within a globally renowned group</li>
<li>Private health insurance</li>
<li>Paid time off</li>
<li>Training &amp; development opportunities in partnership with renowned companies</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, SQL, AWS Glue, S3, AWS Lambda, Iceberg, Lake Formation, Snowflake, DBT, AWS Cloud Data, MSSQL, Communications, Office Suite, Rally, Agile</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Capgemini</Employername>
      <Employerlogo>https://logos.yubhub.co/view.com.png</Employerlogo>
      <Employerdescription>Capgemini is a global technology consulting and professional services company with nearly 350,000 employees across more than 50 countries.</Employerdescription>
      <Employerwebsite>https://www.capgemini.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.workable.com/view/nog4LBbHddk4ZFvf6Bfqdh/remote-fbs-aws-data-engineer-in-brazil-at-capgemini</Applyto>
      <Location>Brazil</Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
    <job>
      <externalid>c06ee3af-d25</externalid>
      <Title>Software Engineer II- Full Stack</Title>
      <Description><![CDATA[<p>Electronic Arts creates next-level entertainment experiences that inspire players and fans around the world. As a Software Engineer II, you will be part of a product team focused on managing a highly available test-orchestration platform-as-a-service for EA game titles and internal product teams.</p>
<p>This platform enables the execution of large-scale performance and load tests, helping ensure products and game titles are stable, scalable, and launch-ready.</p>
<p><strong>Responsibilities:</strong></p>
<ul>
<li>Collaborate with architects, senior engineers, and product stakeholders to design and deliver distributed, scalable, and secure platform solutions that enhance player experience.</li>
<li>Build responsive frontend interfaces using React and develop backend services and APIs using Python and Java.</li>
<li>Contribute across the full product lifecycle — requirements gathering, design, implementation, testing, deployment, and production support.</li>
<li>Write clean, maintainable, and well-tested code following engineering best practices, and participate in peer code reviews.</li>
<li>Improve platform reliability, scalability, and maintainability by resolving production issues, reducing technical debt, and optimizing system performance.</li>
<li>Troubleshoot live incidents, identify root causes, and implement fixes to maintain high service reliability.</li>
<li>Collaborate with cross-functional teams and internal product users to gather feedback, extend platform capabilities, and support operational needs.</li>
<li>Support automation initiatives including CI/CD pipelines, testing frameworks, and developer tooling to improve team efficiency.</li>
<li>Contribute to observability through logging, metrics, and alerts, and maintain clear technical documentation for services, APIs, and operational procedures.</li>
<li>Leverage modern development tools, including AI-assisted engineering workflows, to enhance productivity and code quality.</li>
</ul>
<p><strong>Requirements:</strong></p>
<ul>
<li>Bachelor&#39;s or Master&#39;s degree in Computer Science, Computer Engineering, or a related field.</li>
<li>3–6 years of hands-on software engineering and full-stack development experience.</li>
<li>Proficient in multiple programming languages and frameworks, including Python, Java, ReactJS, TypeScript, NodeJS, HTML, CSS, DOM, Linux.</li>
<li>Strong understanding of end-to-end system design, distributed computing, and scalable platform architecture</li>
<li>Experience building and integrating REST APIs following best practices</li>
<li>Experience with cloud computing services such as AWS EC2, AMI, ECS, EKS, S3, VPC, DynamoDB, Lambda, ElastiCache, SQS, ECR, ALB, API Gateway and IAM.</li>
<li>Solid grasp of networking fundamentals (TCP/IP, DNS resolution, TLS/SSL, HTTP/HTTPS) and how internet communication works</li>
<li>Skilled in DevOps pipelines and CI/CD workflows, particularly using GitLab &amp; Jenkins.</li>
<li>Hands-on experience with containerization, orchestration, and infrastructure tools such as Docker, Kubernetes, and Terraform.</li>
<li>Proficient with SQL (MySQL) and NoSQL (MongoDB) databases.</li>
<li>Strong collaboration skills, with the ability to work effectively in cross-functional teams and adept at solving complex technical problems.</li>
<li>Excellent written and verbal communication, with a motivated, self-driven approach and the ability to operate autonomously.</li>
</ul>
<p><strong>Bonus Qualifications:</strong></p>
<ul>
<li>Familiarity with multiple cloud service offerings, such as GCP and Azure</li>
<li>Familiarity with load testing frameworks such as Gatling and K6</li>
<li>Familiarity with GoLang and ClickhouseDB</li>
<li>Familiarity with visualization &amp; monitoring tools such as Prometheus, Grafana, Loki, and Datadog</li>
</ul>
<p><strong>About Electronic Arts</strong></p>
<p>We&#39;re proud to have an extensive portfolio of games and experiences, locations around the world, and opportunities across EA. We value adaptability, resilience, creativity, and curiosity. From leadership that brings out your potential, to creating space for learning and experimenting, we empower you to do great work and pursue opportunities for growth.</p>
<p>We adopt a holistic approach to our benefits programs, emphasizing physical, emotional, financial, career, and community wellness to support a balanced life. Our packages are tailored to meet local needs and may include healthcare coverage, mental well-being support, retirement savings, paid time off, family leaves, complimentary games, and more. We nurture environments where our teams can always bring their best to what they do.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, Java, ReactJS, TypeScript, NodeJS, HTML, CSS, DOM, Linux, AWS EC2, AMI, ECS, EKS, S3, VPC, DynamoDB, Lambda, ElastiCache, SQS, ECR, ALB, API Gateway, IAM, SQL, NoSQL, DevOps, CI/CD, Docker, Kubernetes, Terraform, GCP, Azure, Gatling, K6, GoLang, ClickhouseDB, Prometheus, Grafana, Loki, Datadog</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Electronic Arts</Employername>
      <Employerlogo>https://logos.yubhub.co/jobs.ea.com.png</Employerlogo>
      <Employerdescription>Electronic Arts is a leading video game developer and publisher with a portfolio of over 300 million registered players. The company has a global presence with locations in multiple countries.</Employerdescription>
      <Employerwebsite>https://jobs.ea.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ea.com/en_US/careers/JobDetail/Software-Engineer-II-Full-Stack/212826</Applyto>
      <Location>Hyderabad</Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
    <job>
      <externalid>a5a3da11-044</externalid>
      <Title>Software Engineer - III</Title>
      <Description><![CDATA[<p>Electronic Arts is looking for a Software Engineer - III to join its team in Hyderabad, India. As a Software Engineer - III, you will work as a Lead Java developer, involved in developing scalable solutions for millions of players around the globe. You will apply the latest technologies to implement modern, sleek applications.</p>
<p>Responsibilities:</p>
<ul>
<li>Partner with stakeholders to develop scalable and efficient solutions that improve players&#39; experience</li>
<li>Develop high-volume, low-latency Java applications or backend APIs using Java, Spring Boot, and Microservices</li>
<li>Build frontend design and integrations with backend services</li>
<li>Work on cloud-native serverless solutions to achieve product capabilities</li>
<li>Lead the deliverables of a product line</li>
<li>Be responsible for code quality and efficiency, including unit tests</li>
<li>Collaborate with the best designers, engineers of different technical backgrounds, and architects</li>
</ul>
<p>Requirements:</p>
<ul>
<li>Bachelor&#39;s degree in Computer Science Engineering or equivalent, with 8+ years of overall experience as a Lead Full Stack Java engineer</li>
<li>8+ years of solid hands-on experience in Core Java, Spring, Spring Boot, and Microservices</li>
<li>Minimum 2+ years of experience working in frontend technologies like NextJS, React, or Angular and TypeScript/JavaScript along with advanced CSS technologies like Tailwind or Bootstrap</li>
<li>Excellent knowledge of design patterns and scalable architectures</li>
<li>Ability to understand requirements and create APIs from scratch using Spring Boot</li>
<li>Experience using cloud services in AWS like Lambda, S3, EC2, Step Functions, or similar cloud products</li>
<li>Good experience with SQL and NoSQL databases and their query languages</li>
<li>Good experience writing unit tests using JUnit to ensure production-ready code with minimal bugs</li>
<li>Understanding of containerization concepts with platforms like Docker and Kubernetes</li>
<li>Experience with Agile methodologies to iterate quickly on product changes, develop user stories, and work through backlogs</li>
<li>Experience mentoring developers and leading technical programs</li>
<li>Experience communicating updates and resolutions to customers and other partners clearly</li>
<li>Strong problem-solving abilities and judgment in technical decision-making</li>
</ul>
<p>What you will need to be successful:</p>
<ul>
<li>Bachelor&#39;s degree in Computer Science or equivalent</li>
<li>Over 8 years of hands-on Java development experience, including deep expertise in Spring Boot, AWS, Microservices</li>
<li>Willingness to learn from other experienced developers and architects</li>
<li>Have a good eye for clean design and best coding practices</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Java, Spring Boot, Microservices, NextJS, React, Angular, TypeScript, JavaScript, Tailwind, Bootstrap, AWS, Lambda, S3, EC2, Step Functions, SQL, NoSQL, Docker, Kubernetes, Agile methodologies</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Electronic Arts</Employername>
      <Employerlogo>https://logos.yubhub.co/jobs.ea.com.png</Employerlogo>
      <Employerdescription>Electronic Arts is a leading entertainment company that creates games and experiences for millions of players around the world.</Employerdescription>
      <Employerwebsite>https://jobs.ea.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ea.com/en_US/careers/JobDetail/Software-Engineer-III/212861</Applyto>
      <Location>Hyderabad, Telangana, India</Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
    <job>
      <externalid>ca74859f-839</externalid>
      <Title>Senior FullStack Engineer: Offsite Discovery</Title>
      <Description><![CDATA[<p><strong>About the Role</strong></p>
<p>We&#39;re looking for a Senior Fullstack Engineer to join our Recommendation Cross-Channel &amp; Offsite Discovery team. As a key member of our team, you will help us build our Customer Dashboard interface for customers to easily manage their marketing campaigns.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Implement New Features: Develop customer dashboard features using TypeScript and React. These features will interact with our backend services, which are built with Python and FastAPI.</li>
<li>Innovate and Strategize: Participate in brainstorming sessions to develop new features and tools that will shape the future of Offsite Discovery.</li>
<li>Collaborate on Functionality: Work with both technical and non-technical business partners to develop and update application functionalities.</li>
<li>Communicate with Stakeholders: Keep stakeholders, both inside and outside the team, informed about project progress and developments.</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>Strong foundation with client-side JavaScript, computer science background &amp; familiarity with networking principles.</li>
<li>Solid experience with TypeScript and frontend frameworks like React.</li>
<li>Experience building, maintaining, and debugging full-stack web applications.</li>
<li>Experience with Python and one of the backend frameworks like FastAPI, Flask or Django, or willingness to learn and work with this stack.</li>
<li>Good understanding of API design principles.</li>
<li>Familiarity with Service-Oriented Architecture (SOA).</li>
<li>Experience with relational databases (MySQL/PostgreSQL), distributed systems, and caching solutions.</li>
<li>Analytical skills and experience with SQL (ClickHouse, Athena) to gather insights from dashboard reports and solutions.</li>
<li>Experience with any of the major public cloud service providers: AWS, Azure, GCP.</li>
<li>Experience collaborating in cross-functional teams.</li>
<li>Excellent English communication skills.</li>
</ul>
<p><strong>Nice to Have</strong></p>
<ul>
<li>Familiarity with serverless design patterns, particularly with AWS Lambda.</li>
<li>Experience working in remote environments.</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Unlimited vacation time - we strongly encourage all of our employees to take at least 3 weeks per year.</li>
<li>Fully remote team - choose where you live.</li>
<li>Work from home stipend! We want you to have the resources you need to set up your home office.</li>
<li>Apple laptops provided for new employees.</li>
<li>Training and development budget for every employee, refreshed each year.</li>
<li>Maternity &amp; Paternity leave for qualified employees.</li>
<li>Work with smart people who will help you grow and make a meaningful impact.</li>
<li>This position has a base salary range between $80k and $120k USD.</li>
</ul>
<p><strong>Diversity, Equity, and Inclusion at Constructor</strong></p>
<p>At Constructor.io we are committed to cultivating a work environment that is diverse, equitable, and inclusive. As an equal opportunity employer, we welcome individuals of all backgrounds and provide equal opportunities to all applicants regardless of their education, diversity of opinion, race, color, religion, gender, gender expression, sexual orientation, national origin, genetics, disability, age, veteran status or affiliation in any other protected group.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$80k-$120k USD</Salaryrange>
      <Skills>client-side JavaScript, TypeScript, React, Python, FastAPI, API design principles, Service-Oriented Architecture (SOA), relational databases, distributed systems, caching solutions, SQL, ClickHouse, Athena, AWS, Azure, GCP, serverless design patterns, AWS Lambda, remote environments</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Constructor</Employername>
      <Employerlogo>https://logos.yubhub.co/j.com.png</Employerlogo>
      <Employerdescription>Constructor is a U.S. based company that has been in the market since 2019, building a search and discovery platform for e-commerce.</Employerdescription>
      <Employerwebsite>https://apply.workable.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://apply.workable.com/j/FD7F051B3C</Applyto>
      <Location>Portugal</Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
    <job>
      <externalid>70fe3dd2-f85</externalid>
      <Title>Senior Data Engineer</Title>
      <Description><![CDATA[<p><strong>About the Role</strong></p>
<p>We&#39;re hiring a Senior Data Engineer to work on our Data Infrastructure Team. This team is responsible for building and maintaining the Data Platform, a comprehensive set of tools and infrastructure used daily by every data scientist and ML engineer in our company.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Job scheduling and orchestration for data pipelines.</li>
<li>Deployment and management of BI tools.</li>
<li>Real-time analytics infrastructure (ClickHouse, AWS Lambda, Cube.js, and related tooling).</li>
<li>Real-time log ingestion and processing, including data compliance.</li>
<li>Core data services (e.g., Kubernetes, Ray, metadata services) and enterprise-wide observability solutions (based on ClickHouse and OpenTelemetry).</li>
</ul>
<p><strong>Requirements</strong></p>
<p>We are seeking an engineer with at least 4 years of experience who possesses strong programming skills (ideally in Python) and expertise in big data engineering, web services, and cloud platforms (ideally AWS). We are looking for someone eager to build diverse components and drive the evolution of our platform while working closely with our users. Excellent English communication skills and a robust computer science background are strong requirements.</p>
<p><strong>Benefits</strong></p>
<ul>
<li>Unlimited vacation time - we strongly encourage all of our employees to take at least 3 weeks per year</li>
<li>Fully remote team - choose where you live</li>
<li>Work from home stipend! We want you to have the resources you need to set up your home office</li>
<li>Apple laptops provided for new employees</li>
<li>Training and development budget for every employee, refreshed each year</li>
<li>Maternity &amp; Paternity leave for qualified employees</li>
<li>Work with smart people who will help you grow and make a meaningful impact</li>
<li>This position has a base salary range between $80k and $120k USD. The offer varies on many factors including job related knowledge, skills, experience, and interview results.</li>
<li>Regular team offsites to connect and collaborate</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$80k - $120k USD</Salaryrange>
      <Skills>Python, big data engineering, web services, cloud platforms (AWS), ClickHouse, AWS Lambda, Cube.js</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Constructor</Employername>
      <Employerlogo>https://logos.yubhub.co/j.com.png</Employerlogo>
      <Employerdescription>Constructor is a U.S. based company that has been in the market since 2019, built to optimize for metrics like revenue, conversion rate, and profit.</Employerdescription>
      <Employerwebsite>https://apply.workable.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://apply.workable.com/j/C6407C4CB5</Applyto>
      <Location></Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
    <job>
      <externalid>e9d432ac-fb7</externalid>
      <Title>Implementation Engineer</Title>
<Description><![CDATA[<p>We are seeking an Implementation Engineer to join our team in Pune. In this role, you will develop Infrastructure-as-Code and CI/CD pipelines for cloud deployments, build custom automation with Lambda or Azure Functions, and work side-by-side with customers to design, diagram, document, and deploy complex integrations. You will become an expert in Helpshift&#39;s administrative tools, including its suite of AI products, bots, and other mission-critical support functions, while mentoring junior engineers in your pod and partnering with Account Managers, Customer Success Managers, and Sales teams to ensure customers&#39; overall success with the product.</p>
<p>Responsibilities:</p>
<ul>
<li>Develop Infrastructure-as-Code using Terraform, CDK, or Pulumi</li>
<li>Develop CI/CD pipelines for Cloud Deployments</li>
<li>Develop custom automation using Lambda Functions or Azure functions</li>
<li>Manage code repositories and perform peer code reviews</li>
<li>Maintain code hygiene and write test cases for solutions</li>
<li>Mentor and train junior engineers in your pod</li>
<li>Work side-by-side with customers to design, diagram, and document complex integrations</li>
<li>Establish partnerships and strategic relationships with contacts at our biggest brands</li>
<li>Analyse and audit existing Helpshift implementations to make improvements</li>
<li>Become an expert at using Helpshift&#39;s administrative tools</li>
<li>Work collaboratively with Account Managers, Customer Success Managers, and Sales teams</li>
<li>Continually optimise the overall development process</li>
</ul>
<p>Requirements:</p>
<ul>
<li>3 years of SaaS experience in a specialization such as consulting services, technical pre-sales, or solution engineering</li>
<li>Proven experience translating ambiguous customer requirements into actionable technical solutions</li>
<li>Proficiency with Python, Go, C#, Node.js, or Powershell</li>
<li>Proficiency with deploying cloud solutions on AWS or Azure</li>
<li>Familiarity with technical SaaS concepts such as SDKs, APIs, and cloud computing</li>
<li>Exceptional organisational skills and a project management mindset</li>
<li>Understanding of Object-Oriented Programming concepts</li>
<li>Excellent communication skills and ability to lead meetings with customer executives and analysts</li>
<li>Proficiency in G-Suite and the ability to perform data analysis tasks</li>
<li>Curiosity about complex systems and natural problem-solving skills</li>
</ul>
<p>Benefits:</p>
<ul>
<li>Hybrid setup</li>
<li>Worker&#39;s insurance</li>
<li>Paid time off</li>
<li>Other employee benefits to be discussed by our Talent Acquisition team in Pune</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Infrastructure-as-Code, Terraform, CDK, Pulumi, CI/CD pipelines, Cloud Deployments, custom automation, Lambda Functions, Azure functions, code repositories, peer code reviews, code hygiene, test cases, mentoring, training, junior engineers, customer success, account management, sales, Python, Go, C#, Node.js, Powershell, AWS, Azure, SDKs, APIs, cloud computing, G-Suite, data analysis</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Helpshift</Employername>
      <Employerlogo>https://logos.yubhub.co/j.com.png</Employerlogo>
      <Employerdescription>Helpshift is a software company that provides customer support solutions. It has a global presence with a team operating across different time zones.</Employerdescription>
      <Employerwebsite>https://apply.workable.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://apply.workable.com/j/450F76EA64</Applyto>
      <Location>Pune</Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
  </jobs>
</source>