<?xml version="1.0" encoding="UTF-8"?>
<source>
  <jobs>
    <job>
      <externalid>0fe57d9a-28e</externalid>
      <Title>Engagement Manager</Title>
      <Description><![CDATA[<p>Job Title: Engagement Manager</p>
<p>We are seeking an experienced Engagement Manager to join our team in Tokyo. As an Engagement Manager, you will be responsible for driving customer success by ensuring that our customers get the most value from our products and services.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Collaborate with sales counterparts to understand customer needs and develop valued solutions</li>
<li>Identify opportunities for new services and articulate the business value</li>
<li>Perform as the Engagement Manager in the assigned area with full accountability for meeting/exceeding Professional Services and Training bookings and revenue targets</li>
<li>Consult with clients to understand and analyze engagement scope, requirements, time, cost, and benefits</li>
<li>Drive resolution of delivery challenges, address resource contentions, scoping issues, and manage expectations</li>
</ul>
<p>Requirements:</p>
<ul>
<li>Strong fundamental knowledge of Big Data platform implementation from the technology, operations, and security/governance perspectives</li>
<li>Proven experience selling services offerings in an implementation, advisory, education, or change management capacity</li>
<li>Experience in senior customer-facing roles that require a mix of influencing, validating, negotiating, understanding, and execution with both business and technology audiences</li>
<li>Consistent track record of identifying customer needs and successfully implementing solutions</li>
<li>Owning projects/programs in agile scrum/kanban delivery methodology as well as waterfall methodology</li>
<li>Strong problem-solving skills, addressing customers&#39; pain points with modern technologies</li>
<li>Excellence in presentation skills, providing proposals that enforce good project governance and drive scalable delivery practices to both internal and external executives</li>
<li>High-level orchestration skills to align both internal and external stakeholders when proposing large initiatives</li>
<li>Strong service delivery and program management skills with the ability to synthesize customer success outcomes into well-structured program plans that deliver against such outcomes</li>
</ul>
<p>Preferred Qualifications:</p>
<ul>
<li>Prior experience proposing projects/programs to customers at a consulting firm, systems integrator (SI), or software/cloud vendor</li>
<li>Bachelor&#39;s degree in Computer Science or related educational background</li>
</ul>
<p>Benefits:</p>
<ul>
<li>Comprehensive benefits and perks that meet the needs of all employees</li>
</ul>
<p>Commitment to Diversity and Inclusion:</p>
<ul>
<li>Databricks is committed to fostering a diverse and inclusive culture where everyone can excel</li>
</ul>
<p>Compliance:</p>
<ul>
<li>Access to export-controlled technology or source code is required for performance of job duties, and it is within Employer&#39;s discretion whether to apply for a U.S. government license for such positions</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Big Data Platforms Implementation, Customer Success, Project Management, Program Management, Agile Scrum/Kanban Delivery Methodology, Waterfall Methodology, Problem-Solving Skill, Presentation Skills, Project Governance, Service Delivery, Prior Experience in Project/Program Proposal to Customers at Consulting, SI, Software/Cloud Vendor, Bachelor&apos;s Degree in Computer Science or Related Educational Background</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a data intelligence platform to unify and democratize data, analytics, and AI. It was founded by the original creators of Apache Spark, Delta Lake, and MLflow, and pioneered the lakehouse architecture.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8501186002</Applyto>
      <Location>Tokyo, Japan</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>64cc7147-8b9</externalid>
      <Title>Sr. Manager, Technical Solutions</Title>
      <Description><![CDATA[<p>We are looking for a Senior Technical Solutions Manager to grow, lead, and manage the technical solutions engineering and support teams in India. The Senior Technical Solutions Manager is responsible for building and managing a regional team of technical experts focused on resolving highly complex, long-running support tickets raised by Databricks customers, while overseeing support operations.</p>
<p>Impact you will have:</p>
<ul>
<li>Build and manage a team of Technical Solution Engineers</li>
<li>Provide coaching and mentorship to the engineers</li>
<li>Identify and implement process improvements to meet or exceed regional performance KPIs.</li>
<li>Establish training plans and subject matter expertise within the team.</li>
<li>Drive support escalations and establish cross-functional collaboration to manage and resolve issues.</li>
<li>Be a player-coach and provide technical leadership to the regional support team.</li>
<li>Coordinate with Sales and field teams to address account-level concerns and drive adoption and usage of the Databricks platform.</li>
<li>Define quarterly goals and track them to completion to drive team growth and personal development.</li>
<li>Scale the organisation by developing processes and guidelines that promote operational efficiency</li>
<li>Demonstrate a true sense of ownership and coordinate action items with engineering and escalation teams to achieve timely resolution of customer issues.</li>
<li>Perform risk assessments and be a hands-on leader</li>
</ul>
<p>What we are looking for:</p>
<ul>
<li>Minimum of 15 years of experience in the tech industry</li>
<li>Experience in SaaS support, including building, testing, and maintaining SaaS services</li>
<li>6+ years of managerial experience, leading a team of at least six technical support engineers</li>
<li>Proven experience working with cloud native applications/SaaS (AWS, Azure, GCP), big data platforms, or Apache Spark™ in a technical capacity</li>
<li>Demonstrated experience in a customer-facing role managing a large regional team of technical support engineers.</li>
<li>Excellent analytical and troubleshooting skills.</li>
<li>Excellent customer-facing, verbal, and written communication skills</li>
<li>A team-oriented attitude and a high degree of comfort working in a startup environment</li>
<li>Hands-on experience in systems troubleshooting, networking, Linux fundamentals, JVM troubleshooting, and debugging Java applications is preferred</li>
</ul>
<p>Benefits</p>
<p>At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of our employees.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>SaaS Support, Cloud Native Applications, Big Data Platforms, Apache Spark, Java, Linux, Networking</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified and democratized data, analytics, and AI platform. It serves over 10,000 organisations worldwide.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8341135002</Applyto>
      <Location>Bengaluru, India</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>c2aaf7ac-804</externalid>
      <Title>Security Engineer - Threat Detection</Title>
      <Description><![CDATA[<p><strong>Job Description</strong></p>
<p>You will design, build, and maintain detections that identify malicious activity across Stripe&#39;s infrastructure, applications, and cloud environments.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Design, build, and tune high-fidelity detections across modern SIEM platforms, covering adversary TTPs across the full attack lifecycle</li>
<li>Develop detection hypotheses by researching TTPs, identifying evidence sources, and determining detection opportunities across available telemetry</li>
<li>Conduct hypothesis-driven threat hunts to identify malicious activity, uncover detection gaps, and validate security controls</li>
<li>Perform malware analysis and reverse engineering to extract indicators and inform detection strategies</li>
<li>Build network-based detections (flow, pcap, protocol analysis) and endpoint-based detections (event logs, EDR telemetry, memory/file artifacts) across Windows, Linux, and macOS</li>
<li>Partner with Threat Intelligence to operationalize intel reports into detections, hunting leads, and enrichment logic</li>
<li>Collaborate with IR, SOC, and offensive security teams to validate and refine detections based on real-world incidents and red team exercises</li>
<li>Build data pipelines, automation, and tooling that enable detection-as-code practices and scalable deployment</li>
<li>Map detection coverage to MITRE ATT&amp;CK, identifying and prioritizing gaps across key attack surfaces</li>
<li>Lead projects, mentor teammates, and champion quality standards within the team</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>5+ years of experience in detection engineering, threat hunting, or security operations</li>
<li>Demonstrated experience writing detection logic in modern SIEM platforms (e.g., Splunk, Chronicle, Elastic, CrowdStrike NG-SIEM, Panther, Microsoft Sentinel)</li>
<li>Strong understanding of adversary tradecraft across the attack lifecycle: initial access, privilege escalation, lateral movement, defense evasion, persistence, and exfiltration</li>
<li>Ability to extract TTPs from threat intelligence reports and translate them into detection opportunities</li>
<li>Experience developing network-based and endpoint-based detections across multiple OS platforms (Windows, Linux, macOS)</li>
<li>Experience analyzing telemetry across endpoint, network, cloud (AWS/GCP/Azure), identity, and application log sources</li>
<li>Proficiency in detection/query languages (SPL, KQL, EQL, YARA-L, SQL) and programming (Python or similar)</li>
<li>Strong communication skills with the ability to document detection logic and explain findings to technical and non-technical audiences</li>
<li>Adversarial mindset: understanding how attackers operate to build detections that catch real-world threats</li>
</ul>
<p><strong>Preferred Qualifications</strong></p>
<ul>
<li>Experience in detection engineering or threat hunting within fintech, financial services, or highly regulated environments</li>
<li>Background in malware analysis, reverse engineering, or threat research</li>
<li>Experience with purple team operations, collaborating with offensive security to validate detections</li>
<li>Familiarity with big data platforms (Databricks, Trino, PySpark) for large-scale log analysis</li>
<li>Proficiency with AI/LLM-assisted development tools (Claude Code, Cursor, GitHub Copilot) applied to detection workflows</li>
<li>Interest in agentic automation, using LLMs to augment hunting, tuning, or triage</li>
<li>Experience with detection validation tools (Atomic Red Team, ATT&amp;CK Evaluations)</li>
<li>Contributions to open-source detection content, research, or conference presentations</li>
<li>Relevant certifications such as HTB CDSA, GCIH, GCFA, GNFA, OSCP, TCM PMAT, or GREM</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>detection engineering, threat hunting, security operations, SIEM platforms, adversary tradecraft, network-based detections, endpoint-based detections, telemetry analysis, detection/query languages, programming, communication skills, fintech, financial services, malware analysis, reverse engineering, purple team operations, big data platforms, AI/LLM-assisted development tools, agentic automation, detection validation tools, open-source detection content, relevant certifications</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Stripe</Employername>
      <Employerlogo>https://logos.yubhub.co/stripe.com.png</Employerlogo>
      <Employerdescription>Stripe is a financial infrastructure platform for businesses, used by millions of companies worldwide.</Employerdescription>
      <Employerwebsite>https://stripe.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/stripe/jobs/7827230</Applyto>
      <Location>Ireland</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>bb7bb8e9-e31</externalid>
      <Title>Data Engineer - 12 Month TFT</Title>
      <Description><![CDATA[<p>We&#39;re looking for an experienced Data Engineer to join our team at Electronic Arts. As a Data Engineer, you will collaborate with the Marketing team to implement data strategies and develop complex ETL pipelines that support dashboards promoting a deeper understanding of our business.</p>
<p>You will have experience developing and establishing scalable, efficient, automated processes for large-scale data analyses. You will also stay informed of the latest trends and research on all aspects of data engineering and analytics.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Design, implement and maintain efficient, scalable and robust data pipelines using cloud-native and open-source technologies</li>
<li>Develop and optimize ETL/ELT processes to ingest, transform, and deliver data from diverse sources</li>
<li>Automate deployment and monitoring of data workflows using CI/CD best practices</li>
<li>Guide communications between our users and studio engineers to provide scalable end-to-end solutions</li>
<li>Promote strategies to improve our data modelling, quality and architecture</li>
<li>Participate in code reviews, mentor junior engineers, and contribute to team knowledge sharing</li>
</ul>
<p>Required Qualifications:</p>
<ul>
<li>4+ years of relevant industry experience in a data engineering role and a graduate degree in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field</li>
<li>Proficiency in writing SQL queries and knowledge of cloud-based databases like Snowflake, Redshift, BigQuery or other big data solutions</li>
<li>Experience in data modelling and tools such as dbt, ETL processes, and data warehousing</li>
<li>Experience with at least one programming language such as Python or Java</li>
<li>Experience with version control and code review tools such as Git</li>
<li>Knowledge of modern data pipeline orchestration tools such as Airflow</li>
<li>Experience with cloud platforms (AWS, GCP, or Azure) and infrastructure-as-code tools (e.g., Docker, Terraform, CloudFormation)</li>
</ul>
<p>Nice to Have:</p>
<ul>
<li>Experience in gaming and working with its telemetry data or data from similar sources</li>
<li>Experience with big data platforms and technologies such as EMR, Databricks, Kafka, Spark, Iceberg</li>
<li>Experience in developing engineering solutions based on near real-time/streaming dataset</li>
<li>Exposure to AI/ML, MLOps concepts and collaboration with data science or AI teams.</li>
</ul>
<p>Pay Transparency - North America</p>
<p>The ranges listed below are what EA in good faith expects to pay applicants for this role in these locations at the time of this posting. If you reside in a different location, a recruiter will advise on the applicable range and benefits. Pay offered will be determined based on a number of relevant business and candidate factors (e.g. education, qualifications, certifications, experience, skills, geographic location, or business needs).</p>
<p>Pay Ranges: $100,000 - $139,500 CAD</p>
]]></Description>
      <Jobtype>temporary</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$100,000 - $139,500 CAD</Salaryrange>
      <Skills>SQL, cloud-based databases, data modelling, ETL processes, data warehousing, Python, Java, Git, Airflow, cloud platforms, infrastructure-as-code tools, gaming telemetry data, big data platforms, EMR, Databricks, Kafka, Spark, Iceberg, AI/ML, MLOps</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Electronic Arts</Employername>
      <Employerlogo>https://logos.yubhub.co/jobs.ea.com.png</Employerlogo>
      <Employerdescription>Electronic Arts is a leading video game developer and publisher with a portfolio of popular games and experiences.</Employerdescription>
      <Employerwebsite>https://jobs.ea.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ea.com/en_US/careers/JobDetail/Data-Engineer-12-month-TFT/212451</Applyto>
      <Location>Vancouver</Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
    <job>
      <externalid>03233669-78e</externalid>
      <Title>DevOps Engineer II</Title>
      <Description><![CDATA[<p>Job Title: DevOps Engineer II</p>
<p>You will be part of the DevOps team at Helpshift, responsible for creating, maintaining, scaling, and securing infrastructure used by many teams for critical workloads. The team works in various areas, including production and development infrastructure provisioning and maintenance, database infrastructure, automations, core infrastructure, security and compliance, and engineering processes.</p>
<p>Responsibilities:</p>
<ul>
<li>Design, implement, and maintain secure CI/CD pipelines for automating deployment, configuration, and testing processes.</li>
<li>Own Helpshift production services and ensure complete monitoring coverage, troubleshoot, and fix production issues.</li>
<li>Build a seamless zero-downtime process to upgrade our core infrastructure (ScyllaDB, Elasticsearch, Kafka, MongoDB, Redis).</li>
<li>Collaborate with development and operations teams to integrate security practices into the software development lifecycle.</li>
<li>Conduct regular security assessments, vulnerability scans, and penetration testing to identify and mitigate security risks.</li>
<li>Develop and maintain infrastructure as code (IaC) templates for provisioning and configuring cloud resources securely.</li>
<li>Monitor and respond to production incidents, including investigation, containment, and remediation activities.</li>
<li>Stay up-to-date with the latest security threats, vulnerabilities, and best practices, and make recommendations for continuous improvement.</li>
</ul>
<p>Requirements:</p>
<ul>
<li>5+ years of relevant experience.</li>
<li>In-depth knowledge of running/managing UNIX-like operating systems (we use Ubuntu).</li>
<li>Strong knowledge of networking protocols, security architectures, and identity and access management (IAM) principles.</li>
<li>Experience with containerisation technologies (e.g., Docker, Kubernetes) and securing containerised environments.</li>
<li>Experience in designing and building solutions that are highly scalable, fault-tolerant, and cost-effective.</li>
<li>Experience with various FOSS tools for monitoring, graphing, capacity planning, and logging.</li>
<li>Experience with IaC tools like Ansible, Puppet, and Terraform.</li>
<li>Experience with cloud computing platforms like Amazon AWS, Google Cloud Platform, and Heroku.</li>
<li>Experience with managing NoSQL and RDBMS.</li>
<li>Experience with queuing systems (Kafka, RabbitMQ) and Big data platforms (Hadoop).</li>
<li>Good programming skills with a focus on scripting (Python, Shell, Perl).</li>
<li>Ability to analyse bottlenecks in architecture and quickly debug to reach resolution for issues.</li>
<li>Have an automation mindset and the ability to reason and work with complex systems.</li>
<li>Excellent communication and documentation skills.</li>
<li>Quick learner and good mentor for junior team members.</li>
</ul>
<p>Benefits:</p>
<ul>
<li>Hybrid setup</li>
<li>Worker&#39;s insurance</li>
<li>Paid Time Offs</li>
<li>Other employee benefits to be discussed by our Talent Acquisition team in India.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>UNIX-like operating systems, networking protocols, security architectures, identity and access management, containerisation technologies, IaaC tools, cloud computing platforms, NoSQL and RDBMS, queuing systems, Big data platforms, scripting languages</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Helpshift</Employername>
      <Employerlogo>https://logos.yubhub.co/j.com.png</Employerlogo>
      <Employerdescription>Helpshift is a software company that provides customer service and support solutions. It has a centralised DevOps team.</Employerdescription>
      <Employerwebsite>https://apply.workable.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://apply.workable.com/j/8CC248A4B7</Applyto>
      <Location>Pune</Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
    <job>
      <externalid>3f7b2fe6-074</externalid>
      <Title>DevOps Engineer - II</Title>
      <Description><![CDATA[<p>Job Title: DevOps Engineer - II</p>
<p>We are seeking an experienced DevOps Engineer to join our team. As a DevOps Engineer, you will play a pivotal role in ensuring the security, scalability, and reliability of our infrastructure and applications.</p>
<p>Responsibilities:</p>
<ul>
<li>Design, implement, and maintain secure CI/CD pipelines for automating deployment, configuration, and testing processes.</li>
<li>Own Helpshift production services and ensure complete monitoring coverage, troubleshoot and fix production issues.</li>
<li>Build a seamless zero-downtime process to upgrade our core infrastructure (ScyllaDB, Elasticsearch, Kafka, MongoDB, Redis).</li>
<li>Collaborate with development and operations teams to integrate security practices into the software development lifecycle.</li>
<li>Conduct regular security assessments, vulnerability scans, and penetration testing to identify and mitigate security risks.</li>
<li>Develop and maintain infrastructure as code (IaC) templates for provisioning and configuring cloud resources securely.</li>
<li>Monitor and respond to production incidents, including investigation, containment, and remediation activities.</li>
<li>Stay up-to-date with the latest security threats, vulnerabilities, and best practices, and make recommendations for continuous improvement.</li>
</ul>
<p>Requirements:</p>
<ul>
<li>5+ years of relevant experience.</li>
<li>In-depth knowledge of running/managing UNIX-like operating systems (we use Ubuntu).</li>
<li>Strong knowledge of networking protocols, security architectures, and identity and access management (IAM) principles.</li>
<li>Experience with containerisation technologies (e.g., Docker, Kubernetes) and securing containerised environments.</li>
<li>Experience designing and building solutions that are highly scalable, fault-tolerant, and cost-effective.</li>
<li>Experience with various FOSS tools for monitoring, graphing, capacity planning, and logging.</li>
<li>Experience with IaC tools like Ansible, Puppet, and Terraform.</li>
<li>Experience with cloud computing platforms like Amazon AWS, Google Cloud Platform, and Heroku.</li>
<li>Experience with managing NoSQL and RDBMS.</li>
<li>Experience with queuing systems (Kafka, RabbitMQ) and Big data platforms (Hadoop).</li>
<li>Good programming skills with focus on scripting (Python, Shell, Perl).</li>
<li>Ability to analyse bottlenecks in architecture and quickly debug to reach resolution for issues.</li>
<li>Have an automation mindset and ability to reason and work with complex systems.</li>
<li>Excellent communication and documentation skills.</li>
<li>Quick learner and good mentor for junior team members.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>UNIX-like operating systems, Networking protocols, Security architectures, Identity and access management, Containerisation technologies, Infrastructure as code, Cloud Computing platforms, NoSQL and RDBMS, Queuing systems, Big data platforms, Scripting languages</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Helpshift</Employername>
      <Employerlogo>https://logos.yubhub.co/j.com.png</Employerlogo>
      <Employerdescription>Helpshift is a software company that provides customer service and support solutions. It has a global presence with a large customer base.</Employerdescription>
      <Employerwebsite>https://apply.workable.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://apply.workable.com/j/FC0D5C3653</Applyto>
      <Location>Pune</Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
    <job>
      <externalid>c5a79e86-69f</externalid>
      <Title>Principal Software Engineer - AI Ads</Title>
      <Description><![CDATA[<p><strong>Summary</strong></p>
<p>Microsoft AI is looking for a talented Principal Software Engineer - AI Ads to shape the future of online advertising in Mountain View, CA or Redmond, WA. You&#39;ll lead the design and development of large-scale shopping ads infrastructure that powers billions of products worldwide.</p>
<p><strong>About the Role</strong></p>
<p>As a Principal Software Engineer - AI Ads, you will be responsible for leading the design, development, and optimization of large-scale shopping ads infrastructure and algorithms; your core accountabilities are detailed below.</p>
<p><strong>Accountabilities</strong></p>
<ul>
<li>Lead the design, development, and optimization of large-scale shopping ads infrastructure and algorithms.</li>
<li>Build and maintain the universal product graph spanning billions of products across multiple languages.</li>
<li>Develop scalable systems for data ingestion, storage, retrieval, and real-time serving at global scale.</li>
<li>Apply machine learning (ML), natural language processing (NLP), and deep learning (DL) models to improve ad relevance, personalization, and selection.</li>
</ul>
<p><strong>The Candidate we&#39;re looking for</strong></p>
<p><strong>Experience:</strong></p>
<ul>
<li>6+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python.</li>
</ul>
<p><strong>Technical skills:</strong></p>
<ul>
<li>Deep learning frameworks (e.g., PyTorch, TensorFlow), LLMs/SLMs, and AI Agents.</li>
<li>Cloud services, large-scale big data platforms, and streaming/real-time frameworks (e.g., Kafka, Flink, Spark Streaming), and AI infrastructure development.</li>
</ul>
<p><strong>Personal attributes:</strong></p>
<ul>
<li>Ability to meet Microsoft, customer and/or government security screening requirements.</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Competitive salary and benefits package.</li>
<li>Opportunity to work on cutting-edge AI innovation at massive scale.</li>
<li>Collaborative and dynamic work environment.</li>
<li>Professional development opportunities.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>Competitive salary and benefits package</Salaryrange>
      <Skills>C, C++, C#, Java, JavaScript, Python, PyTorch, TensorFlow, LLMs/SLMs, AI Agents, Kafka, Flink, Spark Streaming, Cloud services, large-scale big data platforms, streaming/real-time frameworks</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Microsoft AI</Employername>
      <Employerlogo>https://logos.yubhub.co/microsoft.ai.png</Employerlogo>
      <Employerdescription>Microsoft AI is a leading technology company that specializes in artificial intelligence, machine learning, and data analytics. They are known for their innovative solutions and commitment to empowering every person and organization on the planet to achieve more.</Employerdescription>
      <Employerwebsite>https://microsoft.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://microsoft.ai/job/principal-software-engineer-ai-ads/</Applyto>
      <Location>Redmond</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
  </jobs>
</source>