<?xml version="1.0" encoding="UTF-8"?>
<source>
  <jobs>
    <job>
      <externalid>0e245c82-f41</externalid>
      <Title>Information Security Director</Title>
      <Description><![CDATA[<p>We&#39;re looking for an ambitious information security or cyber specialist to join our team as an Information Security Director. As a key member of our Office of the CISO, you will lead the continuous improvement of our Information Security capabilities and governance. You will be responsible for managing and maintaining the Information Security Policy Framework across Starling Group, establishing and maintaining standards for control implementation procedures, and liaising with external bodies and organisations to keep abreast of emerging trends, technologies, and legislation that have an impact on Information Security.</p>
<p>You will also support security controls assessment and accreditation (e.g. ISO/IEC 27001), manage input to the Information Security Risk Register and ensure coherence with the Bank&#39;s Risk Management framework, act as the point of contact for the second and third lines of defence and other stakeholders (e.g. Legal, Regulatory Affairs), and coordinate audit and information-request responses.</p>
<p>In addition, you will manage the creation of Board, committee, and regulatory engagement meeting materials and communications, establish a framework for and manage the Information Security reporting capability, including regular revision of Key Risk and Performance Indicators, and support the CISO in the yearly Information Security strategy review and roadmap definition.</p>
<p>You will also manage the Information Security-related budget. You will have previous experience working in a complex IT organisation encompassing service delivery, application development, and IT infrastructure.</p>
<p>You will have an understanding of best practice within Information Security and risk management, including standards such as ISO/IEC 27001, NIST, Cyber Essentials, and COBIT, and an understanding of the legislation and regulations that impact Information Security, e.g. the Data Protection Act, GDPR, DORA, the Freedom of Information Act, and PCI DSS.</p>
<p>You will also have previous experience in leading, developing and motivating a team, and an understanding of current and emerging threats and countermeasures and the organisational challenges to addressing these threats.</p>
<p>You will have a good knowledge of security technologies and wider business solutions including Identity and access management, security monitoring, and data security technologies, and a good understanding of financial services and awareness of broader requirements.</p>
<p>You will share knowledge and provide guidance on the bank&#39;s internal first-line processes, and take responsibility and do the right thing for customers, colleagues, and partners.</p>
<p>It would be great if you have one or more of the following qualifications, but it&#39;s not essential: Certified Information Security Manager (CISM), Certified Information Systems Security Professional (CISSP), or Certified Information Systems Auditor (CISA).</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Information Security, Risk Management, Compliance, Cybersecurity, Cloud Security, Identity and Access Management, Security Monitoring, Data Security</Skills>
      <Category>IT</Category>
      <Industry>Finance</Industry>
      <Employername>Starling</Employername>
      <Employerlogo>https://logos.yubhub.co/starlingbank.com.png</Employerlogo>
      <Employerdescription>Starling is a fully licensed UK bank with over 3,000 employees across four offices in London, Southampton, Cardiff, and Manchester.</Employerdescription>
      <Employerwebsite>https://www.starlingbank.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://apply.workable.com/j/174B958FED</Applyto>
      <Location>London</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>5b50c2ef-97b</externalid>
      <Title>GenAI Engineer</Title>
      <Description><![CDATA[<p>Do you want to boost your career and collaborate with expert, talented colleagues to solve and deliver against our clients&#39; most important challenges? We are growing and are looking for people to join our team. You&#39;ll be part of an entrepreneurial, high-growth environment of 300,000 employees. Our dynamic organization allows you to work across functional business pillars, contributing your ideas, experiences, diverse thinking, and a strong mindset.</p>
<p>We are looking for technically hands-on professionals to design and deliver client-centric intelligent systems and support business growth through strategic pre-sales and solutioning initiatives. As part of our growing Enterprise AI consulting practice, we deliver real-world business value through the convergence of AI agents, machine learning, and modern enterprise architecture.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Design and implement AI agents using LangChain, CrewAI, or AutoGen for enterprise-grade use cases.</li>
<li>Develop modular code for task decomposition, memory handling, and tool integration with APIs.</li>
<li>Collaborate with AI strategists and architects on project design and MVP delivery.</li>
<li>Conduct testing and fine-tuning of agent behavior using LLM APIs and embeddings.</li>
<li>Participate in internal code reviews, documentation, and reusable framework building.</li>
<li>Support pre-sales demos and client innovation sessions with hands-on prototypes.</li>
</ul>
<p>Requirements:</p>
<ul>
<li>Bachelor&#39;s or Master&#39;s degree in Computer Science, AI, or a related field; PhD preferred for architect-level roles.</li>
<li>3+ years in AI/ML, with recent experience in agentic systems and hands-on development of LLM-based applications.</li>
<li>Strong experience with Python and orchestration libraries such as LangChain, LlamaIndex, Semantic Kernel, AutoGen, or similar.</li>
<li>Deep knowledge of LLMs (GPT, Claude, LLaMA, Mistral, etc.), prompt engineering, agent memory, tool calling, and autonomous task execution.</li>
<li>Experience with RFP/RFI support and proposal creation in a consulting or enterprise services environment.</li>
<li>Understanding of enterprise solutioning with cloud platforms (AWS, Azure, GCP), API integration, and data security best practices.</li>
<li>Exceptional communication and consulting skills, with the ability to present solutions to both technical and non-technical stakeholders.</li>
</ul>
<p>Preferred Skills:</p>
<ul>
<li>Strong proficiency in Python and Java, plus related technologies such as Spring, Maven, and JSON, and solid object-oriented programming skills.</li>
<li>Hands-on exposure to cognitive architectures, planning-based agents, or reinforcement learning in real-world deployments.</li>
<li>Experience integrating AI agents into enterprise apps like Salesforce, ServiceNow, SAP, or custom apps via APIs.</li>
<li>Understanding of AI observability, performance monitoring, and ethical guidelines in GenAI systems.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, LangChain, CrewAI, AutoGen, LLM APIs, embeddings, API integration, data security best practices, RFP/RFI support, proposal creation, Java, Spring, Maven, JSON, Object Oriented Programming, cognitive architectures, planning-based agents, reinforcement learning, AI observability, performance monitoring, ethical guidelines</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Infosys Consulting - Europe</Employername>
      <Employerlogo>https://logos.yubhub.co/infosys.com.png</Employerlogo>
      <Employerdescription>Infosys Consulting is a globally renowned management consulting firm that delivers real-world business value through the convergence of AI agents, machine learning, and modern enterprise architecture.</Employerdescription>
      <Employerwebsite>https://www.infosys.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.workable.com/view/xpARkFa3XjTnbyVRmoHpEd/remote-genai-engineer-in-poland-at-infosys-consulting---europe</Applyto>
      <Location>Poland</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>8073098e-063</externalid>
      <Title>Agentic AI Architect</Title>
      <Description><![CDATA[<p>Do you want to boost your career and collaborate with expert, talented colleagues to solve and deliver against our clients&#39; most important challenges? We are growing and are looking for people to join our team. You&#39;ll be part of an entrepreneurial, high-growth environment of 300,000 employees. Our dynamic organization allows you to work across functional business pillars, contributing your ideas, experiences, diverse thinking, and a strong mindset. Are you ready?</p>
<p>Job Overview:</p>
<p>Infosys Consulting is at the forefront of applied AI innovation, delivering real-world business value through the convergence of AI agents, machine learning, and modern enterprise architecture. As part of our growing Enterprise AI consulting practice, we are looking for technically hands-on professionals to design and deliver client-centric intelligent systems and support business growth through strategic pre-sales and solutioning initiatives.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Design, develop, and deploy autonomous AI agent ecosystems using frameworks such as LangChain, AutoGen, CrewAI, and Semantic Kernel.</li>
<li>Architect LLM-powered workflows involving multi-agent collaboration, decision logic, memory management, and external tool integration.</li>
<li>Collaborate with consulting teams to align AI agent solutions with business goals and industry use cases across sectors (FSI, Retail, Manufacturing, etc.).</li>
<li>Participate in RFI/RFP responses, creating high-impact solution overviews, architectural diagrams, and effort/cost estimations.</li>
<li>Work closely with AI Strategists, Engagement Managers, and Domain SMEs to define solution blueprints, MVP scopes, and transformation roadmaps.</li>
<li>Engage in client workshops, demos, and innovation showcases to articulate the potential of Agentic AI and its enterprise applications.</li>
<li>Contribute to the development of reusable agent templates, accelerators, and reference architectures within Infosys&#39; AI frameworks.</li>
<li>Stay current with GenAI advancements, toolchains, and research (LLMs, embeddings, vector DBs, agent planning/reasoning).</li>
<li>Provide technical mentorship and hands-on support to junior consultants, helping shape internal capability development.</li>
<li>Collaborate with cross-functional teams on AI governance, responsible AI practices, and integration into enterprise environments.</li>
</ul>
<p>Requirements:</p>
<ul>
<li>Bachelor&#39;s or Master&#39;s degree in Computer Science, AI, or related field. PhD preferred for architect-level roles.</li>
<li>8+ years of experience in AI/ML, including 5+ years as a Solution Architect and 4+ years of hands-on development with LLMs and autonomous AI agents.</li>
<li>Strong experience with Python and orchestration libraries such as LangChain, LlamaIndex, Semantic Kernel, AutoGen, or similar.</li>
<li>Deep knowledge of LLMs (GPT, Claude, LLaMA, Mistral, etc.), prompt engineering, agent memory, tool calling, and autonomous task execution.</li>
<li>Experience with pre-sales, RFP/RFI support, and proposal creation in a consulting or enterprise services environment.</li>
<li>Understanding of enterprise solutioning with cloud platforms (AWS, Azure, GCP), API integration, and data security best practices.</li>
<li>Exceptional communication and consulting skills, with the ability to present solutions to both technical and non-technical stakeholders.</li>
</ul>
<p>Preferred Skills:</p>
<ul>
<li>Hands-on exposure to cognitive architectures, planning-based agents, or reinforcement learning in real-world deployments.</li>
<li>Experience integrating AI agents into enterprise apps like Salesforce, ServiceNow, SAP, or custom apps via APIs.</li>
<li>Understanding of AI observability, performance monitoring, and ethical guidelines in GenAI systems.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, LangChain, AutoGen, CrewAI, Semantic Kernel, LLMs, prompt engineering, agent memory, tool calling, autonomous task execution, pre-sales, RFP/RFI support, proposal creation, cloud platforms, API integration, data security best practices, cognitive architectures, planning-based agents, reinforcement learning, AI observability, performance monitoring, ethical guidelines</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Infosys Consulting - Europe</Employername>
      <Employerlogo>https://logos.yubhub.co/infosys.com.png</Employerlogo>
      <Employerdescription>Infosys Consulting is a globally renowned management consulting firm that delivers real-world business value through the convergence of AI agents, machine learning, and modern enterprise architecture.</Employerdescription>
      <Employerwebsite>https://www.infosys.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.workable.com/view/qRNKkoyRyMYbqe7zLDz6tb/remote-agentic-ai-architect-in-poland-at-infosys-consulting---europe</Applyto>
      <Location>Poland</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>da3b8459-19d</externalid>
      <Title>Workday Business Systems Analyst, HCM</Title>
      <Description><![CDATA[<p>We&#39;re seeking a talented and driven Workday Business Systems Analyst to join our People Services team. You&#39;ll own the technical configuration, testing, and operational excellence of our Workday HRIS - with a particular focus on Core HCM, Benefits, Compensation, and Security.</p>
<p>As a Workday Business Systems Analyst, you&#39;ll support Workday configuration and implementation, working closely with HR, Finance, and other stakeholders to translate business requirements into effective system solutions. You&#39;ll design, test, and deploy Workday modules, integrations, and custom reports to support core business processes.</p>
<p>Responsibilities:</p>
<ul>
<li>Support Workday configuration and implementation, working closely with HR, Finance, and other stakeholders to translate business requirements into effective system solutions</li>
<li>Design, test, and deploy Workday modules, integrations, and custom reports to support core business processes</li>
<li>Collaborate with cross-functional teams to gather requirements, document processes, and implement solutions that align with organisational goals</li>
<li>Develop and maintain detailed documentation for system configurations, business processes, and technical integrations</li>
<li>Create and maintain Workday reports and dashboards to provide actionable insights to stakeholders</li>
<li>Support HR and Finance teams with day-to-day system troubleshooting and enhancement requests</li>
<li>Lead change management initiatives to ensure successful adoption of new features and workflows</li>
<li>Manage system security, role-based permissions, and compliance requirements - treating HR data protection as a non-negotiable foundation</li>
<li>Stay current with Workday updates and best practices to continuously improve system capabilities</li>
<li>Drive the continuous improvement of Workday business processes through regular system audits and stakeholder feedback</li>
<li>Partner with the broader People Services team and other COEs (Benefits, Comp, Talent, Payroll) to identify opportunities for process automation and system optimisation</li>
</ul>
<p>You may be a good fit if you:</p>
<ul>
<li>Have 5+ years of experience as a Workday Business Systems Analyst, with demonstrated expertise in system configuration, testing, and implementation</li>
<li>Possess strong technical skills including understanding of Workday architecture, security models, and integration methodologies</li>
<li>Have experience with full lifecycle Workday implementations and/or significant module expansions</li>
<li>Have expertise in Workday HCM, Benefits, advanced Compensation, and Security modules</li>
<li>Are intellectually curious - you ask “why” before “how,” explore root causes rather than symptoms, and stay current on Workday and adjacent HR tech because you want to, not because you have to</li>
<li>Approach problems with creativity, comfortable looking past the standard or obvious solution to find an answer that actually fits the business</li>
<li>Treat HR data security and privacy as a non-negotiable foundation in everything you build and maintain</li>
<li>Demonstrate excellent analytical and problem-solving abilities, with a talent for translating complex business requirements into effective technical solutions</li>
<li>Excel at building relationships and communicating effectively with technical and non-technical stakeholders at all levels</li>
<li>Are detail-oriented with strong documentation skills and a commitment to process excellence</li>
<li>Have a track record of managing multiple priorities in a fast-paced environment</li>
<li>Are results-oriented, with a bias toward flexibility and impact</li>
<li>Have a collaborative mindset and willingness to pick up slack even if it goes outside your job description</li>
</ul>
<p>Strong candidates may also have:</p>
<ul>
<li>Workday certifications in HCM, Benefits, Compensation, Payroll, Absence, Time tracking, Talent, Performance and/or Learning</li>
<li>Experience with calculated fields and business process configurations</li>
<li>Understanding of Workday integrations: familiarity with Workday integration tools (EIB, Core Connectors, Workday Studio, Web Services / RaaS), inbound vs. outbound patterns, authentication (SSO, SFTP, API keys / certs), typical HR integration partners (benefits providers, payroll, ATS, IdP, background check), and disciplined error handling, reconciliation, and monitoring</li>
<li>Knowledge of complementary HR/Finance systems and integration patterns</li>
<li>Experience supporting Workday in a high-growth technology company</li>
<li>Familiarity with project management methodologies and tools</li>
<li>Background in change management and user adoption strategies</li>
<li>Understanding of data privacy regulations and security best practices</li>
</ul>
<p>The annual compensation range for this role is $205,000-$265,000 USD.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$205,000-$265,000 USD</Salaryrange>
      <Skills>Workday, HCM, Benefits, Compensation, Security, system configuration, implementation, integrations, custom reporting, calculated fields, business process configuration, change management, user adoption, data security, data privacy regulations, analytical skills, problem-solving, communication, documentation, project management</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a company that creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>205000</Compensationmin>
      <Compensationmax>265000</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5194810008</Applyto>
      <Location>San Francisco, CA</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>1ccace61-279</externalid>
      <Title>Strategy and Operations, Forward Deployed Engineering (FDE)</Title>
      <Description><![CDATA[<p><strong>Compensation</strong></p>
<p>$216K – $240K • Offers Equity</p>
<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>
<ul>
<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>
<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>
<li>401(k) retirement plan with employer match</li>
<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>
<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>
<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>
<li>Mental health and wellness support</li>
<li>Employer-paid basic life and disability coverage</li>
<li>Annual learning and development stipend to fuel your professional growth</li>
<li>Daily meals in our offices, and meal delivery credits as eligible</li>
<li>Relocation support for eligible employees</li>
<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>
</ul>
<p><strong>About the team</strong></p>
<p>OpenAI’s Forward Deployed Engineering (FDE) team partners with customers to turn research breakthroughs into production-grade AI systems. FDE sits at the intersection of Product, Engineering, Research, and GTM. We embed deeply with users to solve high-leverage problems and surface patterns that shape our platform. We take frontier capabilities into the real world and translate customer signal into durable solutions, repeatable patterns, and product direction.</p>
<p><strong>About the role</strong></p>
<p>We’re hiring a Strategy and Ops Lead to build, run, and evolve the systems that enable the FDE team to execute at scale. This role sits at the core of the team and directly shapes how effectively we deploy frontier AI in the real world. You’ll turn fast-moving signals from the field and the business into clear operational plans, aligning project demand with FDE capacity, driving staffing decisions, and ensuring the portfolio scales predictably.</p>
<p>You will partner closely with Business, Product, and GTM stakeholders to improve how we prioritize, plan, and coordinate. Rather than leading a single program, you’ll run core operating rhythms for the team, such as portfolio reviews, execution tracking, and quarterly planning, ensuring leaders have clear visibility into risks and delivery progress as the organization scales. This is a senior IC role with broad ownership across the FDE operating model.</p>
<p><strong>In this role you will</strong></p>
<ul>
<li>Own FDE capacity planning, translating pipeline and active project demand into hiring forecasts.</li>
<li>Run the operating rhythm across portfolio reviews and quarterly planning, ensuring leaders have visibility into priorities, risk, dependencies, and the decisions needed to keep execution moving.</li>
<li>Determine how customer engagements should be staffed across FDE and partner channels, working with GTM and FDE leadership to make explicit tradeoff calls based on scope, strategic value, and capacity constraints.</li>
<li>Codify and evolve the FDE operating model so each subsequent deployment becomes easier to scope and deliver.</li>
<li>Identify and resolve emerging operational bottlenecks as FDE scales, implementing lightweight systems that improve execution without adding unnecessary overhead.</li>
</ul>
<p><strong>You might thrive in this role if you</strong></p>
<ul>
<li>Bring 6+ years in technical program management, engineering operations, business operations, or similar operator roles supporting technical teams in fast-paced, high-ambiguity environments.</li>
<li>Have built 0→1 operating mechanisms that scaled a technical team through rapid growth.</li>
<li>Bring alignment to conflicting priorities and resource tradeoffs, driving teams toward measurable outcomes at pace.</li>
<li>Break down ambiguous operational challenges into clear workstreams, anticipate risks early, and make sound decisions under pressure while balancing speed with long-term system health.</li>
<li>Communicate clearly across engineering, product, GTM, and executive audiences, simplifying complexity and translating tradeoffs into actionable decisions.</li>
<li>Influence senior leaders without formal authority, aligning teams with different incentives around clear, shared outcomes.</li>
</ul>
<p><strong>About OpenAI</strong></p>
<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.</p>
<p><strong>We are an equal opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or other applicable legally protected characteristic.</strong></p>
<p><strong>Background checks for applicants will be administered in accordance with applicable law, and qualified applicants with arrest or conviction records will be considered for employment consistent with those laws, including the San Francisco Fair Chance Ordinance, the Los Angeles County Fair Chance Ordinance for Employers, and the California Fair Chance Act, for US-based candidates. For unincorporated Los Angeles County workers: we reasonably believe that criminal history may have a direct, adverse and negative relationship with the following job duties, potentially resulting in the withdrawal of a conditional offer of employment: protect computer hardware entrusted to you from theft, loss or damage; return all computer hardware in your possession (including the data contained therein) upon termination of employment or end of assignment; and maintain the confidentiality of proprietary, confidential, and non-public information. In addition, job duties require access to secure and protected information technology systems and related data security obligations.</strong></p>
<p><strong>To notify OpenAI that you believe this job posting is non-compliant, please submit a report through <a href="https://form.asana.com/?d=57018692298241&amp;k=5MqR40fZd7jlxVUh5J-UeA">this form</a>. No response will be provided to inquiries unrelated to job posting compliance.</strong></p>
<p><strong>We are committed to providing reasonable accommodations to applicants with disabilities, and requests can be made via this <a href="https://form.asana.com/?k=bQ7w9h3iexRlicUdWRiwvg&amp;d=57018692298241">link</a>.</strong></p>
<p><strong><a href="https://cdn.openai.com/policies/global-employee-an">OpenAI Global Applicant Privacy Policy</a></strong></p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$216K – $240K • Offers Equity</Salaryrange>
      <Skills>technical program management, engineering operations, business operations, capacity planning, hiring forecasts, portfolio reviews, quarterly planning, execution tracking, staffing decisions, operating model design, risk management, stakeholder communication, cross-functional influence</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>OpenAI</Employername>
      <Employerlogo>https://logos.yubhub.co/openai.com.png</Employerlogo>
      <Employerdescription>OpenAI is an AI research and deployment company.</Employerdescription>
      <Employerwebsite>https://openai.com/</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>216000</Compensationmin>
      <Compensationmax>240000</Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/openai/976939e9-e072-4a24-abdb-84cf29a564c6</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>1c1e443b-ed5</externalid>
      <Title>Workday Business Systems Analyst, HCM</Title>
      <Description><![CDATA[<p>We&#39;re seeking a talented and driven Workday Business Systems Analyst to join our People Services team. You&#39;ll own the technical configuration, testing, and operational excellence of our Workday HRIS - with a particular focus on Core HCM, Benefits, Compensation, and Security.</p>
<p>We&#39;re looking for someone who is naturally curious and an exceptional problem solver - the kind of person who looks beyond the obvious or &#39;standard&#39; Workday answer, asks why before how, and gets energy from figuring out how systems should work, not just how they currently work. You&#39;ll partner closely with the broader People Services team, our COE partners (Benefits, Comp, Talent, Payroll), and People Analytics to make sure our HRIS is secure, scalable, and a true business enabler.</p>
<p>Responsibilities:</p>
<ul>
<li>Support Workday configuration and implementation, working closely with HR, Finance, and other stakeholders to translate business requirements into effective system solutions</li>
<li>Design, test, and deploy Workday modules, integrations, and custom reports to support core business processes</li>
<li>Collaborate with cross-functional teams to gather requirements, document processes, and implement solutions that align with organisational goals</li>
<li>Develop and maintain detailed documentation for system configurations, business processes, and technical integrations</li>
<li>Create and maintain Workday reports and dashboards to provide actionable insights to stakeholders</li>
<li>Support HR and Finance teams with day-to-day system troubleshooting and enhancement requests</li>
<li>Lead change management initiatives to ensure successful adoption of new features and workflows</li>
<li>Manage system security, role-based permissions, and compliance requirements - treating HR data protection as a non-negotiable foundation</li>
<li>Stay current with Workday updates and best practices to continuously improve system capabilities</li>
<li>Drive the continuous improvement of Workday business processes through regular system audits and stakeholder feedback</li>
<li>Partner with the broader People Services team and other COEs (Benefits, Comp, Talent, Payroll) to identify opportunities for process automation and system optimisation</li>
</ul>
<p>You may be a good fit if you:</p>
<ul>
<li>Have 5+ years of experience as a Workday Business Systems Analyst, with demonstrated expertise in system configuration, testing, and implementation</li>
<li>Possess strong technical skills including understanding of Workday architecture, security models, and integration methodologies</li>
<li>Have experience with full lifecycle Workday implementations and/or significant module expansions</li>
<li>Have expertise in Workday HCM, Benefits, Advanced Compensation, and Security modules</li>
<li>Are intellectually curious - you ask &#39;why&#39; before &#39;how&#39;, explore root causes rather than symptoms, and stay current on Workday and adjacent HR tech because you want to, not because you have to</li>
<li>Approach problems with creativity, comfortable looking past the standard or obvious solution to find an answer that actually fits the business</li>
<li>Treat HR data security and privacy as a non-negotiable foundation in everything you build and maintain</li>
<li>Demonstrate excellent analytical and problem-solving abilities, with a talent for translating complex business requirements into effective technical solutions</li>
<li>Excel at building relationships and communicating effectively with technical and non-technical stakeholders at all levels</li>
<li>Are detail-oriented with strong documentation skills and a commitment to process excellence</li>
<li>Have a track record of managing multiple priorities in a fast-paced environment</li>
<li>Are results-oriented, with a bias toward flexibility and impact</li>
<li>Have a collaborative mindset and willingness to pick up slack even if it goes outside your job description</li>
</ul>
<p>Strong candidates may also have:</p>
<ul>
<li>Workday certifications in HCM, Benefits, Compensation, Payroll, Absence, Time tracking, Talent, Performance and/or Learning</li>
<li>Experience with calculated fields and business process configurations</li>
<li>Understanding of Workday integrations: familiarity with Workday integration tools (EIB, Core Connectors, Workday Studio, Web Services / RaaS), inbound vs. outbound patterns, authentication (SSO, SFTP, API keys / certs), typical HR integration partners (benefits providers, payroll, ATS, IdP, background check), and disciplined error handling, reconciliation, and monitoring</li>
<li>Knowledge of complementary HR/Finance systems and integration patterns</li>
<li>Experience supporting Workday in a high-growth technology company</li>
<li>Familiarity with project management methodologies and tools</li>
<li>Background in change management and user adoption strategies</li>
<li>Understanding of data privacy regulations and security best practices</li>
</ul>
<p>The annual compensation range for this role is listed below.</p>
<p>For sales roles, the range provided is the role&#39;s On Target Earnings (&#39;OTE&#39;) range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.</p>
<p>Annual Salary: $205,000-$265,000 USD</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$205,000-$265,000 USD</Salaryrange>
      <Skills>Workday, HCM, Benefits, Compensation, Security, System configuration, Testing, Implementation, Business analysis, Data security, Privacy, Change management, Project management, Workday certifications, Calculated fields, Business process configurations, Workday integrations, Complementary HR/Finance systems, Integration patterns, High-growth technology company, Project management methodologies, User adoption strategies, Data privacy regulations, Security best practices</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a technology company focused on developing artificial intelligence systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5194810008</Applyto>
      <Location>San Francisco, CA</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>492042ed-9ee</externalid>
      <Title>Member of Technical Staff - Data Engineer</Title>
      <Description><![CDATA[<p>As Microsoft continues to push the boundaries of AI, we are on the lookout for individuals to work with us on the most interesting and challenging AI questions of our time. Our vision is bold and broad: to build systems that have true artificial intelligence across agents, applications, services, and infrastructure. It’s also inclusive: we aim to make AI accessible to all (consumers, businesses, developers) so that everyone can realize its benefits.</p>
<p>We’re looking for someone who possesses technical prowess, a methodical approach to problem-solving, proficiency in big data processing technologies, and a mastery of templating to architect solutions that stand the test of time, and who will bring an abundance of positive energy, empathy, and kindness to the team every day, in addition to being highly effective.</p>
<p>The Data Platform Engineering team is responsible for building the core data pipelines that help fine-tune models and support introspection and retrospection of data, so that we can continually evolve and improve human-AI interactions.</p>
<p>Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.</p>
<p>Starting January 26, 2026, MAI employees are expected to work from a designated Microsoft office at least four days a week if they live within 50 miles (U.S.) or 25 miles (non-U.S., country-specific) of that location. This expectation is subject to local law and may vary by jurisdiction.</p>
<p>Responsibilities:</p>
<ul>
<li>Build scalable data pipelines for sourcing, transforming and publishing data assets for AI use cases.</li>
<li>Work collaboratively with other Platform, infrastructure, application engineers as well as AI Researchers to build next generation data platform products and services.</li>
<li>Ship high-quality, well-tested, secure, and maintainable code.</li>
<li>Find a path to get things done despite roadblocks to get your work into the hands of users quickly and iteratively.</li>
<li>Enjoy working in a fast-paced, design-driven, product development cycle.</li>
<li>Embody our Culture and Values.</li>
</ul>
<p>Qualifications:</p>
<ul>
<li>Bachelor’s Degree in Computer Science, Math, Software Engineering, Computer Engineering, or related field AND 6+ years experience in business analytics, data science, software development, data modeling or data engineering work OR Master’s Degree in Computer Science, Math, Software Engineering, Computer Engineering, or related field AND 4+ years experience in business analytics, data science, software development, or data engineering work OR equivalent experience.</li>
<li>4+ years technical engineering experience building data processing applications (batch and streaming) with coding in languages including, but not limited to, Python, Java, Spark, SQL.</li>
<li>Experience working with the Apache Hadoop ecosystem, Kafka, NoSQL, etc.</li>
<li>3+ years experience with data governance, data compliance and/or data security.</li>
<li>2+ years’ experience building scalable services on top of public cloud infrastructure like Azure, AWS, or GCP.</li>
<li>Extensive use of datastores such as RDBMS, key-value stores, etc.</li>
<li>2+ years’ experience building distributed systems at scale and extensive systems knowledge that spans bare-metal hosts to containers to networking.</li>
<li>Ability to identify, analyze, and resolve complex technical issues, ensuring optimal performance, scalability, and user experience.</li>
<li>Dedication to writing clean, maintainable, and well-documented code with a focus on application quality, performance, and security.</li>
<li>Demonstrated interpersonal skills and ability to work closely with cross-functional teams, including product managers, designers, and other engineers.</li>
<li>Ability to clearly communicate complex technical concepts to both technical and non-technical stakeholders.</li>
<li>Interest in learning new technologies and staying up to date with industry trends, best practices, and emerging technologies in web development and AI.</li>
<li>Ability to work in a fast-paced environment, manage multiple priorities, and adapt to changing requirements and deadlines.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$139,900 - $274,800 per year</Salaryrange>
      <Skills>Python, Java, Spark, SQL, Apache Hadoop, Kafka, NoSQL, data governance, data compliance, data security, Azure, AWS, GCP, RDBMS, key-value stores, distributed systems, containerization, networking, web development, AI</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Microsoft AI</Employername>
      <Employerlogo>https://logos.yubhub.co/microsoft.ai.png</Employerlogo>
      <Employerdescription>Microsoft AI is a subsidiary of Microsoft Corporation, a multinational technology company headquartered in Redmond, Washington.</Employerdescription>
      <Employerwebsite>https://microsoft.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://microsoft.ai/job/member-of-technical-staff-data-engineer-6/</Applyto>
      <Location>Redmond</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>2fa970ee-3db</externalid>
      <Title>Member of Technical Staff - Data Engineer</Title>
      <Description><![CDATA[<p>As Microsoft continues to push the boundaries of AI, we are on the lookout for individuals to work with us on the most interesting and challenging AI questions of our time. Our vision is bold and broad: to build systems that have true artificial intelligence across agents, applications, services, and infrastructure. It’s also inclusive: we aim to make AI accessible to all (consumers, businesses, developers) so that everyone can realize its benefits.</p>
<p>We’re looking for someone who possesses technical prowess, a methodical approach to problem-solving, proficiency in big data processing technologies, and a mastery of templating to architect solutions that stand the test of time, and who will bring an abundance of positive energy, empathy, and kindness to the team every day, in addition to being highly effective.</p>
<p>The Data Platform Engineering team is responsible for building the core data pipelines that help fine-tune models and support introspection and retrospection of data, so that we can continually evolve and improve human-AI interactions.</p>
<p>Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.</p>
<p>Starting January 26, 2026, MAI employees are expected to work from a designated Microsoft office at least four days a week if they live within 50 miles (U.S.) or 25 miles (non-U.S., country-specific) of that location. This expectation is subject to local law and may vary by jurisdiction.</p>
<p>Responsibilities:</p>
<ul>
<li>Build scalable data pipelines for sourcing, transforming and publishing data assets for AI use cases.</li>
<li>Work collaboratively with other Platform, infrastructure, application engineers as well as AI Researchers to build next generation data platform products and services.</li>
<li>Ship high-quality, well-tested, secure, and maintainable code.</li>
<li>Find a path to get things done despite roadblocks to get your work into the hands of users quickly and iteratively.</li>
<li>Enjoy working in a fast-paced, design-driven, product development cycle.</li>
<li>Embody our Culture and Values.</li>
</ul>
<p>Qualifications:</p>
<p>Required Qualifications:</p>
<ul>
<li>Bachelor’s Degree in Computer Science, Math, Software Engineering, Computer Engineering, or related field AND 6+ years experience in business analytics, data science, software development, data modeling or data engineering work OR Master’s Degree in Computer Science, Math, Software Engineering, Computer Engineering, or related field AND 4+ years experience in business analytics, data science, software development, or data engineering work OR equivalent experience.</li>
</ul>
<p>Preferred Qualifications:</p>
<ul>
<li>4+ years technical engineering experience building data processing applications (batch and streaming) with coding in languages including, but not limited to, Python, Java, Spark, SQL.</li>
<li>Experience working with the Apache Hadoop ecosystem, Kafka, NoSQL, etc.</li>
<li>3+ years experience with data governance, data compliance and/or data security.</li>
<li>2+ years’ experience building scalable services on top of public cloud infrastructure like Azure, AWS, or GCP.</li>
<li>Extensive use of datastores such as RDBMS, key-value stores, etc.</li>
<li>2+ years’ experience building distributed systems at scale and extensive systems knowledge that spans bare-metal hosts to containers to networking.</li>
<li>Ability to identify, analyze, and resolve complex technical issues, ensuring optimal performance, scalability, and user experience.</li>
<li>Dedication to writing clean, maintainable, and well-documented code with a focus on application quality, performance, and security.</li>
<li>Demonstrated interpersonal skills and ability to work closely with cross-functional teams, including product managers, designers, and other engineers.</li>
<li>Ability to clearly communicate complex technical concepts to both technical and non-technical stakeholders.</li>
<li>Interest in learning new technologies and staying up to date with industry trends, best practices, and emerging technologies in web development and AI.</li>
<li>Ability to work in a fast-paced environment, manage multiple priorities, and adapt to changing requirements and deadlines.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$139,900 – $274,800 per year</Salaryrange>
      <Skills>Python, Java, Spark, SQL, Apache Hadoop, Kafka, NoSQL, data governance, data compliance, data security, Azure, AWS, GCP, RDBMS, key-value stores, distributed systems, containerization, networking, web development, AI</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Microsoft AI</Employername>
      <Employerlogo>https://logos.yubhub.co/microsoft.ai.png</Employerlogo>
      <Employerdescription>Microsoft AI is a subsidiary of Microsoft Corporation, a multinational technology company.</Employerdescription>
      <Employerwebsite>https://microsoft.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://microsoft.ai/job/member-of-technical-staff-data-engineer-4/</Applyto>
      <Location>New York</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>e0e12720-cee</externalid>
      <Title>Vice President of Product Management, Cloudflare One</Title>
      <Description><![CDATA[<p>About Us</p>
<p>At Cloudflare, we are on a mission to help build a better Internet. Today the company runs one of the world&#39;s largest networks that powers millions of websites and other Internet properties for customers ranging from individual bloggers to SMBs to Fortune 500 companies.</p>
<p>We protect and accelerate any Internet application online without adding hardware, installing software, or changing a line of code. Internet properties powered by Cloudflare all have web traffic routed through its intelligent global network, which gets smarter with every request. As a result, they see significant improvement in performance and a decrease in spam and other attacks.</p>
<p>Available Locations: Austin, TX</p>
<p>About the role</p>
<p>Cloudflare One is our SASE platform that is redefining how the world&#39;s most sophisticated organisations secure and connect their global workforce. By unifying Zero Trust Network Access (ZTNA), Secure Web Gateway (SWG), Wide Area Network (WAN) and Cloud Access Security Broker (CASB) as well as Email Security into a single, seamless SASE architecture, we are moving the industry beyond the legacy &#39;castle-and-moat&#39; era.</p>
<p>What you&#39;ll do</p>
<p>We are looking for a product leader to lead the Cloudflare One portfolio. You are not someone to simply steer a steady ship; you are someone to invent the next generation of Access, Gateway, and WAN, unlocking new Total Addressable Markets (TAM) in the SASE space while executing your way into being the market leader for the current Serviceable Addressable Market (SAM).</p>
<p>Desirable Skills, Knowledge, and Experience:</p>
<ul>
<li>A deep empathy for customers and an understanding of the fundamental shifts happening in technology and the SSE and SASE space, especially around AI</li>
<li>Proven experience in a senior leadership role (e.g., former founder, Sr. Director, VP, CTO), having built and scaled both products and teams</li>
<li>An entrepreneurial spirit with enterprise etiquette and a history of building things, whether in a successful venture or a passionate side project</li>
<li>Proven ability to pivot instantly from technical deep-dives to high-level strategic discussions</li>
</ul>
<p>Must-Have Skills</p>
<ul>
<li>12+ years of experience in product management or a closely related role, with a focus on network and data security technologies (e.g., ZTNA, Gateways, DLP, CASB, data encryption, data classification, data privacy)</li>
<li>Own the SASE product roadmap. Make tough tactical prioritization decisions while helping the company think long-term. Build trust with stakeholders by maintaining an understandable, accurate roadmap</li>
<li>Partner with leaders in other departments such as Product Marketing, Marketing, Sales, and Customer Support to drive adoption with and gather feedback from customers and prospects</li>
<li>Develop and nurture relationships with engineering leadership and coordinate closely to ensure successful delivery of product</li>
<li>Deep understanding of the network and security landscapes, including current trends and threats, regulatory requirements, and emerging technologies</li>
</ul>
<p>Nice to haves</p>
<ul>
<li>An understanding and real-world experience working with all types of channel partners (VARs, MSPs, MSSPs, etc.)</li>
<li>A Computer Science degree or equivalent in-seat experience in product roles for technical products is highly preferred</li>
<li>Experience leading product integrations during mergers and acquisitions (M&amp;A)</li>
<li>Deep familiarity with Zero Trust architecture and SD-WAN markets</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Product management, Network and data security technologies, Zero Trust Network Access (ZTNA), Secure Web Gateway (SWG), Wide Area Network (WAN), Cloud Access Security Broker (CASB), Email Security, SASE architecture, Total Addressable Market (TAM), Serviceable Addressable Market (SAM), AI, Machine learning, Computer Science, Product marketing, Marketing, Sales, Customer support, Engineering leadership, Channel partners, VARs, MSP, MSSP</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Cloudflare</Employername>
      <Employerlogo>https://logos.yubhub.co/cloudflare.com.png</Employerlogo>
      <Employerdescription>Cloudflare is a technology company that provides a range of products and services to help protect and accelerate internet applications. It has a large network that powers millions of websites and other internet properties.</Employerdescription>
      <Employerwebsite>https://www.cloudflare.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/cloudflare/jobs/7630417</Applyto>
      <Location>Austin, TX</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>7d11da63-16c</externalid>
      <Title>Public Sector Account Executive (Central Government)</Title>
      <Description><![CDATA[<p>We&#39;re seeking a Public Sector Account Executive to join our team in the UK. As a Public Sector Account Executive, you will:</p>
<ul>
<li>Generate and develop pipeline through a disciplined multi-channel, multi-touch prospecting approach</li>
<li>Act as a hunter, identifying new opportunities across departments and agencies and building relationships with both senior leaders and technical practitioners</li>
<li>Lead structured discovery conversations to understand mission needs, data challenges, and operational priorities within government organisations</li>
<li>Position Elastic&#39;s capabilities across Search AI, Observability, and Security to help departments improve digital services, strengthen security posture, and unlock the value of their data</li>
<li>Work closely with solutions architects, partners, and customer success teams to develop strategies that address complex public sector challenges</li>
<li>Expand Elastic&#39;s footprint within accounts through strategic land-and-expand motions, identifying new use cases and opportunities</li>
<li>Maintain accurate pipeline management and forecasting within Salesforce</li>
<li>Collaborate across Elastic teams to ensure we deliver meaningful outcomes for customers and grow our presence across government</li>
</ul>
<p>We&#39;re looking for someone with:</p>
<ul>
<li>3+ years&#39; experience selling into the UK Public Sector, ideally with exposure to central government departments such as the Department for Transport, Defra, or devolved governments</li>
<li>A hunter mentality with strong energy, resilience, and drive to build pipeline and create new opportunities</li>
<li>Curiosity and creativity in tackling complex government challenges involving data, security, and digital transformation</li>
<li>Strong business and technical curiosity, with the ability to engage both senior stakeholders and technical practitioners</li>
<li>A collaborative mindset and the ability to work effectively across distributed teams</li>
<li>A structured and disciplined approach to sales, combined with the ability to think creatively and challenge conventional approaches</li>
<li>Motivation to succeed in a fast-moving, ambitious environment</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>executive</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>prospecting, pipeline development, sales strategy, customer success, public sector sales, government sales, data security, digital transformation, search AI, observability, security, solution architecture, partnerships, customer engagement</Skills>
      <Category>Sales</Category>
      <Industry>Technology</Industry>
      <Employername>Elastic</Employername>
      <Employerlogo>https://logos.yubhub.co/elastic.co.png</Employerlogo>
      <Employerdescription>Elastic is a software company that develops and distributes technology for search, security, and observability.</Employerdescription>
      <Employerwebsite>https://www.elastic.co/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/elastic/jobs/7728182</Applyto>
      <Location>United Kingdom</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>2e21f6d4-2c2</externalid>
      <Title>Senior Software Engineer - Database Engine Internals</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Senior Software Engineer to join our team in designing and implementing next-generation systems for database engine internals. You will work on query compilation and optimization, distributed query execution and scheduling, vectorized execution engine, data security, resource management, transaction coordination, efficient storage structures, and automatic physical data optimization.</p>
<p>Our ideal candidate has a passion for database systems, storage systems, distributed systems, language design, or performance optimization, with 5+ years of experience working in a related system. A PhD in databases or distributed systems is a plus, but not required.</p>
<p>As a member of our team, you will be motivated by delivering customer value and impact, and will have the opportunity to work on a multi-year vision with incremental deliverables.</p>
<p>The pay range for this role is $166,000-$225,000 USD, and the total compensation package may also include eligibility for annual performance bonus, equity, and benefits.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$166,000-$225,000 USD</Salaryrange>
      <Skills>database systems, storage systems, distributed systems, language design, performance optimization, query compilation and optimization, distributed query execution and scheduling, vectorized execution engine, data security, resource management, transaction coordination, efficient storage structures, automatic physical data optimization</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data, analytics, and AI.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/5048461002</Applyto>
      <Location>San Francisco, California</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>57e69ea3-506</externalid>
      <Title>Director of Product Management, Data Security</Title>
      <Description><![CDATA[<p>About Us</p>
<p>At Cloudflare, we are on a mission to help build a better Internet. Today the company runs one of the world’s largest networks that powers millions of websites and other Internet properties for customers ranging from individual bloggers to SMBs to Fortune 500 companies.</p>
<p>As a Director of Product Management, Data Security, you will guide product development through all stages of the product life cycle: identifying customer needs and developing, launching, and sustaining products, shaping the next generation of Cloudflare’s offerings. You will represent Cloudflare to customers, vendors, and partners, synthesizing information and communicating what you learn to internal stakeholders. You will build, maintain, and develop a world-class team of product managers and leaders who deeply care about Cloudflare’s role in helping build a better Internet.</p>
<p>Responsibilities</p>
<ul>
<li>Define the roadmap, drive the development, and ensure the successful delivery of innovative data security capabilities that integrate seamlessly into Cloudflare One, Cloudflare’s SASE platform.</li>
<li>Partner with leaders in other departments such as Product Marketing, Marketing, Sales, and Customer Support to drive adoption with and gather feedback from customers and prospects.</li>
<li>Develop and nurture relationships with engineering leadership and coordinate closely to ensure successful delivery of product.</li>
</ul>
<p>Requirements</p>
<ul>
<li>10+ years of experience in product management or a closely related role, with a focus on data security technologies (e.g., DLP, DSPM, CASB, data encryption, data classification, data privacy).</li>
<li>Deep understanding of the data security landscape, including current threats, regulatory requirements, and emerging technologies.</li>
<li>A strong passion for and a clear vision for how AI and Machine Learning can fundamentally transform data security solutions in the modern era.</li>
<li>Exceptional empathy, curiosity, attention to detail, and problem-solving abilities.</li>
<li>Experience building and leading high-performing product teams.</li>
<li>Excellent communication, presentation, and interpersonal skills, with the ability to articulate complex technical concepts to both technical and non-technical audiences.</li>
</ul>
<p>Desirable Skills, Knowledge, and Experience</p>
<ul>
<li>Experience with large-scale distributed systems and cloud-native architectures.</li>
<li>Familiarity with Zero Trust principles and architectures.</li>
<li>Active participation in industry conferences, publications, or open-source projects related to data security or AI in security.</li>
<li>Bachelor&#39;s degree in Computer Science, Engineering, or a related technical field; advanced degree (MBA, Master&#39;s) welcomed but not required.</li>
</ul>
<p>What Makes Cloudflare Special?</p>
<p>We’re not just a highly ambitious, large-scale technology company. We’re a highly ambitious, large-scale technology company with a soul. Fundamental to our mission to help build a better Internet is protecting the free and open Internet.</p>
<p>Project Galileo: Since 2014, we&#39;ve equipped more than 2,400 journalism and civil society organizations in 111 countries with powerful tools, already used by Cloudflare&#39;s enterprise customers, to defend themselves against attacks that would otherwise censor their work, at no cost.</p>
<p>Athenian Project: In 2017, we created the Athenian Project to ensure that state and local governments have the highest level of protection and reliability for free, so that their constituents have access to election information and voter registration. Since the project&#39;s launch, we&#39;ve provided services to more than 425 local government election websites in 33 states.</p>
<p>1.1.1.1: We released 1.1.1.1 to help fix the foundation of the Internet by building a faster, more secure, and privacy-centric public DNS resolver. It is available publicly for everyone to use, and it is the first consumer-focused service Cloudflare has ever released.</p>
<p>Here’s the deal: we never, ever store client IP addresses. We will continue to abide by our privacy commitment and ensure that no user data is sold to advertisers or used to target consumers.</p>
<p>Sound like something you’d like to be a part of? We’d love to hear from you!</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>data security technologies, DLP, DSPM, CASB, data encryption, data classification, data privacy, large-scale distributed systems, cloud-native architectures, Zero Trust principles, AI in security</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Cloudflare</Employername>
      <Employerlogo>https://logos.yubhub.co/cloudflare.com.png</Employerlogo>
      <Employerdescription>Cloudflare provides cloud-based services to protect and accelerate internet applications. It runs one of the world&apos;s largest networks, powering millions of websites and other internet properties.</Employerdescription>
      <Employerwebsite>https://www.cloudflare.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/cloudflare/jobs/7021759</Applyto>
      <Location>Hybrid</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>039ed16f-87b</externalid>
      <Title>Software Engineer - Database Engine Internals</Title>
      <Description><![CDATA[<p>We are seeking a talented Software Engineer to join our team in Belgrade, Serbia. As a member of our engineering team, you will be responsible for designing and developing next-generation systems for database engine internals. Your work will focus on query compilation and optimization, distributed query execution and scheduling, vectorized execution engine, data security, resource management, transaction coordination, efficient storage structures, and automatic physical data optimization.</p>
<p>Responsibilities:</p>
<ul>
<li>Drive requirements clarity and design decisions for ambiguous problems</li>
<li>Produce technical design documents and project plans</li>
<li>Develop new features</li>
<li>Mentor more junior engineers</li>
<li>Test, roll out to production, and monitor</li>
</ul>
<p>Requirements:</p>
<ul>
<li>Passion for database systems, storage systems, distributed systems, language design, or performance optimization</li>
<li>Comfortable working towards a multi-year vision with incremental deliverables</li>
<li>Customer-oriented and focused on having an impact</li>
<li>5+ years of experience working in a related system</li>
<li>Optional: PhD in databases or distributed systems</li>
</ul>
<p>Benefits:</p>
<ul>
<li>Comprehensive benefits and perks that meet the needs of all employees</li>
<li>Opportunities for professional growth and development</li>
<li>Collaborative and dynamic work environment</li>
</ul>
<p>At Databricks, we strive to provide a diverse and inclusive culture where everyone can excel. We take great care to ensure that our hiring practices are inclusive and meet equal employment opportunity standards.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>database systems, storage systems, distributed systems, language design, performance optimization, query compilation and optimization, distributed query execution and scheduling, vectorized execution engine, data security, resource management, transaction coordination, efficient storage structures, automatic physical data optimization</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data, analytics, and AI. It has over 10,000 customers worldwide.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8012658002</Applyto>
      <Location>Belgrade, Serbia</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>35f3735f-43a</externalid>
      <Title>Senior Software Engineer - Database Engine Internals</Title>
      <Description><![CDATA[<p>Our mission at Databricks is to simplify the data lifecycle from ingestion to ETL, BI, and ML/AI with a unified platform.</p>
<p>To achieve this goal, we believe the data warehouse architecture as we know it today will be replaced by a new architectural pattern, Lakehouse, open platforms that unify data warehousing and advanced analytics.</p>
<p>A critical part of realizing this vision is the next-generation (decoupled) query engine and structured storage system that can outperform specialized data warehouses in relational query performance, yet retain the expressiveness of general-purpose systems such as Apache Spark™ to support diverse workloads ranging from ETL to data science.</p>
<p>As part of this team, you will be working in one or more of the following areas to design and implement these next gen systems that leapfrog state-of-the-art:</p>
<ul>
<li>Query compilation and optimization</li>
<li>Distributed query execution and scheduling</li>
<li>Vectorized execution engine</li>
<li>Data security</li>
<li>Resource management</li>
<li>Transaction coordination</li>
<li>Efficient storage structures (encodings, indexes)</li>
<li>Automatic physical data optimization</li>
</ul>
<p>We look for:</p>
<ul>
<li>A passion for database systems, storage systems, distributed systems, language design, or performance optimization</li>
<li>Experience working towards a multi-year vision with incremental deliverables</li>
<li>Motivated by delivering customer value and impact</li>
<li>5+ years of experience working in a related system (preferred)</li>
</ul>
<p>Pay Range Transparency</p>
<p>Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role is listed below and represents the expected salary range for non-commissionable roles or on-target earnings for commissionable roles.</p>
<p>Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location.</p>
<p>Based on the factors above, Databricks anticipates utilizing the full width of the range.</p>
<p>The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above.</p>
<p>For more information regarding which range your location is in, visit our page here.</p>
<p>Local Pay Range $166,000-$225,000 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$166,000-$225,000 USD</Salaryrange>
      <Skills>database systems, storage systems, distributed systems, language design, performance optimization, query compilation and optimization, distributed query execution and scheduling, vectorized execution engine, data security, resource management, transaction coordination, efficient storage structures</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data, analytics, and AI.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/6544383002</Applyto>
      <Location>Mountain View, California</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>c6af2312-b49</externalid>
      <Title>Systems PhD - Software Engineer</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Systems PhD - Software Engineer to join our Database Engine team. As a member of this team, you&#39;ll have opportunities to design and implement in many areas that leapfrog existing state-of-the-art systems, including query compilation &amp; optimisation, distributed query execution and scheduling, vectorised engine execution, data security, resource management, transaction coordination, efficient storage structures, and automatic physical data optimisation.</p>
<p>To be successful in this role, you&#39;ll need a PhD in databases or systems, a passion for database systems, storage systems, distributed systems, language design, and/or performance optimisation, and be motivated by delivering customer value and impact.</p>
<p>The pay range for this role is $140,000-$180,000 USD, and the total compensation package may also include eligibility for annual performance bonus, equity, and benefits.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$140,000-$180,000 USD</Salaryrange>
      <Skills>database systems, storage systems, distributed systems, language design, performance optimisation, query compilation &amp; optimisation, distributed query execution and scheduling, vectorised engine execution, data security, resource management, transaction coordination, efficient storage structures, automatic physical data optimisation</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that serves over 10,000 organisations worldwide, processing exabytes of data daily on 15+ million VMs.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8482086002</Applyto>
      <Location>Bellevue, Washington; Seattle, Washington</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>f06742a2-a51</externalid>
      <Title>Senior Software Engineer (Data Platform)</Title>
      <Description><![CDATA[<p>At Databricks, we are building and running the world&#39;s best data and AI infrastructure platform. Our engineering teams build technical products that fulfill real, important needs in the world. We develop and operate one of the largest scale software platforms. The fleet consists of millions of virtual machines, generating terabytes of logs and processing exabytes of data per day.</p>
<p>As a Senior Software Engineer working on the Data Platform team, you will help build the Data Intelligence Platform for Databricks that will allow us to automate decision-making across the entire company. You will achieve this in collaboration with Databricks Product Teams, Data Science, Applied AI and many more. You will develop a variety of tools spanning logging, orchestration, data transformation, metric store, governance platforms, data consumption layers etc. You will do this using the latest, bleeding-edge Databricks product and other tools in the data ecosystem - the team also functions as a large, production, in-house customer that dog foods Databricks and guides the future direction of the product.</p>
<p>The impact you will have:</p>
<ul>
<li>Design and run the Databricks metrics store that enables all business units and engineering teams to bring their detailed metrics into a common platform for sharing and aggregation, with high quality, introspection ability and query performance.</li>
<li>Design and run the cross-company Data Intelligence Platform, which contains every business and product metric used to run Databricks. You’ll play a key role in developing the right balance of data protections and ease of shareability for the Data Intelligence Platform as we transition to a public company.</li>
<li>Develop tooling and infrastructure to efficiently manage and run Databricks on Databricks at scale, across multiple clouds, geographies and deployment types. This includes CI/CD processes, test frameworks for pipelines and data quality, and infrastructure-as-code tooling.</li>
<li>Design the base ETL framework used by all pipelines developed at the company.</li>
<li>Partner with our engineering teams to provide leadership in developing the long-term vision and requirements for the Databricks product.</li>
<li>Build reliable data pipelines and solve data problems using Databricks, our partner’s products and other OSS tools. Provide early feedback on the design and operations of these products.</li>
<li>Establish conventions and create new APIs for telemetry, debug, feature and audit event log data, and evolve them as the product and underlying services change.</li>
<li>Represent Databricks at academic and industrial conferences &amp; events.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>ETL frameworks, metrics stores, infrastructure management, data security, large-scale messaging systems, workflow or orchestration frameworks, Airflow, DBT, Kafka, RabbitMQ</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks develops and operates a data and AI infrastructure platform for businesses.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/7647369002</Applyto>
      <Location>Bengaluru, India</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>8d587b18-3f1</externalid>
      <Title>Staff Software Engineer - Database Engine Internals</Title>
      <Description><![CDATA[<p>We are seeking a Staff Software Engineer to join our team in Belgrade, Serbia. As a Staff Software Engineer, you will be part of a multi-year journey to design and build the next-generation query engine and structured storage system.</p>
<p>Our mission at Databricks is to simplify the data lifecycle from ingestion to ETL, BI, and ML/AI with a unified platform. We believe the data warehouse architecture will be replaced by a new architectural pattern, Lakehouse, which unifies data warehousing and advanced analytics.</p>
<p>The new architecture will help address several major challenges, including data staleness, reliability, total cost of ownership, data lock-in, and limited use-case support. To achieve this vision, we are building a decoupled query engine and structured storage system that can outperform specialized data warehouses in relational query performance.</p>
<p>As a Staff Software Engineer, you will design these next-generation systems that leapfrog state-of-the-art within the following areas:</p>
<ul>
<li>Query compilation and optimization</li>
<li>Distributed query execution and scheduling</li>
<li>Vectorized execution engine</li>
<li>Data security</li>
<li>Resource management</li>
<li>Transaction coordination</li>
<li>Efficient storage structures (encodings, indexes)</li>
<li>Automatic physical data optimization</li>
</ul>
<p>Your responsibilities will include driving requirements clarity and design decisions for ambiguous problems, producing technical design documents and project plans, developing new features, mentoring more junior engineers, testing and rolling out to production, and monitoring.</p>
<p>We look for a passion for database systems, storage systems, distributed systems, language design, or performance optimization. You should be comfortable working towards a multi-year vision with incremental deliverables, be customer-oriented and focused on having an impact, and have 7+ years of experience working in a related system. A PhD in databases or distributed systems is a plus, but not required.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>database systems, storage systems, distributed systems, language design, performance optimization, query compilation and optimization, distributed query execution and scheduling, vectorized execution engine, data security, resource management, transaction coordination, efficient storage structures, automatic physical data optimization</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data, analytics, and AI. It has over 10,000 clients worldwide.</Employerdescription>
      <Employerwebsite>https://databricks.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8012818002</Applyto>
      <Location>Belgrade, Serbia</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>bdf949b3-c66</externalid>
<Title>Databricks Enterprise Lead Security Architect - Principal IT Software Engineer</Title>
      <Description><![CDATA[<p>We are seeking a highly skilled Lead Security Architect to join our team within Databricks IT. As a Lead Security Architect, you will be responsible for designing and implementing a secure and scalable architecture to protect our corporate assets. You will focus on key areas of IT security, including Identity and Access Management, Zero Trust architecture, and endpoint security, while also working to secure critical business applications and sensitive data.</p>
<p>Your expertise will be crucial in building proactive security strategies that align with our business goals and protect the company from an ever-evolving threat landscape. This position demands deep expertise in security principles and a comprehensive understanding of the entire infrastructure stack and IAM systems to design robust, future-ready security solutions.</p>
<p>You will be instrumental in safeguarding our systems&#39; resilience and integrity against ever-evolving cyber threats. You will play a critical role in shaping our security strategy for modern platforms across AWS, Azure, GCP, network infrastructure, storage, and SaaS solutions; help establish a strong principle of least privilege (PoLP) model; provide specialized IAM expertise; and securely support SaaS platforms handling sensitive information (NHI).</p>
<p>You will also be a key contributor in building our internal strategy for secure AI development. Additionally, you will support the secure integration of SaaS platforms such as Google Workspace, collaboration tools, and GTM systems, maintaining alignment with enterprise security standards.</p>
<p>Close collaboration with cross-functional teams is essential to embed security throughout the technology stack.</p>
<p>The impact you will have:</p>
<ul>
<li>Design and implement secure, scalable reference architectures for the Databricks IT across Cloud Infra (Compute, DBs, Network, Storage), SaaS, Custom Built Applications, Data &amp; AI systems.</li>
<li>Establish and enforce security controls for core security areas:
<ul>
<li>Databricks Workspace Management: workspace isolation, Unity Catalog for data governance.</li>
<li>Secure Networking: VPC configs, PrivateLink, IP allow lists.</li>
<li>Identity and Access Management (IAM): SSO, SCIM user provisioning, RBAC, strong MFA best practices for enterprise identities and customers.</li>
<li>Data Encryption: at rest and in transit, customer-managed keys for critical assets.</li>
<li>Data Exfiltration Prevention: admin console settings, VPC endpoint controls.</li>
<li>Cluster Security: user isolation, compliance with enhanced security monitoring/Compliance Security Profiles (HIPAA, PCI-DSS, FedRAMP).</li>
<li>Offensive Security: test and challenge the effectiveness of the organization’s security defenses by mimicking the tactics, techniques, and procedures used by actual attackers.</li>
</ul>
</li>
<li>Own specialized security functions:
<ul>
<li>Non-human Identity Management: design and implement secure authentication and authorization for automated systems (service accounts, API keys, machine identities), focusing on automation and integration with existing identity management systems.</li>
<li>IAM Best Practices: develop and document comprehensive Identity and Access Management policies, including user provisioning, de-provisioning, access reviews, privileged access management, and multi-factor authentication, ensuring security and compliance.</li>
<li>Data Loss Prevention (DLP): implement DLP solutions to identify, monitor, and protect sensitive data across endpoints, networks, and cloud environments, preventing unauthorized access, use, or transmission.</li>
<li>SaaS Proxy Design and Implementation: design and implement cloud-based proxies for SaaS applications (SASE solutions) to provide secure access, enforce security policies, monitor user activity, and protect against threats.</li>
<li>Cloud Infrastructure Best Practices: establish and document best practices for VPC configurations, cloud networking, and infrastructure as code using Terraform, ensuring secure network segmentation, routing, firewalls, and VPNs for consistent, automated, and secure deployments.</li>
<li>Least Privilege Access for Data Security: design and implement data security controls based on the principle of least privilege, ensuring users and systems have only the minimum necessary access through fine-grained controls, data classification, and regular access reviews.</li>
</ul>
</li>
<li>Guide internal IT on Databricks’ security and compliance certifications (SOC 2, ISO 27001/27017/27018, HIPAA, PCI-DSS, FedRAMP), and support security reviews/audits.</li>
<li>Support incident response, vulnerability management, threat modeling, and red teaming using audit logs, cluster policies, and enhanced monitoring.</li>
<li>Stay current on industry trends and emerging threats in GenAI, agentic AI flows, and MCPs to enhance security posture.</li>
<li>Advise executive leadership on security architecture, risks, and mitigation.</li>
<li>Mentor security engineers and developers on secure design and best practices.</li>
</ul>
<p>What we look for:</p>
<ul>
<li>Bachelor’s degree in Computer Science, Information Security, Engineering, or a related field</li>
<li>Master’s degree in Computer Science, with a focus on Information Security or a related discipline, is strongly preferred</li>
<li>Minimum 12 years in cybersecurity, with 5+ in security architecture or senior technical roles.</li>
<li>Experience in FedRAMP High systems/ GovCloud preferred.</li>
<li>Must have direct experience designing and securing enterprise platforms in complex multi-cloud environments, deep knowledge of enterprise architecture and security features (control plane/data plane separation, network infra, workspace hardening, network segmentation/ isolation), and hands-on experience automating security controls with Terraform and scripting.</li>
<li>Proven expertise securing data analytics pipelines, SaaS integrations, and workload isolation in enterprise ecosystems.</li>
<li>Experience with Enterprise Security Analysis Tools and monitoring/security policy optimization.</li>
<li>Deep experience in threat modeling, design, PoC, and implementing large-scale enterprise solutions.</li>
<li>Extensive hands-on experience in AWS cloud security, network security, with knowledge of Zero Trust, Data Protection, and Appsec.</li>
<li>Strong understanding of enterprise IAM systems (Okta, SailPoint, VDI, Entra ID) and Data Protection.</li>
<li>Expert experience with SIEM platforms, XDR, and cloud-native threat detection tools.</li>
<li>Expert in web application security, OWASP, API security, and secure design and testing.</li>
<li>Hands-on experience with security automation is required, with proficiency in AI-assisted development, Python, Cursor, Lambda, Terraform, or comparable scripting/IaC tools for operational efficiency.</li>
<li>Industry certifications like CISSP, CCSP, CEH, AWS Certified Security – Specialty, AWS Certified Solutions Architect – Professional, or AWS Certified Advanced Networking – Specialty (or equivalent) are preferred.</li>
<li>Ability to influence stakeholders and drive alignment.</li>
<li>Strategic thinker with a passion for security innovation, continuous improvement, and building scalable defenses.</li>
</ul>
<p>Pay Range Transparency</p>
<p>Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role is listed below and represents the expected salary range for non-commissionable roles or on-target earnings for commissionable roles. Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location. Based on the factors above, Databricks anticipates utilizing the full width of the range. The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Security Architecture, Identity and Access Management, Zero Trust, Endpoint Security, Data Encryption, Data Exfiltration Prevention, Cluster Security, Offensive Security, Non-human Identity Management, IAM Best Practices, Data Loss Prevention, SaaS Proxy Design and Implementation, Cloud Infrastructure Best Practices, Least Privilege Access, Incident Response, Vulnerability Management, Threat Modeling, Red Teaming, Terraform, Python, Cursor, Lambda, AWS Cloud Security, Network Security, Data Protection, AppSec, SIEM Platforms, XDR, Cloud-native Threat Detection Tools, Web Application Security, OWASP, API Security, Secure Design and Testing, AI-assisted Development, Security Automation, Scripting/IaC Tools, CISSP, CCSP, CEH, AWS Certified Security – Specialty, AWS Certified Solutions Architect – Professional, AWS Certified Advanced Networking – Specialty</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a technology company that provides a cloud-based platform for data analytics and artificial intelligence.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8207910002</Applyto>
      <Location>Mountain View, California; San Francisco, California</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>9661d798-56c</externalid>
      <Title>Staff Software Engineer (Data Platform)</Title>
      <Description><![CDATA[<p>We are seeking a Staff Software Engineer to join our Data Platform team. As a Staff Software Engineer, you will help build the Data Intelligence Platform for Databricks, allowing us to automate decision-making across the entire company. You will collaborate with Databricks Product Teams, Data Science, Applied AI, and other teams to develop a variety of tools spanning logging, orchestration, data transformation, metric store, governance platforms, and data consumption layers. You will use the latest Databricks product and other tools in the data ecosystem to design and run the Databricks metrics store, cross-company Data Intelligence Platform, and tooling and infrastructure to efficiently manage and run Databricks at scale.</p>
<p>The impact you will have includes designing and running the Databricks metrics store, cross-company Data Intelligence Platform, and developing tooling and infrastructure to efficiently manage and run Databricks at scale. You will also design the base ETL framework used by all pipelines developed at the company, partner with engineering teams to provide leadership in developing the long-term vision and requirements for the Databricks product, and establish conventions and create new APIs for telemetry, debug, feature, and audit event log data.</p>
<p>To be successful in this role, you will need 12+ years of industry experience, 4+ years of experience building large-scale distributed systems, and 5+ years of providing technical leadership on large projects similar to the ones described above. You will also need experience with ETL frameworks, metrics stores, infrastructure management, data security, and experience building, shipping, and operating reliable multi-geo data pipelines at scale.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>ETL frameworks, metrics stores, infrastructure management, data security, large-scale distributed systems, technical leadership, data pipelines, workflow or orchestration frameworks, messaging systems</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data, analytics, and AI. It has over 10,000 organisations worldwide as clients.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/7652016002</Applyto>
      <Location>Bengaluru, India</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>65bcd8f4-7e6</externalid>
      <Title>Staff Software Engineer - Database Engine Internals</Title>
      <Description><![CDATA[<p>Our mission at Databricks is to radically simplify the whole data lifecycle from ingestion to ETL, BI, and all the way up to ML/AI with a unified platform.</p>
<p>To achieve this goal, we believe the data warehouse architecture as we know it today will be replaced by a new architectural pattern, the Lakehouse (CIDR 2021 paper): open platforms that unify data warehousing and advanced analytics.</p>
<p>A critical part of realizing this vision is the next-generation (decoupled) query engine and structured storage system that can outperform specialised data warehouses in relational query performance, yet retain the expressiveness of general-purpose systems such as Apache Spark™ to support diverse workloads ranging from ETL to data science.</p>
<p>As part of this team, you will be working in one or more of the following areas to design and implement these next gen systems that leapfrog state-of-the-art:</p>
<ul>
<li>Query compilation and optimisation</li>
<li>Distributed query execution and scheduling</li>
<li>Vectorised execution engine</li>
<li>Data security</li>
<li>Resource management</li>
<li>Transaction coordination</li>
<li>Efficient storage structures (encodings, indexes)</li>
<li>Automatic physical data optimisation</li>
</ul>
<p>We look for:</p>
<ul>
<li>A passion for database systems, storage systems, distributed systems, language design, or performance optimisation</li>
<li>Experience working towards a multi-year vision with incremental deliverables</li>
<li>Motivated by delivering customer value and impact</li>
<li>8+ years of experience working in a related system (preferred)</li>
</ul>
<p>Pay Range Transparency</p>
<p>Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role is listed below and represents the expected salary range for non-commissionable roles or on-target earnings for commissionable roles.</p>
<p>Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location.</p>
<p>Based on the factors above, Databricks anticipates utilising the full width of the range.</p>
<p>The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above.</p>
<p>For more information regarding which range your location is in visit our page here.</p>
<p>Local Pay Range $192,000-$260,000 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$192,000-$260,000 USD</Salaryrange>
      <Skills>database systems, storage systems, distributed systems, language design, performance optimisation, query compilation, optimisation, distributed query execution, scheduling, vectorised execution engine, data security, resource management, transaction coordination, efficient storage structures, encodings, indexes</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company with over 10,000 organisations worldwide relying on its Data Intelligence Platform.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/5646866002</Applyto>
      <Location>San Francisco, California</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>05a93595-c52</externalid>
      <Title>Staff Product Manager, Security</Title>
      <Description><![CDATA[<p>At Databricks, we are seeking a Staff Product Manager, Security to lead all aspects of data security for the Databricks Lakehouse platform. As a key member of our product management team, you will drive the product strategy to identify, prioritize, and deliver both external and internal controls necessary for customers to meet their privacy, confidentiality, digital sovereignty, and other regulatory obligations.</p>
<p>Your primary responsibilities will include:</p>
<ul>
<li>Creating elegant yet simple policy models that enable customers to control access to their most important assets.</li>
<li>Building a vision for a best-in-class network and data security policy model that provides secure access to customer workloads and data while minimizing exposure to unauthorized access and risk of data loss.</li>
<li>Partnering with privacy, legal, and security teams to deliver anonymization, retention, and residency for regulated data and sensitive assets such as AI models.</li>
<li>Deepening our understanding of evolving threats and corresponding technologies in data security.</li>
<li>Using customer inputs to serve as an expert and propose new innovations or best practices from the industry.</li>
</ul>
<p>In this role, you will report to a Senior Director of Product Management and will be responsible for driving business results through influential leadership across multiple partner teams including engineering, sales, marketing, solution architects, and customer success.</p>
<p>We are looking for a seasoned product manager with 7+ years of experience in delivering successful customer-facing products and features, particularly in the areas of data security, data privacy, and encryption.</p>
<p>If you have a passion for designing products that simplify user experience of technically complex products, we encourage you to apply.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$129,000-$228,000 USD</Salaryrange>
      <Skills>data security, data privacy, encryption, product management, customer success, engineering leadership, cloud security, network security, cybersecurity, compliance, regulatory affairs</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified and democratized data, analytics, and AI platform to over 10,000 organizations worldwide.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/7110509002</Applyto>
      <Location>Seattle, Washington</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>90b4c10b-948</externalid>
      <Title>Staff Software Engineer - Database Engine Internals</Title>
      <Description><![CDATA[<p>Our mission at Databricks is to simplify the data lifecycle from ingestion to ETL, BI, and all the way up to ML/AI with a unified platform.</p>
<p>To achieve this goal, we believe the data warehouse architecture as we know it today will be replaced by a new architectural pattern, the Lakehouse: open platforms that unify data warehousing and advanced analytics.</p>
<p>A critical part of realizing this vision is the next-generation (decoupled) query engine and structured storage system that can outperform specialized data warehouses in relational query performance, yet retain the expressiveness of general-purpose systems such as Apache Spark to support diverse workloads ranging from ETL to data science.</p>
<p>As part of this team, you will be working in one or more of the following areas to design and implement these next gen systems that leapfrog state-of-the-art:</p>
<ul>
<li>Query compilation and optimization</li>
<li>Distributed query execution and scheduling</li>
<li>Vectorized execution engine</li>
<li>Data security</li>
<li>Resource management</li>
<li>Transaction coordination</li>
<li>Efficient storage structures (encodings, indexes)</li>
<li>Automatic physical data optimization</li>
</ul>
<p>We look for individuals with a passion for database systems, storage systems, distributed systems, language design, or performance optimization. You should have experience working towards a multi-year vision with incremental deliverables, and be motivated by delivering customer value and impact.</p>
<p>The pay range for this role is $192,000-$260,000 USD, and the total compensation package may also include eligibility for annual performance bonus, equity, and benefits.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$192,000-$260,000 USD</Salaryrange>
      <Skills>database systems, storage systems, distributed systems, language design, performance optimization, query compilation and optimization, distributed query execution and scheduling, vectorized execution engine, data security, resource management, transaction coordination, efficient storage structures</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data, analytics, and AI. It was founded by the original creators of Apache Spark, Delta Lake, and MLflow, and pioneered the lakehouse architecture.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/6544386002</Applyto>
      <Location>Mountain View, California</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>e65d4ce5-634</externalid>
      <Title>Staff Product Manager, Security</Title>
      <Description><![CDATA[<p>At Databricks, we are committed to helping data teams solve the world&#39;s toughest problems. As a Staff Product Manager, Security, you will lead all aspects of data security for the Databricks Lakehouse platform. You will drive the product strategy to identify, prioritize and deliver both the external and internal controls necessary for customers to meet their privacy, confidentiality, digital sovereignty and other regulatory obligations.</p>
<p>Your work will help IT and security administrators feel at ease with increasing Databricks usage in their organization. Data champions will benefit from the data security and operational simplicity you provide, freeing them to focus their time on creating the most value for their users.</p>
<p>The impact you will have:</p>
<ul>
<li>Make security a market differentiator that enables Databricks to succeed in enterprise and regulated industries.</li>
<li>Help Databricks gain the trust of CSO teams of security sensitive customers and qualify the Lakehouse platform for all data classes in the customer organization.</li>
<li>Build a vision for a best-in-class network and data security policy model that provides secure access to customer workloads and data while minimizing exposure to unauthorized access and risk of data loss.</li>
<li>Partner with privacy, legal, and security teams to deliver anonymization, retention, and residency for regulated data and sensitive assets such as AI models.</li>
<li>Deepen our understanding of evolving threats and corresponding technologies in data security. Use customer inputs to serve as an expert and propose new innovations or best practices from the industry.</li>
<li>Partner with cloud providers and support confidential compute. Rationalize and refine the differences in customer experience and data security technologies across multiple clouds, including Azure, AWS, and Google Cloud.</li>
</ul>
<p>What we look for:</p>
<ul>
<li>7+ years of product management experience with enterprise B2B or SaaS products, with experience delivering successful customer-facing products/features.</li>
<li>Domain expertise in data security, data privacy, and encryption</li>
<li>An accomplished senior product manager who can define and deliver on success metrics, summarize customer inputs into actionable requirements, and crisply communicate product value propositions to customers and sales teams.</li>
<li>Experience partnering with senior engineering leadership with an ability to deep-dive into complex technical concepts.</li>
<li>Experience driving business results through influential leadership across multiple partner teams including engineering, sales, marketing, solution architects and customer success.</li>
<li>Passion for designing products that simplify user experience of technically complex products.</li>
</ul>
<p>Pay Range Transparency</p>
<p>Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role is listed below and represents the expected salary range for non-commissionable roles or on-target earnings for commissionable roles. Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location. Based on the factors above, Databricks anticipates utilizing the full width of the range. The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$135,700-$240,000 USD</Salaryrange>
      <Skills>data security, data privacy, encryption, product management, enterprise B2B or SaaS products, customer-facing products/features, success metrics, customer inputs, actionable requirements, product value propositions, customers and sales teams, senior engineering leadership, complex technical concepts, business results, influential leadership, partner teams, engineering, sales, marketing, solution architects, customer success</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a data and AI infrastructure platform for customers to use deep data insights to improve their business.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/7110499002</Applyto>
      <Location>San Francisco, California</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>c26cebc9-fe8</externalid>
      <Title>Enterprise Account Executive</Title>
      <Description><![CDATA[<p>As an Enterprise Account Executive at VGS, you will develop relationships with new prospective customers and channel partners. You will engage in business-level conversations across the customer&#39;s organization, from the CTO, COO, CISO, CEO level to talking with Sales or Engineering.</p>
<p>Your responsibilities will include:</p>
<ul>
<li>Collaborating closely with management to plan and implement a strategic outreach plan</li>
<li>Developing and implementing best practices for growing VGS&#39;s customer base and strategic partnerships</li>
<li>Creating new business by aggressively finding, qualifying, and closing leads</li>
<li>Identifying and capitalizing on up-sell and cross-sell opportunities</li>
<li>Interfacing with and nurturing leads through outbound channels including phone, email, and social media</li>
<li>Working closely with Product and Engineering teams to drive product and business strategy</li>
</ul>
<p>To be successful in this role, you will need:</p>
<ul>
<li>BS/BA or a minimum of 5 years of experience in a Sales or Business Development role</li>
<li>Experience managing inbound and outbound sales</li>
<li>Exceptional ability to develop and maintain client relationships</li>
<li>Technical expertise</li>
<li>Desire to help build a world-class sales organization</li>
<li>Willingness to take ownership and execute</li>
<li>Comfort selling a technical product to a technical audience</li>
<li>Ability to work with and develop relationships with upper management and engineering</li>
<li>Strong communication skills and excellent organizational ability</li>
<li>Thoughtful and decisive judgment, with the ability to work well in a dynamic, fast-paced environment</li>
<li>Prior Business Development experience in Fintech and/or Data Security (desired)</li>
<li>Prior experience at an emerging SaaS or software company</li>
</ul>
<p>In addition to a competitive salary, you will also receive:</p>
<ul>
<li>Flexible work hours and flexible PTO</li>
<li>Competitive health benefits</li>
<li>VGS stock options</li>
<li>401k plan, with 4% employer matching and immediate vesting (available only for US employees)</li>
<li>Life &amp; disability insurance</li>
<li>Pre-tax flexible spending accounts, dependent and healthcare FSA (available only for US employees)</li>
<li>Global parental leave program</li>
<li>Employee Assistance Program</li>
<li>Home internet reimbursement</li>
<li>New hire home office set-up allowance</li>
<li>Professional learning reimbursement</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>executive</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Sales, Business Development, Client Relationship Management, Technical Expertise, Communication, Organizational Ability, Fintech, Data Security, Emerging SaaS or Software</Skills>
      <Category>Sales</Category>
      <Industry>Finance</Industry>
      <Employername>VGS</Employername>
      <Employerlogo>https://logos.yubhub.co/vgs.com.png</Employerlogo>
      <Employerdescription>VGS is the world&apos;s leader in payment tokenization, providing processor-agnostic tokenization solutions to large banks, fintechs, and merchants.</Employerdescription>
      <Employerwebsite>https://www.vgs.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/verygoodsecurity/6778619f-29ac-40b8-bbda-0c42b2ea5264</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>5e2a7198-49b</externalid>
      <Title>VP Engineering</Title>
      <Description><![CDATA[<p><strong>VP Engineering at Kody</strong></p>
<p>Kody is scaling rapidly, and we’re looking for an exceptional VP of Engineering to lead our technical organisation through the next phase of growth. As a key member of the executive team, you’ll play a pivotal role in shaping our engineering strategy, culture, and execution. This is a role for someone who thrives in a high-velocity, global, purpose-driven environment.</p>
<p>As VP of Engineering, you will be responsible for building and leading a high-performing engineering organisation aligned with Kody’s values and mission. You’ll define the technical vision, guide architectural decisions and ensure the delivery of scalable, secure and reliable platforms and products. You’ll work cross-functionally with Product, Design, and Operations to bring our roadmap to life while fostering a strong engineering culture rooted in technical excellence and continuous improvement.</p>
<p><strong>What You’ll Be Doing:</strong></p>
<ul>
<li>Lead the recruitment, development, and retention of a world-class engineering team.</li>
<li>Set and execute the engineering vision and technical direction in partnership with executive and product leadership.</li>
<li>Define and deliver an engineering strategy that enables scalable and secure growth of Kody’s platform and services.</li>
<li>Drive engineering execution to ensure timely, high-quality delivery across all teams.</li>
<li>Establish and monitor KPIs to track performance, code quality, system stability and deployment efficiency.</li>
<li>Promote a culture of feedback, mentorship and career development through structured reviews and ongoing coaching.</li>
<li>Collaborate closely with Product, Design, and Operations to align on priorities and roadmap planning.</li>
<li>Champion modern development practices including CI/CD, testing, observability, and agile methodologies.</li>
<li>Oversee architectural planning and ensure sound technical decision-making across backend, frontend, and infrastructure.</li>
<li>Manage engineering budgets, headcount planning and resource allocation in alignment with company goals.</li>
<li>Advocate for engineering needs and priorities at the leadership level, balancing business goals and technical feasibility.</li>
<li>Maintain and evolve onboarding processes, documentation, and workflows to support rapid team scaling.</li>
<li>Embed data security, privacy, and regulatory compliance into the Engineering culture.</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>Proven experience leading engineering teams at high-growth tech companies or scale-ups (ideally in fintech or related industries).</li>
<li>Deep technical expertise across modern architectures and cloud-based platforms.</li>
<li>Strong track record of managing and growing high-performing, distributed engineering teams.</li>
<li>Excellent leadership, communication, and cross-functional collaboration skills.</li>
<li>Passion for building secure, scalable, and customer-centric platforms.</li>
<li>Hands-on experience with agile development, DevOps culture, and continuous delivery pipelines.</li>
<li>Strong business acumen and the ability to align engineering strategy with overall company objectives.</li>
<li>Comfortable navigating ambiguity and thriving in fast-paced, evolving environments.</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Access to competitive Health Insurance (Medical and Dental)</li>
<li>Access to Learning and Development</li>
<li>Flexible time off and generous leave policy</li>
<li>Participation in company equity or incentive plans</li>
<li>A collaborative, inclusive culture where your work is valued and recognised</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>executive</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Technical leadership, Cloud-based platforms, Agile development, DevOps culture, Continuous delivery pipelines, Modern architectures, Data security, Privacy, Regulatory compliance, Fintech, Online payment solutions, Brick and mortar businesses</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>Kody</Employername>
      <Employerlogo>https://logos.yubhub.co/view.com.png</Employerlogo>
      <Employerdescription>Kody is a fast-growing Fintech company that provides online payment solutions to brick and mortar businesses.</Employerdescription>
      <Employerwebsite>https://jobs.workable.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.workable.com/view/6PFrovcG1V6BpwLGog88tp/vp-engineering-in-san-jose-at-kody</Applyto>
      <Location>San Jose, California</Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
    <job>
      <externalid>a416a165-7df</externalid>
      <Title>VP Engineering</Title>
      <Description><![CDATA[<p><strong>VP Engineering at Kody</strong></p>
<p>Kody is scaling rapidly, and we’re looking for an exceptional VP of Engineering to lead our technical organisation through the next phase of growth. As a key member of the executive team, you’ll play a pivotal role in shaping our engineering strategy, culture, and execution. This is a role for someone who thrives in a high-velocity, global, purpose-driven environment.</p>
<p>As VP of Engineering, you will be responsible for building and leading a high-performing engineering organisation aligned with Kody’s values and mission. You’ll define the technical vision, guide architectural decisions and ensure the delivery of scalable, secure and reliable platforms and products. You’ll work cross-functionally with Product, Design, and Operations to bring our roadmap to life while fostering a strong engineering culture rooted in technical excellence and continuous improvement.</p>
<p><strong>What You’ll Be Doing:</strong></p>
<ul>
<li>Lead the recruitment, development, and retention of a world-class engineering team.</li>
<li>Set and execute the engineering vision and technical direction in partnership with executive and product leadership.</li>
<li>Define and deliver an engineering strategy that enables scalable and secure growth of Kody’s platform and services.</li>
<li>Drive engineering execution to ensure timely, high-quality delivery across all teams.</li>
<li>Establish and monitor KPIs to track performance, code quality, system stability and deployment efficiency.</li>
<li>Promote a culture of feedback, mentorship and career development through structured reviews and ongoing coaching.</li>
<li>Collaborate closely with Product, Design, and Operations to align on priorities and roadmap planning.</li>
<li>Champion modern development practices including CI/CD, testing, observability, and agile methodologies.</li>
<li>Oversee architectural planning and ensure sound technical decision-making across backend, frontend, and infrastructure.</li>
<li>Manage engineering budgets, headcount planning and resource allocation in alignment with company goals.</li>
<li>Advocate for engineering needs and priorities at the leadership level, balancing business goals and technical feasibility.</li>
<li>Maintain and evolve onboarding processes, documentation, and workflows to support rapid team scaling.</li>
<li>Embed data security, privacy, and regulatory compliance into the Engineering culture.</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>10-15 years of experience in engineering leadership roles, ideally within fintech, SaaS, or tech-enabled services.</li>
<li>Proven experience leading engineering teams at high-growth tech companies or scale-ups (ideally in fintech or related industries).</li>
<li>Deep technical expertise across modern architectures and cloud-based platforms.</li>
<li>Strong track record of managing and growing high-performing, distributed engineering teams.</li>
<li>Excellent leadership, communication, and cross-functional collaboration skills.</li>
<li>Passion for building secure, scalable, and customer-centric platforms.</li>
<li>Hands-on experience with agile development, DevOps culture, and continuous delivery pipelines.</li>
<li>Strong business acumen and the ability to align engineering strategy with overall company objectives.</li>
<li>Comfortable navigating ambiguity and thriving in fast-paced, evolving environments.</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Access to competitive Health Insurance (Medical and Dental)</li>
<li>Access to Learning and Development</li>
<li>Flexible time off and generous leave policy</li>
<li>Participation in company equity or incentive plans</li>
<li>A collaborative, inclusive culture where your work is valued and recognised</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>executive</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>engineering leadership, fintech, SaaS, tech-enabled services, modern architectures, cloud-based platforms, agile development, DevOps culture, continuous delivery pipelines, business acumen, communication, cross-functional collaboration, CI/CD, testing, observability, data security, privacy, regulatory compliance</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>Kody</Employername>
      <Employerlogo>https://logos.yubhub.co/view.com.png</Employerlogo>
      <Employerdescription>Kody is a fintech company that provides online payment solutions to brick and mortar businesses.</Employerdescription>
      <Employerwebsite>https://jobs.workable.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.workable.com/view/1GZdQ9GGHNnk6KkrCPVqgP/vp-engineering-in-san-francisco-at-kody</Applyto>
      <Location>San Francisco, California</Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
    <job>
      <externalid>f867ca73-2e0</externalid>
      <Title>Lead Data Consultant (H/F) Paris</Title>
      <Description><![CDATA[<p><strong>A leading data company in Paris</strong></p>
<p>Fifty-Five is a global data company that helps brands collect, analyse and activate their data across paid, earned and owned channels to increase their marketing ROI and improve customer acquisition and retention.</p>
<p>We are looking for a Lead Data Consultant to join our team in Paris. As a Lead Data Consultant, you will be responsible for leading data projects and working closely with our clients to understand their data needs and develop solutions to meet those needs.</p>
<p><strong>Key Responsibilities</strong></p>
<ul>
<li>Lead data projects from start to finish, including data collection, analysis and activation</li>
<li>Work closely with clients to understand their data needs and develop solutions to meet those needs</li>
<li>Collaborate with our data team to develop and implement data strategies</li>
<li>Analyse data to identify trends and insights that can inform business decisions</li>
<li>Develop and maintain relationships with clients to ensure their data needs are met</li>
<li>Stay up-to-date with the latest data trends and technologies</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>3-6 years of experience in data analysis and consulting</li>
<li>Strong understanding of data analysis and statistical techniques</li>
<li>Experience working with large datasets and data visualisation tools</li>
<li>Excellent communication and project management skills</li>
<li>Ability to work independently and as part of a team</li>
<li>Strong analytical and problem-solving skills</li>
<li>Experience working with data platforms such as Google Analytics and Google Cloud Platform</li>
<li>Strong understanding of data privacy and security regulations</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Competitive salary and benefits package</li>
<li>Opportunity to work with a leading data company in Paris</li>
<li>Collaborative and dynamic work environment</li>
<li>Professional development opportunities</li>
<li>Flexible working hours and remote work options</li>
<li>Access to the latest data tools and technologies</li>
<li>Opportunity to work on a variety of data projects and clients</li>
<li>Recognition and rewards for outstanding performance</li>
</ul>
<p><strong>How to Apply</strong></p>
<p>If you are a motivated and experienced data professional looking for a new challenge, please submit your application, including your resume and a cover letter, to [insert contact information]. We look forward to hearing from you!</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>data analysis, data visualisation, data strategy, data privacy, data security, Google Analytics, Google Cloud Platform, data science, machine learning, data engineering, data architecture</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Fifty-Five</Employername>
      <Employerlogo>https://logos.yubhub.co/view.com.png</Employerlogo>
      <Employerdescription>Fifty-Five is a global data company that helps brands collect, analyse and activate their data across paid, earned and owned channels to increase their marketing ROI and improve customer acquisition and retention. The company has over 320 experts and is part of The Brandtech Group.</Employerdescription>
      <Employerwebsite>https://jobs.workable.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.workable.com/view/wPj5jcg35AZgUXYdKWsC6a/lead-data-consultant-(h%2Ff)-paris-in-paris-at-fifty-five</Applyto>
      <Location>Paris</Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
    <job>
      <externalid>015e5c6d-a31</externalid>
      <Title>Senior Data Engineer</Title>
      <Description><![CDATA[<p><strong>Why Valvoline Global Operations?</strong></p>
<p>At Valvoline Global Operations, we&#39;re proud to be The Original Motor Oil, but we&#39;ve never rested on being first. Founded in 1866, we introduced the world&#39;s first branded motor oil, staking our claim as a pioneer in the automotive and industrial solutions industry.</p>
<p><strong>Job Purpose</strong></p>
<p>We are seeking a highly skilled and motivated Data Engineer to join our growing data and analytics team. The ideal candidate will have strong experience designing and developing scalable data pipelines, integrating complex systems, and optimizing data workflows. Proficiency in Databricks and SAP Datasphere is preferred, as these platforms are central to our data ecosystem.</p>
<p><strong>How You Make an Impact (Job Accountabilities)</strong></p>
<ul>
<li>Design, build, and maintain robust, scalable, and high-performance data pipelines using Databricks and SAP Datasphere.</li>
<li>Collaborate with data architects, analysts, data scientists, and business stakeholders to gather requirements and deliver data solutions aligned with stakeholders&#39; goals.</li>
<li>Integrate diverse data sources (e.g., SAP, APIs, flat files, cloud storage) into the enterprise data platforms.</li>
<li>Ensure high standards of data quality and implement data governance practices.</li>
<li>Stay current with emerging trends and technologies in cloud computing, big data, and data engineering.</li>
<li>Provide ongoing support for the platform, troubleshoot any issues that arise, and ensure high availability and reliability of data infrastructure.</li>
<li>Create documentation for the platform infrastructure and processes, and train other team members and users to use the platform effectively.</li>
</ul>
<p><strong>What You Bring to the Role (Job Qualifications / Education / Skills / Requirements / Capabilities)</strong></p>
<ul>
<li>Bachelor&#39;s or Master’s degree in Computer Science, Data Engineering, Information Systems, or a related field.</li>
<li>5-7+ years of experience in a data engineering or related role.</li>
<li>Strong knowledge of data engineering principles, data warehousing concepts, and modern data architecture.</li>
<li>Proficiency in SQL and at least one programming language (e.g., Python, Scala).</li>
<li>Experience with cloud platforms (e.g., Azure, AWS, or GCP), particularly in data services.</li>
<li>Familiarity with data processing and orchestration tools (e.g., PySpark, Airflow, Azure Data Factory) and CI/CD pipelines.</li>
</ul>
<p><strong>Competencies Desired</strong></p>
<ul>
<li>Hands-on experience with Databricks (including Spark/PySpark, Delta Lake, MLflow, Unity Catalog, etc.).</li>
<li>Practical experience working with SAP Datasphere (or SAP Data Warehouse Cloud) in data modeling and data integration scenarios.</li>
<li>SAP BW or SAP HANA experience is a plus.</li>
<li>Experience with BI tools like Power BI or Tableau.</li>
<li>Understanding of data governance frameworks and data security best practices.</li>
<li>Exposure to data lakehouse architecture and real-time streaming data pipelines.</li>
<li>Certifications in Databricks, SAP, or cloud platforms are advantageous.</li>
</ul>
<p><strong>Working Conditions / Physical Requirements / Travel Requirements</strong></p>
<ul>
<li>Normal office environment.</li>
<li>Prolonged periods of computer use and frequent participation in meetings.</li>
<li>Occasional walking, standing, and light lifting (up to 10 lbs).</li>
<li>Minimal travel required.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>data engineering, Databricks, SAP Datasphere, SQL, Python, Scala, cloud platforms, data orchestration tools, CI/CD pipelines, SAP BW, SAP HANA, Power BI, Tableau, data governance frameworks, data security best practices, data lakehouse architecture, real-time streaming data pipelines</Skills>
      <Category>Engineering</Category>
      <Industry>Automotive</Industry>
      <Employername>Valvoline Global Operations</Employername>
      <Employerlogo>https://logos.yubhub.co/jobs.valvolineglobal.com.png</Employerlogo>
      <Employerdescription>Valvoline Global Operations is a global company that develops future-ready products and provides best-in-class services for the automotive and industrial solutions industry.</Employerdescription>
      <Employerwebsite>https://jobs.valvolineglobal.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.valvolineglobal.com/job/Senior-Data-Engineer/1316654400/</Applyto>
      <Location></Location>
      <Country></Country>
      <Postedate>2026-03-08</Postedate>
    </job>
    <job>
      <externalid>c29dbc40-9e1</externalid>
      <Title>Advertising Marketing Science Lead</Title>
      <Description><![CDATA[<p><strong>Compensation</strong></p>
<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>
<ul>
<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>
<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>
<li>401(k) retirement plan with employer match</li>
<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>
<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>
<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>
<li>Mental health and wellness support</li>
<li>Employer-paid basic life and disability coverage</li>
<li>Annual learning and development stipend to fuel your professional growth</li>
<li>Daily meals in our offices, and meal delivery credits as eligible</li>
<li>Relocation support for eligible employees</li>
<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>
</ul>
<p><strong>About the team</strong></p>
<p>OpenAI’s mission is to ensure the responsible and widespread adoption of artificial intelligence. In support of that mission, the Marketing team helps deeply understand customer audiences and market dynamics, influence the development of the right products, build sustainable and customer-aligned monetization models, and drive awareness, adoption, and usage across OpenAI’s products and platform.</p>
<p><strong>About the role</strong></p>
<p>We’re looking for an <strong>Advertising Marketing Science</strong> leader to establish and scale OpenAI’s advertiser-facing reporting, measurement, and attribution credibility. You’ll combine deep measurement expertise with strong judgment and cross-functional leadership to define how advertisers understand performance on OpenAI and how our reporting aligns with their existing measurement frameworks (MTA, incrementality/lift testing, MMM/geo experimentation).</p>
<p>This role will start as a hands-on individual contributor responsible for building the methodological foundations of OpenAI’s advertising measurement system. Over time, you will define the strategy, operating model, and team needed to scale this function globally as advertiser adoption grows.</p>
<p>This role is ideal for someone who enjoys building new capabilities from first principles, can translate complex causal measurement approaches into trusted industry narratives, and is energized by partnering across Product, Engineering, Sales, Partnerships and Legal to build a privacy-first measurement ecosystem.</p>
<p><strong>In this role, you will:</strong></p>
<ul>
<li><strong>Define OpenAI’s advertiser measurement strategy</strong>, establishing how our reporting aligns with attribution (MTA), incrementality/lift testing, MMM, geo experimentation, and partner measurement frameworks.</li>
<li><strong>Build the foundation of OpenAI’s Marketing Science function</strong>, initially leading work as an individual contributor while designing the long-term team structure, operating model, and measurement programs.</li>
<li><strong>Lead advertiser-facing measurement discussions</strong>, representing OpenAI in executive briefings, measurement escalations, and industry conversations while building trust in our methodologies and reporting.</li>
<li><strong>Develop clear advertiser narratives</strong> that translate causal inference, attribution models, and statistical methodologies into understandable guidance for campaign optimization and investment decisions.</li>
<li><strong>Design and govern OpenAI’s advertising measurement program</strong>, including standardized experiment patterns (A/B, geo, quasi-experimental), power calculators, diagnostics, and experiment-quality guardrails.</li>
<li><strong>Build scalable measurement frameworks</strong> that reconcile results across MMM, MTA, and lift testing, helping advertisers triangulate OpenAI performance within their broader marketing measurement systems.</li>
<li><strong>Establish privacy-centric measurement approaches as needed</strong>, including aggregated measurement and conversion modeling, in partnership with Legal and Privacy teams.</li>
<li><strong>Translate measurement strategy into product capabilities</strong>, partnering with Product and Engineering to operationalize methodologies into durable measurement tools and reporting infrastructure.</li>
<li><strong>Shape OpenAI’s external measurement ecosystem</strong>, working with third-party measurement partners, clean-room providers, and industry groups to align standards and reduce friction for advertisers.</li>
</ul>
<p><strong>You might thrive in this role if you:</strong></p>
<ul>
<li>Have <strong>deep expertise in advertising measurement</strong> including experimentation, incrementality testing, attribution modeling, and econometric approaches such as MMM.</li>
<li>Have <strong>experience designing and scaling lift or incrementality programs</strong>, including governance, experimentation frameworks, and statistical quality standards.</li>
<li>Are comfortable acting as a <strong>senior external measurement authority</strong>, confidently leading advertiser conversations, navigating discrepancies, and building trust with sophisticated marketing organizations.</li>
<li>Can <strong>translate complex statistical concepts into practical decision frameworks</strong> for both technical and non-technical audiences.</li>
<li>Can <strong>build functions from the ground up</strong>, setting strategy while also executing hands-on during early stages of team development.</li>
<li>Have successfully partnered with <strong>Product and Engineering teams to translate measurement science into scalable product capabilities</strong>.</li>
<li>Have experience working with <strong>third-party measurement providers or industry standards organizations</strong>.</li>
<li>Thrive in <strong>fast-paced, high-ambiguity environments</strong> and are comfortable leading cross-company initiatives without direct authority.</li>
<li>Care deeply about building <strong>trusted, privacy-forward measurement systems</strong> that enable long-term advertiser confidence.</li>
</ul>
<p><strong>About OpenAI</strong></p>
<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.</p>
<p>We are an equal opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or other applicable legally protected characteristic.</p>
<p>For additional information, please see [OpenAI’s Affirm</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$284K – $415K</Salaryrange>
      <Skills>Advertising measurement, Experimentation, Incrementality testing, Attribution modeling, Econometric approaches, Statistical quality standards, Measurement frameworks, Data analysis, Data visualization, Communication skills, Leadership skills, Collaboration skills, Data science, Machine learning, Statistics, Mathematics, Computer programming, Data engineering, Cloud computing, Big data, Data governance, Data security</Skills>
      <Category>Marketing</Category>
      <Industry>Technology</Industry>
      <Employername>OpenAI</Employername>
      <Employerlogo>https://logos.yubhub.co/openai.com.png</Employerlogo>
      <Employerdescription>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. The company pushes the boundaries of the capabilities of AI systems and seeks to safely deploy them to the world through its products.</Employerdescription>
      <Employerwebsite>https://jobs.ashbyhq.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/openai/5547d275-f123-46e1-8695-71fd79a05724</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-03-08</Postedate>
    </job>
    <job>
      <externalid>aa015612-5ff</externalid>
      <Title>Product &amp; Solutions Lead, Safety and Security</Title>
      <Description><![CDATA[<p><strong>Job Posting</strong></p>
<p><strong>Product &amp; Solutions Lead, Safety and Security</strong></p>
<p><strong>Location</strong></p>
<p>San Francisco</p>
<p><strong>Employment Type</strong></p>
<p>Full time</p>
<p><strong>Department</strong></p>
<p>Intelligence &amp; Investigations</p>
<p><strong>Compensation</strong></p>
<ul>
<li>$288K – $425K • Offers Equity</li>
</ul>
<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>
<ul>
<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>
<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>
<li>401(k) retirement plan with employer match</li>
<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>
<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>
<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>
<li>Mental health and wellness support</li>
<li>Employer-paid basic life and disability coverage</li>
<li>Annual learning and development stipend to fuel your professional growth</li>
<li>Daily meals in our offices, and meal delivery credits as eligible</li>
<li>Relocation support for eligible employees</li>
<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>
</ul>
<p>More details about our benefits are available to candidates during the hiring process.</p>
<p>This role is at-will and OpenAI reserves the right to modify base pay and other compensation components at any time based on individual performance, team or company results, or market conditions.</p>
<p><strong>About the Team</strong></p>
<p>The Intelligence &amp; Investigations (I2) team detects and disrupts abuse and strategic risks so people can use AI safely. We translate real-world signals, investigations, and external threat intelligence into practical mitigations, operating guidance, and partner-ready support that improves safety outcomes across the AI ecosystem.</p>
<p><strong>About the Role</strong></p>
<p>As a Product &amp; Solutions Lead focused on safety and security, you will build and operate 0–1 products, services, and technical solution packages that help developers and public institutions move from experimentation to durable, trusted outcomes—while maintaining public safety, transparency, and respect for privacy and rights.</p>
<p>This role balances two modes of delivery:</p>
<ol>
<li>Bespoke products and technical solutions for strategic internal and external partners, and</li>
<li>Scalable product and solution packages that can be reused broadly across partners and deployments.</li>
</ol>
<p>Training is a component of scale, but not the center of gravity. You will also ship reference implementations, playbooks, evaluation kits, and repeatable operating models that partners can adopt and operate.</p>
<p>You will work directly with engineers and a multidisciplinary group of safety and geopolitical analysts, and data and quantitative scientists to convert complex, evolving challenges into solutions that teams can adopt in high-stakes environments.</p>
<p>This role is based in San Francisco, CA (hybrid, 3 days/week). Relocation support is available.</p>
<p><strong>In this role, you will:</strong></p>
<ul>
<li>Own the 0–1 roadmap for safety and security solution offerings: define the target users, problem statements, tools, operating models, success metrics, and the set of reusable deliverables we ship.</li>
<li>Design and ship bespoke technical solutions for priority partners (internal and external), then abstract what works into reusable patterns and toolkits.</li>
<li>Build partner-ready technical artifacts: solution blueprints, reference architectures, evaluation and monitoring guidance, incident/response playbooks, and deployment checklists.</li>
<li>Package open-source and proprietary capabilities into adoption-ready solutions (e.g., reference implementations, configuration patterns, validated workflows).</li>
<li>Maintain a consistent delivery model across engagements: intake, scoping, governance alignment, execution cadence, and retrospectives that improve the offering over time.</li>
<li>Translate evolving threats into actionable guidance and updates for solution packages (e.g., scams/fraud patterns, cyber-enabled threats, ecosystem abuse trends).</li>
<li>Develop lightweight enablement components as needed: targeted technical modules, hands-on labs, and readiness assessments that accelerate adoption of the solutions.</li>
<li>Define and instrument impact measurement: adoption milestones, readiness indicators, reliability and safety posture improvements, and partner satisfaction with outputs.</li>
<li>Partner closely across engineering, safety, geopolitical analysis, and quantitative teams to ensure solutions are technically credible, threat-informed, and measurable.</li>
<li>Communicate progress, trade-offs, risks, and recommendations to internal and external stakeholders in crisp, decision-ready form.</li>
</ul>
<p><strong>You might thrive in this role if you:</strong></p>
<ul>
<li>Have 6+ years in product, technical program leadership, solutions, or platform operations, especially in safety, security, risk, integrity, or enterprise/public-sector contexts.</li>
<li>Have built 0–1 solution offerings (product plus services or productized services): taking ambiguous needs, shipping something concrete, then scaling it into a repeatable model.</li>
<li>Have a builder’s mindset: comfortable incubating early-stage ideas, testing them with partners, and evolving them into durable, repeatable safety and security solutions.</li>
<li>Can go deep with engineers and still produce partner-ready artifacts that are clear</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$288K – $425K</Salaryrange>
      <Skills>product leadership, technical program leadership, solutions, platform operations, safety, security, risk, integrity, enterprise/public-sector contexts, product development, solution development, technical writing, communication, project management, team leadership, collaboration, problem-solving, analytical skills, data analysis, data visualization, machine learning, artificial intelligence, cybersecurity, threat intelligence, incident response, compliance, regulatory affairs, cloud computing, containerization, DevOps, agile development, scrum, kanban, continuous integration, continuous deployment, continuous testing, test automation, security testing, penetration testing, vulnerability assessment, compliance testing, regulatory testing, data protection, information security, cybersecurity frameworks, risk management, compliance management, regulatory compliance, data governance, information governance, data quality, data integrity, data validation, data verification, data certification, data assurance, data security, data encryption, data masking, data tokenization, data anonymization, data pseudonymization, data aggregation, data fusion, data integration, data warehousing, data mart, data lake, data catalog</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>OpenAI</Employername>
      <Employerlogo>https://logos.yubhub.co/openai.com.png</Employerlogo>
      <Employerdescription>OpenAI is a technology company that focuses on developing and applying artificial intelligence in a way that benefits humanity. It was founded in 2015 and has since grown to become one of the leading AI research and development companies in the world.</Employerdescription>
      <Employerwebsite>https://jobs.ashbyhq.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/openai/c664cc09-d996-450c-8683-ad591ac27c11</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>b04bd171-7c3</externalid>
      <Title>Full Stack Software Engineer, ChatGPT Partnerships</Title>
      <Description><![CDATA[<p><strong>Full Stack Software Engineer, ChatGPT Partnerships</strong></p>
<p><strong>Location</strong></p>
<p>San Francisco</p>
<p><strong>Employment Type</strong></p>
<p>Full time</p>
<p><strong>Department</strong></p>
<p>Applied AI</p>
<p><strong>Compensation</strong></p>
<ul>
<li>$185K – $385K • Offers Equity</li>
</ul>
<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>
<p><strong>Benefits</strong></p>
<ul>
<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>
<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>
<li>401(k) retirement plan with employer match</li>
<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>
<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>
<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>
<li>Mental health and wellness support</li>
<li>Employer-paid basic life and disability coverage</li>
<li>Annual learning and development stipend to fuel your professional growth</li>
<li>Daily meals in our offices, and meal delivery credits as eligible</li>
<li>Relocation support for eligible employees</li>
<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>
</ul>
<p><strong>About the Team</strong></p>
<p>The ChatGPT team operates at the intersection of research, engineering, product, and design to bring OpenAI’s technology to a global audience.</p>
<p>Within ChatGPT, the <strong>Growth Partnerships</strong> team is dedicated to expanding distribution, unlocking new user acquisition channels, and building high-leverage integrations that deliver ChatGPT to users where they are. Working closely with external partners and internal platform teams, we design product experiences, APIs, and growth surfaces that scale adoption while upholding trust and safety.</p>
<p>Our work is multidisciplinary—merging product, engineering, and business impact to transform partnerships into sustainable growth engines for ChatGPT.</p>
<p><strong>About the Role</strong></p>
<p>We are seeking an experienced <strong>Full Stack Engineer</strong> to join the ChatGPT Growth Partnerships team and help establish the technical foundation that underpins partner-led growth. You will work on end-to-end product experiences—including frontend applications, backend services, APIs, experimentation, and data—that facilitate seamless integrations, onboarding, activation, and monetization through partners.</p>
<p>This high-impact role is ideal for engineers who thrive in fast-paced, ambiguous settings, can move from concept to launch, make sound product and technical judgments, and deliver quickly while maintaining robust engineering standards.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Build and own full-stack product experiences supporting partner integrations, onboarding flows, activation funnels, and growth surfaces.</li>
<li>Design and develop backend services and APIs for scalable, secure partner experiences.</li>
<li>Collaborate with product, partnerships, design, data science, and research teams to translate strategy into shipped product.</li>
<li>Lead experimentation initiatives (A/B tests, metrics, instrumentation) to understand drivers of adoption, retention, and value through partnerships.</li>
<li>Identify leverage points where small technical innovations can unlock significant growth impact.</li>
<li>Establish best practices for building extensible, partner-friendly systems at scale.</li>
<li>Contribute to a culture of ownership, clarity, inclusiveness, and thoughtful debate within engineering.</li>
</ul>
<p><strong>You Might Thrive in This Role If You</strong></p>
<ul>
<li>Have delivered full-stack features on the web that drive user acquisition, activation, or monetization (e.g., onboarding, integrations, dashboards, purchase flows).</li>
<li>Are comfortable with frontend and backend development, including API and service design, as well as data flows.</li>
<li>Think with a system-level perspective and focus on scalability and long-term robustness.</li>
<li>Are highly analytical, experienced in experiment design, and able to connect technical work to business outcomes.</li>
<li>Enjoy navigating ambiguity and structuring new problem areas.</li>
<li>Possess strong product intuition and prioritize user- and developer-friendly experiences.</li>
<li>Are motivated by impact and inspired to help shape how partnerships fuel ChatGPT’s growth.</li>
</ul>
<p><strong>Location</strong></p>
<p>San Francisco, New York, or Seattle</p>
<p><strong>Work Type</strong></p>
<p>Full-time</p>
<p><strong>Join us to help grow the ChatGPT partner ecosystem and reach millions of users through thoughtful engineering and product leadership.</strong></p>
<p><strong>About OpenAI</strong></p>
<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$185K – $385K • Offers Equity</Salaryrange>
      <Skills>Full-stack development, Frontend development, Backend development, API design, Service design, Data flows, Experimentation, A/B testing, Metrics, Instrumentation, Scalability, Long-term robustness, Analytical skills, Experiment design, Business outcomes, Product intuition, User- and developer-friendly experiences, Cloud computing, Containerization, DevOps, Agile development, Scrum, Kanban, Continuous integration, Continuous deployment, Continuous testing, Test-driven development, Behavior-driven development, API security, Data security, Cloud security, Compliance, Regulatory requirements</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>OpenAI</Employername>
      <Employerlogo>https://logos.yubhub.co/openai.com.png</Employerlogo>
      <Employerdescription>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products.</Employerdescription>
      <Employerwebsite>https://jobs.ashbyhq.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/openai/50626871-6bbf-4d8f-a534-176f929f1f37</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>e2dd1bdf-0b0</externalid>
      <Title>Member of Technical Staff - Data Infra - MAI Superintelligence Team</Title>
      <Description><![CDATA[<p><strong>Summary</strong></p>
<p>Microsoft AI is looking for a talented Member of Technical Staff - Data Infra - MAI Superintelligence Team at their Redmond office. This role sits at the heart of strategic decision-making, turning market data into actionable insights for a company that&#39;s revolutionising AI technology. You&#39;ll work directly with leadership to shape the company&#39;s direction in the AI and data science markets.</p>
<p><strong>About the Role</strong></p>
<p>We are looking for outstanding individuals excited about contributing to the next generation of systems that will transform the field. In particular, we are looking for candidates who:</p>
<ul>
<li>Are passionate about the role of data in large-scale AI model training.</li>
<li>Will thrive in a highly collaborative, fast-paced environment.</li>
<li>Have a high degree of expertise and pay close attention to detail.</li>
<li>Demonstrate a proactive attitude and enthusiasm for exploring new methods and technologies.</li>
<li>Effectively manage multiple responsibilities and can adjust to shifting priorities.</li>
</ul>
<p><strong>Accountabilities</strong></p>
<ul>
<li>Design and develop data pipelines that ingest enormous amounts of multi-modal training data (text, audio, images, video).</li>
<li>Own and maintain critical data infrastructure, including Spark, Ray, vector databases, and others.</li>
</ul>
<p><strong>The Candidate we&#39;re looking for</strong></p>
<p><strong>Experience:</strong></p>
<ul>
<li>Master&#39;s Degree in Computer Science, Math, Software Engineering, Computer Engineering, or related field AND 6+ years experience in business analytics, data science, software development, data modeling, or data engineering OR Bachelor&#39;s Degree in Computer Science, Math, Software Engineering, Computer Engineering, or related field AND 8+ years experience in business analytics, data science, software development, data modeling, or data engineering OR equivalent experience.</li>
</ul>
<p><strong>Technical skills:</strong></p>
<ul>
<li>4+ years experience with data governance, data compliance and/or data security.</li>
</ul>
<p><strong>Personal attributes:</strong></p>
<ul>
<li>A high degree of expertise and close attention to detail.</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Competitive salary range: $163,000 - $296,400 per year.</li>
<li>Benefits and other compensation.</li>
<li>Flexible work arrangements, including remote and hybrid options.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$163,000 - $296,400 per year</Salaryrange>
      <Skills>data governance, data compliance, data security, data engineering, data science</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Microsoft AI</Employername>
      <Employerlogo>https://logos.yubhub.co/microsoft.ai.png</Employerlogo>
      <Employerdescription>Microsoft AI is a leading technology company that aims to empower every person and every organization on the planet to achieve more. They are on a mission to create the largest and most advanced multimodal dataset in the world, which will power the training of the world&apos;s most capable AI frontier models.</Employerdescription>
      <Employerwebsite>https://microsoft.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://microsoft.ai/job/member-of-technical-staff-data-infra-mai-superintelligence-team-2/</Applyto>
      <Location>Redmond</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>18d14369-6b5</externalid>
      <Title>Member of Technical Staff - Data Infra - MAI Superintelligence Team</Title>
      <Description><![CDATA[<p><strong>Summary</strong></p>
<p>Microsoft AI is looking for a talented Member of Technical Staff - Data Infra - MAI Superintelligence Team at their Mountain View office. This role sits at the heart of strategic decision-making, turning market data into actionable insights for a company that&#39;s revolutionising AI technology. You&#39;ll work directly with leadership to shape the company&#39;s direction in the AI and data science markets.</p>
<p><strong>About the Role</strong></p>
<p>We are looking for outstanding individuals excited about contributing to the next generation of systems that will transform the field. In particular, we are looking for candidates who:</p>
<ul>
<li>Are passionate about the role of data in large-scale AI model training.</li>
<li>Will thrive in a highly collaborative, fast-paced environment.</li>
<li>Have a high degree of expertise and pay close attention to detail.</li>
<li>Demonstrate a proactive attitude and enthusiasm for exploring new methods and technologies.</li>
<li>Effectively manage multiple responsibilities and can adjust to shifting priorities.</li>
</ul>
<p><strong>Accountabilities</strong></p>
<ul>
<li>Design and develop data pipelines that ingest enormous amounts of multi-modal training data (text, audio, images, video).</li>
<li>Own and maintain critical data infrastructure, including Spark, Ray, vector databases, and others.</li>
<li>Build and maintain cutting-edge infrastructure that can store and process the petabytes of data needed to power models.</li>
</ul>
<p><strong>The Candidate we&#39;re looking for</strong></p>
<p><strong>Experience:</strong></p>
<ul>
<li>Master&#39;s Degree in Computer Science, Math, Software Engineering, Computer Engineering, or related field AND 6+ years experience in business analytics, data science, software development, data modeling, or data engineering OR Bachelor&#39;s Degree in Computer Science, Math, Software Engineering, Computer Engineering, or related field AND 8+ years experience in business analytics, data science, software development, data modeling, or data engineering OR equivalent experience.</li>
</ul>
<p><strong>Technical skills:</strong></p>
<ul>
<li>4+ years experience with data governance, data compliance and/or data security.</li>
</ul>
<p><strong>Personal attributes:</strong></p>
<ul>
<li>Embody our culture and values.</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Data Engineering IC6 – The typical base pay range for this role across the U.S. is USD $163,000 – $296,400 per year.</li>
<li>There is a different range applicable to specific work locations, within the San Francisco Bay area and New York City metropolitan area, and the base pay range for this role in those locations is USD $220,800 – $331,200 per year.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>USD $163,000 – $296,400 per year</Salaryrange>
      <Skills>data governance, data compliance, data security, data engineering, data science</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Microsoft AI</Employername>
      <Employerlogo>https://logos.yubhub.co/microsoft.ai.png</Employerlogo>
      <Employerdescription>Microsoft AI is a leading technology company that specializes in artificial intelligence, machine learning, and data science. They are on a mission to create the largest and most advanced multimodal dataset in the world, which will power the training of the world&apos;s most capable AI frontier models. Microsoft AI is a startup-like team inside Microsoft, created to push the boundaries of AI toward Humanist Superintelligence—ultra-capable systems that remain controllable, safety-aligned, and anchored to human values.</Employerdescription>
      <Employerwebsite>https://microsoft.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://microsoft.ai/job/member-of-technical-staff-data-infra-mai-superintelligence-team/</Applyto>
      <Location>Mountain View</Location>
      <Country></Country>
      <Postedate>2026-03-05</Postedate>
    </job>
    <job>
      <externalid>e330a898-308</externalid>
      <Title>Data Engineer</Title>
      <Description><![CDATA[<p><strong>What you&#39;ll do</strong></p>
<p>At Porsche Engineering Romania, we drive innovation in mobility systems through advanced data solutions. We are looking for a Data Engineer to design and optimize data pipelines, integrate IoT and telemetry data, and ensure compliance with performance KPIs.</p>
<ul>
<li>You will design and implement ETL/ELT processes for mobility data streams using AWS services.</li>
<li>You will integrate data from multiple sources (IoT, telemetry, infrastructure systems).</li>
<li>You will implement data models aligned with KPI monitoring requirements.</li>
<li>You will ensure data accuracy, consistency, and compliance with security standards.</li>
<li>You will implement audit and logging mechanisms for sensitive data.</li>
<li>You will document data flows, architecture, and operational procedures.</li>
<li>You will collaborate with international project teams.</li>
</ul>
<p><strong>What you need</strong></p>
<ul>
<li>Bachelor’s or Master’s degree in Information Technology or an equivalent education.</li>
<li>You have 3+ years of proven experience in data engineering projects.</li>
<li>You have strong skills in Python, SQL, and PySpark.</li>
<li>You have experience with data modeling and KPI reporting using tools like Power BI, Tableau, or Qlik.</li>
<li>You have hands-on knowledge of AWS services (S3, Glue, Lambda, Flink, Kinesis, CloudWatch, Step Functions, Athena, ECS).</li>
<li>You are familiar with monitoring frameworks (OpenTelemetry, New Relic).</li>
<li>You have a good understanding of data security and compliance for sensitive information.</li>
<li>You have knowledge of DevOps practices for data solutions (Terraform, CI/CD, monitoring).</li>
<li>Experience with SAP HANA, Java, and IoT in the automotive domain (e.g., ECU data) is considered a plus.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, SQL, PySpark, AWS services, data modeling, KPI reporting, data security, DevOps practices, SAP HANA, Java, IoT in the automotive domain</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Porsche Engineering Services GmbH</Employername>
      <Employerlogo>https://logos.yubhub.co/jobs.porsche.com.png</Employerlogo>
      <Employerdescription>Porsche Engineering Romania specializes in complex technical solutions at its two locations in Cluj-Napoca and Timisoara, including the development of intelligent and connected electric vehicles, electronics, and design.</Employerdescription>
      <Employerwebsite>https://jobs.porsche.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.porsche.com/index.php?ac=jobad&amp;id=18980</Applyto>
      <Location>Timisoara</Location>
      <Country></Country>
      <Postedate>2025-12-08</Postedate>
    </job>
    <job>
      <externalid>a0ca0eaa-e37</externalid>
      <Title>Data Engineer</Title>
      <Description><![CDATA[<p><strong>What you&#39;ll do</strong></p>
<p>At Porsche Engineering Romania, we drive innovation in mobility systems through advanced data solutions. We are looking for a Data Engineer to design and optimize data pipelines, integrate IoT and telemetry data, and ensure compliance with performance KPIs.</p>
<ul>
<li>You will design and implement ETL/ELT processes for mobility data streams using AWS services.</li>
<li>You will integrate data from multiple sources (IoT, telemetry, infrastructure systems).</li>
<li>You will implement data models aligned with KPI monitoring requirements.</li>
<li>You will ensure data accuracy, consistency, and compliance with security standards.</li>
<li>You will implement audit and logging mechanisms for sensitive data.</li>
<li>You will document data flows, architecture, and operational procedures.</li>
<li>You will collaborate with international project teams.</li>
</ul>
<p><strong>What you need</strong></p>
<ul>
<li>Bachelor’s or Master’s degree in Information Technology or an equivalent education.</li>
<li>You have 3+ years of proven experience in data engineering projects.</li>
<li>You have strong skills in Python, SQL, and PySpark.</li>
<li>You have experience with data modeling and KPI reporting using tools like Power BI, Tableau, or Qlik.</li>
<li>You have hands-on knowledge of AWS services (S3, Glue, Lambda, Flink, Kinesis, CloudWatch, Step Functions, Athena, ECS).</li>
<li>You are familiar with monitoring frameworks (OpenTelemetry, New Relic).</li>
<li>You have a good understanding of data security and compliance for sensitive information.</li>
<li>You have knowledge of DevOps practices for data solutions (Terraform, CI/CD, monitoring).</li>
<li>Experience with SAP HANA, Java, and IoT in the automotive domain (e.g., ECU data) is considered a plus.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, SQL, PySpark, data modeling, KPI reporting, AWS services, monitoring frameworks, data security, DevOps practices, SAP HANA, Java, IoT</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Porsche Engineering Services GmbH</Employername>
      <Employerlogo>https://logos.yubhub.co/jobs.porsche.com.png</Employerlogo>
      <Employerdescription>Porsche Engineering Romania specializes in complex technical solutions at its two locations in Cluj-Napoca and Timisoara, including the development of intelligent and connected electric vehicles, electronics, and design.</Employerdescription>
      <Employerwebsite>https://jobs.porsche.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.porsche.com/index.php?ac=jobad&amp;id=18979</Applyto>
      <Location>Cluj</Location>
      <Country></Country>
      <Postedate>2025-12-08</Postedate>
    </job>
  </jobs>
</source>