<?xml version="1.0" encoding="UTF-8"?>
<source>
  <jobs>
    <job>
      <externalid>1fa6d45d-1b7</externalid>
      <Title>Senior Software Engineer, United Kingdom</Title>
      <Description><![CDATA[<p>We are hiring Software Engineers to accelerate our mission. At KoBold, software engineers have the unique opportunity to embed directly with their users and learn the ins and outs of mineral exploration and geology while developing state-of-the-art technology solutions.</p>
<p>Unlike traditional software engineering roles, we don&#39;t simply ship code and passively wait for feedback about its utility: our userbase includes our colleagues... and ourselves!</p>
<p>While there are real technical challenges in making mineral exploration data broadly searchable and accessible to both humans and machines, we believe that solving these technical challenges cannot be done without &quot;getting our hands dirty&quot; – sometimes literally! – by embedding directly with the exploration teams and even occasionally (~once a year) joining our colleagues in the field, be it in Zambia, Canada, or Arizona, to experience the impact of our software in real time.</p>
<p>As a Software Engineer on the Data Systems Engineering team at KoBold, your main role will be to enable systematic exploration and materially improve exploration success rates by making mineral exploration data broadly accessible to humans and machines.</p>
<p>Past projects have included SIP (the Structured Ingest Pipeline), DataKit generation (producing curated sets of data on demand), and RAG (Retrieval-Augmented Generation, utilizing natural language processing on unstructured data).</p>
<p>Our tech stack is primarily Python and includes Django, React, AWS, and additional technologies like Retool and Prefect.</p>
<p>Your work will empower KoBold to unlock invaluable insights and streamline intricate scientific processes.</p>
<p>Collaborating with our exceptional team of data scientists, geologists, and other software engineers, you will have the opportunity to tackle complex problems head-on and collectively pave the way for the discoveries of vital energy transition metals like lithium, copper, nickel, and cobalt.</p>
<p>Together we can shape the future of mineral exploration and contribute to building a sustainable world.</p>
<p><strong>This role will be responsible for:</strong></p>
<ul>
<li>Deep engagement with exploration geologists and data scientists, continual learning about mineral exploration, and tailoring technology development to the needs of exploration project scientists</li>
<li>Building data pipelines and tooling for deriving advanced human and machine insights from exploration data, often leading a small group of software engineers to successful delivery</li>
<li>Developing expertise in KoBold&#39;s Data Systems and deeply understanding how they impact exploration</li>
<li>End-to-end ownership of projects, from design to implementation and testing to continued engagement with colleagues on exploration teams using your solutions</li>
<li>Responding well to design and code feedback, and providing feedback to teammates</li>
<li>Operationally managing the team&#39;s services and assisting scientific colleagues with our tooling</li>
</ul>
<p><strong>Qualifications:</strong></p>
<ul>
<li>4+ years of software engineering experience, ideally building production cloud data systems</li>
<li>Proficiency with Python</li>
<li>Ability to write production-quality code that is correct, readable, well-tested, scalable, and extensible</li>
<li>Skilled in large-scale system design</li>
<li>A track record of taking ownership from definition of the problem and delivering projects with demonstrated impact in an iterative manner</li>
<li>Intellectual curiosity and eagerness to learn about all aspects of mineral exploration, particularly in the geology domain</li>
<li>Enjoys constantly learning, drives insights by using our tools in exploration, and is willing to work directly with geologists in the field</li>
<li>Ability to explain technical problems to, and collaborate on solutions with, domain experts who are not software developers</li>
<li>A strong communicator who enjoys working with colleagues across the company</li>
<li>Excitement about joining a fast-growing early-stage company, comfort with a dynamic work environment, and eagerness to take on an evolving range of responsibilities</li>
<li>Keen not just to build cool technology, but to figure out what technical product to build to best achieve the business objectives of the company</li>
</ul>
<p><strong>Nice to Haves:</strong></p>
<ul>
<li>Experience with modern frontend frameworks such as React</li>
<li>Experience with geospatial data and building map-based experiences</li>
<li>Familiarity with containerization and container orchestration platforms, such as Docker, AWS ECS, Kubernetes, etc.</li>
<li>Formal education or job exposure to natural sciences</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$120,000 - $210,000 USD</Salaryrange>
      <Skills>Python, Django, React, AWS, Retool, Prefect, Geospatial data, Map-based experiences, Modern frontend frameworks, Containerization, Container orchestration</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>KoBold</Employername>
      <Employerlogo>https://logos.yubhub.co/kobold.com.png</Employerlogo>
      <Employerdescription>KoBold is a privately held mineral exploration company and technology developer, with a portfolio of over 60 projects and a team of data scientists, software engineers, and exploration geologists.</Employerdescription>
      <Employerwebsite>https://www.kobold.com/</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>120000</Compensationmin>
      <Compensationmax>210000</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/koboldmetals/jobs/4678367005</Applyto>
      <Location>Remote, United Kingdom</Location>
      <Country>United Kingdom</Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>2cf203a5-5c5</externalid>
      <Title>Platform Engineer, Document Intelligence</Title>
      <Description><![CDATA[<p>About Hebbia</p>
<hr>
<p>The AI platform for investors and bankers that generates alpha and drives upside.</p>
<p>Founded in 2020 by George Sivulka and backed by Peter Thiel and Andreessen Horowitz, Hebbia powers investment decisions for BlackRock, KKR, Carlyle, Centerview, and 40% of the world’s largest asset managers. Our flagship product, Matrix, delivers industry-leading accuracy, speed, and transparency in AI-driven analysis. It is trusted to help manage over $30 trillion in assets globally.</p>
<p>We deliver the intelligence that gives finance professionals a definitive edge. Our AI uncovers signals no human could see, surfaces hidden opportunities, and accelerates decisions with unmatched speed and conviction. We do not just streamline workflows. We transform how capital is deployed, how risk is managed, and how value is created across markets.</p>
<p>Hebbia is not a tool. Hebbia is the competitive advantage that drives performance, alpha, and market leadership.</p>
<hr>
<p>The Team</p>
<hr>
<p>The Document Intelligence team at Hebbia builds cutting-edge AI solutions that transform how users discover and interact with billions of private and public documents. Our products, including Hebbia’s Browse application, enable intelligent document exploration, powerful search capabilities, and deep insights extraction. We focus on developing advanced data ingestion and search technologies that deliver intuitive, explainable, and highly responsive experiences. Working closely with customers, our team continuously iterates to address real-world challenges and drive impactful, data-driven decisions. Our goal is to empower users by seamlessly turning vast and complex document repositories into actionable intelligence.</p>
<hr>
<p>The Role</p>
<hr>
<p>Platform engineering at Hebbia is about excellent, scalable enablement. You will be responsible for the core distributed systems that power billions of tokens across millions of dollars of AUM, deploying efficient systems and building software tightly coupled with state-of-the-art infrastructure and system design. Hebbia’s advantage is built on operating at the edge of the tokenomics curve, and you will serve as a key contributor in this area. We value engineers who think on their feet, innovate, and can solve for exponential scale.</p>
<hr>
<p>Responsibilities</p>
<hr>
<ul>
<li>Own critical system components: Take complex requirements and turn them into robust, scaled solutions that solve real customer needs.</li>
<li>Unlock O(1) universal indexing: Build and iterate on our high-scale document build system that enables constant time latency for indexing any content in the world, regardless of data volume.</li>
<li>Drive performance optimization: Architect and implement performance-tuning solutions to ensure our systems operate efficiently at scale, minimizing latency and maximizing throughput across millions of documents.</li>
<li>Mentor and guide: Provide technical leadership, mentorship, and guidance to junior engineers, fostering a culture of learning and growth.</li>
</ul>
<hr>
<p>Who You Are</p>
<hr>
<ul>
<li>Bachelor&#39;s or Master&#39;s degree in Computer Science, Data Science, Statistics, or a related field. A strong academic background with coursework in data structures, algorithms, and software development is preferred.</li>
<li>5+ years of software development experience at a venture-backed startup or top technology firm, with a focus on distributed systems and platform engineering.</li>
<li>Proficiency in building backend and distributed systems using technologies such as Python, Java, or Go.</li>
<li>Deep understanding of scalable system design, performance optimization, and resilience engineering.</li>
<li>Extensive experience with cloud platforms (e.g., AWS).</li>
<li>Working experience with one or more of the following: Kafka, ElasticSearch, PostgreSQL, and/or Redis.</li>
<li>Knowledge of workflow orchestration and execution platforms like Airflow, Temporal or Prefect.</li>
<li>Proven experience enabling observability patterns.</li>
<li>Ability to analyze complex problems, propose innovative solutions, and effectively communicate technical concepts to both technical and non-technical stakeholders.</li>
<li>Proven experience in leading software development projects and collaborating with cross-functional teams. Strong interpersonal and communication skills to foster a collaborative and inclusive work environment.</li>
<li>Enthusiasm for continuous learning and professional growth. A passion for exploring new technologies, frameworks, and software development methodologies.</li>
<li>Autonomous and excited about taking ownership over major initiatives.</li>
</ul>
<hr>
<p>Bonuses:</p>
<ul>
<li>Experience building distributed systems leveraging technologies such as etcd or Apache Zookeeper.</li>
<li>Frequent user of AI products, especially during the development lifecycle (e.g., Cursor, Claude Code).</li>
</ul>
<hr>
<p>Compensation</p>
<hr>
<p>The salary range for this role is $160,000 to $300,000. This range may be inclusive of several career levels at Hebbia and will be narrowed during the interview process based on the candidate’s experience and qualifications. Adjustments outside of this range may be considered for candidates whose qualifications significantly differ from those outlined in the job description.</p>
<hr>
<p>Life @ Hebbia</p>
<hr>
<ul>
<li>PTO: Unlimited</li>
<li>Insurance: Medical + Dental + Vision + 401K</li>
<li>Eats: Catered lunch daily + DoorDash dinner credit if you ever need to stay late</li>
<li>Parental leave policy: 3 months for the non-birthing parent, 4 months for the birthing parent</li>
<li>Fertility benefits: $15k lifetime benefit</li>
<li>New hire equity grant: competitive equity package with unmatched upside potential</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$160,000 to $300,000</Salaryrange>
      <Skills>backend and distributed systems, Python, Java, Go, scalable system design, performance optimization, resilience engineering, cloud platforms, AWS, Kafka, ElasticSearch, PostgreSQL, Redis, workflow orchestration and execution platforms, Airflow, Temporal, Prefect, observability patterns, etcd, Apache Zookeeper, AI products, Cursor, Claude Code</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Hebbia</Employername>
      <Employerlogo>https://logos.yubhub.co/hebbia.com.png</Employerlogo>
      <Employerdescription>Hebbia is an AI platform for investors and bankers that generates alpha and drives upside, backed by Peter Thiel and Andreessen Horowitz, and powers investment decisions for large asset managers.</Employerdescription>
      <Employerwebsite>https://hebbia.com/</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>160000</Compensationmin>
      <Compensationmax>300000</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/hebbia/jobs/4584750005</Applyto>
      <Location>New York City; San Francisco, CA</Location>
      <Country>United States</Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>f0f321c2-15d</externalid>
      <Title>Data Platform Engineer</Title>
      <Description><![CDATA[<p>At Anchorage Digital, we are building the world&#39;s most advanced digital asset platform for institutions to participate in crypto. Join the Data Platform team and build the Trusted Data Platform that powers Anchorage&#39;s transition to Data 3.0.</p>
<p>You&#39;ll help shape the unified orchestration foundation, collaborate on governance-as-code patterns, and contribute to self-service frameworks that make quality and compliance automatic. We&#39;re moving from manual spreadsheets and theoretical architectures to automated control planes where every dataset is trusted, monitored, and traceable by default.</p>
<p><strong>Technical Skills:</strong></p>
<ul>
<li>Collaborate on designing and implementing unified orchestration patterns (Dagster/Airflow) to replace legacy and fragmented scheduling</li>
<li>Develop governance-as-code systems, in partnership with the team, that automatically apply policy tags, row-level security (RLS), and access controls through an active control plane</li>
</ul>
<p><strong>Complexity and Impact of Work:</strong></p>
<ul>
<li>Help guide the technical design for platform capabilities like data contracts, automated quality gating, observability, and cost visibility</li>
<li>Support the migration of workloads from legacy patterns to the modern platform, ensuring domain teams have clear paths and golden templates</li>
</ul>
<p><strong>Organizational Knowledge:</strong></p>
<ul>
<li>Partner with domain teams (Asset Data, Reporting &amp; Statements, Product teams) to understand their needs and design platform capabilities that enable their success</li>
<li>Promote and support data mesh principles and dbt best practices, helping domain owners build and own their data products while the platform ensures quality</li>
</ul>
<p><strong>Communication and Influence:</strong></p>
<ul>
<li>Promote data platform engineering best practices, developer experience, and &#39;Data as a Product&#39; principles across the engineering organization</li>
<li>Contribute to architectural decisions and help establish engineering culture around reliability, cost efficiency, and operational excellence</li>
</ul>
<p><strong>You may be a fit for this role if you:</strong></p>
<ul>
<li>5-7+ years building data platforms or infrastructure: You bring experience helping design and operate modern data platforms that handle enterprise-scale workloads with quality, governance, and cost controls</li>
<li>Strong dbt and SQL expertise: You&#39;re proficient with dbt and SQL, understand dbt Mesh, and have strong opinions on data modeling, testing, and documentation best practices</li>
<li>Orchestration experience: You&#39;ve implemented production data orchestration with Airflow, Dagster, Prefect, or similar tools, and understand the trade-offs between different orchestration patterns</li>
<li>Cloud data warehouse proficiency: You have strong experience with BigQuery, Snowflake, or Redshift, including query optimization, cost management, and security configurations</li>
<li>Platform mindset: You think in terms of golden paths, reusable abstractions, and developer experience - you build systems that let others move fast safely</li>
</ul>
<p><strong>Although not a requirement, bonus points if:</strong></p>
<ul>
<li>Metadata and catalog experience: You&#39;ve worked with Atlan, Collibra, DataHub, or similar metadata platforms and understand active governance patterns</li>
<li>Data observability tools: You&#39;ve implemented data quality monitoring with Great Expectations, Monte Carlo, Soda, or similar tools</li>
<li>Infrastructure as code: You have experience with Terraform, Kubernetes, and modern DevOps practices for data infrastructure</li>
<li>You&#39;re the kind of person who gets excited about declarative config, immutable infrastructure, and metrics dashboards showing cost-per-query trending down</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>dbt, SQL, Airflow, Dagster, Prefect, BigQuery, Snowflake, Redshift, Metadata and catalog experience, Data observability tools, Infrastructure as code</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anchorage Digital</Employername>
      <Employerlogo>https://logos.yubhub.co/anchorage.co.png</Employerlogo>
      <Employerdescription>Anchorage Digital is a regulated crypto platform that provides institutions with integrated financial services and infrastructure solutions.</Employerdescription>
      <Employerwebsite>https://www.anchorage.co/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/anchorage/8a325cd5-ef99-4f1e-bba8-7bb1fca64f12</Applyto>
      <Location>New York City</Location>
      <Country>United States</Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>d92bf714-0ed</externalid>
      <Title>Python Financial Model Engineer, Associate</Title>
      <Description><![CDATA[<p>About this role</p>
<p>We are looking for a self-motivated software engineer to onboard models to our platform. You will collaborate with modellers on the implementation and deployment of financial risk models, understand client requirements, and translate them into software engineering tasks.</p>
<p>Our team</p>
<ul>
<li>Is passionate about technology and solving complex problems.</li>
<li>Develops in Python, working with technologies like Pandas, Apache Arrow, Snowflake, Prefect, Docker, and Azure DevOps.</li>
<li>Consists of technologists who unlock constant innovation.</li>
<li>Constantly challenges the technology status quo and looks for ways to improve the platform.</li>
</ul>
<p>Key Responsibilities</p>
<p>We expect the role to involve the following core responsibilities and would expect a successful candidate to be able to demonstrate skills or experience with the following (not in order of priority):</p>
<ul>
<li>Quickly learn the platform and act as a subject matter expert towards modelling teams and product analysts.</li>
<li>Work with modellers and product analysts to understand the business and their requirements. Help implement those on our platform using engineering best practices.</li>
<li>Facilitate technical design and code review sessions to ensure software meets functional and compatibility requirements, as well as high quality standards.</li>
<li>Stay abreast of the latest developments in machine learning, quantitative finance, and technology to incorporate innovative solutions into our platform.</li>
<li>Enhance the performance of existing models, ensuring they operate efficiently at scale.</li>
<li>Implement and maintain a standard data/technology deployment workflow to ensure that all deliverables and enhancements are delivered in a disciplined and robust manner.</li>
<li>Ensure operational readiness of the product and meet customer commitments with regards to incident SLAs.</li>
</ul>
<p>Skillset</p>
<ul>
<li>Strong experience (3+ years) in Python is crucial</li>
<li>Bachelor’s (BSc) or higher degree in Computer Science or related field</li>
<li>Experience with Pandas, Apache Arrow, and Snowflake (Prefect is a plus)</li>
<li>Good understanding of Object-Oriented Design principles</li>
<li>Fluency with AI coding tools and the use of LLMs in everyday development</li>
<li>Good understanding of fundamental Algorithms and Data Structures</li>
<li>Knowledge of Azure DevOps, Git, and CI/CD</li>
<li>Good understanding of unit tests, integration and regression tests, and their importance</li>
<li>An aptitude for designing data models and pipelines is a plus</li>
<li>Ability to understand advanced mathematical and statistical methods and concepts</li>
<li>Fluency in reading, writing and speaking English</li>
</ul>
<p>Personal Qualities</p>
<ul>
<li>Team player</li>
<li>Problem-solving skills</li>
<li>Critical and analytical thinking</li>
<li>Technical curiosity</li>
<li>Adaptable</li>
</ul>
<p>Our benefits</p>
<p>To help you stay energized, engaged and inspired, we offer a wide range of employee benefits including: retirement investment and tools designed to help you in building a sound financial future; access to education reimbursement; comprehensive resources to support your physical health and emotional well-being; family support programs; and Flexible Time Off (FTO) so you can relax, recharge and be there for the people you care about.</p>
<p>Our hybrid work model</p>
<p>BlackRock’s hybrid work model is designed to enable a culture of collaboration and apprenticeship that enriches the experience of our employees, while supporting flexibility for all. Employees are currently required to work at least 4 days in the office per week, with the flexibility to work from home 1 day a week. Some business groups may require more time in the office due to their roles and responsibilities. We remain focused on increasing the impactful moments that arise when we work together in person – aligned with our commitment to performance and innovation. As a new joiner, you can count on this hybrid model to accelerate your learning and onboarding experience here at BlackRock.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, Pandas, Apache Arrow, Snowflake, Prefect, Docker, Azure DevOps, Git, CI/CD, Object-Oriented Design, AI coding tools, LLMs, Algorithms and Data Structures, Unit testing, Integration and regression testing, Mathematical and statistical methods</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>BlackRock</Employername>
      <Employerlogo>https://logos.yubhub.co/blackrock.com.png</Employerlogo>
      <Employerdescription>BlackRock is a global investment management firm that provides a range of investment solutions to institutional, intermediary and individual investors.</Employerdescription>
      <Employerwebsite>https://www.blackrock.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.workable.com/view/fj1ayC3FJpaeEtw176Tee2/python-financial-model-engineer%2C-associate-in-budapest-at-blackrock</Applyto>
      <Location>Budapest, Hungary</Location>
      <Country>Hungary</Country>
      <Postedate>2026-03-09</Postedate>
    </job>
    <job>
      <externalid>10d04dca-482</externalid>
      <Title>Python Financial Engineering Platform Lead, Vice President</Title>
      <Description><![CDATA[<p>About this role</p>
<p>BlackRock is seeking a self-motivated lead software engineer to spearhead the development of its Python financial engineering platform. The platform enables research, implementation, delivery, and execution of financial risk models for internal partners and external clients.</p>
<p>What will you be doing?</p>
<ul>
<li>Lead platform development supporting both financial engineers and researchers.</li>
<li>Facilitate technical design and code review sessions to ensure software meets functional and compatibility requirements, as well as high quality standards.</li>
<li>Build widely used and reliable fundamental components as part of the platform, distributed as Python libraries.</li>
<li>Stay abreast of the latest developments in machine learning, quantitative finance, and technology to incorporate innovative solutions into our platform.</li>
</ul>
<p>Key Responsibilities</p>
<ul>
<li>Quickly learn the platform and act as a subject matter expert towards modelling teams.</li>
<li>Build high quality software that improves the user experience of the downstream modeller and developer.</li>
<li>Enhance the performance of existing models, ensuring they operate efficiently at scale.</li>
<li>Implement and maintain a standard data/technology deployment workflow to ensure that all deliverables and enhancements are delivered in a disciplined and robust manner.</li>
</ul>
<p>Skillset</p>
<ul>
<li>Strong experience (5+ years) in Python is crucial.</li>
<li>Bachelor&#39;s (BSc) or higher degree in Computer Science or equivalent field.</li>
<li>Experience with Pandas, Apache Arrow, and Snowflake (Prefect is a plus).</li>
<li>Good understanding of Object-Oriented Design principles.</li>
<li>Good understanding of fundamental Algorithms and Data Structures.</li>
<li>Knowledge of Azure DevOps, Git, and CI/CD.</li>
<li>Good understanding of unit tests, integration and regression tests, and their importance.</li>
</ul>
<p>Personal Qualities</p>
<ul>
<li>Team player.</li>
<li>Problem-solving skills.</li>
<li>Critical and analytical thinking.</li>
<li>Technical curiosity.</li>
<li>Adaptable.</li>
</ul>
<p>Our benefits</p>
<ul>
<li>Retirement investment and tools designed to help you in building a sound financial future.</li>
<li>Access to education reimbursement.</li>
<li>Comprehensive resources to support your physical health and emotional well-being.</li>
<li>Family support programs.</li>
<li>Flexible Time Off (FTO) so you can relax, recharge and be there for the people you care about.</li>
</ul>
<p>Our hybrid work model</p>
<p>BlackRock&#39;s hybrid work model is designed to enable a culture of collaboration and apprenticeship that enriches the experience of our employees, while supporting flexibility for all. Employees are currently required to work at least 4 days in the office per week, with the flexibility to work from home 1 day a week.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, Pandas, Apache Arrow, Snowflake, Azure DevOps, git, CI/CD, unit tests, integration and regression tests, Prefect</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>BlackRock</Employername>
      <Employerlogo>https://logos.yubhub.co/blackrock.com.png</Employerlogo>
      <Employerdescription>BlackRock is a global investment management firm that provides asset management, risk management, and advisory services to institutional, intermediary, and individual investors.</Employerdescription>
      <Employerwebsite>https://www.blackrock.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.workable.com/view/2mb7rZ3FNbXbRf6dpxMmaf/python-financial-engineering-platform-lead%2C-vice-president-in-budapest-at-blackrock</Applyto>
      <Location>Budapest, Hungary</Location>
      <Country>Hungary</Country>
      <Postedate>2026-03-09</Postedate>
    </job>
    <job>
      <externalid>c873a489-0dc</externalid>
      <Title>Data Engineer, Analytics</Title>
      <Description><![CDATA[<p><strong>Data Engineer, Analytics</strong></p>
<p><strong>Location</strong></p>
<p>San Francisco</p>
<p><strong>Employment Type</strong></p>
<p>Full time</p>
<p><strong>Department</strong></p>
<p>Applied AI</p>
<p><strong>Compensation</strong></p>
<ul>
<li>$230K – $385K • Offers Equity</li>
</ul>
<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>
<p><strong>Benefits</strong></p>
<ul>
<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>
<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>
<li>401(k) retirement plan with employer match</li>
<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>
<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>
<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>
<li>Mental health and wellness support</li>
<li>Employer-paid basic life and disability coverage</li>
<li>Annual learning and development stipend to fuel your professional growth</li>
<li>Daily meals in our offices, and meal delivery credits as eligible</li>
<li>Relocation support for eligible employees</li>
<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>
</ul>
<p><strong>About the team</strong></p>
<p>The Applied team works across research, engineering, product, and design to bring OpenAI’s technology to consumers and businesses.</p>
<p>We seek to learn from deployment and distribute the benefits of AI, while ensuring that this powerful tool is used responsibly and safely. Safety is more important to us than unfettered growth.</p>
<p><strong>About the role</strong></p>
<p>We&#39;re seeking a Data Engineer to take the lead in building our data pipelines and core tables for OpenAI. These pipelines are crucial for powering analyses and safety systems that guide business decisions, drive product growth, and prevent bad actors. If you&#39;re passionate about working with data and are eager to create solutions with significant impact, we&#39;d love to hear from you. This role also provides the opportunity to collaborate closely with the researchers behind ChatGPT and help them train new models to deliver to users. As we continue our rapid growth, we value data-driven insights, and your contributions will play a pivotal role in our trajectory. Join us in shaping the future of OpenAI!</p>
<p><strong>In this role, you will:</strong></p>
<ul>
<li>Design, build, and manage our data pipelines, ensuring all user event data is seamlessly integrated into our data warehouse.</li>
<li>Develop canonical datasets to track key product metrics including user growth, engagement, and revenue.</li>
<li>Work collaboratively with various teams, including Infrastructure, Data Science, Product, Marketing, Finance, and Research, to understand their data needs and provide solutions.</li>
<li>Implement robust and fault-tolerant systems for data ingestion and processing.</li>
<li>Participate in data architecture and engineering decisions, bringing your strong experience and knowledge to bear.</li>
<li>Ensure the security, integrity, and compliance of data according to industry and company standards.</li>
</ul>
<p><strong>You might thrive in this role if you:</strong></p>
<ul>
<li>Have 3+ years of experience as a data engineer and 8+ years of overall software engineering experience (including data engineering).</li>
<li>Are proficient in at least one programming language commonly used within Data Engineering, such as Python, Scala, or Java.</li>
<li>Have experience with distributed processing technologies and frameworks, such as Hadoop and Flink, and with distributed storage systems (e.g., HDFS, S3).</li>
<li>Have expertise with ETL schedulers such as Airflow, Dagster, Prefect, or similar frameworks.</li>
<li>Have a solid understanding of Spark and the ability to write, debug, and optimize Spark code.</li>
</ul>
<p><strong>About OpenAI</strong></p>
<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$230K – $385K • Offers Equity</Salaryrange>
      <Skills>Python, Scala, Java, Hadoop, Flink, HDFS, S3, Airflow, Dagster, Prefect, Spark</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>OpenAI</Employername>
      <Employerlogo>https://logos.yubhub.co/openai.com.png</Employerlogo>
      <Employerdescription>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products.</Employerdescription>
      <Employerwebsite>https://openai.com/</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>230000</Compensationmin>
      <Compensationmax>385000</Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/openai/fc5bbc77-a30c-4e7a-9acc-8a2e748545b4</Applyto>
      <Location>San Francisco</Location>
      <Country>United States</Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>4a7597fd-d7a</externalid>
      <Title>Senior Data Engineer</Title>
      <Description><![CDATA[<p>Joining Razer will place you on a global mission to revolutionize the way the world games. Razer is a place to do great work, offering you the opportunity to make an impact globally while working across a global team located across 5 continents. Razer is also a great place to work, providing you the unique, gamer-centric #LifeAtRazer experience that will put you in an accelerated growth, both personally and professionally.</p>
<p><strong>What you&#39;ll do</strong></p>
<p>We are looking for a Senior Data Engineer to lead the technical initiatives for AI Data Engineering, enabling scalable, high-performance data pipelines that power AI and machine learning applications. This role will focus on architecting, optimizing, and managing data infrastructure to support AI model training, feature engineering, and real-time inference. You will collaborate closely with AI/ML engineers, data scientists, and platform teams to build the next generation of AI-driven products.</p>
<ul>
<li>Lead AI Data Engineering initiatives by driving the design and development of robust data pipelines for AI/ML workloads, ensuring efficiency, scalability, and reliability.</li>
<li>Design and implement data architectures that support AI model training, including feature stores, vector databases, and real-time streaming solutions.</li>
<li>Develop high-performance data pipelines that process structured, semi-structured, and unstructured data at scale, supporting various AI applications</li>
</ul>
<p><strong>What you need</strong></p>
<ul>
<li>Hands-on experience working with vector and graph databases (e.g., Neo4j)</li>
<li>3+ years of experience in data engineering, working on AI/ML-driven data architectures</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Vector and graph databases (Neo4j), Data engineering, AI/ML data architectures, Python, SQL, AWS, Azure, Google Cloud Platform, Terraform, Docker, Kubernetes, Airflow, Prefect, Spark, Dask, dbt, Streaming and batch data processing, Data Lake, Lakehouse, NoSQL databases</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Razer</Employername>
      <Employerlogo>https://logos.yubhub.co/razer.com.png</Employerlogo>
      <Employerdescription>Razer is a global company that creates cutting-edge products and experiences that define the ultimate gameplay. Guided by its mission &apos;For Gamers. By Gamers.&apos;, Razer relentlessly pushes boundaries and leads the charge in AI for gaming, shaping the future of the industry.</Employerdescription>
      <Employerwebsite>https://www.razer.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://razer.wd3.myworkdayjobs.com/en-US/Careers/job/Singapore/Senior-Data-Engineer_JR2025005485</Applyto>
      <Location>Singapore</Location>
      <Country>Singapore</Country>
      <Postedate>2026-01-01</Postedate>
    </job>
    <job>
      <externalid>e5eb908e-6f9</externalid>
      <Title>Senior Data Engineer</Title>
      <Description><![CDATA[<p>We are looking for a Senior Data Engineer to lead the technical initiatives for AI Data Engineering, enabling scalable, high-performance data pipelines that power AI and machine learning applications. This role will focus on architecting, optimizing, and managing data infrastructure to support AI model training, feature engineering, and real-time inference.</p>
<p><strong>What you&#39;ll do</strong></p>
<ul>
<li>Lead AI Data Engineering initiatives by driving the design and development of robust data pipelines for AI/ML workloads, ensuring efficiency, scalability, and reliability.</li>
<li>Design and implement data architectures that support AI model training, including feature stores, vector databases, and real-time streaming solutions.</li>
</ul>
<p><strong>What you need</strong></p>
<ul>
<li>Hands-on experience working with vector and graph databases (e.g., Neo4j)</li>
<li>3+ years of experience in data engineering, working on AI/ML-driven data architectures</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Vector and graph databases (Neo4j), Data engineering, AI/ML data architectures, Python, SQL, Terraform, Docker, Kubernetes, Airflow, Prefect, Spark, Dask, dbt</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Razer</Employername>
      <Employerlogo>https://logos.yubhub.co/razer.com.png</Employerlogo>
      <Employerdescription>Razer is a global leader in the gaming industry, dedicated to creating cutting-edge products and experiences that define the ultimate gameplay. With a mission to revolutionize the way the world games, Razer is a place to do great work, offering opportunities to make an impact globally while working across a global team located across 5 continents.</Employerdescription>
      <Employerwebsite>https://www.razer.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://razer.wd3.myworkdayjobs.com/en-US/Careers/job/Singapore/Senior-Data-Engineer_JR2025005485</Applyto>
      <Location>Singapore</Location>
      <Country>Singapore</Country>
      <Postedate>2025-12-26</Postedate>
    </job>
  </jobs>
</source>