<?xml version="1.0" encoding="UTF-8"?>
<source>
  <jobs>
    <job>
      <externalid>59421d7b-b28</externalid>
      <Title>Full Stack Engineer - Real-Time Trading</Title>
      <Description><![CDATA[<p>We are seeking a Full Stack Engineer to join our EQ Real-Time P&amp;L &amp; Risk team. This team is responsible for designing, developing, and supporting technology platforms that enable our businesses to view, evaluate, hedge, and trade live positions, P&amp;L, and risk.</p>
<p>Responsibilities:</p>
<ul>
<li>Collaborate with application development teams, technology management, and the business to design, prototype, and implement next-generation web UIs and mobile apps.</li>
<li>Develop, maintain, and support existing Java Client UI used by a quarter of the firm.</li>
<li>Contribute to the application development and architecture of highly scalable real-time UIs.</li>
</ul>
<p>Qualifications, Skills, and Requirements:</p>
<ul>
<li>5+ years of full-stack development experience, preferably within a financial services firm supporting real-time UIs.</li>
<li>Expertise with Core Java and Spring.</li>
<li>Excellent grasp of data structures and algorithms and the ability to learn and adopt new technologies quickly.</li>
<li>Familiarity with database technologies: Advanced SQL, NoSQL, and time-series databases (KDB).</li>
<li>Experience with event-driven architecture using message bus and caching technologies like Solace, Kafka, Pulsar, Memcached, Redis.</li>
<li>Experience working with various monitoring tools like Datadog, ELK stack.</li>
<li>A strong interest in financial markets and a desire to work directly with investment professionals.</li>
<li>A good team player with a strong willingness to participate and help others.</li>
<li>Drive to learn and experiment.</li>
</ul>
<p>Nice-to-have:</p>
<ul>
<li>Proficiency with Angular UI is preferred; React will also be considered.</li>
<li>Familiarity with equities and equity derivatives within a real-time electronic trading environment is preferred.</li>
<li>Experience with KDB+ q or C/C++.</li>
</ul>
<p>The estimated base salary range for this position is $175,000 to $250,000; this range is specific to New York and may change in the future. Millennium offers a total compensation package that includes a base salary, a discretionary performance bonus, and a comprehensive benefits package.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$175,000 to $250,000</Salaryrange>
      <Skills>Core Java, Spring, Advanced SQL, NoSQL, Time-series databases (KDB), Solace, Kafka, Pulsar, Memcached, Redis, Datadog, ELK stack, Angular UI, React, equities, equity derivatives, KDB+ q, C/C++</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>Equity IT</Employername>
      <Employerlogo>https://logos.yubhub.co/mlp.eightfold.ai.png</Employerlogo>
      <Employerdescription>Equity IT is a technology provider for the financial industry.</Employerdescription>
      <Employerwebsite>https://mlp.eightfold.ai</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>175000</Compensationmin>
      <Compensationmax>250000</Compensationmax>
      <Applyto>https://mlp.eightfold.ai/careers/job/755954774219</Applyto>
      <Location>New York, New York, United States of America</Location>
      <Country>United States</Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>5c70414d-4e6</externalid>
      <Title>Full-Stack Data Engineer</Title>
      <Description><![CDATA[<p>We are seeking a highly self-sufficient, motivated engineer with strong full-stack data engineering skills to join our team. This is a remote/offshore role that requires autonomy, excellent communication, and the ability to deliver high-quality work with limited supervision while collaborating with a predominantly US-based team.</p>
<p>You will build reliable, scalable data products and user experiences that power AI/ML modeling, agentic workflows, and reporting, working end-to-end from data ingestion and transformation through to UI. Our Python-based data platform is undergoing a major evolution toward a modern, cloud-native ELT architecture. We are standardizing on Snowflake as our central data platform and dbt as our core transformation framework, implementing scalable, maintainable ELT practices that simplify ingestion, modeling, and deployment.</p>
<p>This role will be pivotal in independently designing and building robust data pipelines and semantic layers that directly power our AI and machine learning initiatives, delivering clean, reliable, and well-modeled data assets to our data science team for feature engineering, model training, and production inference. You will collaborate closely (primarily via remote channels) with data scientists and ML engineers to ensure our data ecosystem is optimized for experimentation speed, model performance, and seamless integration into downstream products and services.</p>
<p>Key Responsibilities</p>
<ul>
<li>Remote collaboration &amp; communication: Operate effectively as an offshore member of a distributed team, proactively communicating status, risks, and blockers across time zones and coordinating overlap with US working hours as needed.</li>
<li>Full-stack data engineering: Build across the entire stack, including data ingestion/acquisition and transformation, APIs, front-end components, and automated test suites, delivering production-grade solutions with minimal hand-holding.</li>
<li>Autonomous delivery &amp; ownership: Take end-to-end ownership of features and projects, clarifying requirements, breaking work into milestones, estimating timelines, and delivering high-quality, well-documented solutions.</li>
<li>Specification and design: Translate short- and long-term business requirements, architectural considerations, and competing timelines into clear, actionable technical specifications and design documents.</li>
<li>Code quality: Write clean, maintainable, efficient code that adheres to evolving standards and quality processes, including unit tests and isolated integration tests in containerized environments.</li>
<li>Continuous improvement: Contribute to agile practices and provide input on technical strategy, architectural decisions, and process improvements, continuously suggesting better tools, patterns, and automation.</li>
</ul>
<p>Required Skills &amp; Experience</p>
<ul>
<li>Professional experience: 5+ years in software engineering, with a full-stack background building complex, scalable data-engineering pipelines using data warehouse technology, SQL with dbt, Python, AWS with Terraform, and modern UI technologies.</li>
<li>Modern data engineering: Strong experience with medallion data architecture patterns using data warehouse technologies (e.g., Snowflake), data transformation tooling (e.g., dbt), BI tooling, and NoSQL data marts (e.g., Elasticsearch/OpenSearch).</li>
<li>Testing and QA: Solid understanding of unit testing, CI/CD automation, and quality assurance processes for both data pipeline testing and operational data quality tests.</li>
<li>Remote work &amp; autonomy: Proven track record working in a remote or distributed environment, demonstrating self-motivation, reliable execution, and the ability to make sound technical decisions independently.</li>
<li>Agile methodology: Working knowledge of Agile development practices and workflows (e.g., sprint planning, stand-ups, retrospectives) in a distributed team setting.</li>
<li>Education: Bachelor’s or Master’s degree in Computer Science, Statistics, Informatics, Information Systems, or a related quantitative field.</li>
</ul>
<p>Preferred Skills &amp; Experience</p>
<ul>
<li>Machine learning and AI: Hands-on experience with large language models (LLMs) and agentic frameworks/workflows.</li>
<li>Search and analytics: Familiarity with the ELK stack (Elasticsearch, Logstash, Kibana) for search and analytics solutions.</li>
<li>Cloud expertise: Experience with AWS cloud services; familiarity with SageMaker; and CI/CD tooling such as GitHub Actions or Jenkins.</li>
<li>Front-end expertise: Experience building user interfaces with Angular or a modern UI stack.</li>
<li>Financial domain knowledge: Broad understanding of equities, fixed income, derivatives, futures, FX, and other financial instruments.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, Snowflake, dbt, AWS, Terraform, modern UI technologies, data warehouse technology, SQL, unit testing, CI/CD automation, quality assurance processes, machine learning, AI, large language models, agentic frameworks, ELK stack, search and analytics solutions, cloud expertise, AWS cloud services, SageMaker, CI/CD tooling, front-end expertise, Angular, financial domain knowledge</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>FIC &amp; Risk Technology</Employername>
      <Employerlogo>https://logos.yubhub.co/mlp.eightfold.ai.png</Employerlogo>
      <Employerdescription>FIC &amp; Risk Technology is a technology company that provides risk management solutions.</Employerdescription>
      <Employerwebsite>https://mlp.eightfold.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://mlp.eightfold.ai/careers/job/755955321460</Applyto>
      <Location>Bangalore, Karnataka, India</Location>
      <Country>India</Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>60aae9e8-e8b</externalid>
      <Title>Software Engineer, Observability</Title>
      <Description><![CDATA[<p>We&#39;re looking for a skilled Software Engineer to join our Observability team. As a member of this team, you will be responsible for designing and evolving logging, metrics, and tracing pipelines to handle massive data volumes. You will also evaluate and integrate new technologies to enhance Airtable&#39;s observability posture.</p>
<p>Your responsibilities will include guiding and mentoring a growing team of infrastructure engineers, defining and upholding coding standards, partnering with other teams to embed observability throughout the development lifecycle, and owning end-to-end reliability for observability tools.</p>
<p>You will also extend observability to LLM and AI features by instrumenting prompts, model calls, and RAG pipelines to capture latency, reliability, cost, and safety signals. You will design online and offline evaluation loops for LLM quality, build dashboards and alerts for token usage, error rates, and model performance, and connect these signals to tracing for prompt lineage.</p>
<p>To succeed in this role, you will need 6+ years of software engineering experience, with 3+ years focused on observability or infrastructure at scale. You will also need demonstrated success implementing and running production-grade logging, metrics, or tracing systems; proficiency in distributed systems concepts, data streaming pipelines, and container orchestration; and deep hands-on knowledge of tools such as Prometheus, Grafana, Datadog, OpenTelemetry, the ELK Stack, Loki, or ClickHouse.</p>
<p>This is a high-impact role that will allow you to lead the modernization of Airtable&#39;s observability stack, influence how every engineer monitors and debugs mission-critical systems, and drive major projects across the engineering organization to build platforms and services that solve observability problems.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Distributed systems concepts, Data streaming pipelines, Container orchestration, Prometheus, Grafana, Datadog, OpenTelemetry, ELK Stack, Loki, ClickHouse</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Airtable</Employername>
      <Employerlogo>https://logos.yubhub.co/airtable.com.png</Employerlogo>
      <Employerdescription>Airtable is a no-code app platform that empowers people to accelerate their most critical business processes. More than 500,000 organizations, including 80% of the Fortune 100, rely on it.</Employerdescription>
      <Employerwebsite>https://airtable.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/airtable/jobs/8400374002</Applyto>
      <Location>San Francisco, CA; New York, NY; Remote (Seattle, WA only)</Location>
      <Country>United States</Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>3f16d353-491</externalid>
      <Title>Software Engineer, Infrastructure Reliability</Title>
      <Description><![CDATA[<p><strong>Software Engineer, Infrastructure Reliability</strong></p>
<p><strong>Location</strong></p>
<p>San Francisco</p>
<p><strong>Employment Type</strong></p>
<p>Full time</p>
<p><strong>Department</strong></p>
<p>Applied AI</p>
<p><strong>Compensation</strong></p>
<ul>
<li>$255K – $385K</li>
</ul>
<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>
<p><strong>Benefits</strong></p>
<ul>
<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>
<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>
<li>401(k) retirement plan with employer match</li>
<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>
<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>
<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>
<li>Mental health and wellness support</li>
<li>Employer-paid basic life and disability coverage</li>
<li>Annual learning and development stipend to fuel your professional growth</li>
<li>Daily meals in our offices, and meal delivery credits as eligible</li>
<li>Relocation support for eligible employees</li>
<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>
</ul>
<p><strong>About the Team</strong></p>
<p>We’re hiring Software Engineers to join our Applied Infrastructure organization, and more specifically for our Database Systems and Online Storage teams. These teams operate with a high degree of autonomy and are deeply collaborative, with a shared mandate to raise the bar on safety, reliability, and velocity across OpenAI.</p>
<p><strong>About the Role</strong></p>
<p>You’ll be at the heart of scaling and hardening the infrastructure that powers some of the most widely used AI systems in the world. You’ll help ensure our systems are highly reliable, observable, performant, and secure—so researchers can iterate quickly, and products like ChatGPT and the OpenAI API can serve millions of users safely and effectively.</p>
<p>This is a hands-on, high-leverage role for engineers who thrive on ownership, love solving deep technical problems across the stack, and want to work on systems that support cutting-edge research and deploy at global scale. You’ll play a key part in shaping technical direction, proactively improving system resilience, and collaborating closely with infra, product, and research teams to turn complex infrastructure into reliable platforms.</p>
<p><strong>In this role you will:</strong></p>
<ul>
<li>Design, build, and operate reliable and performant systems used across engineering.</li>
<li>Identify and fix performance bottlenecks and inefficiencies, ensuring our infrastructure can scale to the next order of magnitude.</li>
<li>Dig deep to resolve complex issues.</li>
<li>Continuously improve automation to reduce manual work, and improve internal tooling and our developer experience.</li>
<li>Contribute to incident response, postmortems, and the development of best practices around system reliability and scalability.</li>
</ul>
<p><strong>You might thrive in this role if you:</strong></p>
<ul>
<li>Have a deep understanding of distributed systems principles and a proven track record in building and operating scalable and reliable systems.</li>
<li>Have a keen eye for performance and optimization. You know how to squeeze the most performance out of complex, globally distributed systems.</li>
<li>Have experience operating orchestration systems such as Kubernetes at scale and building abstractions over cloud platforms.</li>
<li>Are comfortable working in Linux environments, and with tools like Kubernetes, Terraform, CI/CD pipelines, and modern observability stacks.</li>
<li>Are experienced in collaborating with cross-functional teams to ensure that reliability and scalability are considered in the design and development of new features and services.</li>
<li>Have a humble attitude, an eagerness to help your colleagues, and a desire to do whatever it takes to make the team succeed.</li>
<li>Own problems end-to-end, and are willing to pick up whatever knowledge you&#39;re missing to get the job done.</li>
<li>Are comfortable with ambiguity and rapid change.</li>
</ul>
<p><strong>Qualifications:</strong></p>
<ul>
<li>4+ years of relevant industry experience, with 2+ years leading large-scale, complex projects or teams as an engineer or tech lead.</li>
<li>A passion for distributed systems at scale, with a focus on reliability, scalability, security, and continuous improvement.</li>
<li>Proven experience as a reliability engineer, production engineer, or in a similar role at a fast-paced, rapidly scaling company.</li>
<li>Strong proficiency in cloud infrastructure (e.g., AWS, GCP, Azure) and IaC tools such as Terraform. Proficiency in programming/scripting languages.</li>
<li>Experience with containerization technologies and container orchestration platforms like Kubernetes.</li>
<li>Experience with observability tools such as Datadog, Prometheus, Grafana, Splunk, and the ELK stack.</li>
<li>Experience with microservices architecture and service mesh technologies.</li>
<li>Knowledge of security best practices in cloud environments.</li>
<li>Strong understanding of distributed systems, networking, and database technologies.</li>
<li>Excellent problem-solving skills and the ability to work in a fast-paced environment.</li>
</ul>
<p><strong>About OpenAI</strong></p>
<p>OpenAI is an AI research and deployment company that aims to develop and apply general-purpose technologies to align with human values.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$255K – $385K</Salaryrange>
      <Skills>cloud infrastructure, IaC tools, programming/scripting languages, containerization technologies, container orchestration platforms, observability tools, microservices architecture, service mesh technologies, security best practices, distributed systems, networking, database technologies, Kubernetes, Terraform, Datadog, Prometheus, Grafana, Splunk, ELK stack</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>OpenAI</Employername>
      <Employerlogo>https://logos.yubhub.co/openai.com.png</Employerlogo>
      <Employerdescription>OpenAI is an AI research and deployment company that aims to develop and apply general-purpose technologies to align with human values.</Employerdescription>
      <Employerwebsite>https://jobs.ashbyhq.com</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>255000</Compensationmin>
      <Compensationmax>385000</Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/openai/779b340d-e645-4da1-a923-b3070a26d936</Applyto>
      <Location>San Francisco</Location>
      <Country>United States</Country>
      <Postedate>2026-03-06</Postedate>
    </job>
  </jobs>
</source>